Enterprise Interoperability X: Enterprise Interoperability Through Connected Digital Twins (Proceedings of the I-ESA Conferences, 11) 3031247701, 9783031247705


Table of contents :
Organizing Committee
Organizers and Sponsors
Preface
Contents
Part I: Managing Uncertainty in Industry 4.0
Interoperability Challenges for a Corporate Interactive Situation Awareness System
1 Introduction—Situational Awareness
2 Extension of Situation Awareness for Production
3 Interoperability Requirements for Corporate Interactive Situation Awareness System
4 Solution Concept
5 Conclusion and Outlook
References
Identifying Uncertainty in Large-Scale Industry 4.0 Software Projects Through Model-Based Requirements Engineering
1 Introduction
2 Related Work
3 Approach
3.1 Methodological Framework
3.2 Procedure and Artifacts
3.3 Evaluation
4 Case Study
4.1 Requirements’ Elicitation of Software Demanders
4.2 Functional Specification of Software Providers
4.3 Requirements Mapping to System Functions
4.4 Status Comparison and Uncertainty Classification
5 Conclusion
References
A Machine Learning-Based System for the Prediction of the Lead Times of Sequential Processes
1 Introduction
2 Wind Turbine Tower Manufacturing
2.1 Production Planning and Control
3 Methodology
3.1 Data Gathering and Preprocessing
3.2 Feature Selection
3.3 System Design
3.4 Lead Time Prediction Modules Implementation
4 Results
5 Conclusions
References
Application of a Visual and Data Analytics Platform for Industry 4.0 Enabled by the Interoperable Data Spine: A Real-World Paradigm for Anomaly Detection in the Furniture Domain
1 Introduction
2 Related Work
3 Core Infrastructure for System Integration, Interoperability, Data Analysis, and Visualization
3.1 Interoperable Data Spine
3.2 Visual and Data Analytics Tool
4 Real-World Pilot Applications
4.1 Furniture Manufacturer’s Case Study
4.2 Application of the Solution in the Real-World Environment
5 Conclusions
References
Predictive Study of Changes in Business Process Models
1 Introduction
2 Theoretical Background and Related Work
3 Meta-Model of Implementation Approach
3.1 The Data Extraction Phase
3.2 Data Collection and BPMN 2.0 Dependencies Ontology
3.3 Change Impact Prediction in Business Process
4 Conclusion
References
Part II: Stakeholders’ Collaboration and Communication
Collaborative Platform for Experimentation on Production Planning Models
1 Introduction
2 Collaborative Platform to Experiment on Production Planning
3 Design and Implementation of the Web Collaborative Platform
3.1 Methodology
3.2 Platform Information Analysis (Requirements)
3.3 Platform HCI (Interfaces)
4 Conclusions
References
Enterprise E-Profiles for Construction of a Collaborative Network in Cyberspace
1 Introduction
2 Construction of a Virtual Supply Chain Based on CPS Concept
2.1 Model-Based Construction of a Virtual Supply Chain
2.2 Preparation of Enterprise Models
2.3 Generation of a Supply Chain Model
3 Enterprise E-Profiles
3.1 Extension of Enterprise Model to Enterprise E-Profile
3.2 Requirements for an Enterprise E-Profile
3.3 Structure of an Enterprise E-Profile
4 Use of Enterprise E-Profiles for Construction of Collaborative Network
4.1 Use Case 1: Construction of an Appropriate Virtual Network
4.2 Use Case 2: Decision-Making for the Enterprise's Beneficial Action
4.3 Use Case 3: Recruit of Collaboration Partners
4.4 Use Case 4: Search of Adequate Collaboration Partners
5 Proposal Toward Construction of an Enterprise Collaborative Network in Practical Future
6 Conclusions
References
A Multi-partnership Enterprise Social Network-Based Model to Foster Interorganizational Knowledge and Innovation
1 Introduction
2 State of Art
2.1 Enterprise Interoperability
2.2 Enterprise Social Networks
3 Ensuring Enterprise Interoperability Through Multi-relationships Enterprise Social Networks
3.1 Proposed Model
3.2 Analysis Procedure and Indicators
4 Conclusions and Further Research Directions
References
Developing Performance Indicators to Measure DIH Collaboration: Applying ECOGRAI Method on the D-BEST Reference Model
1 Introduction
2 Method
2.1 The D-BEST Reference Model
2.2 The ECOGRAI Method
2.3 Selection of the Cross-Collaboration DIHs
3 Results: Measuring DIH Cross-Collaboration Services
3.1 Two Cases: 2 DIHs and 2 Cross-Collaborations
4 Discussion
5 Conclusion
References
Interoperability in Measuring the Degree of Maturity of Smart Cities
1 Introduction
2 Background
2.1 Interoperability
2.2 Smart Cities
2.3 KPIs
2.4 Statement of the Problem
3 Procedure
4 Results
5 Discussion
6 Conclusions
References
Part III: Digital Twins Analysis and Applications
Analyzing the Decisions Involved in Building a Digital Twin for Predictive Maintenance
1 Introduction
2 DT Decision-Support Literature
3 Case Study: Predictive Maintenance DT for Automotive Manufacturer
4 Working Toward a Decision-Support Tool
4.1 Decisions During Build of the Case Study DT
4.2 Generalized Decisions
5 Conclusions and Further Work
References
Digital Twin Concept in Last Mile Delivery and Passenger Transport (A Systematic Literature Review)
1 Introduction
2 Research Area Definition and Contribution
2.1 Definition of Digital Twins
2.2 Scope of the Systematic Literature Review
2.3 Research Questions and Contribution
3 Methodology
4 Results
4.1 All Papers
4.2 Detailed Review of Included Papers
4.3 Level of Integration
4.4 Definition of a Digital Twin Regarding the Transport of People and Goods Around Cities
5 Conclusion
References
Recent Advances of Digital Twin Application in Agri-food Supply Chain
1 Introduction
2 Method
3 Results
3.1 Annual Publication Trend and Paper Classification
3.2 Contributions of Institutions
3.3 Contribution of Authors
3.4 Citation Analysis
3.5 Journals
4 Keyword Analysis
5 Discussion
6 Conclusion
References
Improving Supply Chain and Manufacturing Process in Wind Turbine Tower Industry Through Digital Twins
1 Introduction
2 Wind Turbine Tower Manufacturing Industry
3 Auditing the Logistic and Manufacturing
4 A Conceptual Interoperative Lean Framework in a Wind Tower Factory
4.1 Physical Level
4.2 Process Logic Level
4.3 Interoperational Logic Level
5 Digital Twins
6 Conclusion
References
Complementing DT with Enterprise Social Networks: A MCDA-Based Methodology for Cocreation
1 Introduction
2 Background
2.1 DT, Interoperability and Cocreation
2.2 DT, Data Sharing, and Cocreation
2.3 Enterprise Social Networks
3 A MCDA-Based Methodology
3.1 Phase 1. Characterization of the Problem
3.2 Phase 2. Selection of the MCDA Techniques
3.3 Phase 3. Application of the Selected MCDA Techniques
3.4 Phase 4. Selection of the ESN for Cocreation
4 Conclusions and Further Research Directions
References
Part IV: Smart Manufacturing in Industry 4.0
Interoperability as a Supporting Principle of Industry 4.0 for Smart Manufacturing Scheduling: A Research Note
1 Introduction
2 Literature Background
3 Analysis and Discussion
4 Concluding Remarks
References
A Deep Reinforcement Learning Approach for Smart Coordination Between Production Planning and Scheduling
1 Introduction
2 Problem Definition
2.1 Production Planning Problem
2.2 Production Scheduling Problem
2.3 Production Coordination Problem
3 The Reinforcement Learning Model Proposed
3.1 The DQN Agent Proposed
3.2 The Backlogged Agent
4 Conclusion and Further Work
References
Modeling Automation for Manufacturing Processes to Support Production Monitoring
1 Introduction
1.1 Problem Statement About BPMN Connection with IoT Devices
2 Model Driven Architecture
2.1 Model Driven Architecture Layers
3 System Architecture
3.1 Parametrization of the Device Task
4 Case Scenario
4.1 Parametrization of the Device Task
5 Conclusions and Future Developments
References
Interoperable Algorithms as Microservices for Zero-Defects Manufacturing: A Containerization Strategy and Workload Distribution Model Proposal
1 Introduction
2 Proposed Containerization Strategy
3 Workload Distribution
4 Use Case
5 Conclusions
References
An Interoperable IoT-Based Application to On-Line Reconfiguration Manufacturing Systems: Deployment in a Real Pilot
1 Introduction
2 Background
3 The Manufacturing Line Reconfiguration Toolkit I4Q Solution
3.1 Technical Background of I4Q LRT
4 I4q LRT in a Real Metal Machining Factory. Deployment in FACTOR Pilot
4.1 FACTOR Pilot Presentation
4.2 Technical Background to Deploy I4q LRT
4.3 An Example of Deployment of I4Q LRT in FACTOR
5 Conclusions
References
Part V: Standards, Ontologies and Semantics
New Ways of Using Standards for Semantic Interoperability Toward Integration of Data and Models in Industry
1 Introduction
2 Context
2.1 Innovation and Standardization
2.2 Industrial Context
3 The Necessary Shared Conceptual Framework
3.1 ISO/IEC 81346 and Systems Engineering
3.2 Application in the Context of Oil and Gas Pilot
4 TotalEnergies Approach of a Standard Semantic Framework: A Methodology Supported by Souslesensvocables Tools
4.1 TotalEnergies Semantic Framework (TSF) Foundations
4.2 The TotalEnergies Semantic Framework Processes Supported by SousLeSensVocables (SLSV)
4.3 Standards TBOX Discovery and Comparison Based on Label Similarity
4.4 ABOX Data Mapping to Standard TBOX
4.5 Knowledge Graph Construction and Management
5 Discussion and Conclusion
References
Business Context-Based Quality Measures for Data Exchange Standards Usage Specification
1 Introduction
2 Background
3 DES Usage Specification Quality Measures
3.1 Completeness of Coverage
3.2 Effectiveness
4 OAGIS Express Pack–Use Case
4.1 Business Context Knowledge Base
4.2 OAGIS Express Pack–Quality Measures’ Results
5 Discussion and Future Work
6 Conclusion
7 Disclaimer
References
An Ontology of Industrial Work Varieties
1 Introduction
2 Related Work
3 The WAx Framework
4 The WAx Ontology
5 Use of the WAx Ontology
6 Conclusion
References
Combining a Domain Ontology and MDD Technologies to Create and Validate Business Process Models
1 Introduction
2 Related Work
3 Combining the Domain Ontology and MDD Technologies to Create and Validate Business Process Models
4 Application in the Domain of Enterprise Management Systems
4.1 Domain Ontology
4.2 Business Process Description
4.3 Integrated Ontology
5 Conclusions
References
A New Polyglot ArchiMate Hypermodel Extended to Graph Related Technologies
1 Introduction
2 ArchiMate and Interoperability
2.1 Enterprise Interoperability
2.2 ArchiMate
3 Distributed Architectural Representations in a Networked Organization
3.1 Problematic
3.2 State of the Art and of the Practices
3.3 The Addressed Problem: ArchiMate Hypermodel Encompassing Compound Graph Related Technologies
4 Proposed Solution: Polyglot ArchiMate Hypermodel Extended to Graph Related Technologies
4.1 Principles and Proposed Approach
4.2 Validating the Approach
4.3 The Performed Experimentation and Results
4.4 Assessment of the Results
5 Conclusion and Future Work
References
Normalized City Analytics Based on a Semantic Interoperability Process
1 Introduction
2 Project Scope
3 Data Models
4 The Ontological Model Definition
5 Design of the Smart Cities Semantic Model
5.1 Characterization of Information Sources
5.2 Data Model Specification
5.3 Ontological Mapping
6 Conclusion
References
Author Index

Proceedings of the I-ESA Conferences 11

Raúl Rodríguez-Rodríguez Yves Ducq Ramona-Diana Leon David Romero  Editors

Enterprise Interoperability X Enterprise Interoperability Through Connected Digital Twins

Proceedings of the I-ESA Conferences Volume 11

This series publishes the proceedings of the I-ESA conferences which began in 2005 as a result of cooperation between two major European research projects of the 6th Framework R&D Programme of the European Commission, the ATHENA IP (Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications, Integrated Project) and the INTEROP NoE, (Interoperability Research for Networked Enterprise Applications and Software, Network of Excellence). The I-ESA conferences have been recognized as a tool to lead and generate an extensive research and industrial impact in the field of interoperability for enterprise software and applications.

Raúl Rodríguez-Rodríguez · Yves Ducq · Ramona-Diana Leon · David Romero Editors

Enterprise Interoperability X Enterprise Interoperability Through Connected Digital Twins

Editors Raúl Rodríguez-Rodríguez Research Centre on Production Management and Engineering Universitat Politècnica de València Valencia, Spain

Yves Ducq Laboratoire de l’Intégration du Matériau au Système Université de Bordeaux Talence, France

Ramona-Diana Leon Research Centre on Production Management and Engineering Universitat Politècnica de València Valencia, Spain

David Romero Center for Innovation in Digital Technologies Tecnológico de Monterrey Mexico City, Mexico

ISSN 2199-2533 ISSN 2199-2541 (electronic) Proceedings of the I-ESA Conferences ISBN 978-3-031-24770-5 ISBN 978-3-031-24771-2 (eBook) https://doi.org/10.1007/978-3-031-24771-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Organizing Committee

Steering Committee Members Guy Doumeingts, INTEROP-VLAB, BE/University Bordeaux 1, FR Bob Young, Loughborough University, GB Raúl Rodríguez-Rodríguez, Universitat Politècnica de València, ES Yves Ducq, President INTEROP-VLab, BE Ramona-Diana Leon, Universitat Politècnica de València, ES Pedro Gómez Gasquet, Universitat Politècnica de València, ES Juan José Alfaro Saiz, Universitat Politècnica de València, ES Raúl Poler, Universitat Politècnica de València, ES Christophe Normand, INTEROP-VLab, BE Ricardo Goncalves, UNINOVA, PT Martin Zelm, INTEROP-VLab, GER

International Program Committee Members Adeel Ahmad, Université du Littoral Côte d’Opale, FR Alexandra Brintrup, University of Cambridge, UK Ana Lima, CCG\ZGDV Institute, PT Ana X. Halabi-Echeverry, NextPort, CO Angel Ortiz, Universitat Politécnica de València, ES Anna Nowak-Meitinger, Technical University of Berlin, DE Anne-Marie Barthe-Delanoe, INP-ENSIACET, FR Antonio De Nicola, ENEA, IT Antonio Lorenzo-Espejo, University of Seville, ES Antonio Rodrigues, ISCTE-IUL, PT Arturo Molina, Tecnologico de Monterrey, MX Bernard Archimede, ENIT, FR Bob Young, Loughborough University, UK v


Bruno Vallespir, IMS Bordeaux, FR Chantal Reynaud, LRI, FR Claudia Diamantini, Delle Marche Polytechnic University, IT Claudia Ioana Ciobanu, “Gheorghe Asachi” Technical University of Iasi, RO Daniel Cubero, FACTOR Ingeniería y Decoletaje, ES David Romero, Tecnológico de Monterrey, MX Elena Jelisic, University of Belgrade, RS Farouk Belkadi, LS2N, FR Faustino Alarcón, Universitat Politécnica de València, ES Fernando Gigante-Valencia, AIDIMME, ES Frank-Walter Jaekel, IPK Fraunhofer Berlin, DE Gerasimos Kontos, Abu Dhabi University, UAE Hervé Panetto, University of Lorraine, FR Ip-Shing Fan, Cranfield University, UK Jan Mayer, Technical University of Berlin, DE Jesús Muñuzuri, University of Seville, ES Joao Mendonça, University of Minho, PT Linda Elmhadhbi, ENIT, FR Mamadou Kaba Traoré, University of Bordeaux, FR Manuel Noguera, University of Granada, ES Maria Angeles Rodriguez, Universitat Politécnica de València, ES Maria Jose Núñez, AIDIMME, ES Maria Luisa Villani, ENEA, IT María-Luisa Muñoz-Díaz, University of Seville, ES Mariana Pinto, CCG\ZGDV Institute, PT Marina Crnjac, University of Split, HR Marius Pislaru, “Gheorghe Asachi” Technical University of Iasi, RO Martin Wollschlaeger, TU Dresden, DE Michele Melchiori, University of Brescia, IT Michiko Matsuda, Kanagawa Institute of Technology, JP Miguel Angel Mateo Casali, CIGIP, ES Nejib Moalla, University Lumiere Lyon 2, FR Nemury Silega, Southern Federal University, RU Nicolas Daclin, IMT Mines Ales, FR Nicolas Figay, ADS, FR Nuno Santos, University of Minho, PT Raul Poler, Universitat Politécnica de València, ES Ricardo Gonçalves, UNINOVA, PT Thomas Knothe, IPK Fraunhofer Berlin, DE Tiago Pereira, University of Minho, PT Valentina Di Pasquale, University of Salerno, IT Ximena Rojas-Lema, Escuela Politécnica Nacional, EC Yacine Ouzrout, University Lumiere Lyon 2, FR Yves Keraron, ISADEUS, FR

Organizers and Sponsors

Organizers

Main Sponsor


Other Sponsors


Preface

Current business ecosystems are exposed to high levels of uncertainty due to both endogenous and exogenous variables, which change very dynamically and therefore affect companies. These organizations welcome tools that help them not only to analyze and improve their products and processes but also to predict their performance. In this sense, digital twin technology, using real-world data, can be used to replicate such product/process performance by integrating IoT, AI, and software analytics tools. The appropriate usage of digital twins necessarily implies the ability to properly interoperate with other enterprises, as data would come from different sources at both the company level and the business partners' level. Further, when speaking about connected digital twins, the need for a deeper study and analysis of the interoperability needs and capacities becomes even more important.

Enterprise interoperability (EI) is the ability of an organization to work with other organizations without special effort. The capability to interact and exchange information both internally and with external organizations (partners, suppliers, customers, citizens…) is a key issue in the economic and public sectors. It is fundamental in order to produce goods and/or services quickly and at lower cost, while ensuring higher levels of quality, customization, services, and security. Industry's need for EI is one of the key drivers for research into connected digital twins. Such EI research will analyze and point out the actions that enterprises should carry out in order to ensure the conditions needed for fully efficient connected digital twins. Industry therefore found in I-ESA'22, whose motto was "Enterprise Interoperability Through Connected Digital Twins", an outstanding opportunity to exchange both experiences and problems regarding business interoperability in daily operations.

The I-ESA conferences have been recognized as a tool to lead and generate an extensive research and industrial impact in the field of interoperability for enterprise systems and applications. I-ESA brings together the world's leading researchers and practitioners in the area of enterprise interoperability, and it is a unique forum for the exchange of visions, ideas, research results, and industrial experiences, dealing with a wealth of interoperability research subjects.


In this sense, I-ESA'22 is the eleventh edition of the I-ESA conferences: Genève (2005), Bordeaux (2006), Madeira (2007), Berlin (2008), a special edition in Beijing (2009), Coventry (2010), Valencia (2012), Albi (2014), Guimarães (2016), Berlin (2018), and Tarbes (2020). I-ESA'22 was hosted by the Polytechnic University of Valencia, organized by the Research Center on Production Management and Engineering (CIGIP) and jointly promoted by the CIGIP and the INTEROP-VLab (European Virtual Laboratory for Enterprise Interoperability—http://www.interop-vlab.eu).

As can be deduced from the content of this book, top worldwide researchers and practitioners within the enterprise interoperability area added value to the different chapters. Additionally, it is worth mentioning the I-ESA'22 keynote speakers, who are widely known in academia, industry, and the European Commission's R&D community:

• Dr. Michael Grieves, internationally renowned expert in digital transformation, who originated the concept of the digital twin, USA;
• Mr. Rahul Tomar, Cofounder and Managing Director of the company DigitalTwin Technology GmbH, Germany;
• Dr. Javier Orozco, Project Officer within unit C3 in DG CNECT at the European Commission, Belgium.

This book is organized into the following five parts, which address the main current topics regarding enterprise interoperability through connected digital twins:

• Part I: Managing Uncertainty in Industry 4.0
• Part II: Stakeholders' Collaboration and Communication
• Part III: Digital Twins Analysis and Applications
• Part IV: Smart Manufacturing in Industry 4.0
• Part V: Standards, Ontologies, and Semantics

Acknowledgements The publication of this volume is financed by the Generalitat Valenciana, Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital under the program AORG 2022 [Reference Number: CIAORG/2021/119].

Raúl Rodríguez-Rodríguez, Valencia, Spain
Yves Ducq, Bordeaux, France
Ramona-Diana Leon, Valencia, Spain
David Romero, Monterrey, Mexico

Contents

Part I: Managing Uncertainty in Industry 4.0

Interoperability Challenges for a Corporate Interactive Situation Awareness System ... 3
Thomas Knothe, Julia-Anne Scholz, and Patrick Gering

Identifying Uncertainty in Large-Scale Industry 4.0 Software Projects Through Model-Based Requirements Engineering ... 13
Anna M. Nowak-Meitinger, Jan Mayer, Sanjana Kishor Marde, and Roland Jochem

A Machine Learning-Based System for the Prediction of the Lead Times of Sequential Processes ... 25
Antonio Lorenzo-Espejo, Alejandro Escudero-Santana, María-Luisa Muñoz-Díaz, and José Guadix

Application of a Visual and Data Analytics Platform for Industry 4.0 Enabled by the Interoperable Data Spine: A Real-World Paradigm for Anomaly Detection in the Furniture Domain ... 37
Alexandros Nizamis, Rohit A. Deshmukh, Thanasis Vafeiadis, Fernando Gigante Valencia, María José Núñez Ariño, Alexander Schneider, Dimosthenis Ioannidis, and Dimitrios Tzovaras

Predictive Study of Changes in Business Process Models ... 49
Adeel Ahmad, Mourad Bouneffa, Henri Basson, Chahira Cherif, Mustapha Kamel Abdi, and Mohammed Maiza

Part II: Stakeholders' Collaboration and Communication

Collaborative Platform for Experimentation on Production Planning Models ... 63
María Ángeles Rodríguez, Ana Esteso, Andrés Boza, and Angel Ortiz Bas

Enterprise E-Profiles for Construction of a Collaborative Network in Cyberspace ... 75
Michiko Matsuda and Tatsushi Nishi

A Multi-partnership Enterprise Social Network-Based Model to Foster Interorganizational Knowledge and Innovation ... 87
Ramona-Diana Leon, Raúl Rodríguez-Rodríguez, and Juan-Jose Alfaro-Saiz

Developing Performance Indicators to Measure DIH Collaboration: Applying ECOGRAI Method on the D-BEST Reference Model ... 99
Hezam Haidar, Claudio Sassanelli, Cristobal Costa-Soria, Angel Ortiz Bas, and Guy Doumeingts

Interoperability in Measuring the Degree of Maturity of Smart Cities ... 111
Luis Miguel Pérez, Raul Oltra-Badenes, Juan Vicente Oltra-Gutierrez, and Hermenegildo Gil-Gomez

Part III: Digital Twins Analysis and Applications

Analyzing the Decisions Involved in Building a Digital Twin for Predictive Maintenance ... 125
Hazel M. Carlin, Paul A. Goodall, Robert I. M. Young, and Andrew A. West

Digital Twin Concept in Last Mile Delivery and Passenger Transport (A Systematic Literature Review) ... 135
Maren Schnieder, Chris Hinde, and Andrew West

Recent Advances of Digital Twin Application in Agri-food Supply Chain ... 147
Tsega Y. Melesse, Valentina Di Pasquale, and Stefano Riemma

Improving Supply Chain and Manufacturing Process in Wind Turbine Tower Industry Through Digital Twins ... 159
María-Luisa Muñoz-Díaz, Alejandro Escudero-Santana, Antonio Lorenzo-Espejo, and Jesús Muñuzuri

Complementing DT with Enterprise Social Networks: A MCDA-Based Methodology for Cocreation ... 171
Raúl Rodríguez-Rodríguez, Ramona-Diana Leon, Juan-José Alfaro-Saiz, and María-José Verdecho

Part IV: Smart Manufacturing in Industry 4.0

Interoperability as a Supporting Principle of Industry 4.0 for Smart Manufacturing Scheduling: A Research Note ... 183
Julio C. Serrano-Ruiz, Josefa Mula, and Raúl Poler

A Deep Reinforcement Learning Approach for Smart Coordination Between Production Planning and Scheduling ... 195
Pedro Gomez-Gasquet, Andrés Boza, David Pérez Perales, and Ana Esteso

Modeling Automation for Manufacturing Processes to Support Production Monitoring ... 207
José Franco, José Ferreira, and Ricardo Jardim-Gonçalves

Interoperable Algorithms as Microservices for Zero-Defects Manufacturing: A Containerization Strategy and Workload Distribution Model Proposal ... 219
Miguel Á. Mateo-Casalí, Francisco Fraile, Faustino Alarcón, and Daniel Cubero

An Interoperable IoT-Based Application to On-Line Reconfiguration Manufacturing Systems: Deployment in a Real Pilot ... 229
Faustino Alarcón, Daniel Cubero, Miguel Á. Mateo-Casalí, and Francisco Fraile

Part V: Standards, Ontologies and Semantics

New Ways of Using Standards for Semantic Interoperability Toward Integration of Data and Models in Industry ... 243
Yves Keraron, Jean-Charles Leclerc, Claude Fauconnet, Nicolas Chauvat, and Martin Zelm

Business Context-Based Quality Measures for Data Exchange Standards Usage Specification ... 255
Elena Jelisic, Nenad Ivezic, Boonserm Kulvatunyou, Scott Nieman, and Zoran Marjanovic

An Ontology of Industrial Work Varieties ... 267
Antonio De Nicola and Maria Luisa Villani

Combining a Domain Ontology and MDD Technologies to Create and Validate Business Process Models ... 279
Nemury Silega, Manuel Noguera, Yuri I. Rogozov, Vyacheslav S. Lapshin, and Anton A. Dyadin

A New Polyglot ArchiMate Hypermodel Extended to Graph Related Technologies ... 289
Nicolas Figay, David Tchoffa, Parisa Ghodous, Abderrahman El Mhamedi, and Kouami Seli Apedome

Normalized City Analytics Based on a Semantic Interoperability Process ... 301
Tiago F. Pereira, Nuno Soares, Mariana Pinto, Carlos E. Salgado, Ana Lima, and Ricardo J. Machado

Author Index ... 313

Part I: Managing Uncertainty in Industry 4.0

Interoperability Challenges for a Corporate Interactive Situation Awareness System
Thomas Knothe, Julia-Anne Scholz, and Patrick Gering

Abstract In this contribution, interoperability is considered from the perspective of resilient production. One of the key objectives of resilient production is to always have an up-to-date situation picture in order to monitor internal capabilities and reflect them against external influences. Today's widely applied solution concepts for production management, like ERP and MES (specifically for production), have the ambition to make exactly these aspects transparent. But in fact, they are insufficient to cover the individuality of major factors in crisis management. Typical systems for managing situation awareness come from defense, police, or the management of critical infrastructure and are designed to manage unforeseen crises, including the handling of these individual factors. Furthermore, as detected in the recent crises, production management systems have to extend their capabilities from mere monitoring and analysis to supporting the creation and management of measures along the entire resilience process. Increasing the capabilities of such an instrument, called "Corporate Interactive Situation Awareness Management System" (CISAMS), brings interoperability challenges with it. These can jeopardize the benefit and usefulness of a CISAMS, which can lead to very critical situations in crisis management. In this paper, the requirements and the functions of a CISAMS that need to be extended are discussed, and the related interoperability challenges are identified. Afterward, a model-based solution is presented, and experiences from its implementation are provided.

Keywords Surveillance map · Enterprise model · Model-based configuration

T. Knothe (B) · J.-A. Scholz · P. Gering Fraunhofer Institute for Production Systems and Design Technology, Pascalstraße 8-9, 10587 Berlin, Germany e-mail: [email protected] J.-A. Scholz e-mail: [email protected] P. Gering e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_1


1 Introduction—Situational Awareness¹

In various sectors of the economy, companies are facing increasing vulnerability toward unforeseen disruptive events [1]. The sudden loss of suppliers and customers as well as frequent changes in regulations at short notice present companies with considerable challenges [2]. Potential uncertainties and complexity complicate the planning and the implementation of measures. Here, companies can learn from public administration and military organizations. Clausewitz highlighted in the nineteenth century the importance of a common situation picture for networked forces [3]. Since Clausewitz, the creation and use of situational awareness have been of particular importance to emergency personnel of the military, police, and health services for the success of networked action in crises. The starting and end point of the military command and control process as well as the police planning and decision-making process is always the situation picture, since it forms the foundation of the entire operation [5]. The relevant data must be identified, captured, structured, and correlated. The second step is a structured analysis of the information obtained. The result leads to at least one, ideally to several, decision options, within which the respective advantages and disadvantages are pointed out and weighed. On the basis of the decision taken, the implementation planning—which can be seen as a constantly recurring control loop—pursues the goal of a continuous actual-target comparison and makes it possible to react promptly to the changing situation.

The goal of networked operations is to link the information, command, and control systems from the infantryman's helmet camera to a military satellite as well as across all military organizations and units in order to aggregate as much relevant information as possible into a common situation picture. The information is then made available to the organizational units on a user-specific basis—for example, via an integrated display system in the infantryman's helmet. This enables individual decentralized systems and organizational units to work in a coordinated manner and within a shorter time. As flight control systems gradually became more automated and pilots started losing track of flight operations and aircraft status, concepts for situation awareness were further developed for the aviation industry [6]. One of the dominating definitions and theories was given by Endsley: the three-level model approach describes the three steps for information processing, starting from "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and a projection of their status in the near future" [7].

1

Text in this section is largely reproduced from [4], released under a CC BY 3.0 DE license; https://doi.org/10.15488/11272.


Fig. 1 Comparison between situation awareness and production in terms of information processing

2 Extension of Situation Awareness for Production²

What manufacturing companies have in common with the sectors described above is the bias between a decision-maker's understanding of the system status—the company and its environment—and the actual system status. This is mainly driven by the complexity of environmental influences, like natural disasters, climate change, and geostrategic as well as economic policy developments. In order to create a basis for further investigations and decisions for action, a large amount of information from a wide variety of sources must be collected and evaluated within a short time. This information must then be compared with existing findings, correlated, and communicated to responsible parties [4].

Hence, it is essential for companies to monitor the execution of their business in real time and to consistently have an overview of the current state of the company's liquidity and its facilities [8, 9]. Therefore, many companies today make use of so-called information systems, which are often summarized under the term management cockpit. However, the information processing is largely shaped by the hierarchical system of the automation pyramid [10]. This enables companies to provide operational data from process execution in production, to assign it to the corresponding order data and production specifications, and to visualize it within the scope of performance analysis [4].

In total, the architecture and the application of production management systems and situation awareness are completely different, as can be seen in Fig. 1. Moreover, while task and system factors like system capability and automation drive both concepts equally, individual factors like goals are specific to the event situation.

2 Text in the first two paragraphs is largely reproduced from [4], released under a CC BY 3.0 DE license; https://doi.org/10.15488/11272.


In order to react to specific events, situation awareness systems must be adapted to these individual factors. For classical production management systems, like MES, these factors are not relevant, which means a dynamic adaptation is not foreseen. Furthermore, the "Decision" and "Action" perspective is not part of typical situation awareness monitoring systems. But it is exactly the management of these activities in relation to the awareness functions that is especially important in crises, when several interconnected issues have to be managed at short notice and at the same time. This happened during the first countermeasures against SARS-CoV-2, for example, when companies had to organize and implement novel hygiene concepts, stabilize the supply chain, compensate for customer losses, and organize short-time work allowances.

3 Interoperability Requirements for Corporate Interactive Situation Awareness System

Based on the challenges of using situation awareness monitoring to support crisis handling for production processes, the major requirements for extending current solutions are provided in Table 1. At the functional level, the reflective interacting views are prominent. Reflective views have long been used in enterprise cockpits and management boards [11]. In the frame of enterprise modeling, the concept of reflective views is used for connecting different modeling views as well as in active modeling, as proposed by Lillehagen and Krogstie [12]. A typical representation of reflective views in the area of enterprise modeling is the process assistant, which is derived from MO2GO models. In Fig. 2, the principle structure of reflective views is shown using an example from an airline maintenance, repair, and overhaul service provider.

Table 1 Overview of major interoperability requirements

Non-functional requirement

Provision of interacting views

• Link data from manufacturing management systems

• Cover the major functions of crisis handling: “perception”, “comprehension”, and “projection” (as can be seen as major parts of situational awareness in Fig. 1) of the business environment from outside and inside the company (performance, liquidity, capacity) • Decision view to show the influencing parameter and connected decisions to derive activities • Overview and details of planned and running actions before, in, and after crises

• Views have to be adaptable to the individual factors along situational awareness functions (Fig. 1) • Views have to be active, which means that in the planning view of actions, new actions have to be defined inside the view • Organize the formal connection between the views, especially when they have to be adapted by the individual factors • Link data from outside sources, e.g., news portals in a way that only reliable and relevant data will be covered

Interoperability Challenges for a Corporate Interactive Situation …

7

Fig. 2 Principle structure of reflective views in the process assistant (prozessassistent.ipk.fraunhofer.de)

The reflective views as generated by the process assistant out of the enterprise model are using the connection between the different enterprise model objects. In the given example there are specific views of class elements regarding the generic IEM modeling objects: products, “resources”, “order”, and “activities” defined. For instance, “role” as a subclass of “resource” has an own hierarchical view, but is at the same time reflected in the process view for being responsible for specific processes. As can be seen in Fig. 2, the specific role “CAMO engineer” has a structural own view but as well a process view. Further on the CAMO engineer is assigned as responsible for a certain business process. The same reflective views can be generated, e.g., for documents, IT applications, or inputs and outputs of process steps. As enterprise models are normally used to configure in principle static views, they can be enabled to provide dynamic content to the respective user. For CISAMS, this is not the case anymore. In [13] it is described that processes and conditions to be observed in an enterprise cannot completely be foreseen. Processes suddenly become critical because of specific situations and respective objectives. For example in the recent pandemic situation, hygienic measures (e.g., distance rule between employees) in production had to be established quickly in order to ensure business continuity. These new business objectives have to drive the entire concept of reflective views. The major challenge is the very fast change of the entire view system with complete data flow and system connections according to dynamic changing of individual factors as described in Fig. 1 and in the right column in Table 1. This has to be performed mostly without IT expert knowledge. On the other hand, during crisis management, views have to be active, as they should not only be passive. The interaction has to be enabled from two different perspectives. The first perspective is derived from the projection step of situation awareness (see in Fig. 1). Here the definition, evaluation, and management of possible scenarios have to be supported. As mentioned above, to create such a scenario evaluation model, the required parameters, the related KPIs, and the assignment to the processes must be established according to suddenly changed conditions and objectives. The dynamic conditions come from inside and

8

T. Knothe et al.

outside of the company and have to be formalized into evaluation models. While internal conditions can be collected comparably easily, the collection, filtering, and assignment of information to the right processes are much more problematic. On top of this, the individual evaluation of external data related to the reliability and actuality is challenging. The second interaction is the definition of measures and their reflection to standard and exception processes. To conclude, a solution for CISAMS is requested that can create and operate a reflective and interactive view system for strategic, tactical, and operational management of normal and crisis business according to dynamically appearing business goals and conditions.

4 Solution Concept The existing solution architecture of MO2 GO and process assistant is used as a basis, because in principle major functions are available: • reflective views generated from the enterprise model; • model configured data access to inside and outside data sources [14]; • model-based operations support of individual processes [15]. The most difficult requirement is to enable the dynamic configuration of the views including all data flows and connections. Here the Goal Modeller of the MO2 GO system is used (see highlighted elements in Fig. 3). The Goal Modeller combines the easy creation of risks objectives and their reflection to instruments and drivers with integrated KPI definition [16] (see Fig. 4). With graphical and textual modeling capabilities, the setup of the relationships between objectives, enabling drivers, and KPIs is possible in a formal way. For

Fig. 3 CISAMS solution architecture

Interoperability Challenges for a Corporate Interactive Situation …

9

Fig. 4 Hierarchical objectives view with drivers and KPIs as part of MO2 GO Goal Modeller

handling crises, either predefined objective setups can be used or new objectives defined. Existing ones can be declined in order to find the focus, adequately to the actual situation. Based on objectives, their KPIs, and assignment to processes, the setup of the strategic views must be generated. In Fig. 5 a resulting CISAMS is shown. There are active items, like the measures definition field, which is connected to process definitions. In the upper right fields, simulation analysis is addressed. On left upper fields the risks and the status of the company regarding the level of preparation are reflected in a passive view. Currently the Fraunhofer IPK is running several publicly funded and industrial research projects where CISAMS are in focus. Overall, there is a huge request to apply the mentioned active and fast configurable views. Nevertheless, the following requirements are not yet fulfilled:

Fig. 5 Resulting surveillance map

10

T. Knothe et al.

• No adequate operational decision view with relevant information tracking to check if the information status has been changed. In addition, the interconnection of different decisions cannot be made visible. • To have all the relevant views on one screen at the same time without causing an overflow of information for the users. • Configuration requirements, as it still needs some programming. • The interaction map is not yet guiding people to operate in crisis situations. The original feedback from an industrial workshop participant: “It is just a collection of useful data”. On the other hand, the capabilities of CISAMS have already been identified to be suitable not only for operations in times of crisis, but also in normal operations.

5 Conclusion and Outlook The vulnerability toward unforeseen disruptive events affects companies in all sectors. The experience of the recent crisis shows that companies that are able to manage the major challenges very quickly and systematically have the best chance not only of surviving, but also of emerging from a crisis stronger and better prepared for new challenges. The core innovation of CISAMS is the integration of a flexible formal goal model-oriented approach for fast configuration of interactive reflective management views to perform operations before, in, and after crises. With the integration of the fast-changeable goal model, this approach is covering the individual factors, as requested for situational awareness. Nevertheless, implementation experience has shown that there are significant shortcomings, both in terms of functionality and user experience. The first is the inadequate decision definition and tracking support. Here, a consideration of the ECOGRAI method seems to be suitable, even though the method as it is developed today takes too much time with the 5-Phase approach and requires system experts to know how it is applied in the crisis modus [17, 18]. But especially the GRAI constructs: decision frame and decision link can be used for decision planning and tracking. Furthermore, the integration into the resilience life cycle is a candidate for a guided decision support during crisis management. Further on, new concepts have to be envisaged for the configuration of the reflected views, which fulfill usability requirements coming from different users by design. Existing solutions can be taken into consideration from model-based generation of performance cockpits [19]. As mentioned above, the collection, evaluation, and management of data coming from outside are even more challenging. Especially the evaluation regarding relevance, reliability, impact, and completeness is exceedingly difficult.

Interoperability Challenges for a Corporate Interactive Situation …

11

References 1. White Paper “RESYST”. Resiliente Wertschöpfung in der produzierenden Industrie—innovativ, erfolgreich, krisenfest. https://www.produktion.fraunhofer.de/content/dam/produktion/ de/dokumente/RESYST_WhitePaper.pdf. Accessed 2021/09/12 2. Khan, S.M.: Securing Semiconductor Supply Chains. Center for Security and Emerging Technology (2021). https://doi.org/10.51593/20190017 3. von Clausewitz, C.: On War. Everyman’s Library. Everyman’s Library, New York (1993) 4. Scholz, J.-A., Oertwig, N., Delleske, C., Fischer, K., Knothe, T., Stolz, A., Kohl, H.: Situation awareness monitor and liquidity assessment for enterprise resilience management. In: Herberger, D., Hübner, M. (eds.) Proceedings of the Conference on Production Systems and Logistics: CPSL 2021, pp. 32–42. Institutionelles Repositorium der Leibniz Universität Hannover, Hannover (2021). https://doi.org/10.15488/11272 5. Schaub, H.: Vernetzte Operationsführung zur Unterstützung militärischer Stäbe. In: Hofinger, G., Heimann, R. (eds.) Handbuch Stabsarbeit. Führungs- und Krisenstäbe in Einsatzorganisationen, Behörden und Unternehmen, pp. 291–295. Springer, Heidelberg (2016) 6. Stanton, N., Chambers, P., Piggott, J.: Situational awareness and safety. Saf. Sci. 39(3), 189–204 (2001) 7. Endsley, M.R.: Design and evaluation for situation awareness enhancement. Proc. Hum. Factors Soc. Annu. Meet. 32(2), 97–101 (1988). https://doi.org/10.1177/154193128803200221 8. Kung, P., Hagen, C., Rodel, M., Seifert, S.: Business process monitoring & measurement in a large bank: challenges and selected approaches. In: Proceedings of the 16th International Workshop on Database and Expert Systems Applications (DEXA’05), pp. 955–961. IEEE, Copenhagen (2005) 9. Harmon, P., Wolf, C.: The state of business process management 2014. Business Process Trends. http://www.bptrends.com/bpt/wp-content/uploads/BPTrends-State-of-BPMSurvey-Report.pdf. Accessed 2021/03/10 10. Heinrich, B., Linke, P., Glöckler, M.: Grundlagen Automatisierung. Sensorik, Regelung, Steuerung, 2nd edn. Springer, Wiesbaden (2017) 11. Kapp, R., Le Blond, J., Schreiber, S., Pfeffer, M., Wettkämpfer, E.: Echtzeitfähiges FabrikCockpit für den produzierenden Mittelstand. Ind. Manage. 22(2), 49–52 (2006) 12. Lillehagen, F., Krogstie, J.: Active Knowledge Modeling of Enterprises. Springer, Heidelberg (2008) 13. Knothe, T., Oertwig, N., Woesthoff, J., Sala, A., Lütkemeyer, M., Gaal, A.: Resilienz durch dynamisches Prozessmanagement. Z. wirtsch. Fabr. 116(7–8), 520–524 (2021) 14. Jaekel, F.-W., Torka, J., Eppelein, M., Schliephack, W., Knothe, T.: Model based, modular configuration of cyber physical systems for the information management on shop-floor. In: Debruyne, C., Panetto, H., Weichhart, G., Bollen, P., Ciuciu, I., Vidal, M.-E., Meersman, R. (eds.) On the Move to Meaningful Internet Systems. OTM 2017 Workshops. Lecture Notes in Computer Science, vol. 10697, pp. 16–25. Springer, Cham (2018) 15. Gering, P.: Industrie 4.0 aus dem Koffer. https://www.ipk.fraunhofer.de/content/dam/ipk/IPK_ Hauptseite/dokumente/themenblaetter/um-themenblatt-industrie40-koffer-web.pdf. Accessed 2021/03/10 16. Jäkel, F.-W., Schlosser, S., Otto, B., Petrovic, D., Niknejad, A.: Seamless interrelation between business strategies and tactical planning. In: Mertins, K., Jardim-Gonçalves, R., Popplewell, K., Mendorca, J.P. (eds.) Enterprise Interoperability VII. Proceedings of the I-ESA Conferences, vol. 8, pp. 321–331. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-319-309576_26 17. 
Doumeingts, G., Clave, F., Ducq, Y.: ECOGRAI—a method to design and to implement performance measurement systems for industrial organizations—concepts and application to the maintenance function. In: Rolstadås, A. (ed.) Benchmarking—Theory and Practice. IFIP Advances in Information and Communication Technology, pp. 350–368. Springer, Boston, MA (1995)

12

T. Knothe et al.

18. Lobna, K., Mounir, B., Hichem, K.: Using the ECOGRAI method for performance evaluation in maintenance process. In: 2013 International Conference on Advanced Logistics and Transport (ICALT), pp. 382–387. IEEE, Sousse 19. Oertwig, N., Gering, P.: The industry cockpit approach: a framework for flexible real-time production monitoring. In: Mertins, K., Jardim-Gonçalves, R., Popplewell, K., Mendonça, J.P. (eds.) Enterprise Interoperability VII. Proceedings of the I-ESA Conferences, vol. 8, pp. 175– 184. Springer, Cham (2016)

Identifying Uncertainty in Large-Scale Industry 4.0 Software Projects Through Model-Based Requirements Engineering Anna M. Nowak-Meitinger , Jan Mayer , Sanjana Kishor Marde, and Roland Jochem

Abstract Determining relationships between user requirements and software product functionalities enables an early-stage requirements evaluation and project risk assessment. This leads to preliminary uncertainty detection which can minimize gaps for the fulfillment of demanding needs. Therefore, a multi-step approach is introduced to elicit user requirements, define functional specification of software products, and classify project parts into four different uncertainty categories. The allocation is based on the determined relationship and considers requirements engineering as well as model-based approaches. Implementing the introduced approach provides distinct advantages, like the inclusion of unlimited number of project parts, the realization in any granular environment, and the unified communication between various numbers of stakeholders. Additionally, a continuous evaluation is given to the modular characteristics of the assessment criteria. Project management capabilities need to be considered during the simultaneous application of this methodology. This can have crucial impact on the achievement of desired project outcomes, especially in the context of the transition to Industry 4.0. Keywords Model-based requirements engineering (MBRE) · Industry 4.0 · Industrial data services · Software development

A. M. Nowak-Meitinger · J. Mayer · S. K. Marde · R. Jochem (B) Technical University of Berlin, Pascalstr. 8-9, 10587 Berlin, Germany e-mail: [email protected] A. M. Nowak-Meitinger e-mail: [email protected] J. Mayer e-mail: [email protected] S. K. Marde e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_2

13

14

A. M. Nowak-Meitinger et al.

1 Introduction Large-scale research and innovation projects focusing on industrial software solutions for the transition to Industry 4.0 often consist of multi-disciplinary teams from different partner organizations to address complex challenges [1]. To avoid a lack of transparency and to ensure interoperability at the beginning of the product development process, a systematic and efficient requirements engineering (RE) process from the identification of initial user needs to the development and implementation of solutions is required [1]. Early identification and analysis of potential synergies, gaps, and redundancies of solution functionalities in comprehensive software solutions with a high number of interfaces is crucial in the initial phase of development to ensure added value and interoperability in complex applications in factories [2]. However, procedures of warning systems are not clearly examined in RE and related methodologies. In addition, continuous monitoring of project items in terms of their requirement fulfillment can facilitate development strategies to meet user expectations. This paper presents a model-based requirements engineering (MBRE) approach that aligns requirements and given functionalities to enable project partners of largescale Industry 4.0 software projects to identify uncertainties and risks early during development. Precise interface specifications and detailed requirements descriptions should ensure both the interoperability of software solutions and the consideration of user needs. Risk mitigation strategies were investigated. Thus, the presented approach aims to reduce and handle the complexity of solution structures and communication gaps. In addition, an evaluation framework with combined key performance indicators (KPIs) should support the identification of critical solution parts to provide orientation for designing, building, and implementing software solutions.

2 Related Work Inkermann et al. [3] investigated a MBRE approach to support the development of complex systems in vehicle design using diagrams to describe use cases, functions, system structure as well as requirements elicitation and allocation. The approach includes several stakeholder perspectives and supports their collaboration through modeled system descriptions. Mordecai and Dori [2] proposed a MBRE framework considering stakeholder requirements with the help of Object-Process Methodology. This framework provides a transformation of stakeholder requirements to system requirements and subsequently to the functional-structured architecture. A different MBRE approach is described by Holt et al. [4] considering contextbased RE. The main objective is to form a framework for RE of systems of systems (SoS) consisting of three elements. An ontology with defined key concepts and terminology, a framework with predefined required views, and the process set with


defined processes are needed to generate the views in the framework. Better visualization of the requirements sets, flexibility in the process methodology, and scalability of the project are some benefits of the application of this approach. Brace and Ekman [5] presented CORAMOD, a checklist-oriented, model-based requirements analysis approach that supports the development of requirements and functions to obtain a complete picture of the problem description and possible solutions. The software- and hardware-related use case scenario deals with reducing energy consumption in mobile work machines. The intensive and very detailed modeling with SysML [6] leads to complete results. Wanderley et al. [7] proposed a different approach suitable for agile development in research using mind maps in the SnapMind framework. With this framework, the goal of better visualization and stakeholder engagement was achieved. Sadovykh et al. [1] published an experience report on designing, developing, and implementing MBRE approaches carried out in three large collaborative projects. In all projects, a large number of partners with different backgrounds are involved and a high number of requirements is handled. The report concludes that MBRE approaches can ease the communication in large-scale software projects and facilitate scalability, heterogeneity, traceability, and automation. MBRE approaches are essential to achieve project KPIs and customer satisfaction [2]. Nevertheless, the application of frameworks is limited due to prerequisites like modeling tools and their languages for developing requirements [1, 4]. Additionally, continuous evaluation requires the requirements engineer to involve multiple disciplines, e.g., software development [7]. Furthermore, to the best of our knowledge and belief, no approach has implemented an early warning step with structural classification procedures. Thus, the methodology presented in Sect. 3 describes the functional specification and the visualization of requirements to classify distinct parts of the software project based on requirement KPIs in the early phase of the project.

3 Approach The introduced approach is based on established standards and guidelines. The goal is to reduce planning capacities and to secure a continuous version control of the solution during the development process by tracking restructurings made. Furthermore, Inkermann et al. [3] describe the necessity of managing interdisciplinary teamwork during the product development process, which is supported by the following approach.


3.1 Methodological Framework Requirements Engineering (RE). Established standards and guidelines from the areas of RE, systems and software engineering, and methods for designing technical products serve as a basis for the introduced approach. The RE standards which were used are ISO/IEC/IEEE 29148 [8], ISO/IEC/IEEE 12207 [9], and ISO/IEC/IEEE 15288 [10]. Relevant VDI (Association of German Engineers) guidelines such as VDI2221 [11, 12] and VDI2206 [13, 14] also serve as principle for the procedure. In particular, the V-model of VDI 2206 and its validation [14, 15] underline the importance of RE during the process of product development. Model-Based Systems Engineering (MBSE). The MBSE approach [16] is the formalized application of systems modeling to support, but is not limited to, system requirements activities [17] and is adapted to the presented approach in order to perform systems modeling with its requirements, relations, and dependencies. Systems engineering is an interdisciplinary and holistic approach for the definition, documentation, verification, and validation of system requirements with respect to all aspects of the life cycle of the system [17, 18]. Model-based principles and techniques can provide relevant abstraction, genericity, or reusability capabilities [19]. In particular, function-based systems engineering (FBSE) [18] is used for the presented approach to define and document software solutions. Furthermore, the system modeling language SysML [6] is used to model requirements and functions in a comprehensible form. Applying this framework achieves an extensive understanding of the system by describing it independently of any discipline. As requested by Zafirov [20], the system model is centrally available, formal (unambiguous, computer interpretable), complete, coherent, and consistent. Consequently, the relationship information between system elements is explicitly included. MBRE is involved in the requirement engineering activities with the help of a model-driven approach, which can be beneficial in terms of a large number of requirements, multiple stakeholders involved, and general usability [1].

3.2 Procedure and Artifacts The approach is carried out in a continuous, iterative process of requirements elicitation and analysis. This requires parallel and integrative work on requirements, structures, interfaces, and specifications. Interactive, technical discussions between all partners involved are an important basis for the elicitation, analysis, and validation of stakeholder requirements. The common understanding of stated requirements and provided solution functionalities is a core issue in this process. During iterative steps, there are different types of artifacts created and continuously revised, such as the user requirement diagrams, functional structure diagrams (FSD), mapping diagrams, and lists.


Requirement Diagrams. Requirement diagrams are created by demanders to elicit and structurally describe their requirements. Resulting from multiple iterations, detailed information about atomic requirements is achieved [21]. Function Structure Diagrams (FSD). FSDs serve as a supporting tool in MBRE procedures by identifying essential problems, describing essential functions, and showing possible outlines of a solution. This facilitates discussions on divisions of modular products and maps the functional description. The system functions are described and hierarchically decomposed into subfunctions to form the functional architecture. By connecting the functions with arrows, the data and information flow within the solution, including the input and output, is defined [22]. The procedure is consistent with the functional flow block diagrams in FBSE [18]. Mapping Diagrams. To describe how requirements are met by software functionalities, the relationships between them are documented in mapping diagrams. These diagrams expose both how requirements are met from the demander perspective and which requirements are placed on the system functions. As a foundation, the functional architecture is taken from the FSD. The mapping diagrams are a core element of the functional specification of the software solution and enhance the basis for the design and development process. In addition, they serve as an important means to measure and validate the outcome of the solution development.

3.3 Evaluation Suitable evaluation criteria for assessing the results of the requirements analysis and functional specification are methodically developed by using brainstorming methods, e.g., 6-3-5 brainwriting. Furthermore, the outcome is compared with the criteria for FBSE [18]. Two main evaluation criteria are identified and can be described as follows:

• Precision: Ratio of requirements mapped to subfunctions at the lowest level of the architecture to the complete set of solution-related requirements. This is used to identify where further specification by demanders and software providers is required.

$$\text{Precision} = \frac{\text{Number of requirements mapped to low-level system functions}}{\text{Total number of mapped requirements}}. \quad (1)$$

• Interface specification: Interfaces to other solutions are examined and counted to identify dependencies and necessary specifications during software development to ensure interoperability.

For a comprehensive set of software solutions, a visualized status comparison indicates possible risks and necessities that software providers might face during their development phase concerning requirements. A diagram combining the precision (y-axis), the number of interfaces (x-axis), and the number of requirements (size of circles) results in a classification into four different uncertainty areas (four quadrants). These areas are defined as follows:

• (A) A precise mapping with a lower number of interfaces, which leads to a detailed development description without much interaction with other solutions.
• (B) A precise mapping with a higher number of interfaces, which leads to a detailed development description, but higher organizational effort for the interface definition.
• (C) A less precise mapping with a lower number of interfaces, which gives freedom in software development by reaching overarching goals but also leaves some low-level functionalities unspecified by requirements.
• (D) A less precise mapping with a higher number of interfaces, which leads to a generic development description from stakeholders and a higher interaction with other software solutions from a requirements perspective; in this field, a higher planning effort and risk for the positive outcome of the project is included.
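To make the classification concrete, the following minimal Python sketch computes the two KPIs and assigns a project part to one of the four areas. The function names are invented, the 0.5 boundary between "lower" and "higher" is an illustrative assumption rather than part of the methodology, and the example values are the ones later worked out for the visualization software in Sect. 4.4.

```python
# Illustrative sketch of the KPI computation and quadrant assignment described above.
# The 0.5 quadrant boundary and all function names are assumptions for demonstration.

def precision(low_level_mapped: int, total_mapped: int) -> float:
    """Share of requirements mapped to low-level system functions (Eq. 1)."""
    return low_level_mapped / total_mapped if total_mapped else 0.0

def normalized_interfaces(interfaces: int, max_interfaces: int) -> float:
    """Number of interfaces scaled to the largest value in the project (Eq. 2)."""
    return interfaces / max_interfaces if max_interfaces else 0.0

def uncertainty_area(prec: float, norm_if: float, boundary: float = 0.5) -> str:
    """Assign a software solution to area A, B, C, or D based on the two KPIs."""
    if prec >= boundary:
        return "B" if norm_if >= boundary else "A"
    return "D" if norm_if >= boundary else "C"

# Example with the visualization software figures from Sect. 4.4:
p = precision(low_level_mapped=8, total_mapped=11)         # 0.73
n = normalized_interfaces(interfaces=6, max_interfaces=9)   # 0.67
print(uncertainty_area(p, n))                                # "B"
```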

4 Case Study In order to validate our approach, a simplified case study follows the determined actions to classify individual project items into the proposed uncertainty areas. As an advantage over other methodologies, such as [23], the number of contained software solutions and partners is not limited, and the approach provides interpretable visualization. In this case study, a fragment of a large software project with various stakeholders is presented. The chosen software solution is an established part of comprehensive software projects and provides visualization technologies.

4.1 Requirements’ Elicitation of Software Demanders To gather expected outcomes from demanding stakeholders, both lists and SysML documents are involved in the process of requirements elicitation. The main contribution from demanding partners is the semantic description of their needs concerning the functionalities of the software solution. Therefore, main requirements from all users are gathered and refined iteratively to acquire detailed descriptions. The number of iterations for redefining the requirements is not limited. In addition, personal discussions and the use of diagrams facilitate the communication among the involved stakeholders. Table 1 shows a set of requirements after the first iteration consisting of the semantic description and its identification (ID). The ID serves both to identify the requirement and to provide traceability to its origin. In this case, the first requirement (r1) from software demander A (SDA) is equal to the ID SDAr1. As the first step of the elicitation, formal requirements based on ISO/IEC/IEEE 29148 [8] are not mandatory. Table 1 provides an example of requirements of SDA. Likewise, a requirements diagram of SDA based on SysML standards of the redefined stakeholder requirements can be created and used as a knowledge base and means of communication [24].

Table 1 Requirements table of software demander A (SDA)

ID    | Requirement (informal)
SDAr1 | Define the correct data repository
SDAr2 | Data scalability and/or criteria while changing the machine model or test parameters (e.g., temperature conditions, customer operating pressure, …)
SDAr3 | Conduct an uncertainty analysis with the objective of determining the algorithm’s reliability
SDAr4 | During the data capture, acquire the machine condition limitations (test cycles)
SDAr5 | For each algorithm, determine how the data should be captured in respect of sample frequency, accuracy, and other factors
SDAr6 | Building a self-diagnosis interface available for software demander support
SDAr7 | Developing a self-diagnosis interface available for the customer
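Purely to illustrate the ID scheme, the informal requirements of Table 1 could be held as simple records in which the demander prefix and the running number stay separable for traceability. The structure below is a hypothetical sketch, not the project's actual data model.

```python
# Hypothetical sketch of traceable requirement records; not the project's data model.
from dataclasses import dataclass

@dataclass
class Requirement:
    demander: str   # e.g., "SDA" for software demander A
    number: int     # running number within that demander
    text: str       # informal semantic description

    @property
    def req_id(self) -> str:
        # The ID concatenates origin and number, e.g., "SDAr1", keeping traceability.
        return f"{self.demander}r{self.number}"

reqs = [
    Requirement("SDA", 1, "Define the correct data repository"),
    Requirement("SDA", 3, "Conduct an uncertainty analysis with the objective of "
                          "determining the algorithm's reliability"),
]
print([r.req_id for r in reqs])  # ['SDAr1', 'SDAr3']
```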

4.2 Functional Specification of Software Providers Alongside the elicitation and the redefinition of the requirements, software providers are requested to contribute FSDs in order to communicate their functionalities. At least two iterations are recommended. To create such FSDs, the approach of [22] is mainly followed. However, multiple prerequisites are necessary. Specifically, software providers should perceive their provided software solution as an interaction between input and output connections. The structure of connected system functions should follow the data flow within the software. System functions should include internal hierarchical relationships to achieve a detailed description. The FSD (Fig. 1) presents the visualization software of the project. According to the proposed methodology, the FSD contains precise descriptions of its functionality in linked documents. As main functions, purple boxes surround lower-level system functions. Additionally, logical behavior from business process modeling languages, e.g., decisions, can be inserted. Counting the number of interfaces, the result is six, coming only from the input side.
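Purely for illustration, a function structure such as the one in Fig. 1 can be represented as a small directed graph in which external inputs and outputs are explicit nodes; counting the edges that enter the solution from outside then yields the interface figure used above. The node names and structure below are invented and are not taken from the actual FSD.

```python
# Invented example of representing an FSD as a directed graph and counting external
# input interfaces; the function and interface names are not from the real diagram.
edges = [
    ("ext:sensor_stream", "ingest data"),
    ("ext:erp_orders", "ingest data"),
    ("ingest data", "preprocess data"),
    ("preprocess data", "render dashboard"),
    ("render dashboard", "ext:user_browser"),
]

solution_functions = {"ingest data", "preprocess data", "render dashboard"}

# An input interface is an edge from an external node into a solution function.
input_interfaces = [(src, dst) for src, dst in edges
                    if src.startswith("ext:") and dst in solution_functions]
print(len(input_interfaces))  # 2 external inputs in this toy example
```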


Fig. 1 Function structure diagram of visualization software solution

4.3 Requirements Mapping to System Functions Subsequently, the gathered requirements are connected to the related functions of the provided software solution. Therefore, the FSD is converted into a functional architecture which enables an overview of the provided software solution at different granularity levels. Mapping to a top level of the architecture is allowed only if the requirements cannot be related to a lower level. This can be due to the generic description of both requirement and system function or the unavailability of low-level system functions. To ensure accurate mapping and to decrease the number of uncertain connections, additional meetings with experts are recommended. Figure 2 presents information collected from all stakeholders across the project. Consequently, multiple stakeholder IDs are involved in the functional architecture. The number of requirements mapped to the lowest level is eight, while three requirements were mapped to a higher level. Thus, the total number of requirements mapped to the visualization software is eleven.

Fig. 2 Requirements mapping to system functions in Cameo Systems Modeler

4.4 Status Comparison and Uncertainty Classification As the last step and to create the visualization of the overall status comparison, the normalized number of interfaces and the precision are required. This includes an intermediate calculation step which enables the comparability across all project parts. A normalization calculation is used to transform the number of interfaces onto the same scale as the precision. Let $x$ be the number of interfaces and $x_{\max}$ be the maximum number of interfaces connected to an individual project part. Then $x_{\text{norm}}$ results as the normalized number of interfaces:

$$x_{\text{norm}} = \frac{x}{x_{\max}}. \quad (2)$$

Furthermore, the total number of linked requirements for each project item is relevant to complete the status comparison. Consequently, the combination of the adjusted interface KPI and the requirements precision allows the classification into each uncertainty area. For instance, taking the mapped requirements from the visualization software, the KPIs are calculated as follows:

$$\text{Precision} = \frac{8}{11} = 73\%. \quad (3)$$

$$\text{Normalized Interfaces} = \frac{6}{9} = 67\%. \quad (4)$$

Calculating the evaluation criteria for all of the provided software solutions results in the classification shown in Fig. 3. As highlighted in the figure, the visualization software is part of area B. As described in Sect. 3.3, this means that a precise mapping of requirements to solution functionalities is provided, while a higher number of interfaces occur. This leads to a detailed development description and at the same time to a higher organizational effort for the interface definitions. The size of the bubbles refers to the total number of requirements which were assigned to the software solution.
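A status-comparison chart like Fig. 3 can be reproduced with standard plotting tools. The sketch below assumes the three KPIs are already available per solution and uses made-up values for every solution except the visualization software discussed above; the 0.5 quadrant boundaries are again an illustrative assumption.

```python
# Sketch of a status-comparison bubble chart (precision vs. normalized interfaces,
# bubble size = number of mapped requirements). All values except the visualization
# software's (0.67, 0.73, 11) are made up for illustration.
import matplotlib.pyplot as plt

solutions = {
    # name: (normalized interfaces, precision, number of requirements)
    "Visualization software": (0.67, 0.73, 11),
    "Solution X (hypothetical)": (0.30, 0.85, 7),
    "Solution Y (hypothetical)": (0.80, 0.40, 15),
}

fig, ax = plt.subplots()
for name, (x, y, n_req) in solutions.items():
    ax.scatter(x, y, s=n_req * 40, alpha=0.6, label=name)

# Quadrant boundaries assumed at 0.5 on both axes.
ax.axhline(0.5, linestyle="--")
ax.axvline(0.5, linestyle="--")
ax.set_xlabel("Normalized number of interfaces")
ax.set_ylabel("Precision")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.legend()
plt.show()
```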


Fig. 3 Status comparison

5 Conclusion This paper presents an iterative and multi-perspective approach to classify software solutions as part of large, interdisciplinary software projects in terms of their connected user requirements. To identify the most suitable industrial software system functions for the elicited user requirements, MBRE methodologies are used to elaborate the functional components of all demanded software solutions and extract expert knowledge. Conclusively, project evaluation is accomplished by introducing KPIs for the precision of the mapping procedure and the normalized number of interfaces. This allows the placement of individual software solutions into uncertainty areas and enables an overall status comparison of the software project, which leads to a better understanding of undetermined project items. Applying this approach at an early stage of the project enables the minimization of unsatisfied user requirements during the development phase and emphasizes the necessary organizational effort for project items, which has to be addressed in future project stages. Additionally, communication and information exchange between users and demanders is supported by this approach and the documentation of different artifacts. Further research, which is not accomplished in this paper, is to consider the assessment of technical risks and the complexity of software solutions. Undoubtedly, the risk degree of implementing software solutions is one aspect which enlarges fundamental information about uncertainties and risk assessment in software development. Considering not only requirements but also technical realization can lead to a higher comprehension after the classification procedure and should avoid additional organizational effort for project management, like interface definitions. Acknowledgements The research leading to this work has received funding from the European Union’s Horizon 2020 research and innovation program under Grant Agreement No.: 958205.


References

1. Sadovykh, A., Truscan, D., Bruneliere, H.: Applying model-based requirements engineering in three large European collaborative projects: an experience report. In: Proceedings of the 29th IEEE International Requirements Engineering Conference, pp. 367–377. Notre Dame, South Bend (2021). https://doi.org/10.1109/RE51729.2021.00040
2. Mordecai, Y., Dori, D.: Model-based requirements engineering: architecting for system requirements with stakeholders in mind. In: 2017 IEEE International Systems Engineering Symposium, pp. 1–8. IEEE, Vienna (2017). https://doi.org/10.1109/SysEng.2017.8088273
3. Inkermann, D., Huth, T., Vietor, T., Grewe, A., Knieke, C., Rausch, A.: Model-based requirement engineering to support development of complex systems. Procedia CIRP 84, 239–244 (2019). https://doi.org/10.1016/j.procir.2019.04.345
4. Holt, J., Perry, S., Brownsword, M., Cancila, D., Hallerstede, S., Hansen, F.O.: Model-based requirements engineering for system of systems. In: 2012 7th International Conference on System of Systems Engineering, pp. 561–566. IEEE, Genova (2012). https://doi.org/10.1109/SYSoSE.2012.6384145
5. Brace, W., Ekman, K.: CORAMOD: a checklist-oriented model-based requirements analysis approach. Requirements Eng. 19(1), 1–26 (2014). https://doi.org/10.1007/s00766-012-0154-3
6. SysML Homepage. https://www.sysml.org. Accessed 2021/11/12
7. Wanderley, F., Silva, A., Araujo, J., Silveira, D.S.: SnapMind: a framework to support consistency and validation of model-based requirements in agile development. In: 2014 IEEE 4th International Model-Driven Requirements Engineering Workshop, pp. 47–56. IEEE, Karlskrona (2014). https://doi.org/10.1109/MoDRE.2014.6890825
8. ISO/IEC/IEEE 29148:2018-11: Systems and Software Engineering—Life Cycle Processes—Requirements Engineering
9. ISO/IEC/IEEE 12207:2017-11: Systems and Software Engineering—Software Life Cycle Processes
10. ISO/IEC/IEEE 15288:2015-05-15: Systems and Software Engineering—System Life Cycle Processes
11. VDI 2221-1:2019-11: Design of Technical Products and Systems, Part 1: Model of Product Design
12. VDI 2221-2:2019-11: Design of Technical Products and Systems, Part 2: Configuration of Individual Product Design Processes
13. VDI 2206:2004-06: Design Methodology for Mechatronic Systems (VDI 2206:2004-06)
14. VDI 2206:2020-09: (Entwurf), Entwicklung cyber-physischer mechatronischer Systeme (CPMS) (Draft, Development of Cyber-Physical Mechatronic Systems (CPMS))
15. Graessler, I., Hentze, J.: The new V-Model of VDI 2206 and its validation. at-Autom. 68(5), 312–324 (2020). https://doi.org/10.1515/auto-2020-0015
16. MBSE Homepage. https://mbseworks.com. Accessed 2021/11/12
17. Weilkiens, T.: Systems Engineering mit SysML/UML. Anforderungen, Analyse, Architektur. 3. Auflage. dpunkt.verlag (2014)
18. Walden, D.D., Roedler, G.J., Forsberg, K.J., Hamelin, R.D., Shortell, T.M.: INCOSE Systems Engineering Handbuch. Ein Leitfaden für Systemlebenszyklus-Prozesse und -Aktivitäten. Wiley, New Jersey (2015)
19. Brambilla, M., Cabot, J., Wimmer, M.: Model-Driven Software Engineering in Practice. Morgan & Claypool, San Rafael (2012). https://doi.org/10.2200/S00441ED1V01Y201208SWE001
20. Zafirov, R.: Modellbildung und Spezifikation. In: Eigner, M., Roubanov, D., Zafirov, R. (eds.) Modellbasierte Virtuelle Produktentwicklung, pp. 77–96. Springer, Heidelberg (2014)
21. Gilz, T.: Requirements Engineering und Requirements Management. In: Eigner, M., Roubanov, D., Zafirov, R. (eds.) Modellbasierte Virtuelle Produktentwicklung, pp. 53–75. Springer, Heidelberg (2014)
22. Bender, B., Gericke, K.: Pahl/Beitz Konstruktionslehre. Methoden und Anwendung erfolgreicher Produktentwicklung. Springer, Heidelberg (2021). https://doi.org/10.1007/978-3-662-57303-7
23. Nielsen, T.D., Hovda, S., Antontio, F., Langseth, H., Madsen, A.L., Masegosa, A., Salmerón, A.: Requirement Engineering for a Small Project with Pre-Specified Scope. http://ojs.bibsys.no/index.php/NIK/article/view/13. Accessed 2021/11/12
24. Soares, M., Vrancken, J.: Model-driven user requirements specification using SysML. J. Softw. 3(6), 57–68 (2008)

A Machine Learning-Based System for the Prediction of the Lead Times of Sequential Processes

Antonio Lorenzo-Espejo, Alejandro Escudero-Santana, María-Luisa Muñoz-Díaz, and José Guadix

Abstract In this paper, a system for the prediction and analysis of the lead times of sequential manufacturing processes is proposed. The system includes two machine learning regression models, which estimate the process lead times based on the values of different independent variables: the lead times of up-stream processes, organizational variables, product specifications, and quality inspection reports. Additionally, the proposed system feeds the downstream processes lead time prediction modules with information regarding the lead time estimation of previous operations. The approach has been tested with real-world data from the case study of a wind turbine tower manufacturer. In particular, the applied system includes two modules for the prediction of the lead times of the bending and longitudinal welding processes. The system produces somewhat inaccurate predictions for the former, but significantly increases its predictive power for the longitudinal welding process, partly with the help of the bending lead time estimations. The output of the proposed system, that is, the lead time estimations, has direct applications in two key aspects of production planning and control: job scheduling and anomaly control. Keywords Lead time prediction · Machine learning · Production planning and control · Wind power

A. Lorenzo-Espejo (B) · A. Escudero-Santana · M.-L. Muñoz-Díaz · J. Guadix Departamento de Organización Industrial y Gestión de Empresas II, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Cm. de los Descubrimientos, s/n, 41092 Seville, Spain e-mail: [email protected] A. Escudero-Santana e-mail: [email protected] M.-L. Muñoz-Díaz e-mail: [email protected] J. Guadix e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_3


1 Introduction In this paper, a methodology for the integral analysis and prediction of the lead times of sequential manufacturing processes is presented. A forecasting system is proposed and applied to two of the main processes in wind turbine tower manufacturing: the bending and longitudinal welding (LW) operations. The efficiency of the proposed approach is evaluated using the case study of a Spanish wind turbine tower manufacturing plant and its production records between 2018 and 2021. The system consists mainly of two machine learning (ML) regression models, which, once trained, can predict the bending and LW process lead times. ML methods consistently outperform other traditional regression techniques in that they are able to identify and characterize complex and nonlinear patterns between the different input variables and, additionally, can be incrementally updated using the newly recorded data. Four sorts of input variables are fed to the models: historical records of the lead times of each process and its up-stream operations; context information, regarding personnel, machine, product type, etc.; quality control data of the raw materials and semi-processed parts used in each process; and, for the latter operation (in this case, LW), the output of the previous ML model. The approach is tested using various configurations of the LW ML regression model: using the predicted lead time values of the bending model; inputting the error of the predictions; using only the actual bending lead time; and not utilizing any information about the bending lead time at all. The alternatives are compared and evaluated, with the goal of minimizing the errors in the prediction of the LW times and, specifically, aiming to increase the accuracy of the forecast of the more extreme lead time values. The remainder of the paper is structured as follows: A description of the wind turbine tower manufacturing process is presented in Sect. 2, along with the distinctive constraints and characteristics of its production planning and control and a brief review of the relevant literature; in Sect. 3, the proposed system and methodology are depicted; the results of the application of the presented approach to the study case of a Spanish wind turbine tower manufacturer are presented and discussed in Sect. 4; the conclusions of the study are summarized in Sect. 5; and, finally, the references used in this work are listed.

2 Wind Turbine Tower Manufacturing Wind turbines are large-scale devices that transform the kinetic energy of the wind into electrical energy. The most common wind turbines are installed vertically either on-shore or off-shore, and they can be divided into four components: the rotor, the generator, the yaw system, and the tower. The rotor spins due to the forces exerted by the wind on its blades. The kinetic energy of this spinning motion is converted into electrical energy by the generator. The yaw system rotates the generator and


rotor around the vertical axis in order to face the wind currents. Finally, the towers, which are the focus of this work, are steel structures that support the other three components. To maximize the power output of these devices, wind turbine towers must be able to safely hold the rest of the components at considerable heights. There are two reasons for this: wind currents are more constant at higher altitudes; and the size of the rotor blades can increase as the distance from the rotor to the surface grows. Thus, taller wind turbines generate a higher, more constant, power supply. As a result, wind towers are increasingly larger, and efforts must be made to optimize their production. The assembly of wind towers is performed on-site by joining large steel cylinders or conical frustums (sections) together. The sections are bolted to each other using flanges that have been previously attached to their top and bottom ends. The top flange of section n is bolted to the bottom flange of section n + 1. Wind towers are composed of at least three sections: a bottom, a mid-, and a top section. When higher towers are required, more mid-sections are installed. These sections are built in wind tower manufacturing plants and transported to the wind farm location. The sections are assembled in the plant using ferrules (smaller cylinders or conical frustums) that are welded together. Beforehand, the ferrules are formed by bending steel plates into rings, which have to be welded together in order to become a closed conical frustum or cylinder. Succinctly, the operations involved in the process are the following:

1. Bending: Rectangular steel plates are bent into cylinders or conical frustums.
2. LW: The edges of the bent plate are welded to one another in order to form a fully closed ferrule.
3. Flange fitting: Flanges are fitted to the inferior and superior ferrules of the sections and given several weld spots so that they hold their position.
4. Ferrule fitting: The ferrules that compose a section are fitted to each other and given multiple weld spots to ensure that they hold their position.
5. Circular welding: The fitted ferrules and flanges are finally welded together, following the weld spots given in the fitting process.
6. Surface treatment: The sections then go through a series of processes that prepare the internal and external surfaces for the conditions they must endure during service.

A more exhaustive depiction of the wind turbine manufacturing processes and their recent advancements is provided by Sainz [1].

2.1 Production Planning and Control Wind turbine tower manufacturing is a challenging production process from a production planning and control standpoint, for several reasons: (a) the raw materials and products are voluminous and heavy; (b) as a consequence of the volume and weight of the parts, it is a mainly non-automated manufacturing process; (c) despite being a


low-volume production process, there is a strong variability between client orders; and (d) in spite of the size of the parts produced, wind turbine towers are subject to very strict regulations and small tolerances. Additionally, as is commonly the case in heavy industries, wind turbine tower manufacturing plants lack a high degree of sensorization and digitization. This is also true for the manufacturing plant studied in this paper and has several implications for the planning and controlling of the processes: Firstly, a considerable amount of employee effort is required in order to record production data. Furthermore, this data is inevitably of poorer quality than that produced by sensors. Lacking a protocol (or not adhering to it) introduces bias and errors in the manufacturing records. It must be taken into account that workers understandably consider data recording as a secondary task, and that it is sometimes carried out in less than ideal conditions. Of the manufacturing records, perhaps process lead time variable is affected the most by errors in manual recording. In the case of the plant studied in this paper, employees had to move from their work stations in order to fill in the completion time of a part, and then return to their post to resume the operation. This led to them forgetting to fill in these records, or even waiting until the end of their shift. Undoubtedly, these circumstances affect the quality of the lead time data, which, in turn, has a significant effect on lead time forecasting accuracy. Simply using the averages of the lead times of these processes is bound to produce inaccurate predictions. Therefore, other determinant factors of the lead time must be utilized in order to generate precise estimations that, if good enough, may serve as input of the production planning and control of the manufacturing process. Particularly, two main applications of these lead time predictions have been identified: job scheduling and anomaly control. Regarding job scheduling, if both efficient and attainable schedules are to be produced, it is essential that the lead times of each job are accurately represented. If the time slots allocated to a job are lengthier than what is actually needed, the work station will most likely experience inefficient idle time. On the other hand, if the schedule includes less time than actually required to complete the process, the up-stream stock levels are likely to increase and, more importantly, it is possible that the product delivery dates are not fulfilled. Furthermore, accurate lead time predictions can enable anomaly control systems in cases where sensor data (such as vibration, temperature or noise records) is not available. In these instances, the comparison of an expected lead time and the actual processing time can warn of potential machine failures or defective parts. A review of the production management literature reveals that there is only a limited number of works addressing the use of ML techniques for the prediction of specific processes lead time. Most of the articles found regard completion time estimation [2–5]. More recently, Gyulai et al. [6] proposed a “situation-aware” production control system based on ML predictions of process lead times [7, 8]. These models are focused on real-time control of the lead times by using information about dynamic events occurring simultaneously. Instead, the system proposed in this paper focuses on short-term prediction, since all of the input variables are set in advance. 
Depending on the variables chosen for the model, which are discussed later in the paper, the lead time predictions can be produced with more or less anticipation.
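The anomaly-control use sketched above boils down to comparing an expected lead time with the recorded one. The snippet below is a minimal illustration of such a rule; the tolerance factor and the function name are arbitrary choices made for this example, not values from the study.

```python
# Minimal illustration of lead-time-based anomaly flagging; the 1.5x tolerance
# factor is an arbitrary choice, not a value reported in the paper.
def flag_anomaly(predicted_hours: float, actual_hours: float, tolerance: float = 1.5) -> bool:
    """Warn when the actual processing time deviates strongly from the prediction."""
    return (actual_hours > tolerance * predicted_hours
            or actual_hours < predicted_hours / tolerance)

if flag_anomaly(predicted_hours=2.0, actual_hours=3.4):
    print("Possible machine failure or defective part: review this job.")
```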


3 Methodology The methodology followed in this study is presented in this section. For the sake of conciseness, the steps followed are directly presented as applied to the case study at hand. In particular, the system shown includes the bending and LW processes. However, the conceptual design of the system is applicable to any sequence of manufacturing processes, as long as a correlation between their lead times is expected. There are four main steps in the proposed regression analysis: data gathering and preprocessing; feature selection; system design; and model implementation.

3.1 Data Gathering and Preprocessing This study utilizes information recorded in the plant for the nearly 900 tower sections manufactured from 2018 to 2021. These sections were composed of over 7700 ferrules. The data was collected using various databases of the company’s Enterprise Resource Planning (ERP) system and Quality Management System (QMS), and the information gathered from different sources was collated, so the resulting database would contain the desired variables. The information regarding lead times and machine use is entered by the plant workers at the end of each operation. Therefore, as previously discussed, plenty of errors and missing values are to be expected, lowering the data quality. In order to include the most extreme and, thus, harder to predict, lead time values in the dataset, no outliers have been discarded. While this might lead to poorer results during the performance evaluation of the models, it is an attempt to prepare the models for abnormal instances. Regarding missing values, these have been imputed using the mean (numeric variables) or the mode (nominal variables). It must be noted that most of the variables missing a significant percentage of data are quality control variables.
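The preprocessing described here (keeping outliers, imputing missing numeric values with the mean and missing nominal values with the mode) can be expressed compactly with pandas. The sketch below is generic: the dataframe is assumed to be the collated ERP/QMS dataset, and no column names from the plant are used.

```python
# Sketch of the imputation step: mean for numeric columns, mode for nominal ones.
# 'df' stands for the collated ERP/QMS dataset; its columns are not specified here.
import pandas as pd

def impute(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    numeric_cols = out.select_dtypes(include="number").columns
    nominal_cols = out.select_dtypes(exclude="number").columns
    # Mean imputation for numeric variables.
    out[numeric_cols] = out[numeric_cols].fillna(out[numeric_cols].mean())
    # Mode imputation for nominal variables.
    for col in nominal_cols:
        mode = out[col].mode()
        if not mode.empty:
            out[col] = out[col].fillna(mode.iloc[0])
    return out  # note: outliers are deliberately kept, as discussed above
```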

3.2 Feature Selection The proposed system is based on two ML regression models. Regression is a ML task in which a model is trained to predict a numeric output variable based on multiple nominal and/or numeric input variables. Therefore, the data utilized is divided into the dependent variable and the independent variables. The response variable of the models is the time required to process a ferrule in each of the operations. This time is given in hours and, as shown later, presents significant variability. The explanatory variables are the factors that can potentially influence the time required to complete the operations. These predictor variables have been selected as such after exploring the available data and discussing them with employees of the plant. Of course, it is unlikely that all the factors suggested a priori are found to be


significant after the analysis. The input variables have been divided into the following four categories: lead time records of the up-stream processes, context information, quality control reports, and ML regression model predictions. The first three are discussed below, while the latter is addressed in the following section. Historical Lead Time Records of Up-Stream Processes. The lead times of processes taking place before the bending and LW processes may serve as contributing predictors of the corresponding bending and LW process times. There are three main processes that precede the bending operation: sheet cutting, beveling, and bevel cleaning. There are no further significant operations between the bending and LW processes. Three hypotheses could explain the potential correlation between an operation and its preceding processes: (a) since the dimensions of the parts are expected to greatly influence the lead time of the processes, it should be expected that taking longer to process a part at, for example, the beveling station could be correlated with a longer bending lead time; (b) long process times may be indicators of production anomalies or defective units/equipment; if undetected, these could extend downstream, increasing the lead times of coming processes; and (c) on the other hand, excessively short process times may be indicators of poor-quality work, which may not result in immediate defective units but can show up later along the production process. Therefore, while it is difficult to pinpoint a specific reason a priori, the correlation between the lead times of different processes seems reasonable and worth studying. Context Information. As previously discussed, when accurate sensor-based data is unavailable and the only information available is that recorded manually by the workers, it can be ill-advised to rely simply on historical lead times for the prediction. However, there are other variables referring to aspects of the process that are usually set in advance and involve less uncertainty. This category of variables is again split into two groups: product specifications and organizational variables. There are eight variables related to the product specifications a priori relevant to the lead time:

• the position of the section that contains the processed ferrule in the tower, a numeric variable ranging from 1 (bottom section) to 6 (highest section produced);
• the position of the ferrule in the section in which it is to be included, a numeric variable which can take a value from 1 (bottom ferrule of the section) to 16 (highest ferrule position);
• the yield strength of the steel with which the plate was manufactured, for a nominal thickness of 16 mm or less (355 N/mm2 or 455 N/mm2);
• the toughness subgrade of the steel with which the plate was formed, measured with the Charpy test (JR: 27 J of impact strength at 20 °C; J0: 27 J at 0 °C; J2: 27 J at −20 °C; NL: 27 J at −50 °C; and K2: 40 J at −20 °C);
• whether the steel plate has received a normalization treatment in order to increase its toughness or not;
• nominal thickness, length, and width of the plate.

These variables are common for every process, since they refer to product specifications. On the other hand, the organizational variables, the personnel and


machine variables, are particular to each of the processes. In a previous analysis [9], the bending lead time has been found to be significantly affected by which worker performed the operation. Therefore, it seems reasonable that the personnel and machine variables could also impact the lead times of the downstream operations. Thus, the models include this information not only for the bending and LW operations, but also for the sheet cutting, beveling, and bevel cleaning discussed above. Quality Control Reports. The QMS module of the system provides information regarding the several quality inspections performed throughout the process. The quality reports available when the parts reach the bending and LW processes refer to the sheet inspections made at the receiving warehouse and after the sheet cutting, beveling, and bevel cleaning operations are completed. The variables recorded in these quality inspections refer mostly to additional measures of the dimensions of the sheets. These are much more complete and accurate than the nominal dimensions obtained from the ERP system. Furthermore, the conformity of the bevels with the product specifications is checked, as well as the sheet dimensions after the cutting and beveling processes. For the sake of conciseness, the quality variables are not described here.
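To make the grouping of predictors concrete, the following sketch simply organizes input columns into the four stated categories before building the design matrix. Every column name is a placeholder invented for this example and does not come from the plant's actual ERP/QMS schema.

```python
# Placeholder column names grouped into the four predictor categories described above;
# none of these names are from the plant's actual ERP/QMS schema.
upstream_lead_times = ["cutting_lt_h", "beveling_lt_h", "bevel_cleaning_lt_h"]
context_info = ["section_position", "ferrule_position", "steel_yield_strength",
                "toughness_subgrade", "normalized_steel", "thickness_mm",
                "length_mm", "width_mm", "operator_id", "machine_id"]
quality_reports = ["sheet_width_measured_mm", "bevel_conformity"]
bending_module_outputs = ["bending_lt_pred_h", "bending_lt_actual_h",
                          "bending_lt_abs_error_h"]  # used for the LW module only

feature_columns = (upstream_lead_times + context_info
                   + quality_reports + bending_module_outputs)
target_column = "lw_lead_time_h"

# X, y = df[feature_columns], df[target_column]  # 'df' is the preprocessed dataset from Sect. 3.1
```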

3.3 System Design The proposed system draws the data from two sources: the ERP and QM systems. The historical lead time records of up-stream processes, context information, and the dependent variables, that is, the bending and LW lead times, are obtained from the ERP system. The QM system, on the other hand, provides the information stored in the quality inspection reports. Both of these sources feed the ML regression modules, one for each of the dependent variables. With just that structure, two independent forecasting systems could be produced. However, by linking the bending and LW lead time prediction modules, an integrated prediction system is achieved. This follows the same hypotheses as those behind the inclusion of the sheet cutting, beveling, and bevel cleaning process lead times. However, two other aspects must be noted. Firstly, a very high correlation between the quality of the bending process and the lead time of the LW process can be expected, due to the specifics of the processes: If a poor-quality bending results in a misalignment of the two edges of the sheet that are to be welded, the LW process can be significantly slowed down. Furthermore, there are two outputs of the bending lead time prediction module: the bending lead time prediction and the actual bending lead time. Additionally, as a combination of these two values, the prediction error can be obtained. Therefore, a configuration of the link between the two modules must be chosen from the different alternatives presented. To do so, the predictive power of the system is tested on its various settings. The results of this comparison are shown in Sect. 4.
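One way to realize the link between the two modules is simply to append the bending module's outputs as extra columns of the LW training table. The sketch below assumes hypothetical column names and illustrates the "actual value + predicted value + absolute prediction error" configuration discussed in Sect. 4; it is not the project's actual integration code.

```python
# Sketch of linking the two modules: the bending module's outputs become extra LW
# features. Column names are assumptions; the configuration shown corresponds to
# "actual value + predicted value + absolute prediction error".
import pandas as pd

def add_bending_link(lw_df: pd.DataFrame, bending_out: pd.DataFrame) -> pd.DataFrame:
    """bending_out holds, per ferrule, the actual and predicted bending lead times."""
    link = bending_out[["ferrule_id", "bending_lt_actual_h", "bending_lt_pred_h"]].copy()
    link["bending_lt_abs_error_h"] = (link["bending_lt_actual_h"]
                                      - link["bending_lt_pred_h"]).abs()
    return lw_df.merge(link, on="ferrule_id", how="left")
```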


Fig. 1 System design chart

Finally, the predictions obtained with these models are to be used as input of other production planning and control systems. In this case, as previously discussed, the use of the lead time estimation for job scheduling and anomaly control is envisioned. The structure and data flows of the proposed system are depicted in Fig. 1, including the bending and longitudinal welding (LW) lead time prediction modules.

3.4 Lead Time Prediction Modules Implementation The ML regression models have been implemented in Python using the PyCaret library. The ML pipeline followed in the system is described next. First, a ten-fold cross-validation analysis of various ML models is performed in order to determine the optimal algorithm for the bending regression module. In particular, the gradient boosting regressor produces the best average results in this analysis. Next, the resulting model is tuned using a ten-fold grid search of the model hyperparameters and applied to an 80–20 training–test split, i.e., 80% of the instances in the dataset are used to train the model, and the remaining 20% serve to test its performance. If the performance evaluation results are satisfactory, the model is retrained using the entire dataset and deployed for its use when the system receives further instances to predict, once in production. If the predictions or prediction errors of the bending lead time are to be used in the LW lead time forecasting module, predicted bending lead times values are needed for as many instances in the dataset as possible. To do so, the dataset containing the historical bending lead time records and the corresponding independent variables is split into four subsets with the same number of instances. Four iterations of the ML algorithm are executed. Each one of them uses three subsets as training data. The instances in the remaining one are used to generate the bending lead time predictions. After completing the four iterations, all instances in the dataset contain an actual


bending lead time value, a prediction, and the prediction error. This way, each of these values can be fed to the LW module. Regarding the prediction of the LW lead time, a similar ten-fold cross-validation analysis must be performed in order to identify the optimal regression model. In this case, the analysis returns the random forest model as the best prediction method. The random forest model is tuned using an analogous hyper-parameter grid search with ten-folds and trained using the 80–20 training–test split. If the results of the evaluation are acceptable, the model is retrained and deployed for production.
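A condensed sketch of the pipeline described above is given below, using the PyCaret functional API for model comparison and tuning as in the paper, and plain scikit-learn for the four-fold out-of-fold bending predictions for brevity. The dataframe names, column names, fold seeds, and the use of scikit-learn for the out-of-fold step are assumptions made for this illustration; categorical columns are assumed to be encoded where scikit-learn is used directly.

```python
# Condensed sketch of the described pipeline; names and parameters are assumptions.
import numpy as np
import pandas as pd
from pycaret.regression import setup, compare_models, tune_model, finalize_model
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

# --- Bending module: 10-fold comparison, tuning, and an 80-20 split via setup() ---
# 'bending_df' is the preprocessed bending table from Sects. 3.1-3.2 (assumed name).
setup(data=bending_df, target="bending_lt_h", train_size=0.8, fold=10, session_id=42)
best_bending = compare_models()                 # gradient boosting won in the reported case
tuned_bending = tune_model(best_bending, fold=10)
bending_model = finalize_model(tuned_bending)   # retrain on the full dataset for deployment

# --- Out-of-fold bending predictions to feed the LW module (four equal subsets) ---
X = bending_df.drop(columns="bending_lt_h")     # assumes categorical columns are encoded
y = bending_df["bending_lt_h"]
oof_pred = np.empty(len(bending_df))
for train_idx, pred_idx in KFold(n_splits=4, shuffle=True, random_state=42).split(X):
    reg = GradientBoostingRegressor().fit(X.iloc[train_idx], y.iloc[train_idx])
    oof_pred[pred_idx] = reg.predict(X.iloc[pred_idx])

bending_df["bending_lt_pred_h"] = oof_pred
bending_df["bending_lt_abs_error_h"] = (y - oof_pred).abs()

# --- LW module: same recipe; random forest was reported as the best model ---
setup(data=lw_df, target="lw_lt_h", train_size=0.8, fold=10, session_id=42)
lw_model = finalize_model(tune_model(compare_models(), fold=10))
```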

4 Results The results of the evaluation of the proposed models are presented in this section. Firstly, the performance evaluation metrics of the bending lead time prediction module are shown in Table 1. The metrics show that the predictive power of the gradient boosting regressor model is weak. For an industrial setting, and considering the inefficiency and inaccuracy in the manual lead time data recording, these values could be accepted. Regarding the LW lead time prediction module, it should be remembered that there are multiple feasible system configurations, depending on which variables link the bending and LW modules. Eleven system configurations have been tested, and the performance evaluation metrics of the random forest model are shown in Table 2. Using the predicted value and the absolute value of the prediction error at once as input of the LW prediction module, optionally adding the actual bending lead time value, seems to produce the superior results, according to Table 2. Said combinations outperform the others regarding the mean absolute error and the coefficient of determination. However, using the combination of the actual value, the predicted value, and the prediction error (including its sign) yields a better root mean squared error result, which might indicate that said system could be able to predict extreme instances with less error (it must be recalled that, by squaring the residuals of the model, larger individual errors are penalized more strictly in the RMSE). It must be noted that the model fitness metrics are clearly better than those observed for the bending process. Furthermore, the improved performance of the system when using the actual value, predicted value, and absolute prediction error shows that additional variables can help improve the predictive power of the model. While the predictions of the bending lead times are not significantly accurate, they do include some useful knowledge in the LW module, judging by the results.

Table 1 Performance evaluation metrics for the bending lead time prediction module

Mean absolute error (MAE, in h) | Root mean squared error (RMSE, in h) | Coefficient of determination (R2, %)
0.4087                          | 0.6699                               | 21.5


Table 2 Performance evaluation metrics for the LW lead time prediction module with different system configurations

LW LT model input from bending module                      | MAE (in h) | RMSE (in h) | R2 (%)
None                                                       | 0.6682     | 1.0992      | 0.6144
Actual value                                               | 0.6064     | 1.0171      | 0.6825
Predicted value                                            | 0.685      | 1.1259      | 0.6304
Prediction error                                           | 0.6583     | 1.0852      | 0.6092
Absolute prediction error                                  | 0.667      | 1.1897      | 0.6439
Predicted value + prediction error                         | 0.7106     | 1.2306      | 0.5658
Predicted value + absolute prediction error                | 0.5972     | 1.0096      | 0.7312
Actual value + predicted value                             | 0.6269     | 1.0787      | 0.6502
Actual value + prediction error                            | 0.6109     | 1.0279      | 0.6336
Actual value + absolute prediction error                   | 0.6503     | 1.1785      | 0.6392
Actual value + predicted value + prediction error          | 0.6136     | 0.9771      | 0.6677
Actual value + predicted value + absolute prediction error | 0.5832     | 1.0263      | 0.6909

5 Conclusions In this paper, a ML-based system for the prediction of lead times of sequential processes is proposed and applied to the case of a wind turbine tower manufacturing process. Specifically, the system is exemplified with the bending and LW operations. The results of the real-world implementation of the system show that the integration of the bending and LW lead time predictions can benefit the forecasting accuracy. While the system proposed in this paper is focused toward a predictive analysis of the lead times, and thus only includes variables that are available before the actual processes are carried out, it could be interesting to perform a descriptive analysis of the impact of these lead times on posterior processes. For example, association rule mining algorithms could highlight interesting correlations between process lead times and other variables of the manufacturing process, such as quality inspection data. Acknowledgements This research was cofunded by the European Regional Development Fund ERDF by means of the Interreg V-A Spain-Portugal Programme (POCTEP) 2014–2020, through the CIU3A project, and by the Agency for Innovation and Development of Andalusia (IDEA), by means of “Open, Singular and Strategic Innovation Leadership” Program, through the joint innovation unit project OFFSHOREWIND. The research was also supported by the University of Seville through a grant belonging to its predoctoral training program (VIPPIT-2020-II.2).


References

1. Sainz, J.A.: New wind turbine manufacturing techniques. Procedia Eng. 132, 880–886 (2015). https://doi.org/10.1016/j.proeng.2015.12.573
2. Backus, P., Janakiram, M., Mowzoon, S., Runger, G.C., Bhargava, A.: Factory cycle-time prediction with a data-mining approach. IEEE Trans. Semicond. Manuf. 19(2), 252–258 (2006). https://doi.org/10.1109/TSM.2006.873400
3. Öztürk, A., Kayaligil, S., Özdemirel, N.E.: Manufacturing lead time estimation using data mining. Eur. J. Oper. Res. 173, 683–700 (2006). https://doi.org/10.1016/j.ejor.2005.03.015
4. Alenezi, A., Moses, S.A., Trafalis, T.B.: Real-time prediction of order flowtimes using support vector regression. Comput. Oper. Res. 35, 3489–3503 (2008). https://doi.org/10.1016/j.cor.2007.01.026
5. Wang, C., Jiang, P.: Deep neural networks based order completion time prediction by using real-time job shop RFID data. J. Intell. Manuf. 30, 1303–1318 (2019). https://doi.org/10.1007/s10845-017-1325-3
6. Gyulai, D., Pfeiffer, A., Bergmann, J., Gallina, V.: Online lead time prediction supporting situation-aware production control. Procedia CIRP 78, 190–195 (2018). https://doi.org/10.1016/j.procir.2018.09.071
7. Pfeiffer, A., Gyulai, D., Kádár, B., Monostori, L.: Manufacturing lead time estimation with the combination of simulation and statistical learning methods. Procedia CIRP 41, 75–80 (2016). https://doi.org/10.1016/j.procir.2015.12.018
8. Lingitz, L., Gallina, V., Ansari, F., Gyulai, D., Pfeiffer, A., Sihn, W.: Lead time prediction using machine learning algorithms: a case study by a semiconductor manufacturer. Procedia CIRP 72, 1051–1056 (2018). https://doi.org/10.1016/j.procir.2018.03.148
9. Lorenzo-Espejo, A., Escudero-Santana, A., Muñoz-Díaz, M.-L., Robles-Velasco, A.: Machine learning-based analysis of a wind turbine manufacturing operation: a case study. Sustainability 14, 7779 (2022). https://doi.org/10.3390/su14137779

Application of a Visual and Data Analytics Platform for Industry 4.0 Enabled by the Interoperable Data Spine: A Real-World Paradigm for Anomaly Detection in the Furniture Domain

Alexandros Nizamis, Rohit A. Deshmukh, Thanasis Vafeiadis, Fernando Gigante Valencia, María José Núñez Ariño, Alexander Schneider, Dimosthenis Ioannidis, and Dimitrios Tzovaras

Abstract Nowadays, the rise of cloud and edge computing, alongside the rapid growth of Industrial Internet of Things (IIoT), has led to the creation of many services and platforms related to the manufacturing domain. Due to this wide availability of various solutions related to Industry 4.0, there is a high demand for interoperability and platforms that will enable the collection of different solutions and their access through a common entry point. Furthermore, the quick and effortless application of

A. Nizamis (B) · T. Vafeiadis · D. Ioannidis · D. Tzovaras Centre for Research and Technology Hellas—Information Technologies Institute (CERTH/ITI), 57001 Thermi, Thessaloniki, Greece e-mail: [email protected] T. Vafeiadis e-mail: [email protected] D. Ioannidis e-mail: [email protected] D. Tzovaras e-mail: [email protected] R. A. Deshmukh · A. Schneider Fraunhofer Institute for Applied Information Technology FIT, 53754 Sankt Augustin, Germany e-mail: [email protected] A. Schneider e-mail: [email protected] F. G. Valencia · M. J. N. Ariño AIDIMME Metal-Processing, Furniture, Wood and Packaging Technology Institute, Parque Tecnológico, C/Benjamín Franklin, 13, 46980 Paterna, Valencia, Spain e-mail: [email protected] M. J. N. Ariño e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_4


the established solutions to other organizations or domains is of utmost importance. In this paper, we introduce the application of a Visual and Data Analytics platform, which has previously been applied in other use cases, to a furniture manufacturing industry for providing machine fault detection toward predictive maintenance. The applied solution utilizes unsupervised machine learning algorithms, boosting techniques, and web-based visual analytics. Furthermore, this paper presents how this platform was applied to the furniture pilot using the Data Spine as an interoperability enabler that provides platform integration and interoperability in a standardized way and enables application in various domains and scenarios. Keywords Industry 4.0 · Anomaly detection · Platform interoperability · Machine learning · System integration · Furniture domain

1 Introduction A large number of solutions related to the manufacturing domain and various applications for machine anomaly detection and predictive maintenance were developed in the last decade. The digitalization of various industrial aspects, the rise of the Internet of Things (IoT) and Big Data technologies, as well as the adoption of cloud and edge architectures enable the delivery of the aforementioned solutions to industrial environments. However, due to the wide variety and diversity in technologies that are used, for example, the communication and security protocols, the data structures, etc., it is difficult to create solutions that are extensible and can be applied to more than one business case. Therefore, these solutions cannot be easily integrated with other platforms in order to realize various use cases and be available to the users of those platforms through a single entry point, such as a common dashboard. This paper demonstrates a real-world application scenario of a Visual and Data Analytics tool for anomaly detection toward predictive maintenance, used by a furniture manufacturing company, LAGRAMA, enabled by the integration capabilities of the Data Spine [1] from the European Manufacturing Platform for Agile Manufacturing (EFPF). EFPF is a federated platform ecosystem, based on established platforms coming from both research and industry, that offers various solutions from the Industry 4.0, IoT, artificial intelligence (AI), Big Data, and digital manufacturing domains. At the core of the EFPF ecosystem is the interoperable Data Spine that is used to enable interoperability for the application presented in this paper. Following the introduction, the remainder of the paper is organized as follows. In Sect. 2, there is a brief state-of-the-art analysis related to industrial platforms concerning industrial integration and machine anomaly detection. In Sect. 3, the visual analytics platform and the Data Spine are presented. In Sect. 4, the anomaly detection solution applied to a furniture pilot case, enabled by the interoperable Data Spine, is analyzed. The conclusions are drawn in the last section of this work.


2 Related Work In the last five years, many research projects have developed anomaly detection solutions targeting predictive maintenance and have delivered various digital platforms for manufacturing. In the COMPOSITION project, an integrated information management system was developed to enable anomaly detection for various shop floors and different machines. Solutions were developed for polishing machines based on vibration sensors and behavior profile techniques [2] and for other cases such as anomaly detection for industrial ovens based on machine learning and deep learning methods [3]. This project delivered a decision support system (DSS) that covers some common features and interfaces for different cases and users; however, it did not deliver an infrastructure that enables the integration of various solutions and services in a common way. Likewise, the MONSOON project delivered a DSS for Industry 4.0 based on unsupervised learning [4] for fault detection. However, in this case a core integration infrastructure that could be further adopted was not created. In other projects, such as BOOST 4.0, a conceptual reference architecture based on BDVA [5] and the IDSA information model [6] was introduced. The architecture was adopted by over ten pilot cases in the project, some of them focusing on anomaly detection. Based on this architecture, a cognitive platform [7] for anomaly detection supporting various supervised and unsupervised methods and visualizations was developed during the project. It is an example of the adoption of the conceptual architecture and the use of some proposed technologies, but it still cannot be considered an interoperable application. Some of the authors of the latter work also introduced in [8] a smart maintenance platform based on microservices, related to the Z-BRE4K project. The platform was based on fault detection enabled by the micro-cluster-based continuous outlier detection algorithm, an unsupervised distance-based outlier detection method. The solution's architecture was similar to the BOOST 4.0 approach, based on microservices, the IDSA reference architecture, and the roles of data provider and data consumer. However, it was limited in this case, as it was not available as a generic infrastructure for service orchestration applicable to multiple use cases. During the vf-OS project, an IoT analytics solution [9] was developed, again toward predictive maintenance. The project introduced an I/O adapter and a device manager mechanism to enable analytics at the edge, targeting an interoperable solution in this case. ML analytics was applied for this edge computing case; however, the proposed integration pipeline was limited to this stage only. Continuing with edge computing solutions for Industry 4.0 targeting predictive maintenance and the creation of a digital platform, the recently established knowlEdge project proposes a system [10] for automated knowledge generation through its pipeline and standardized integration of a decision support system, a digital twin, and a marketplace for AI models. In this case, the concept seems to be more focused on knowledge creation and its AI pipeline than on integration and interoperability. This brief analysis of the current state of the art related to anomaly detection and predictive maintenance enabled by smart industrial platforms identifies the lack of


integration capabilities in these solutions. Therefore, the abovementioned approaches cannot enable the faster application of their solutions to other cases or create a federated platform that could serve as the main entry point for the various digital solutions available to an industrial end-user.

3 Core Infrastructure for System Integration, Interoperability, Data Analysis, and Visualization 3.1 Interoperable Data Spine The Data Spine is a federated interoperability mechanism that aims to enable the creation of cross-platform applications in a standardized way, with minimal cost, time, and effort. By enabling cross-platform interoperability, it removes the obstacles to service-level communication across the "connected" or "federated" platforms. Furthermore, it provides the infrastructure and defines the methodology for enabling cross-service interoperability for an easy and intuitive creation of composite applications. As the Data Spine enables the reuse of existing tools and services regardless of the platforms they belong to and/or the locations they are deployed at, the easy adoption and application of existing solutions to new use cases across various domains becomes possible. As illustrated in Fig. 1, the Data Spine connects tools, services, data application programming interfaces (APIs), and platforms, regardless of where they are deployed, and enables communication among them. The Data Spine consists of the following components:

Fig. 1 High-level architecture of the Data Spine. Adapted from [1] released under a CC BY license; https://doi.org/10.3390/s21124010


• EFPF Security Portal (EFS) based on Keycloak [11]: The EFS [12] acts as the central Identity Provider (IdP) for the platform ecosystem enabled by the Data Spine. It enables the use of the access-protected resources (such as tools, services, and data) of the connected platforms using ecosystem-level user accounts, eliminating the need for multiple user accounts to access the resources of multiple platforms. Thus, it enables "Security Interoperability" or single sign-on (SSO) in the ecosystem. • Integration Flow Engine (IFE) based on Apache NiFi [13]: The IFE is based on concepts from the visual programming and flow-based programming paradigms. It provides tooling support such as a multi-tenant, drag-and-drop style graphical user interface (GUI), built-in Protocol Connectors, and Data Transformation Processors for the creation of data flows or "Integration Flows (iFlows)". The iFlows can be used for bridging the interoperability gaps among services and composing them together to create new applications with minimum coding effort. Thus, the IFE enables "Protocol Interoperability" and "Data Model Interoperability". • Message Broker based on RabbitMQ [14]: The Message Broker enables asynchronous, Publish/Subscribe communication in the platform ecosystem. • Service Registry based on the LinkSmart Service Catalog [15]: The Service Registry enables the discovery and life cycle management of the service API meta-data that is required to consume the existing APIs. • API Security Gateway (ASG) based on Apache APISIX [16]: The ASG acts as the Policy Enforcement Point (PEP) to protect access to the synchronous, HTTP-based APIs exposed by the iFlows created by users. In the EFPF project, the Data Spine enables the creation of the EFPF ecosystem. The Data Spine instance, deployed as a cloud-native system, supports rapid digitalization and sustainability through reusability to enable the execution of various use cases.

3.2 Visual and Data Analytics Tool The Visual and Data Analytics tool provides a complete solution for analyzing and visualizing various types of data. It is a web-based framework originally developed in the COMPOSITION EU project to analyze and visualize both industrial [2] and supply chain data [17]. This work presents its application in a pilot case deployed in a different company and grounded in a different domain, enabled by the abovementioned Data Spine infrastructure and made available through the EFPF platform. The tool was developed based on web technologies such as AngularJS, JavaScript, and ChartJS. The backend algorithms were developed in Python. The deployment follows a microservices logic, and Docker is used as the containerization technology. Both REST/HTTP and MQTT protocols are utilized for the tool's communication with other tools and IoT devices. By using the interfaces of this tool (Fig. 2), the user is able to:


Fig. 2 Visual and data analytics tool—main user interface

1. Connect a data source for analysis and visualization. The user can select one of the available connected databases (MongoDB or InfluxDB), brokers, or load a .csv file containing historical data. 2. Select an analytics method in the case that more than one is available. This depends on the data source that the user has chosen in the previous step. 3. Select machine(s) and available sensor(s) as data sources for the visualization. 4. Select the date or range of dates for data visualization. 5. Select one of the available graph types for the visualization of the analysis. To sum up, the Visual and Data Analytics tool provides all the necessary services for data analysis and real-time monitoring; however, it requires the data to be available in compatible formats and communication protocols.

4 Real-World Pilot Applications 4.1 Furniture Manufacturer's Case Study LAGRAMA is an SME furniture manufacturer based in Vinaròs, Spain, that produces youth rooms and additional home furniture such as lounges, wardrobes, and home offices. The variability in consumer behavior requires continuous innovation in product development and the expansion of the product range at a reasonable price. In this regard, product customization involves a major change in the production flow, where any problem detected in machines or processes leads to lower efficiency. Therefore,


increasing the machine availability through predictive maintenance, avoiding unexpected failures (such as broken components) which take longer to resolve than planned maintenance, is crucial. In order to address this situation, the company utilizes Data Analytics on its industrial processes and equipment to optimize production by reducing machine downtimes and thus improving delivery scheduling. The case study focuses on the behavior of an edge-banding machine where wood boards are processed just after the cutting and before the drilling processes and where every unplanned maintenance task compromises the overall capacity of the factory. The case focuses on the production of unit batches of parts of different sizes and shapes. Several motors are involved in the most relevant aspects of the machine operation, and the level of motor usage depends on the type of piece to be processed. Therefore, the data gathered from the machine contains a high level of heterogeneity. In this regard, aspects such as the size of the datasets, the frequency of the data collection, and the reliability of the prediction are of great importance. The sensors are placed on selected motors in the machine, and they are connected to an interface board on a Factory Connector which monitors the measured values and publishes them to the Data Spine Message Broker. Data collected from the machine includes temperature, pressure, and electrical current. Thus, data from the shop floor is available to be used by other tools focused on offering functionalities such as analytics and machine learning, risk evaluation, and visualization [18]. For the presented case study, there was a need for secure data transfer, service integration, data transformation to a format compatible with the analytics tools, and support for time series data. Furthermore, a visualization dashboard was required, integrated alongside other digital solutions related to the furniture manufacturer such as risk analytics tools, fill-level monitoring of bins, etc. To address the abovementioned requirements, the Visual and Data Analytics tool described in Sect. 3.2 was integrated into the manufacturer's services in the EFPF Portal—the single entry point for the EFPF ecosystem. The integration was enabled by the interoperable Data Spine, which is described in Sect. 3.1.
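The paper does not include the Factory Connector's implementation. Purely as an illustration, the following Python sketch (using the paho-mqtt client library) shows how such a connector could publish the monitored temperature, pressure, and current values to the Data Spine Message Broker; the broker address, credentials, topic name, and payload fields are hypothetical placeholders, not the actual EFPF configuration.

```python
import json
import time
import paho.mqtt.publish as publish

# Hypothetical connection details; the real EFPF broker address, credentials,
# and topic naming scheme are not given in the paper.
BROKER_HOST = "dataspine.example.org"
RAW_TOPIC = "lagrama/edgebanding/raw"
AUTH = {"username": "efpf-user", "password": "efpf-secret"}  # placeholders

def read_sensors():
    """Placeholder for the interface-board readout of the selected motors."""
    return {"temperature_c": 41.7, "pressure_bar": 6.1, "current_a": 12.4}

while True:
    # Raw/custom data model of the Factory Connector; a Data Spine iFlow later transforms it.
    payload = {"machine": "edge-banding-1", "timestamp": time.time(), **read_sensors()}
    publish.single(RAW_TOPIC, json.dumps(payload), hostname=BROKER_HOST, qos=1, auth=AUTH)
    time.sleep(5)
```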

4.2 Application of the Solution in the Real-World Environment Solution Architecture and Integration. To deliver an anomaly detection solution for the problem that was introduced in the previous section, the Visual and Data Analytics tool presented in Sect. 3.2 was used. The EFPF Data Spine enabled its integration with the pilot site and the EFPF Portal that the end-users use to access their applications. The solution’s architecture together with the data flow is depicted in Fig. 3.


Fig. 3 Solution architecture and information flow

The factory data sent to the Data Spine through the installed Factory Connector adheres to a raw/custom data model. The API of the Factory Connector, together with the specification of this custom data model, is registered in the Data Spine Service Registry. The composite application developers fetch the API meta-data of these data APIs from the Service Registry to create iFlows. The Integration Flow Engine of the Data Spine is used to transform the raw data into a widely used data model such as OGC SensorThings, as expected by the Visual and Data Analytics tool. The transformed data is published to the Data Spine Message Broker on a new topic. The Visual and Data Analytics tool subscribes to this new topic on the Message Broker, and it is now able to consume the transformed data using the MQTT protocol. Internally, the Visual and Data Analytics tool preprocesses, stores, and analyzes the data based on the various modules it contains. Both processed and analyzed data are used by the visualization modules, where they can be presented as graphs or tables to end-users. The tool's user interfaces are accessible through the EFPF Portal. The Visual and Data Analytics tool (like every connected platform) has its own IdP. However, the Data Spine makes it possible to access the tool's user interface embedded into the EFPF Portal using an EFPF user account. This "Security Interoperability" for the EFPF Portal is achieved using the EFS' SSO capabilities [1, 12]. Finally, as the Data Spine enables service and tool interoperability, the Visual and Data Analytics tool can be used by the user through this single entry point, the EFPF Portal, alongside the other available tools and services. Data Analysis Methodology. The abovementioned integrated analytics solution applied to the furniture pilot case is based on widely known anomaly detection techniques, such as k-means [19], k-nearest neighbors [20], one-class SVM [21], local outlier factor [22], and density-based spatial clustering of applications with


Fig. 4 Majority voting system based on well-known anomaly detection techniques

noise [23]. This approach is able to automatically detect anomalies in a univariate time series. The decision regarding a potential anomaly is made by a majority voting system that uses the five anomaly detection techniques referred to above as majority vote learners. A brief presentation of the anomaly detection techniques is provided in Fig. 4. After the training step, the majority voting system uses the predictions $\hat{y}_1$ to $\hat{y}_5$ of the five anomaly detection techniques $h_1$ to $h_5$ on a new data input. The predictions pass through a voting system based on the mode, and the final prediction $\hat{y}_f$ is obtained as $\hat{y}_f = \operatorname{mode}\{h_1(x), h_2(x), h_3(x), h_4(x), h_5(x)\}$, where $h_i(x) = \hat{y}_i$. In Fig. 5, an implementation of the functionality of the majority voting system is presented. It is a screenshot of the Visual and Data Analytics tool, accessible through the EFPF Portal and used by the furniture manufacturer. For a selected machine sensor that creates a univariate time series, the voting system marks potential anomalies as red dots in the graph. The quality of the anomaly detection functionalities is still under evaluation by the furniture manufacturer, as the sensors were newly deployed to the machine and no historical data related to the machine is available. Furthermore, the analysis of the anomaly detection results is beyond the scope of this paper, as we focus here on interoperability and systems' integration.
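The exact training pipeline behind the majority voting system is not detailed in the paper. The sketch below is a minimal illustration, assuming scikit-learn implementations of the five learners and simple per-learner decision thresholds; the window sizes, contamination levels, and thresholds are illustrative choices, not the authors' settings. For five binary learners, the mode reduces to an "at least three out of five" rule.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors
from sklearn.svm import OneClassSVM

def anomaly_votes(x):
    """Return a (5, n) array of 0/1 anomaly flags, one row per learner h_1..h_5."""
    X = np.asarray(x, dtype=float).reshape(-1, 1)
    votes = []

    # h1: k-means -- flag points far from their cluster centre (3-sigma rule, illustrative)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    dist = np.abs(X[:, 0] - km.cluster_centers_[km.labels_, 0])
    votes.append(dist > dist.mean() + 3 * dist.std())

    # h2: k-nearest neighbours -- flag points with a large mean distance to their neighbours
    knn_dist = NearestNeighbors(n_neighbors=5).fit(X).kneighbors(X)[0].mean(axis=1)
    votes.append(knn_dist > knn_dist.mean() + 3 * knn_dist.std())

    # h3: one-class SVM
    votes.append(OneClassSVM(nu=0.05, gamma="scale").fit_predict(X) == -1)

    # h4: local outlier factor
    votes.append(LocalOutlierFactor(n_neighbors=20, contamination=0.05).fit_predict(X) == -1)

    # h5: DBSCAN -- points labelled as noise (-1) are treated as anomalies
    votes.append(DBSCAN(eps=0.5, min_samples=5).fit_predict(X) == -1)

    return np.vstack(votes).astype(int)

def majority_vote(x):
    """y_f = mode{h_1(x), ..., h_5(x)}: anomaly if at least 3 of the 5 learners agree."""
    return anomaly_votes(x).sum(axis=0) >= 3

# Toy usage: a sensor-like series with one injected spike.
np.random.seed(0)
series = np.concatenate([np.random.normal(40, 1, 200), [80.0], np.random.normal(40, 1, 50)])
print(np.where(majority_vote(series))[0])
```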


Fig. 5 Machine anomaly detection based on majority voting system

5 Conclusions This paper summarizes the application of an already established Visual and Data Analytics tool to a new pilot case, enabled by the interoperable Data Spine of the EFPF project. This work demonstrates how the generic approach of the designed analytics tool enables its use in various cases and for their needs. Furthermore, the integration of the tool to address the new use case is realized in a secure and interoperable way using the Data Spine. The Data Spine provides a series of mechanisms for effortless and semi-automated integration of various systems and tools. It offers services for data transformation in order to promote data model interoperability, services for secure communication, and support for well-known communication protocols to further promote the integration processes. Moreover, this work highlights the process of data provision from a furniture manufacturer's edge-banding machine by using the Data Spine's integration functionalities. The data, transformed by the Data Spine to ensure interoperability, is consumed by the aforementioned analytics tool, which delivers a solution for a real-world use case. This solution is made available to the end-user through a common entry point, the EFPF Portal, by using the provided interoperability infrastructure. Acknowledgements This work was partially supported by the European Commission through the Horizon 2020 Framework Program, Innovation Actions (IA), and the European Connected Factory Platform for Agile Manufacturing (EFPF) project through Grant 825075.


References
1. Deshmukh, R.A., Jayakody, D., Schneider, A., Damjanovic-Behrendt, V.: Data Spine: a federated interoperability enabler for heterogeneous IoT platform ecosystems. Sensors 21(12), 4010 (2021). https://doi.org/10.3390/s21124010
2. Vafeiadis, T., Nizamis, A., Apostolou, K., Charisi, V., Metaxa, I.N., Mastos, T., Ioannidis, D., Papadopoulos, A., Tzovaras, D.: Intelligent information management system for decision support: application in a lift manufacturer's shop floor. In: IEEE International Symposium on INnovations in Intelligent SysTems and Applications, pp. 1–6. IEEE, Sofia (2019). https://doi.org/10.1109/INISTA.2019.8778290
3. Rousopoulou, V., Nizamis, A., Giugliano, L., Haigh, P., Martins, L., Ioannidis, D., Tzovaras, D.: Data analytics towards predictive maintenance for industrial ovens. In: International Conference on Advanced Information Systems Engineering, pp. 83–94. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20948-3_8
4. Kolokas, N., Vafeiadis, T., Ioannidis, D., Tzovaras, D.: A generic fault prognostics algorithm for manufacturing industries using unsupervised machine learning classifiers. Simul. Model. Pract. Theory 103, 102109 (2020). https://doi.org/10.1016/j.simpat.2020.102109
5. Reñones, A., Dalle Carbonare, D., Gusmeroli, S.: European big data value association position paper on the smart manufacturing industry. In: Popplewell, K., Thoben, K.D., Knothe, T., Poler, R. (eds.) Enterprise Interoperability: Smart Services and Business Impact of Enterprise Interoperability, pp. 179–185. Springer, Cham (2018). https://doi.org/10.1002/9781119564034.ch22
6. Otto, B., Jarke, M.: Designing a multi-sided data platform: findings from the international data spaces case. Electr. Mark. 29, 561–580 (2019). https://doi.org/10.1007/S12525-019-00362-X
7. Rousopoulou, V., Vafeiadis, T., Nizamis, A., Iakovidis, I., Samaras, L., Kirtsoglou, A., Ioannidis, D., Tzovaras, D.: Cognitive analytics platform with AI solutions for anomaly detection. Comput. Ind. 134, 103555 (2022). https://doi.org/10.1016/j.compind.2021.103555
8. Naskos, A., Nikolaidis, N., Naskos, V., Gounaris, A., Caljouw, D., Vamvalis, C.: A microservice-based machinery monitoring solution towards realizing the Industry 4.0 vision in a real environment. Proced. Comput. Sci. 184, 565–572 (2021). https://doi.org/10.1016/j.procs.2021.03.071
9. Anaya, V., Fraile, F., Aguayo, A., García, O., Ortiz, Á.: Towards IoT analytics: a vf-OS approach. In: Proceedings 9th International Conference on Intelligent Systems 2018: Theory, Research and Innovation in Applications, pp. 570–575. IEEE, Funchal (2018). https://doi.org/10.1109/IS.2018.8710476
10. Alvarez-Napagao, S., Ashmore, B., Barroso, M., Barrué, C., Beecks, C., Berns, F., et al.: Knowledge project–concept, methodology and innovations for artificial intelligence in industry 4.0. In: Proceedings of the 2021 IEEE 19th International Conference on Industrial Informatics, pp. 1–7. IEEE, Palma de Mallorca (2021). https://doi.org/10.1109/INDIN45523.2021.9557410
11. Keycloak—Open Source Identity and Access Management. https://www.keycloak.org/. Accessed 15 Nov 2021
12. Selvanathan, N., Jayakody, D., Damjanovic-Behrendt, V.: Federated identity management and interoperability for heterogeneous cloud platform ecosystems. In: Proceedings of the 14th International Conference on Availability, Reliability and Security, pp. 1–7. ACM, Canterbury (2019). https://doi.org/10.1145/3339252.3341492
13. Apache NiFi. https://nifi.apache.org/. Accessed 07 Oct 2021
14. RabbitMQ Message Broker. https://www.rabbitmq.com/. Accessed 07 Oct 2021
15. LinkSmart Service Catalog Documentation. https://github.com/linksmart/service-catalog/wiki. Accessed 07 Oct 2021
16. Apache APISIX Cloud-Native API Gateway. https://apisix.apache.org/. Accessed 09 Oct 2021
17. Vafeiadis, T., Nizamis, A., Pavlopoulos, V., Giugliano, L., Rousopoulou, V., Ioannidis, D., Tzovaras, D.: Data analytics platform for the optimization of waste management procedures. In: Proceedings of the 2019 15th International Conference on Distributed Computing in Sensor Systems, pp. 333–338. IEEE, Santorini (2019). https://doi.org/10.1109/DCOSS.2019.00074


18. Bhullar, G., Osborne, S., Núñez Ariño, M.J., Del Agua Navarro, J., Gigante Valencia, F.: Vision system experimentation in furniture industrial environment. Fut. Internet 13(8), 189 (2021). https://doi.org/10.3390/fi13080189
19. MacQueen, J.B.: Some methods for classification and analysis of multivariate observations. In: Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297. University of California Press, Berkeley (1967)
20. Altman, N.S.: An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 46(3), 175–185 (1992). https://doi.org/10.2307/2685209
21. Pimentel, M.A.F., Clifton, D.A., Tarassenko, L.C.: A review of novelty detection. Sig. Process. 99, 215–249 (2014). https://doi.org/10.1016/j.sigpro.2013.12.026
22. Breunig, M.M., Kriegel, H.-P., Ng, R., Sander, J.: LOF: identifying density-based local outliers. ACM SIGMOD Rec. 29(2), 93–104 (2000). https://doi.org/10.1145/335191.335388
23. Ester, M., Kriegel, H.-P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 226–231. AAAI Press, Palo Alto (1996)

Predictive Study of Changes in Business Process Models Adeel Ahmad, Mourad Bouneffa, Henri Basson, Chahira Cherif, Mustapha Kamel Abdi, and Mohammed Maiza

Abstract Change impact analysis can play a major role in planning and establishing the feasibility of a change by forecasting its cost and complexity before implementation. In this work, we focus on managing the change impact propagation on several levels of granularity and abstraction of business process model (BPM) systems in order to pursue the change management studies in BPM and related software units. We adopt a machine learning-based approach to study the importance of integral components of a BPM across different versions. The proposed approach allows analyzing the dependencies among the activities, data, and roles of actors in the “training” phase. The facts are collected and analyzed in the form of a matrix. Then, for the prediction phase, we use Bayesian classification to optimize the gained experience and leverage the learning to trace the change impact propagation in similar systems. The proposed approach analyzes the dependencies among activities, data, and roles with respect to the change and related artifacts. In this regard, the current work evaluates the links, cohesion, complexity, and effectiveness to better analyze the impact of change and its propagation on the model through the dependencies A. Ahmad (B) · M. Bouneffa · H. Basson Laboratoire d’Informatique Signal et Image de la Côte d’Opale, Université du Littoral Côte d’Opale, Calais, France e-mail: [email protected] M. Bouneffa e-mail: [email protected] H. Basson e-mail: [email protected] C. Cherif · M. K. Abdi LRIIR Laboratory, Faculty of Exact and Applied Sciences, Ahmed Benbella Oran 1 University, Oran, Algeria e-mail: [email protected] M. K. Abdi e-mail: [email protected] M. Maiza Faculty of Mathematics and Computer Science, University of Science and Technology of Oran, Oran, Algeria e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_5


among the artifacts of the model. The major objective is to reduce the unforeseen cost and the unexpected risk to the evolving business operations. Keywords Business process model · Dependency rules · Dependency relationships · Change impact prediction · Enterprise applications analysis

1 Introduction Business processes explicitly identify and define the significant procedures for an enterprise's activities and its relationships with collaborators. Business process-based enterprises benefit from more transparent business relationships. Process models in this context help to discover, analyze, and design business operations in order to automate and eventually optimize business processes. To achieve this goal, it is necessary to identify and formally model the structured and semi-structured business activities. The automation and optimization of business processes must identify the redundant activities that do not contribute to eventual product manufacturing or service provision. Business processes, like any other IT system, evolve over time as a result of causes such as market demands, improved performance, and innovation or reengineering of the system. A change can result from diverse objectives, such as restructuring the process to comply with new business rules and optimize performance [1]. It is essentially important to collect the descriptive knowledge of business processes in order to better understand the consequences and influences of any change-related operation on the rest of the system. The major goal of our current study is to contribute to the field of software evolution with respect to business process modeling, in order to control the change impact propagation during the distinct stages of evolutionary phases. The approach is aimed at the definition of a meta-model of the internal and external dependencies of business process-based information systems. Primarily, we propose to extract the dependency data and analyze the relevant information in the form of matrix datasets. We then generalize the extraction of data to the different versions of the given system, which allows us to obtain the final matrices following the comparison of matrices of the same type of dependencies between each pair of successive versions of the system. Consequently, we not only include the most relevant information concerning all the dependencies of the system, but also the modifications made through the different editions of each version along with their influences on the system. The data obtained from the BPM dependency analysis is used to create a learning base. It later guides the prediction of the degree of change impact for intended changes of a business process. The current work uses Bayesian networks to study the feasibility of statistical machine learning models that establish the predicted results of a change. The rest of this paper is organized as follows: Sect. 2 is dedicated to a synthesis of closely related works. Section 3 describes the fundamental concepts


of business processes with respect to formalizing the activity, data, and role dependencies among BPMN artifacts. The exploitation of this information is achieved with the definition of a meta-model at the different modeling phases. Section 4 concludes the contribution of the presented research work.

2 Theoretical Background and Related Work For the past decade, most of the research work has focused significantly on transactional improvements of business processes, whereas the related research literature in the BPM and information systems domain has given scarce attention to change impact analysis. However, business processes cannot evolve as integral units; indeed, they interoperate with other processes and application systems. In this context, enterprises must be able to rapidly identify and adopt beneficial changes. The business process model (BPM) provides a solid foundation for evaluating the operational framework of enterprises. Formally defined business processes can be improved at lower cost and with increased performance [2]. However, it is often observed that enterprises rely mainly on the expertise of BPM consultants to improve their business processes, in spite of the fact that a BPM expert is not necessarily a computer science expert. The BPM expert may ask the enterprise: which are the most relevant components in their BPM? What are the impacts of a planned BPM change on application systems? Should the change be viewed from a structural, syntactical, logical, qualitative, behavioral, or functional angle? [1] Dependency analysis is one of the axes for dealing with the propagation of modifications in a business process; the authors in [3] explain several types of dependencies, such as control flow dependency or data dependency, according to the BPM artifact. In [4] the authors propose a model transformation from standard BPMN notation to LOTOS New Technology (LNT) process algebra and labeled transition systems (LTS) formal models. The authors then compare the business processes at the level of the formal model with the help of inherent relations. The set of comparison primitives addresses the change operations of renaming, refinement, ownership, and the context of change. However, this approach mainly considers the non-functional requirements in BPMN processes, such as throughput and task latency. The authors in [1] classify the dependencies among the interlevel and intralevel artifacts according to abstraction and granularity. The work presented in [5] proposes the detection of change with a certain number of defined representations of the change impact to analyze the change outcomes. The authors in [6] present an approach based on the description of a meta-model and the integration of graph rewriting rules for an a priori analysis of change impact propagation. The authors in [7] propose a service-oriented business process modeling that identifies a taxonomy of changes, which makes it possible to analyze the different types of change between artifacts and their services. In [8], the authors present two distinct approaches: a process mining approach that uses version history to predict the impact of changes, and the second


approach is based on dependency analysis. The authors rely on the hypothesis that processes frequently modified in the past are more likely to be changed in the future. In practice, however, such version histories are not always available, or it is complicated to construct a usable and automated repository to maintain the history of versions. The second approach, based on dependency analysis, necessitates the semantic annotation of the process model instead of the history of different versions. The authors in [9] present a dependency analysis tool that assists the control and data flow analysis between business activities and services. In [10], the authors propose an extensible feature along with the sliding window technique and the heuristic miner to detect and locate concept drifts in unfinished event logs. The approach employs the weight heuristic miner and differential evolution for genetic process mining (GPM) to extract the new process model from an evolving process. This analysis approach focuses on offline sets; other issues remain to be addressed, e.g., how to determine the preference of GPM parameters, or how to measure the influence of noise on concept drift detection. The current contribution, which is inspired by some of these research works, intends to achieve the goal of studying the change impact in business processes through the analysis of dependencies between activities, data, and roles. The approach is based on the analysis of the intralevel and interlevel propagation with respect to the abstraction and the granularity among the artifacts of the business processes. In this regard, a predictive change impact analysis approach is presented that predicts the degree of change impact using the version history of business process models.

3 Meta-Model of Implementation Approach A BPMN model is designed and developed with a variety of individual software tools or combinations of them. Most of these software tools follow a description of the XML-based BPMN elements. We integrate the CAMUNDA API into our prototype validation tool. There is no particular reason for this choice; it allows the transparent integration of the XML descriptions of the BPMN elements in the prototype tool. To further explain the composition and design of a business process model, let us take the example of a Zoo process. We use this example process model to elaborate the diverse concepts of business processes throughout this paper. The model illustrates the interaction of three process instances (Bank, Zoo, and Visitor). We followed an incremental approach to develop the prototype tool that validates the presented approach, as shown in Fig. 1. In the following sections we discuss the development phases of the implementation approach.


Fig. 1 Global architecture of the implementation approach

3.1 The Data Extraction Phase This phase allows the extraction of the dependency relationships between the elements of the BPM. We opt to use logical description formalisms [11] to mathematically illustrate the dependency rules. The extracted data, resulting from the execution of these rules, is stored and organized in the form of dependency matrices. We start with the parsing of the XML-BPMN file. We are interested in the “process definition” part, which describes the sequencing of the steps of the process and its subprocesses. It makes it possible to locate all the activities of the model (as shown in Algorithm 1), which can be formally represented by the set $\alpha$, such that

$\alpha = \{a_1, a_2, \ldots, a_n\}$.  (1)

The set of all the dependencies, represented as $\Phi$, is the combination of activity dependencies ($\varphi_A$), data dependencies ($\varphi_D$), and role dependencies ($\varphi_R$), shown as follows:

$\Phi = \{\varphi_A \cup \varphi_D \cup \varphi_R\}$,  (2)

where

$\varphi_A = \{$Sequence-Flow, Normal-Flow-Conditional, Flow-Default, Flow-Exception, Flow-Message, Flow-Compensation, Initiating-Message-Flow-With-Decorator, Non-Initiating-Message-Flow-With-Decorator, Association, Directional-Association$\}$,  (3)

$\varphi_D = \{$Data-Association $\cup$ Directed-Data-Association$\}$,  (4)

$\varphi_R \subseteq \gamma \times \alpha$,  (5)

where $\gamma$ is the set of roles, such that

$\gamma = \{r_1, r_2, \ldots, r_k\}$.  (6)

The set of dependencies $\Phi$ can be localized with the help of Algorithm 2 (Table 1), which shows the links of dependencies between the components of the BPMN diagram. The validation prototype is implemented using the Java programming language, for the edition and loading of business processes, in order to process all BPMN 2.0-type files. The prototype tool (as shown in Fig. 2) uses the parsed data to formalize the activity dependency matrix. Below, we formalize three dependency matrices (activity, data, and role) and detail the role of each matrix. Activity Dependency Matrix (ADM): The ADM contains the execution order of process model activities. Dependency relationships between activities direct the scheduling of BPM elements, and these relationships are represented in the matrix $ADM_{i,j}$ with the following rule: IF relActivity $\in \varphi_A$ THEN $ADM_{i,j} := 1$ ELSE $ADM_{i,j} := 0$ ENDIF

Table 1 Algorithms used

Algorithm 1. Locate activity
Data: XML of BPMN diagram
Result: Set of model activities
1. Parse XML file
2. Identify the activities α in the process definition part
3. Display all activities α and descriptions in a table

Algorithm 2. Locate dependency
Data: XML of BPMN diagram
Result: Set of dependencies Φ
1. Parse XML file
2. Identify all the dependencies Φ in the process definition
3. Display all dependencies Φ and their types in a table


Fig. 2 Extraction of data relevant to the activities of the process model

Triggering the above rule on the example process model, we obtain the activity dependency matrix shown in Fig. 3. Data Dependency Matrix (DDM): The DDM contains the data dependency relationships. The existence of a data link by one of the three types of dependency, either flow, sharing, or mutual [1], is represented in the matrix $DDM_{i,j}$. The matrix entries take the value 1 if an activity $a_i$ depends on another activity $a_j$ in terms of data; if there is no dependency, $DDM_{i,j}$ is equal to 0 [12]. The data dependency relationship (relData) is represented as follows: IF relData $\in \varphi_D$ THEN $DDM_{i,j} := 1$ ELSE $DDM_{i,j} := 0$ ENDIF

Fig. 3 Activity dependency matrix (ADM) of Zoo process model


Fig. 4 Data dependency matrix (DDM) of Zoo process model

Fig. 5 Role dependency matrix (RDM) of Zoo process model

Triggering the above rule on the example process model, we obtain the data dependency matrix shown in Fig. 4. Role Dependency Matrix (RDM): The RDM indicates the degree of dependency of a role on some particular activity. The matrix $RDM_{i,j}$ represents the dependency of the roles of the process that are assigned to the activities; the role dependency relationship (relRole) takes the value 1 if a role is assigned to an activity, and 0 otherwise. We determine this relationship as follows: IF relRole $\in \varphi_R$ THEN $RDM_{i,j} := 1$ ELSE $RDM_{i,j} := 0$ ENDIF Triggering the above rule on the example process model, we obtain the role dependency matrix shown in Fig. 5.
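The authors' prototype is Java-based and relies on the CAMUNDA API. Purely as an illustration of Algorithms 1 and 2 and of the ADM rule above, the following Python sketch parses a BPMN 2.0 XML file with the standard library and derives the activity set α and an activity dependency matrix from the sequence flows; the activity tags considered and the example file name are assumptions.

```python
import xml.etree.ElementTree as ET

BPMN_NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}
# Illustrative set of activity tags of interest; extend as needed.
ACTIVITY_TAGS = ("task", "userTask", "serviceTask", "scriptTask", "sendTask", "receiveTask")

def extract_activities(root):
    """Algorithm 1: collect the activity set alpha = {a1, ..., an} from every process definition."""
    activities = []
    for process in root.findall(".//bpmn:process", BPMN_NS):
        for tag in ACTIVITY_TAGS:
            for node in process.findall(f"bpmn:{tag}", BPMN_NS):
                activities.append(node.get("id"))
    return activities

def build_adm(root, activities):
    """ADM rule: ADM[i][j] = 1 if a sequence flow links activity a_i to a_j, else 0."""
    index = {a: i for i, a in enumerate(activities)}
    adm = [[0] * len(activities) for _ in activities]
    for flow in root.findall(".//bpmn:sequenceFlow", BPMN_NS):
        src, tgt = flow.get("sourceRef"), flow.get("targetRef")
        if src in index and tgt in index:
            adm[index[src]][index[tgt]] = 1
    return adm

if __name__ == "__main__":
    root = ET.parse("zoo_process.bpmn").getroot()   # hypothetical export of the Zoo example model
    alpha = extract_activities(root)
    adm = build_adm(root, alpha)
    for activity, row in zip(alpha, adm):
        print(activity, row)
```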

3.2 Data Collection and BPMN 2.0 Dependencies Ontology The data from each matrix formed in our approach carries important properties of the BPMN diagram and provides relevant information on the functioning of the diagram.


The dependency data is extracted and collected from the multiple versions of the process model. Following the data collection, a knowledge base is formed with the integration of the dependency rules. The knowledge base in the presented approach contains the definition of the “BPMN 2.0 dependencies ontology”. It allows the structured storage of information obtained from different versions of business processes, to map and manipulate the elements necessary to predict the impact propagation of changes in a business process. Ontology modeling is an essential tool allowing the automatic exploitation (machine processing) of knowledge and the realization of the principles of reusability and information sharing between different data sources. This is obtained through the common vocabulary provided for a real or imagined knowledge domain. The definition of this knowledge is based on the components of an ontology: concepts, relations, properties, axioms, and instances, which are extracted from the dependency matrices. To validate this part of the work, we used one of the most popular platforms, Protégé-OWL, based on the Web Ontology Language (OWL). This platform is both an ontology editor and a Java-based framework. The Protégé-OWL programming interface also provides the Java libraries for the application and the implementation of SWRL rules. The BPMN 2.0 dependencies ontology is composed of two categories of classes: the first represents the different activities, and the second contains the participants of the business process, object properties, data properties, and axioms. The nodes are linked by relationships, and the property objects are determined by relationship types that are located in the dependency matrices among the different versions of the instances of the same process model. The defined ontology currently consists of 149 classes (“Activity, Process, SequenceFlow, Task, ServiceTask, etc.”), 103 object properties (“has loop activity, has_condition, has_flowElements, etc.”), and 29 data properties (“id, name, type, method, script, etc.”). In order to represent the different versions of the business process in the form of ontologies, we compare them successively and follow the different changes between each pair of versions.
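The ontology itself is built with Protégé-OWL and SWRL rules in a Java framework. Purely to illustrate the idea of storing dependency facts as ontology-like triples and querying them, the following Python sketch uses rdflib with a hypothetical vocabulary IRI; the class and property names echo those listed above, and the Zoo activities are invented examples.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical vocabulary for the BPMN 2.0 dependencies ontology.
BPMN = Namespace("http://example.org/bpmn-dependencies#")

g = Graph()
g.bind("bpmn", BPMN)

# Two activities of the Zoo process and one activity dependency (an ADM entry equal to 1).
g.add((BPMN.BuyTicket, RDF.type, BPMN.Task))
g.add((BPMN.EnterZoo, RDF.type, BPMN.Task))
g.add((BPMN.BuyTicket, RDFS.label, Literal("Buy ticket")))
g.add((BPMN.EnterZoo, RDFS.label, Literal("Enter zoo")))
g.add((BPMN.BuyTicket, BPMN.has_sequence_flow_to, BPMN.EnterZoo))
# Role dependency (an RDM entry): the Visitor role is assigned to the activity.
g.add((BPMN.Visitor, RDF.type, BPMN.Role))
g.add((BPMN.BuyTicket, BPMN.has_assigned_role, BPMN.Visitor))

# Query the stored activity dependencies, i.e. reconstruct the non-zero ADM entries.
rows = g.query("""
    PREFIX bpmn: <http://example.org/bpmn-dependencies#>
    SELECT ?src ?tgt WHERE { ?src bpmn:has_sequence_flow_to ?tgt . }
""")
for src, tgt in rows:
    print(src, "->", tgt)
```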

3.3 Change Impact Prediction in Business Process Following the extraction of the data in the form of matrices in the first phase of our approach, we subsequently determine the influence of changes with the help of a learning base built over the multiple versions of the system. Indeed, this study exploits machine learning algorithms on business process data to classify the degree of impact of a change. In order to obtain a precise result, we define the explanatory variables “add, delete, and modify” on activities and control flows. The idea is to analyze the changes in the elements of the BPMN notation belonging to the categories of flow objects and connection objects in order to predict the degree of change impact. We use the Naïve Bayes classifier, as it is a supervised learning classifier and can be used to predict any new example if these functional relationships are known in advance [13]


and suits our problem of classifying the change impact degree into three classes (HIGH, MEDIUM, and LOW) from the explanatory variables characterizing the processes.
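The paper does not report the feature encoding or the training data used. Assuming each pair of process versions is described by counts of added, deleted, and modified activities and control flows, a minimal scikit-learn sketch of the Naïve Bayes classification into HIGH, MEDIUM, and LOW could look as follows; the Gaussian variant and the toy training rows are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Explanatory variables per version pair (invented toy data):
# [added_activities, deleted_activities, modified_activities,
#  added_flows, deleted_flows, modified_flows]
X_train = np.array([
    [0, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0],
    [2, 1, 3, 2, 1, 2],
    [4, 3, 5, 4, 2, 3],
    [0, 0, 0, 0, 0, 1],
    [3, 2, 2, 3, 1, 2],
])
y_train = ["LOW", "LOW", "MEDIUM", "HIGH", "LOW", "HIGH"]

clf = GaussianNB().fit(X_train, y_train)

# Predict the impact degree of a planned change on a new version of the process model.
planned_change = np.array([[1, 0, 2, 1, 0, 1]])
print(clf.predict(planned_change))        # predicted class label
print(clf.classes_)                       # class order used by predict_proba
print(clf.predict_proba(planned_change))  # class probabilities
```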

4 Conclusion The major objective of this research work is the prediction of change impact in business process models designed using the BPMN 2.0 notation. Indeed, the dependency analysis of business process components helps us to analyze a change and its impact on the process. The proposed approach initially collects the data from the business processes and their version histories. We start with the extraction of activity, data, and role dependencies in the first version of the process, then extend it to the rest of the versions. The next step in our approach allows us to define the data in the form of an ontology, which provides a means to follow the propagation of the impact of the change in order to predict the impacted zones of the BPM. The resulting information leads us to create our learning base in order to begin the prediction of the level of process change. This leads to a probabilistic study using the Naïve Bayes classifier. Due to the performance of the Naïve Bayes classification, the probability calculations are not significantly expensive and they allow classification even with a small dataset. At the end of this phase, we can determine the degree of the impact of a change.

References 1. Kherbouche, O.M., Bouneffa, M., Ahmad, A., Basson, H.: Analyse a priori de l’impact du changement des processus métiers. In: Proceedings of the Actes du XXXIème Congrès INFORSID, pp. 257–266. Paris (2013) 2. Rahmani, M.D., Radgui, M., Saidi, R., Lamghari, Z.: Defining business process improvement metrics based on BPM life cycle and process mining techniques. Int. J. Bus. Process. Integr. Manag. 9(2), 107–133 (2019) 3. Abdeen, H., Bali, K., Sahraoui, H., Dufour, B.: Learning dependency-based change impact predictors using independent change histories. Inform. Softw. Technol. 67, 220–235 (2015). https://doi.org/10.1016/j.infsof.2015.07.007 4. Krishnaa, A., Poizat, P., Salaünd, G.: Checking business process evolution. Sci. Comput. Program. 170, 1–26 (2019) 5. Amziani, M., Melliti, T., Tata, S.: A generic framework for service-based business process elasticity in the cloud. In: Barros, A., Gal, A., Kindler, E. (eds.) Business Process Management. BPM 2012. Lecture Notes in Computer Science, vol. 7481, pp. 194–199. Springer, Berlin (2012). https://doi.org/10.1007/978-3-642-32885-5_15 6. Bouneffa, M., Ahmad, A., Basson, H.: Gestion Intégrée du Changement des Modèles de Processus Métier. In: Proceedings of the Actes du XXXIème Congrès INFORSID. Grenoble (2015) 7. Bouneffa, M., Ahmad, A.: The change impact analysis in BPM based software applications: a graph rewriting and ontology based approach. In: Hammoudi, S., Cordeiro, J., Maciaszek, L.A., Filipe, J. (eds.) Enterprise Information Systems, vol. 190, pp. 280–295. Springer, Heidelberg (2014)


8. Wang, Y., Yang, J., Zhao, W.: Change impact analysis for service based business processes. In: Proceedings of the 2010 IEEE International Conference on Service-Oriented Computing and Applications (SOCA), pp. 1–8. IEEE, Perth (2010) 9. Bouchaala, O., Yangui, M., Tatal, S., Jmaiel, M.: DAT: Dependency analysis tool for service based business processes. In: IEEE 28th International Conference on Advanced Information Networking and Applications, pp. 621–628. IEEE, Perth (2014) 10. Li, T., He, T., Wang, Z., Zhang, Y., Chu, D.: Unraveling process evolution by handling concept drifts in process mining. In: IEEE International Conference on Services Computing (SCC), pp. 442–449. IEEE, Honolulu (2017) 11. Baader, F., Horrocks, I., Lutz, C., Sattler, U.: Introduction to Description Logic. Cambridge University Press, Cambridge (2017) 12. Cherif, C.: Towards a probabilistic approach for better change management in BPM systems. In: IEEE 23rd International Enterprise Distributed Object Computing Workshop (EDOCW), pp. 184–189. IEEE, Paris (2019) 13. Panigrahi, R., Borah, S.J.: Classification and analysis of facebook metrics dataset using supervised classifiers. In: Dey, N., Borah, S., Babo, R., Ashour, A.S. (eds) Social Network Analytics. Computational Research Methods and Techniques, pp. 1–19. Academic Press, London (2019)

Part II: Stakeholders’ Collaboration and Communication

Collaborative Platform for Experimentation on Production Planning Models María Ángeles Rodríguez, Ana Esteso, Andrés Boza, and Angel Ortiz Bas

Abstract Researchers usually use optimization software and solvers to run their production planning models. However, this software does not allow researchers to work collaboratively with others to improve existing models. In addition, and as far as we know, there is no platform that allows the exchange of production planning models and of experimental and real data instances for the production planning problem. This paper aims to design and implement a collaborative and interoperable web platform in which researchers can formulate and reuse models for production planning. In fact, it allows technical integration of interoperability at the data and service levels (sharing data models and obtained results and standardizing the design and execution of models). This tool can also be used to carry out experimentation on the different production planning models developed, with diverse data instances, making it possible to draw conclusions about the applicability of each model. Keywords Collaborative platform · Interoperability · Production planning · Design of experiments · Web application

M. Á. Rodríguez (B) · A. Esteso · A. Boza · A. Ortiz Bas Research Centre on Production Management and Engineering (CIGIP), Universitat Politècnica de València, Camino de Vera S/N, 46022 València, Spain e-mail: [email protected] A. Esteso e-mail: [email protected] A. Boza e-mail: [email protected] A. Ortiz Bas e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_6


1 Introduction Collaboration in research is less and less related to the physical distance between researchers [1], and it is increasingly common to find research with international collaboration. More concretely, international research collaboration has become one of the hottest topics in recent years (see [2]). Research collaboration is not limited to collaboration between research groups, but can also be established between researchers from the same or different groups, departments, institutions, sectors, and nations [3]. Therefore, it would be very useful for researchers to have an interoperable platform to facilitate collaborative work. This paper focuses on production planning (PP), which is the definition of the use of production resources to meet the forecast demand or committed orders over a planning horizon at a reasonable cost [4]. In this context, this paper aims to answer the following research question: Is it possible to create a collaborative and interoperable platform to experiment on production planning? To answer this question, we propose a collaborative and interoperable web platform that allows researchers to upload and exchange operations research models for the PP problem and to use instances of data for these models to solve them. As a result, experimentation on the platform with different models and instances of data can be carried out to analyze the results. Also, a set of services is developed that allows the standardization of the mathematical programming models. Through the interoperability of the platform, researchers can share their content with other research groups and thus foster the continuous improvement of the PP models. The creation of the platform is a contribution to the literature as, to our knowledge, there are no other collaborative and interoperable platforms where researchers work together on the design of PP models. In addition, the platform allows researchers to randomly generate data instances to experiment with the available models, which is another of the main contributions of the paper. The rest of the paper is structured as follows. Section 2 proposes the collaborative and interoperable platform to support the experimentation on PP problems. Section 3 explains the methodology used to develop the web platform, its main functionalities, and the most relevant interfaces. Finally, Sect. 4 outlines the main conclusions and future research lines.

2 Collaborative Platform to Experiment on Production Planning An interoperable and collaborative web platform allows researchers from the same or different research centers to work collaboratively to define models for PP and carry out experimentation. We propose a platform to support mathematical programming models. Mathematical programming has been chosen because it has proved its validity in the literature to address PP problems. The proof of this is the large number


of mathematical programming models to plan production that can be found in the literature [5]. Regarding interoperability, we base our work on the ATHENA interoperability framework [6], which defines the technical integration of interoperability at four levels: enterprise/business (collaborative product design and development), processes (cross-organizational business processes), services (software service composition and execution), and information/data (data and schemas data). Specifically, we use two levels: the data level and the service level. At the data level, three main elements should be defined in a mathematical programming model: indexes to identify the elements of the problem (e.g., products to be produced), parameters that introduce data for these elements (e.g., production time for each product), and decision variables that represent the variables whose value should be defined by the model (e.g., quantity of product to be produced per week). In the case of PP, the indexes must include at least products and periods of time in which production is possible. Furthermore, as many indexes as necessary should be added to the model to represent all the representative elements of the problem (e.g., customers, plants, machines). Some examples of parameters for PP models could be the time required to produce one unit of each product (per machine, if machines are included in the model), the demand for products per period, or the capacity of the machines, among others. Finally, the main decision variable for the PP models will determine the quantity of each product to be produced per period. However, PP models can include other complementary decision variables such as the inventory of each product per period or the quantity of product served to customers. Therefore, it is worth noting that depending on the PP problem to be modeled (possibility of storing or not, possibility of deferring demand or not, existence of products with different qualities or characteristics, etc.), the models will have different indices, parameters, and decision variables. An example of this can be found in PP models defined for the agri-food sector, where characteristics inherent to agri-food products, such as their perishability [7, 8] or their classification based on quality [9], should be modeled. On the one hand, mathematical programming models are formulated through an objective function and constraints to which the problem is subjected. In this way, the complexity of PP models increases as more realistic problems are represented, since more elements need to be contemplated by the model. On the other hand, on many occasions it is difficult to acquire real data from industry to validate and experiment with the mathematical programming models developed by academia. To solve this problem in the PP environment, we propose to include in the platform a random generator of data instances. Furthermore, the possibility of creating multiple instances of data for the same PP problem facilitates the experimentation in this area, allowing researchers to reach interesting managerial insights for PP. At the service level, the design of complex PP models can be facilitated in a collaborative context. Thus, we propose to develop a platform to allow researchers to work collaboratively on the design of one or multiple mathematical programming models for PP. This will allow researchers not only to collaborate with other researchers from


the same institution, but also to collaborate on and improve PP models developed by researchers from different institutions around the world. For this purpose, the platform provides services that allow the standardization of the design and subsequent execution of these models.
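The structure that the platform imposes on uploaded models is described only conceptually in this paper. The following Pyomo sketch, with invented data, illustrates the kind of production planning model (product and period indexes, demand, capacity, production and inventory variables, cost objective) a researcher could formulate and upload; the parameter names and the single aggregate capacity constraint are illustrative assumptions, not the platform's required template.

```python
from pyomo.environ import (ConcreteModel, Set, Param, Var, Objective, Constraint,
                           NonNegativeReals, SolverFactory, minimize)

m = ConcreteModel()

# Indexes: products and planning periods (invented example data).
m.P = Set(initialize=["chair", "table"])
m.T = Set(initialize=[1, 2, 3])

# Parameters of the instance.
m.demand = Param(m.P, m.T, initialize={("chair", 1): 30, ("chair", 2): 50, ("chair", 3): 40,
                                        ("table", 1): 10, ("table", 2): 20, ("table", 3): 15})
m.time_req = Param(m.P, initialize={"chair": 1.0, "table": 2.5})   # hours per unit
m.capacity = Param(m.T, initialize={1: 80, 2: 80, 3: 80})          # hours per period
m.prod_cost = Param(m.P, initialize={"chair": 12, "table": 35})
m.hold_cost = Param(m.P, initialize={"chair": 1, "table": 3})

# Decision variables: quantity produced and inventory held per product and period.
m.X = Var(m.P, m.T, domain=NonNegativeReals)
m.I = Var(m.P, m.T, domain=NonNegativeReals)

# Inventory balance: previous inventory plus production covers demand plus new inventory.
def balance_rule(m, p, t):
    prev = m.I[p, t - 1] if t > 1 else 0
    return prev + m.X[p, t] == m.demand[p, t] + m.I[p, t]
m.balance = Constraint(m.P, m.T, rule=balance_rule)

# Aggregate production capacity per period.
def capacity_rule(m, t):
    return sum(m.time_req[p] * m.X[p, t] for p in m.P) <= m.capacity[t]
m.cap = Constraint(m.T, rule=capacity_rule)

# Minimize production plus holding costs.
m.cost = Objective(expr=sum(m.prod_cost[p] * m.X[p, t] + m.hold_cost[p] * m.I[p, t]
                            for p in m.P for t in m.T), sense=minimize)

# SolverFactory("glpk").solve(m)   # any installed LP solver could be used to obtain the plan
```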

3 Design and Implementation of the Web Collaborative Platform This section explains the design and implementation of the collaborative and interoperable web platform in which researchers can collaboratively upload and share mathematical programming models for PP, PP instances of data, and PP results. Also, the design and execution of the models are standardized thanks to the defined structure of the mathematical programming models and their results. To allow collaboration between researchers, the owners of the different models, instances of data, and results uploaded to the platform should define the accessibility of those contents. Thus, owners can decide to keep the content private, share the content with a single user, share it with a group of users (e.g., a research center), or keep the content open to the public. In this way, researchers will have access to all content published openly and to the contents that have been shared with them. Given the difficulty of obtaining PP data for experimentation, this platform will also allow researchers both to upload instances of data and to randomly generate instances of data through the platform. In this way, researchers could define ranges with minimum and maximum values for each of the indices and parameters of the mathematical programming model, and the platform itself would randomly generate one or multiple instances of data respecting said ranges of values. After creating one or more instances of data, researchers can decide to share them, so they remain accessible to all users to be used in future situations. Once models and instances of data for PP are available on the platform, the platform allows researchers to solve them optimally, showing the production plan obtained. These results will be presented in a table format where the quantity of each product that must be produced is defined for each period of time (which may be days, weeks, months, … depending on the data instance used). Finally, the results of each resolution or experiment will be stored on the platform so that, depending on the accessibility defined by their creator, other researchers will be able to directly access the results without having to reinstantiate and rerun the models already solved; and if it is considered that the results can be improved, other researchers can redesign the model to obtain better results. This tool is created with web technologies such as CSS (to design the appearance of the web), HTML and JavaScript (to define the content and form of user interfaces), and Nginx (for the web server). Python/Pyomo (the Python programming language and its optimization modeling library, used to solve the optimization models) and a REST


API (for communication between the Python implementation and the web application) are also used.
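The paper does not specify the web framework behind this REST API; purely as an illustration of the service level, the sketch below uses Flask (an assumption, with an invented route, field names, and a stubbed helper) to show how the web application could ask the Python back end to solve a stored model with a given data instance.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def solve_pp_model(model_id, instance_id):
    # Placeholder for the actual optimization call (see the Pyomo sketches in this section)
    return {"model": model_id, "instance": instance_id, "status": "queued"}

@app.route("/models/<model_id>/solve", methods=["POST"])
def solve_model(model_id):
    """Hypothetical endpoint: receive a data-instance id, trigger the solve,
    and return a summary that the web interface can display."""
    instance_id = request.json.get("instance_id")
    return jsonify(solve_pp_model(model_id, instance_id))

if __name__ == "__main__":
    app.run()
```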

3.1 Methodology

There are several methodologies for the development of web platforms (see [10]). In this paper, the Web Information System Development Methodology (WISDM) has been chosen because of its user-centered and multi-view approach, both of which are necessary to design a collaborative web application. WISDM [11] comprises five methods: organizational analysis in order to create value, work design to obtain user satisfaction, information analysis for the requirements specification, technical design to create the software model, and, finally, human–computer interaction (HCI) for the interaction between software design and work design. In this paper, we explain in detail the functionality gathered in the information analysis method and the interfaces generated in the HCI method. The remaining methods, such as the evaluation of user satisfaction (work design), will be developed in future work.

3.2 Platform Information Analysis (Requirements)

As previously indicated, the web platform is created to allow researchers to upload and share operations research models to support PP, data instances for PP problems, or even results. For that purpose, based on the requirements specified by the users of the CIGIP research center, several basic functionalities or requirements have been identified for the web platform:

• Upload a validated PP mathematical programming model (comprising indexes, parameters, decision variables, objective function, and constraints) programmed with the Pyomo library for Python. It should be noted that the model has a defined structure (indexes, parameters, etc.) and is validated by the platform; validity means that the model follows the defined structure and does not contain any Pyomo code errors (a minimal sketch of such a check is given after this list).
• Share validated PP mathematical programming models with a researcher (private), a group of researchers (private), or everybody (public).
• Upload a data instance for the PP problem.
• Randomly generate data instances for the PP problem on the web server.
• Share data instances for PP problems with one researcher, a group of researchers, or everybody.
• Run the mathematical programming models for PP with the available data instances and obtain results.


• Share results with one researcher, a group of researchers, or everybody.
• Show and export results in Excel format.
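The sketch below illustrates the kind of structural check described in the first requirement; the helper name and the specific conditions tested are assumptions made for this example and are not the platform's actual validation code.

```python
import importlib.util
import pyomo.environ as pyo

def validate_uploaded_model(path):
    """Hypothetical check for an uploaded Pyomo model file: the file must load
    without errors and expose an AbstractModel with at least one objective and
    one constraint (the platform's real checks may differ)."""
    spec = importlib.util.spec_from_file_location("user_model", path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)   # fails on Python/Pyomo code errors
    except Exception as exc:
        return False, f"code error: {exc}"
    model = getattr(module, "model", None)
    if not isinstance(model, pyo.AbstractModel):
        return False, "file must define an AbstractModel called 'model'"
    if not list(model.component_objects(pyo.Objective)):
        return False, "model has no objective function"
    if not list(model.component_objects(pyo.Constraint)):
        return False, "model has no constraints"
    return True, "model accepted"
```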

3.3 Platform HCI (Interfaces)

The interfaces were designed and implemented with a focus on the user, interoperability, and collaboration. Hence, the structure is simple and user-friendly, following the principles of the World Wide Web Consortium (W3C), which develops international standards for the web. The elements in the interfaces are also organized following the guidelines of Information Architecture for the Web [12], which indicate how data and elements should be distributed in an interface. The interfaces of the platform can be classified into four types: the home interfaces, the model interfaces, the sharing interfaces, and the dialogue interfaces. The home interfaces group those related to the user, such as the sign-in functionality or the user's home page. The model interfaces group those related to the PP models, such as the interfaces designed to upload models, upload or generate data, run models, and show and export results. The sharing interfaces are collaborative interfaces through which users can share models, data, or results. Finally, the dialogue interfaces are special interfaces that offer help or request information through pop-up web dialogues. Regarding navigation between the interfaces, the platform follows a process similar to PP model execution in other software environments, which covers all the functions described in Sect. 3.2. As can be seen in Fig. 1, the web process flow is composed of the following steps: step (0) the user logs in and accesses the user's home; step (1) the user uploads a valid PP model and can share it; step (2) the user uploads or randomly generates the data instance for the model and can share it; step (3) the user runs the model with the data instance; and step (4) the user visualizes, shares, and exports the results. In addition, the user's home interface allows any step to be executed, as well as other functionalities such as managing collaboration, showing the details of models, data, or results, and comparing results. The improvement of models is also carried out through the platform; for example, Fig. 2 shows how researcher 3 would improve on researcher 2's model, if researcher 2 had shared that model. The most relevant interfaces of the designed platform are detailed below. User's Home and Manage Collaboration Interfaces. Once logged in, a registered user can execute their own models or models shared by other users, visualize their own or shared data/results, and look up details of the groups to which they belong, as can be seen in Fig. 3. The models include a description of the main characteristics of the PP problem they address, such as whether production capacity is limited or not. For each of the user's own models/data/results, the screen also shows whether it is shared or not. To manage collaboration, for instance to stop or start sharing a model, clicking on the corresponding button opens a new interface where users indicate which resources to share and with whom. For that purpose, when sharing a model,


a dialogue interface will be displayed to ask the user for the recipient of the sharing (user, group, or public level).

Fig. 1 Web process flow

Fig. 2 Diagram of the shared model improvement

Fig. 3 User's home interface

Random Data Generator Interface. After loading a validated model, which is step 1 in the platform process, the user can load his/her data instance from a file or generate a data instance randomly (step 2). If the user chooses the latter option, the platform loads the indexes and parameters that comprise the previously uploaded model. Figure 4 shows how the interface requests a minimum and maximum value for each index and parameter. The type of each parameter (integer or decimal) must also be indicated; this is not required for indexes because they are always integers. As with models, the user can share the data: when the share toggle button is clicked, a dialogue interface is displayed to ask the user for the recipient of the sharing (user, group, or public level). Finally, the interface allows any number of data instances to be generated. In that case, the platform generates different data instances because the data are drawn randomly between the minimum and maximum values defined for each index or parameter.
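To make the random generation step concrete, the following is a simplified sketch of how one instance could be drawn from user-defined ranges; the data structures and function name are assumptions for this example, and the real platform derives the ranges from the web form and the uploaded model.

```python
import random

def generate_instance(index_sizes, param_specs, seed=None):
    """Illustrative random instance generator: index_sizes gives the number of
    elements of each index, and param_specs maps each parameter to
    (min, max, 'integer'|'decimal', indexes). One value is drawn per
    combination of the parameter's indexes."""
    rng = random.Random(seed)
    sets = {name: list(range(1, size + 1)) for name, size in index_sizes.items()}
    params = {}
    for name, (lo, hi, kind, over) in param_specs.items():
        keys = [()]                          # cartesian product of the index sets
        for idx in over:
            keys = [k + (e,) for k in keys for e in sets[idx]]
        draw = (lambda: rng.randint(int(lo), int(hi))) if kind == "integer" \
               else (lambda: round(rng.uniform(lo, hi), 2))
        params[name] = {k: draw() for k in keys}
    return {"sets": sets, "params": params}

# Several instances for the same PP problem, e.g. for experimentation
instances = [generate_instance({"P": 3, "T": 6},
                               {"demand": (0, 100, "integer", ("P", "T")),
                                "capacity": (50.0, 200.0, "decimal", ("T",))},
                               seed=i)
             for i in range(10)]
```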

Fig. 4 Random data generator interface


Fig. 5 Results interface

Results Interface. After instantiating the data (step 2) and running the validated model with the data instance (step 3), the platform generates the results of the execution. These results are shown in Fig. 5. A summary of the results is presented in the interface: name of the model, name of the data instance, value of the objective function, optimal solution (yes/no), execution time in seconds, and several actions such as displaying more details of the results, sharing results, printing results, and exporting results in Excel format. It should be underlined that if the solution of the model is not optimal, the text "No" appears in the "Optimal Solution" column, and the gap is indicated in brackets; when an optimal solution is not obtained, this makes it possible to assess how close the solution is to the optimum. In addition, next to the summary of results, a table is created with the production plan. This table represents, for each product and period, the sum of the output of all machines. This table also changes if another row is selected in the summary table (highlighted in green), because each model run (model with data instance) has its own results and production plan.
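A minimal sketch of steps 3 and 4 is given below, reusing the component names from the earlier Pyomo sketch; the choice of GLPK as solver, the helper name, and the Excel export via pandas are assumptions made for illustration rather than the platform's actual implementation.

```python
import pandas as pd
import pyomo.environ as pyo

def run_and_report(model, data_file, excel_path="results.xlsx"):
    """Solve a validated abstract model with one data instance and export the
    production plan table (quantity of each product per period)."""
    instance = model.create_instance(data_file)           # instantiate with the data
    results = pyo.SolverFactory("glpk").solve(instance)
    optimal = (results.solver.termination_condition
               == pyo.TerminationCondition.optimal)
    summary = {"objective": pyo.value(instance.obj), "optimal": optimal}
    plan = pd.DataFrame(
        [{"product": p, "period": t, "quantity": pyo.value(instance.x[p, t])}
         for p in instance.P for t in instance.T])
    plan.pivot(index="product", columns="period", values="quantity").to_excel(excel_path)
    return summary, plan
```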

4 Conclusions

This paper has contributed a potential route toward a collaborative and interoperable web platform for generating and sharing PP optimization models, data instances, and results, which is currently under development and will be validated in future work. Up to now, a random data instance generator for the PP problem, which is a novelty in this area, and the functionalities to upload and share PP models, their data instances, and results have been developed on the platform. Therefore, the platform can be employed as an experimentation tool, so that the same model can be solved for different data instances (uploaded by researchers or randomly generated by the platform), or different models can be solved for the same data instance, allowing their comparison. In addition, the integration of interoperability at the data


and service levels allows not only the exchange of data, models, and results between researchers but also the standardization of the design (or redesign) of models and their execution with Pyomo. In future work, the platform will be extended to integrate the levels of process and business interoperability defined in the ATHENA interoperability framework that are not covered in this paper. Regarding data-level interoperability, the description of the models could be categorized so that users can search for models according to the characteristics of the PP problem to be addressed. In addition, service-level interoperability will be extended to allow the integration of PP with other production problems such as scheduling. In this sense, the planning obtained with the collaborative platform will be used as input data for production scheduling. For this, it will be necessary to integrate the web platform explained in this work with an equivalent web platform already defined with the same technology for production scheduling. In addition, artificial intelligence algorithms will be added to the resulting integrated platform to improve both PP and scheduling. On the other hand, future versions of the platform will allow working with other types of operations research models for PP such as heuristics and meta-heuristics. Furthermore, regarding the technical aspects of the platform, user satisfaction related to the technical and work design will be evaluated, and the data model for the PP models will be converted into an ontology. Unit tests have been performed, but integration tests must be carried out to ensure the correct integrated functionality of the platform. Finally, an option for researchers to limit the resolution time of the uploaded models will be included in future versions of the platform.

Acknowledgements This research has been financed by Grant NIOTOME (Ref. RTI2018-102020B-I00 funded by MCIN/AEI/https://doi.org/10.13039/501100011033 and ERDF, A way of making Europe). The author María Ángeles Rodríguez was supported by the Generalitat Valenciana (Conselleria de Educación, Investigación, Cultura y Deporte) under Grant Agreement ACIF/2019/021.

References

1. Hoekman, J., Frenken, K., Tijssen, R.J.W.: Research collaboration at a distance: changing spatial patterns of scientific collaboration within Europe. Res. Policy 39, 662–673 (2010). https://doi.org/10.1016/j.respol.2010.01.012
2. Chen, K., Zhang, Y., Fu, X.: International research collaboration: an emerging domain of innovation studies? Res. Policy 48, 149–168 (2019). https://doi.org/10.1016/j.respol.2018.08.005
3. Katz, J.S., Martin, B.R.: What is research collaboration? Res. Policy 26, 1–18 (1997). https://doi.org/10.1016/S0048-7333(96)00917-1
4. Gelders, L.F., Van Wassenhove, L.N.: Production planning: a review. Eur. J. Oper. Res. 7, 101–110 (1981). https://doi.org/10.1016/0377-2217(81)90271-X
5. Mula, J., Poler, R., García-Sabater, J.P., Lario, F.C.: Models for production planning under uncertainty: a review. Int. J. Product. Econ. 103, 271–285 (2006). https://doi.org/10.1016/j.ijpe.2005.09.001


6. Berre, A.-J., Elvesæter, B., Figay, N., Guglielmina, C., Johnsen, S.G., Karlsen, D., Knothe, T., Lippe, S.: The ATHENA interoperability framework. In: Gonçalves, R., Müller, J.P., Mertins, K., Zelm, M. (eds.) Enterprise Interoperability II, pp. 569–580. Springer, London (2007). https://doi.org/10.1007/978-1-84628-858-6_62
7. Esteso, A., Alemany, M.M.E., Ortiz, Á.: Impact of product perishability on agri-food supply chains design. Appl. Math. Model. 96, 20–38 (2021). https://doi.org/10.1016/j.apm.2021.02.027
8. Alemany, M.M.E., Esteso, A., Ortiz, Á., del Pino, M.: Centralized and distributed optimization models for the multi-farmer crop planning problem under uncertainty: application to a fresh tomato Argentinean supply chain case study. Comput. Ind. Eng. 153, 107048 (2021). https://doi.org/10.1016/j.cie.2020.107048
9. Esteso, A., Alemany, M.M.E., Ortiz, Á., Zaraté, P.: Optimization models to improve first quality agricultural production through a collaboration program in different scenarios. IFIP Adv. Inform. Commun. Technol. 12, 546–559 (2020). https://doi.org/10.1007/978-3-030-62412-5_45
10. Shaffi, A.S., Al-Obaidy, M.: Analysis and comparative study of traditional and web information systems development methodology (WISDM) towards web development applications. Int. J. Emerg. Technol. Adv. Eng. 3, 277–282 (2013)
11. Vidgen, R.: Constructing a web information system development methodology. Inform. Syst. J. 12, 247–261 (2002). https://doi.org/10.1046/j.1365-2575.2002.00129.x
12. Morville, P., Rosenfeld, L.: Information Architecture for the World Wide Web. O'Reilly Media, Massachusetts (2006)

Enterprise E-Profiles for Construction of a Collaborative Network in Cyberspace

Michiko Matsuda and Tatsushi Nishi

Abstract An enterprise e-profile, which is used when constructing a collaborative network of an enterprise in cyberspace based on the concept of a cyber-physical system (CPS)/digital twin, is proposed in this paper. By extending an enterprise model used in the virtual supply chain configuration method, the requirements, the structure, and the implementation method for the enterprise e-profile are obtained. An enterprise e-profile is enterprise model data that describes an enterprise's outline, characteristics, dynamic activity as behavior, etc. in a commonly understandable manner. An enterprise e-profile can be converted into a virtual enterprise (software agent) as a member of a network. When configuring a business-to-business collaborative network in cyberspace, by constructing a virtual network as a multi-agent system, it is possible to search for appropriate cooperation partners, to consider one's own appropriate actions, and so on. In other words, an enterprise e-profile will act as the name card of the company in the cyber business world. Four use cases using the enterprise e-profile in the future digital society are introduced to show its usefulness. Finally, the necessity for the international standardization of enterprise e-profiles is discussed.

Keywords Enterprise modeling · Model-based construction · Multi-agent system · E-business card

M. Matsuda (B) Kanagawa Institute of Technology, Atsugi-Shi, Kanagawa 243-0292, Japan e-mail: [email protected] T. Nishi Okayama University, Tsushima-Naka, Kita-Ku, Okayama-Shi 700-8530, Japan e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_7


1 Introduction

The establishment of information collaboration infrastructures that support the digital economy, such as IDS [1, 2] and GAIA-X [3, 4], has begun in various places. As a result, people and things can be connected, knowledge and information can be shared, and the transition to a digital society is progressing rapidly. In the digital society, various social systems are constructed based on the concepts of the cyber-physical system (CPS) and the digital twin [5, 6] and are modeled in cyberspace. In addition, each enterprise will form a collaborative network in cyberspace with appropriate partner enterprises and provide various services to society through cyberspace. On the other hand, when constructing an enterprise collaborative network in cyberspace, there is an urgent need to provide methods and tools for constructing a safe and secure ecosystem with appropriate partners. Traditionally, when constructing an enterprise collaborative network, the person in charge of the company searches for a partner company by conducting a preliminary survey of its website or pamphlet, exchanges business cards, and has a discussion with the person in charge of the partner company. Then, when an agreement is reached after confirming each other's conditions, a collaborative network is configured after the contract. In the digital society, these procedures need to be carried out in cyberspace. In this paper, as a method to support the above construction procedure of an enterprise collaborative network (model) in cyberspace, the enterprise e-profile, which can be used in cyberspace as a company business card or introductory pamphlet, is proposed.

2 Construction of a Virtual Supply Chain Based on CPS Concept¹

2.1 Model-Based Construction of a Virtual Supply Chain

A supply chain is one of the typical collaborative networks. Retailer, manufacturer, and suppliers are connected via a network. There is research in which virtual supply chains were configured as multi-agent and agent-based systems, e.g., [7–9]. These multi-agent configurations of a virtual supply chain fit the CPS concept. A virtual supply chain corresponding to a physical supply chain is constructed as a network of virtual enterprises such as retailer, manufacturer, and supplier, which correspond to the component enterprises in the physical supply chain. Each virtual enterprise could be implemented as a software agent, which is an

¹ Text in this section presents the authors' contribution, reprinted from: [10] © 2019, released under a CC BY-NC-ND license; https://doi.org/10.1016/j.procir.2019.03.230; with the permission of Elsevier; and [12] © 2021, released under a CC BY-NC-ND license; https://doi.org/10.1016/j.procir.2021.11.059; with the permission of Elsevier.


Fig. 1 Model-based construction of a virtual supply chain under the cyber-physical system concept and digital twin concept

executable information model for each enterprise. As a result, a virtual supply chain could be constructed as a multi-agent system. Through this virtual supply chain, autonomous cooperation could become possible. Based on the above discussion, the model-based construction method for a virtual supply chain has been proposed by authors [10–12] as shown in Fig. 1. In the proposed method, as a preparation, modeling for each physical enterprise is done. This enterprise model is described using a machine-readable data description language. These models are stored in an open repository and are shared. They are sometimes called the enterprise catalog. The modeling of the supply chain as a virtual supply chain then proceeds. Selected enterprise models are converted to enterprise agent programs which are executable enterprise models, and connected enterprise agents are then used to construct a virtual supply chain as a multi-agent system.

2.2 Preparation of Enterprise Models

Enterprise models are required to include descriptions which specify properties, behavior, and external interactions with other enterprises. An enterprise's behavior is described as a composition of states and state transitions, which are represented by parameters and calculation equations/formulas that may include variables. Models for retailers, manufacturers, and suppliers are prepared. Figure 2 shows a part of the manufacturer model implemented in XML. When describing an enterprise model, the state transition diagram, which shows the activity flow, is referred to. In the state transition diagram, states, state transitions, and input/output are represented, together with their relations [10, 11].


Fig. 2 Example of manufacturer enterprise model description, including enterprise data such as company name and company type, external interaction data, and behavior data
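To give a flavor of such a machine-readable description, the fragment below sketches a manufacturer model of the kind shown in Fig. 2 and reads its states and transitions in Python; the element and attribute names are assumptions made for this illustration and do not reproduce the actual schema used by the authors.

```python
import xml.etree.ElementTree as ET

# Illustrative fragment of a manufacturer enterprise model (invented schema)
MANUFACTURER_XML = """
<enterprise type="manufacturer" name="FactoryA">
  <interaction input="order" output="product"/>
  <behavior>
    <state name="waiting"/>
    <state name="producing"/>
    <transition from="waiting" to="producing" trigger="order_received"/>
    <transition from="producing" to="waiting" trigger="lot_completed"/>
  </behavior>
</enterprise>
"""

root = ET.fromstring(MANUFACTURER_XML)
states = [s.get("name") for s in root.iter("state")]
transitions = [(t.get("from"), t.get("to"), t.get("trigger"))
               for t in root.iter("transition")]
print(root.get("name"), states, transitions)
```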

2.3 Generation of a Supply Chain Model

An automatic construction system of a virtual supply chain can be implemented based on the proposed method, as shown in Fig. 3. The virtual supply chain is generated as a multi-agent model from selected enterprise models. Here, a supply chain consists of one retailer, one manufacturer, and several suppliers. In this virtual supply chain, the retailer, the manufacturer, and the suppliers are implemented as software agents. The generated multi-agent model can be executed on NetLogo, which is a multi-agent programmable modeling environment. In other words, using this system, NetLogo agent programs are automatically generated from the selected enterprise models. Using the generated supply chain model, various simulations were executed [12].

3 Enterprise E-Profiles

3.1 Extension of Enterprise Model to Enterprise E-Profile

In general, business-to-business transactions involve searching for adequate business partners, negotiations, and selection of the business partners to achieve overall optimization. This selection work requires information about the partner company and behavior data of the partner company at the time of collaboration. In the


Fig. 3 Implementation of the automatic generation process for a virtual supply chain, which is a multi-agent system, from selected enterprise models and related data

digital society, this business transaction is replaced by the configuration of the enterprise collaborative network in cyberspace without human intervention. It also requires the abovementioned digitized information to be provided at the time of digital configuration of this collaborative network. Providing the above information is possible by expanding the scope of use of the enterprise model not only to virtual supply chains but also to the configuration of corporate collaboration networks in cyberspace. In the digital society, the enterprise model becomes a new self-introduction tool that acts like an e-business card and/or e-signboard of the enterprise. These are called "enterprise e-profiles". This extension of the enterprise model concept to an enterprise e-profile is shown in Fig. 4. An enterprise e-profile is model data that describes a company's characteristics, the products it handles, and its activities (including the behavior for producing its output results) in a standardized manner. When building a virtual enterprise cooperation network such as a smart supply chain, it can be converted to a virtual enterprise (software agent) that is a member of the network.

3.2 Requirements for an Enterprise E-Profile

The functional requirements for an enterprise e-profile are as follows:

• Representing (profiling) the overall picture of an enterprise (enterprise system) as model data.


Fig. 4 Extension of enterprise model concept to enterprise e-profile for digital construction of a virtual supply chain

• Model data that describes the outline of the enterprise, the services provided (e.g., the products manufactured), the characteristics of the enterprise, and the content of activities (including the behavior of the enterprise for providing the products and services) in a commonly available manner.
• Being able to cooperate in cyberspace: by using the enterprise e-profile, it is possible to configure a collaborative network between enterprise systems in cyberspace and cooperate as a virtual enterprise system model with other members of the network. In other words, an enterprise e-profile can be converted to executable model data such as an agent program.
• Ensured interoperability so that enterprise system cooperation can be established in cyberspace (without human operations).

3.3 Structure of an Enterprise E-Profile

The basic representation elements of the enterprise e-profile are as follows. Figure 5 shows the basic structure of the enterprise e-profile.

• Static property description: In addition to general enterprise data, data on the products handled by the enterprise, the services provided, the resources, and the method of providing these data are also represented.
• Dynamic behavior description: The enterprise's activities are described as behavior represented by the possible states and state transitions of the enterprise. The behavior differs depending on the strategy of each enterprise. States and their transitions can be represented by parameters, and parameters can be represented by variables, formulas, and/or conditions. Formulas and conditions can include


Fig. 5 Basic structure of an enterprise e-profile, which consists of a property description, an activity description, and external interactions

variables. When an enterprise cooperative network is being constructed and/or when activities are executed on the network, the values of the variables are provided.
• External interaction description: Collaboration with the other enterprises is represented. It consists of the request (input), the service provision (output), and the state transition trigger (the transition to the enterprise activity state that matches the input).
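The sketch below illustrates, with deliberately simplified and assumed data structures, how an e-profile covering these three elements could be turned into an executable virtual enterprise (software agent) whose external interactions trigger state transitions; it is not the authors' implementation.

```python
class EnterpriseAgent:
    """Minimal virtual-enterprise agent generated from an e-profile-like
    dictionary (illustrative structure, not a standardized e-profile format)."""

    def __init__(self, profile):
        self.name = profile["property"]["name"]
        self.state = profile["behavior"]["initial_state"]
        self.transitions = profile["behavior"]["transitions"]  # (state, trigger) -> next state
        self.outputs = profile["interaction"]["outputs"]        # trigger -> provided service

    def handle(self, request):
        """External interaction: a request triggers a state transition and,
        when defined, the provision of a service (the output)."""
        next_state = self.transitions.get((self.state, request))
        if next_state is None:
            return None                      # the enterprise cannot serve this request
        self.state = next_state
        return self.outputs.get(request)

profile = {
    "property": {"name": "FactoryA", "type": "manufacturer"},
    "behavior": {"initial_state": "waiting",
                 "transitions": {("waiting", "order"): "producing",
                                 ("producing", "lot_completed"): "waiting"}},
    "interaction": {"outputs": {"order": "product"}},
}

agent = EnterpriseAgent(profile)
print(agent.handle("order"))   # -> 'product'; the agent is now in state 'producing'
```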

4 Use of Enterprise E-Profiles for Construction of Collaborative Network

4.1 Use Case 1: Construction of an Appropriate Virtual Network

This chapter introduces examples of using the enterprise e-profile. This section presents a use case for optimizing an existing network and for planning when configuring a network for each purpose within a limited enterprise group. It can be said to be an extension of traditional planning systems. Using the prepared enterprise e-profiles, a virtual enterprise collaborative network (multi-agent system) is configured in cyberspace and used as a basis for simulation. Optimization of the activities of each enterprise, optimization of the entire network, searching for an optimal network configuration, determination of collaboration methods, etc. are performed. The obtained collaborative network can also function as a digital twin of the actual network. Figure 6 shows a use case in the supply chain.


Fig. 6 Examination of appropriate cooperation network configuration using enterprise e-profiles on the supply chain as an example

4.2 Use Case 2: Decision-Making for the Enterprise's Beneficial Action

In order to determine the rational behavior policy of one's own company on the enterprise collaborative network, an enterprise e-profile is created for each behavior pattern of one's own enterprise. Then, the most rational behavior pattern is obtained by altering the enterprise model (the enterprise agent generated from each e-profile) on the enterprise collaborative network model (virtual collaborative network) and executing the simulation. Figure 7 shows an example of a manufacturer considering rational behavior using the virtual supply chain.

4.3 Use Case 3: Recruitment of Collaboration Partners

Here, it is assumed that each enterprise creates its own enterprise e-profile and publishes it on the Internet. When an enterprise recruits new business partners to start a new service, the e-profiles describing the required enterprises are published openly. The e-profiles of the applicant enterprises are evaluated by matching them against the requirements and are adjusted on the virtual collaborative network system. Finally, adequate collaboration partners are obtained. This use case is shown in Fig. 8.


Fig. 7 Determination of the beneficial and appropriate action on the cooperation network using enterprise e-profiles on the supply chain as an example

Fig. 8 Configuration of a collaborative network for new businesses using the enterprise e-profile as a digital recruitment guideline

4.4 Use Case 4: Search for Adequate Collaboration Partners

Using the enterprise e-profiles published on the Internet, a virtual collaborative network is constructed and evaluated by simulation, and participation in the new network is invited when an e-profile is found to be suitable. This may lead to unexpected new business opportunities for the enterprise providing the e-profile. This use case is shown in Fig. 9.


Fig. 9 Search for enterprises who have an e-profile that matches the requirements profile as a member of the collaborative network for new businesses

5 Proposal Toward Construction of an Enterprise Collaborative Network in the Practical Future

By preparing their own enterprise e-profiles and publishing them on the Internet, it becomes possible for enterprises to flexibly and collaboratively configure a business-to-business cooperation network in cyberspace according to their purpose. To achieve this, it is important to promote the international standardization of enterprise e-profiles and share them with society as a digital construction method for interoperability-enhancing collaborative networks. Specifically, in advancing the standardization of enterprise e-profiles, ISO 16100 [13] and ISO 16400 [14] are useful as references. As shown in Fig. 10, when creating an enterprise e-profile, a prepared template for each industry type is used. A template is a schema description of an enterprise profile. Each enterprise creates its own e-profile by filling each item in the template with specific values, calculation formulas, conditional expressions, and so on. In addition, each enterprise can prepare a template as needed. The targets of standardization are the formal structure of the template used to create the enterprise e-profile and the method for creating the template prepared for each industry type. The method includes the template description rules, but it must be a method that considers semantic interoperability [15].
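Purely as an illustration of the template idea (a schema that each enterprise fills in with its own values, formulas, and conditions), the sketch below uses a JSON-Schema-style description validated with the third-party jsonschema package; the template content and field names are invented for this example and are not part of any existing or proposed standard.

```python
from jsonschema import validate  # third-party package 'jsonschema'

# Illustrative template for the "manufacturer" industry type: a schema listing
# the items that every e-profile of this type must fill in.
MANUFACTURER_TEMPLATE = {
    "type": "object",
    "required": ["name", "products", "lead_time_formula"],
    "properties": {
        "name": {"type": "string"},
        "products": {"type": "array", "items": {"type": "string"}},
        "lead_time_formula": {"type": "string"},  # e.g. an expression containing variables
    },
}

# An enterprise fills the template with its own specific values and formulas
profile = {"name": "FactoryA",
           "products": ["gearbox"],
           "lead_time_formula": "setup + units * cycle_time"}

validate(instance=profile, schema=MANUFACTURER_TEMPLATE)  # raises if the profile does not follow the template
```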


Fig. 10 Standardization of formal structure of template for enterprise e-profile and of rules for preparation of a template by enterprise type

6 Conclusions

An enterprise e-profile has been proposed, and its concept, implementation method, and usage method have been explained. By using the enterprise e-profile, it becomes possible to build an appropriate collaborative network in cyberspace. For future work, it is necessary to proceed with further technological development and demonstration experiments toward the international standardization of enterprise e-profiles: for example, how to define product/service/resource data for each enterprise, how to quantitatively express the activity contents of an enterprise, how to write unique rules for each enterprise, and how to introduce a common dictionary and ontology to ensure semantic interoperability.

Acknowledgements This work is supported by JSPS KAKENHI KIBAN (A) 18H03826. The authors thank Dr. Udo Graefe, retired from the National Research Council of Canada, for his helpful assistance in writing this paper in English. The authors also thank the Robot Revolution and Industrial IoT Initiative (RRI, Japan) for its support toward the international standardization of the enterprise e-profile.

References

1. Otto, B., Lohmann, S.: IDSA Reference Architecture Model v3. IDSA, Berlin (2019)
2. Eitel, A., Jung, C., Brandstädter, R., Hosseinzadeh, A., Kühnle, C., Birnstill, P., Brost, G.S., Gall, M., Korth, B.: Usage Control in the International Data Spaces. IDSA, Berlin (2019)
3. Eggers, G., Fondermann, B., Maier, B., Ottradovetz, K., Pfrommer, J., Reinhardt, R., Rollin, H., Schmieg, A., Steinbuß, S., Trinius, P., Weiss, A., Weiss, C., Wilfling, S.: GAIA-X: Technical Architecture. BMWi, Berlin (2020)


4. Gaia-X: GAIA-X Architecture Document. AISBL, Brussels (2021)
5. Sanislav, T., Miclea, L.: Cyber-physical systems—concept, challenges and research areas. J. Control Eng. Appl. Inform. 14(2), 28–33 (2012)
6. Gunes, V., Peter, S., Givargis, T., Vahid, F.: A survey on concepts, applications, and challenges in cyber-physical systems. Trans. Internet Inform. Syst. 8(12), 4242–4268 (2014). https://doi.org/10.3837/tiis.2014.12.001
7. Long, Q.: An agent-based distributed computational experiment framework for virtual supply chain network development. Int. J. Expert Syst. Appl. 41(9), 4094–4112 (2014). https://doi.org/10.1016/j.eswa.2014.01.001
8. Wang, Y., Wang, D.: Multi-agent based intelligent supply chain management. In: Xu, J., Nickel, S., Machado, V.C., Hajiyev, A. (eds.) Proceedings of the Ninth International Conference on Management Science and Engineering Management, pp. 305–312. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-47241-5_26
9. Tanaka, K., Gu, S.M., Zhang, J.: Designing multi-agent simulation with big time series data for a global supply chain system. Int. J. Autom. Technol. 10(4), 632–638 (2016). https://doi.org/10.20965/ijat.2016.p0632
10. Matsuda, M., Nishi, T., Hasegawa, M., Matsumoto, S.: Virtualization of a supply chain from the manufacturing enterprise view using e-catalogues. Procedia CIRP 81, 932–937 (2019). https://doi.org/10.1016/j.procir.2019.03.230
11. Matsuda, M., Nishi, T., Hasegawa, M., Terunuma, T.: Construction of a virtual supply chain using enterprise e-catalogues. Procedia CIRP 93, 688–693 (2020). https://doi.org/10.1016/j.procir.2020.04.093
12. Matsuda, M., Nishi, T., Kamiebisu, R., Hasegawa, M., Alizadeh, R., Liu, Z.: Use of virtual supply chain constructed by cyber-physical systems concept. Procedia CIRP 104, 351–356 (2021). https://doi.org/10.1016/j.procir.2021.11.059
13. ISO 16100 Series: Manufacturing Software Capability Profiling for Interoperability. https://www.iso.org/standard/53378.html. Last accessed 22 Sept 2021
14. ISO 16400-1: Equipment behaviour catalogue for virtual production system—Part 1: Overview. https://www.iso.org/standard/73384.html. Last accessed 22 Sept 2021
15. IEC White Paper: Semantic interoperability: challenges in the digital transformation age (2019)

A Multi-partnership Enterprise Social Network-Based Model to Foster Interorganizational Knowledge and Innovation

Ramona-Diana Leon, Raúl Rodríguez-Rodríguez, and Juan-Jose Alfaro-Saiz

Abstract This paper presents a multi-partnership enterprise social network-based model, which overcomes the most frequent drawbacks of such social networks, thus providing a higher added value when designing and implementing them. Further, it involves all the main stakeholders of the value chain and brings them together into the technological space provided, not only at the general value chain level but also at the individual group level. Finally, it also presents four analyses that can be carried out to enhance collaboration, fostering knowledge and value creation, which is of great value in important collaborative processes such as product/service codesign and co-engineering activities.

Keywords Multi-partnership · Enterprise social network · Interorganizational knowledge

R.-D. Leon (B) · R. Rodríguez-Rodríguez · J.-J. Alfaro-Saiz Research Centre on Production Management and Engineering (CIGIP), Universitat Politècnica de València, Camino de Vera S/N, 46022 València, Spain e-mail: [email protected] R. Rodríguez-Rodríguez e-mail: [email protected] J.-J. Alfaro-Saiz e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_8

1 Introduction

For the last 30 years, the concept of interoperability evolved from an IT-based to a business approach. Thus, [1] describes it as "the ability of two or more systems or components to exchange information and use the information that has been exchanged" while [2] presents it as "the ability of different organizations to interact toward mutually beneficial and agreed common goals, involving sharing information and knowledge between organizations, through the business processes they support


by exchanging data between their respective ICT systems". Despite the valuable insights that both approaches provide, a change of perspective is required since, in the current sharing/collaborative economy, the focus is on exchanging knowledge instead of data, and the parties involved in the exchange process include not only organizations or ICT systems but also individuals. Furthermore, the switch from the Social Internet of Things (SIoT), which aims to connect people to people and people to objects [3], to the Internet of Everything, which "is bringing together people, process, data and things to make networked connections more relevant and valuable than ever before, turning information into actions that can create new capabilities, richer experiences, and unprecedented socio-economic and environmental opportunities for businesses and individuals" [4], emphasizes the need for approaching interoperability from a broader perspective and also for developing new interoperability frameworks and models that take into consideration both organizations and individuals. Although previous studies bring forward the importance of interoperability in creating collaborative work, ensuring companies' productivity, fostering innovation, and increasing "healthy" competition [5–7], they tend to focus on one-to-one relationships, neglecting the fact that organizations' success on the market depends on the relationships established with both the internal and external stakeholders. Some attempts have been made at analyzing interoperability from a multi-partnership perspective, but they focus on dynamic supply chains and enterprise networks [8–10] and not on the entire value network, which also includes the customers. Taking this into account, together with the challenges launched by [11], who state that the effects of business interoperability across multiple partners in a network should be further explored, and by [12], who claim that the development of enterprise interoperability in the context of multi-relationship collaboration requires better guidance, this paper aims to provide a conceptual framework for how enterprise social networks and their associated analytical tools can enhance interoperability in a multi-partnership context.

2 State of the Art

2.1 Enterprise Interoperability

Various enterprise interoperability frameworks have been defined over time in an attempt to solve specific problems. The Levels of Information Systems Interoperability model, launched in 1993, is often presented as the first significant initiative conducted for evaluating interoperability, and it serves as a reference model for information systems interoperability. Nevertheless, it is highly focused on technical interoperability, and it neglects the organizational issues related to the modeling


and re-engineering of business processes [7]. These last aspects started to be considered in the IDEAS interoperability framework (IDEAS 2003), the European Interoperability Framework (iDABC 2004), the ATHENA interoperability framework (ATHENA 2004), and the Framework for Enterprise Interoperability [13]. Among these, the Framework for Enterprise Interoperability has become an international standard (ISO 11354-1:2011), and it is frequently used as a reference model since it brings forward the main barriers to interoperability and the approaches that may be adopted in order to overcome them. As can be noticed from Fig. 1, the Framework for Enterprise Interoperability is developed around three dimensions, namely interoperability concerns (the levels on which interoperation occurs), interoperability barriers (the incompatibilities that may appear), and interoperability approaches (the potential solutions to the identified barriers), and 12 interoperability problem-spaces that are defined by the intersection of interoperability barriers and concerns. The Framework for Enterprise Interoperability focuses on dyadic relationships and sustains that adopting an integrated, unified, or federal approach could solve the

Fig. 1 Components of the framework for enterprise interoperability


interoperability problems. The integrated approach requires the development of a common format that serves almost as a standard for all parties involved, while the unified approach involves the development of a meta-model capable of providing a means for semantic equivalence [14]. Last but not least, the federal approach brings forward organizations' adaptability and flexibility; it does not provide a common format but a shared ontology [12]. The last approach could be considered the most appropriate one in the context of a multi-partnership involving various categories of stakeholders, since it will be almost impossible for different suppliers, manufacturing companies, and customers to agree on elaborating common models and building systems. However, due to the fact that they share a certain alignment in terms of vision, strategy, and culture, they may be open to the idea of finding common ground together and sharing an ontology that will formally represent the specific knowledge of the partnership. While focusing on dyadic relationships, [12] argue that the socio-cultural alignment, which brings forward the shared values, goals, objectives, and vision of the partners, is not enough for ensuring interoperability; special attention should also be given to the functional, structural, infological, and contextual alignments. The functional alignment concentrates on the relationship between required and available information capabilities and puts an emphasis on process effectiveness; in other words, it sheds light on how the technological resources support the core business processes. The structural alignment focuses on the link between information systems and decisional rights and responsibilities, and it reflects the relationship between the established and the accepted structure [12]; it brings forward the importance of knowledge that is treated as a source of authority. The infological alignment provides access to stakeholders' knowledge while absorbing cognitive distance. Last but not least, the contextual alignment brings forward the company's behavior and its relationships with the external environment; it is assumed to have an indirect impact on interoperability since the company cannot change what is happening in the external environment. Although the model proposed by [12] provides valuable insights, it neglects the multiple interconnections established by the companies with their stakeholders, which makes the model harder to manage, generating several technological incompatibilities. Besides, it assumes that the companies are always adapting to the external environment even though, in the current sharing economy, they tend to be the ones generating changes through disruptive innovations. Last but not least, it does not take into account the increased development and use of collaborative networks and enterprise social networks, which not only facilitate the functional, structural, socio-cultural, infological, and contextual alignments in a multi-partnership context, but also foster the enlargement of the interoperability perspective from an IT-based one to a more diverse one, including IT, organizations, and customers/people.


2.2 Enterprise Social Networks

Enterprise social networks support the development of SIoT and IoE by providing a high degree of visibility and association of people and content, on the one hand, and persistence and editability of content, on the other hand [15, 16]. Thus, they are web-based platforms that connect individuals, groups, and organizations, and transform them from passive information consumers to active content creators [17]. According to [18] and [19], they are based on trust and integrity, involve information and resource sharing, goal congruence and decision synchronization, and support knowledge sharing, integration, and creation. Furthermore, they combine various theories (such as graph theory, algorithm theory, mathematical optimization, and game theory) in order to provide multidimensional static and dynamic analyses that so far prove that enterprise social networks are a powerful tool for capturing expertise, finding experts, recruiting, idea generation [20], customer relationship management [18], business process improvement [21], and so on (Fig. 2). Three types of analyses can be performed using the data provided by an enterprise social network, namely structural analysis, social content analysis, and semantic analysis; each of these can either have a static approach, reflecting the characteristics of users and their relationships at a certain moment, or a dynamic approach, emphasizing the network's evolution and users' behavior during a period of time. Structural analysis involves the use of graph algorithms, and it helps to understand the network's attributes, like socio-metric features (density, centrality, etc.), clusters, and random walks [17], and users' influence (centrality, betweenness, closeness, etc.). Nevertheless, the focus is on the established connections and not on their content. The qualitative approach is the cornerstone of the social content analysis that is based on text mining and

Fig. 2 Enterprise social networks: analyzed topics


sentiment analysis. The latter sheds light on the emotional tone of a post and users' overall sentiments using either a lexicon or a machine learning algorithm, like Naïve Bayes or support vector machines. Since social content analysis concentrates on what is shared within the network and not on how information and knowledge flow among users, it is frequently used for categorization, topic tracking, clustering, and concept linkage [17]. This pitfall can be overcome by using semantic analysis, which combines social network principles with ontology engineering in order to provide a more integrative and complex approach. Thus, based on a graph model (Resource Description Framework), a schema definition (established using Resource Description Framework Schema and Web Ontology Language), and a query (SPARQL), it supports decision-making by emphasizing not only users' influence on certain topics or how knowledge flows among them but also users' seeking behavior. The aforementioned analyses are performed on enterprise social networks that include either internal or external stakeholders. Hence, some scholars focus on employees and bring forward the benefits of using enterprise social networks for fostering knowledge sharing, encouraging idea generation and innovation, recruitment, performance evaluation, and enhancing training and learning [18, 20, 21]. Other researchers concentrate on external stakeholders and prove that enterprise social networks are a valuable source of information when it comes to customers' expectations and needs [22] and that the knowledge shared within a multi-level supply chain network has a positive influence on business process improvement [21]. Nevertheless, none of the studies developed so far takes into account the characteristics and benefits of a multi-partnership network, capable of bringing together a company's internal and external stakeholders, although in recent years customers' interest in being involved in decision-making has increased and co-creation has become a source of competitive advantage [23]. Despite the fact that enterprise social networks label stakeholders as a source of knowledge, foster the development of a close relationship with internal and external stakeholders, ensure goal congruence and decision synchronization, and provide a clear separation of communication between users, elements that are described as success factors for the development of enterprise interoperability [12], the research in this field is still in an embryonic stage of development.
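As a small, hedged illustration of the machine-learning route to sentiment analysis mentioned above, the sketch below trains a Naïve Bayes classifier on a handful of invented network posts; a real deployment would use a properly labeled corpus (or a sentiment lexicon instead of a learned model).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set of stakeholder posts (invented examples)
posts = ["great collaboration with this supplier",
         "delivery was late and communication was poor",
         "very satisfied with the co-design session",
         "the shared files were incomplete and confusing"]
labels = ["positive", "negative", "positive", "negative"]

sentiment = make_pipeline(CountVectorizer(), MultinomialNB())
sentiment.fit(posts, labels)
print(sentiment.predict(["excellent joint engineering work"]))
```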

3 Ensuring Enterprise Interoperability Through Multi-relationship Enterprise Social Networks

3.1 Proposed Model

Currently, companies use different enterprise social networks to manage their relationships with their stakeholders and are exposed to paralysis by analysis. On the one hand, they have to keep track of customers' needs and expectations, engaging in social listening; thus, they focus on using social media analytics in order to determine


the influencers, customers’ satisfaction, and preferences. On the other hand, they have to develop sustainable relationships with their suppliers since the competition is no longer between companies but between supply chain networks; hence, they develop online supply chain networks and use social network analysis in order to ensure socio-cultural alignment, to understand how knowledge flows among suppliers, and to determine potential partners for innovation projects. While this way of operating provides the benefit of using two different communication channels, it also has several pitfalls. First of all, it overloads the company with diverse and complex information, and since it acts as a mediator between suppliers and customers, it has to select, process, and decide which information is going to be transmitted to the other stakeholders and how, exposing itself to decisional and information processing biases (such as, confirmation bias, anchoring effect, IKEA effect, contrast bias, etc.). Secondly, it encourages the use of the mushroom management style, which may generate various issues at the organizational and business level of enterprise interoperability, affecting company’s socio-cultural, functional, and structural alignments. Last but not least, it leaves the customers outside of the decision-making process in a moment in which they represent the cornerstone of product innovation and cocreation is a key to gaining competitive advantage. These pitfalls can be overcome by developing a multi-partnership enterprise social network that brings together, under the same umbrella, all the stakeholders involved in value creation (Fig. 3). Within this framework, specific networks can be created for each category of stakeholders, providing the benefit of a private communication channel, and also a general network that reunites all the stakeholders involved, fostering knowledge creation and sharing.

Fig. 3 Proposed multi-partnership enterprise social network


The proposed social network acts as a federation, and it allows the analysis to be performed at the individual, group/organizational, and general levels. Furthermore, this approach: (i) brings forward that each individual and organizational stakeholder is an important source of knowledge; (ii) fosters the development of a close relationship among the parties; (iii) enables stakeholders' involvement and engagement in the value creation process; (iv) provides access to market validation; and (v) facilitates data management and analysis. Each and every one of these represents a success factor for the development of enterprise interoperability.

3.2 Analysis Procedure and Indicators

Once the multi-partnership network becomes active (the stakeholders use it to exchange information on a regular basis), data can be collected and analyzed. Three steps must be performed, namely data collection and management, data analysis, and results visualization.
Data Collection and Management. It involves downloading the dataset using an application programming interface (API) and preparing it for analysis. The focus is on identifying and correcting/removing user-generated abbreviations, misspellings, white spaces, and stop words.
Data Analysis. It involves analyzing the individual, organizational, and general networks in order to capture stakeholders' expertise and how knowledge flows within the network. The focus is on performing the following analyses (a minimal sketch of some of them is given after this list):

• Centrality analysis: Degree centrality emphasizes stakeholders' importance based on the number of relationships established with the other members, while closeness centrality highlights stakeholders' capacity to reach others. On the other hand, betweenness centrality sheds light on stakeholders' capacity of intermediation, and it can be complemented by the Honest Brokerage Index (HBI), which identifies the stakeholders who provide unique connections, controlling the knowledge flows within and between the network's levels.
• Cohesion analysis: The E-I Index measures group embedding by comparing the number of relationships established within groups with those developed between groups. Its value ranges from −1 to +1, where −1 indicates the stakeholders' tendency to develop relationships within the same group, and +1 denotes the stakeholders' tendency to establish relationships with members from a different group.
• Cluster and homophily analysis: Cluster analysis allows the groups developed within the network to be identified and brings forward the subsets of stakeholders among whom several strong, direct, intense, and frequent ties are established. This can be complemented by the homophily analysis, which sheds light on the


Fig. 4 Data obtained through semantic analysis

similarities existing among stakeholders. It can be useful in order to ensure socio-cultural alignment, making sure that the stakeholders share the same mission, vision, values, and culture.
• Semantic analysis: It facilitates the identification of stakeholders' expertise and topics of interest, showing how specific knowledge flows within the network and how knowledge is created. It adds emotions and content to the abstract data obtained previously (Fig. 4), and it fosters infological alignment.

Results Visualization. It synthesizes the results using tables and graphical representations. They can be further used for decision-making or for analyses regarding stakeholders' performance. The latter could ensure structural and functional alignments, and it could also fill the research gap identified in the specialized literature.
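The sketch below illustrates, on a toy interaction network built with the networkx package, how the centrality measures and the E-I Index mentioned above could be computed; the graph, group labels, and edge list are invented for this example and do not correspond to any real dataset.

```python
import networkx as nx

# Illustrative interaction network: nodes are stakeholders tagged with the
# group they belong to; an edge means two stakeholders exchanged messages.
G = nx.Graph()
G.add_nodes_from([("s1", {"group": "supplier"}), ("s2", {"group": "supplier"}),
                  ("m1", {"group": "manufacturer"}),
                  ("c1", {"group": "customer"}), ("c2", {"group": "customer"})])
G.add_edges_from([("s1", "s2"), ("s1", "m1"), ("m1", "c1"), ("m1", "c2"), ("c1", "c2")])

# Centrality analysis: importance, reachability, and intermediation
degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Cohesion analysis: E-I Index = (external ties - internal ties) / all ties
external = sum(1 for u, v in G.edges if G.nodes[u]["group"] != G.nodes[v]["group"])
internal = G.number_of_edges() - external
ei_index = (external - internal) / G.number_of_edges()

print(betweenness, ei_index)
```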

4 Conclusions and Further Research Directions

This paper has presented a conceptual framework for enterprise social networks and their associated analytical tools, which can enhance interoperability in a multi-partnership context. This framework tackles some of the most important drawbacks found when designing and implementing enterprise social networks at both the intra- and the interorganizational levels: (i) information processing biases; (ii) the mushroom management style; and (iii) customers excluded from the decision-making processes. More specifically, the multi-partnership enterprise social network framework presented brings all the stakeholders that add value to the product/service under the same umbrella, allowing them to communicate and interchange knowledge and innovation practices through their participation not only in a general network but also in specific ones. Once the multi-partnership network becomes active, data starts to be produced


and can be collected and analyzed from different perspectives (centrality analysis, cohesion analysis, cluster and homophily analysis, and semantic analysis). As a conclusion, it is possible to affirm that this multi-partnership enterprise social network framework is suitable to enhance knowledge and information exchange and innovation creation, which can be of great importance when developing high added-value interorganizational operations such as codesign or co-engineering activities. Further research directions include the application of the presented multi-partnership enterprise social network-based model to real-world enterprise networks. For instance, it would be of interest to apply it to the earlier product lifecycle phases of collaborative product/service design and engineering.

Acknowledgements The research reported in this paper is supported by the Spanish Agencia Estatal de Investigación for the project "Cadenas de suministro innovadoras y eficientes para productos-servicios altamente personalizados" (INNOPROS), Ref.: PID2019-109894GB-I00.


Developing Performance Indicators to Measure DIH Collaboration: Applying ECOGRAI Method on the D-BEST Reference Model

Hezam Haidar, Claudio Sassanelli, Cristobal Costa-Soria, Angel Ortiz Bas, and Guy Doumeingts

Abstract Digital Innovation Hubs (DIHs) are playing a strategic role in today's digital transformation, supporting European enterprises, in particular small and medium enterprises (SMEs), in adequately adopting digital technologies in their business. Several services can be provided to foster and speed up this path. These services can be mapped to configure DIH portfolios and then allocated in typical customer journeys through the D-BEST reference model. Often, DIHs need to collaborate to support the digital transition. This paper provides a detailed elaboration on how to use the ECOGRAI method to define performance indicators (PIs) related to cross-collaborations among DIHs. The method was applied to two cross-collaborating DIHs, members of the DIH4CPS project: the Data Cycle Hub (DCH-ITI) and HUB4.0MANUVAL (UPV).

Keywords Digital innovation hub · ECOGRAI · D-BEST · Performance indicators elaborations · Collaboration · Service portfolio

H. Haidar (B) INTEROP-VLab, C/O Bureau Nouvelle Aquitaine, 21 Rue Montoyer, 1000 Brussels, Belgium e-mail: [email protected] C. Sassanelli Department of Mechanics, Mathematics and Management, Politecnico di Bari, Via Orabona 4, 70125 Bari, Italy e-mail: [email protected] C. Costa-Soria ITI—Instituto Tecnológico de Informática, Camino de Vera S/N, 46022 Valencia, Spain e-mail: [email protected] A. Ortiz Bas Research Centre On Production Management and Engineering (CIGIP), Universitat Politècnica de València, Camino de Vera S/N, 46022 Valencia, Spain e-mail: [email protected] G. Doumeingts IMS Laboratory, University of Bordeaux, 33405 Talence, France e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_9


1 Introduction

Digital Innovation Hubs (DIHs) are playing a strategic role in today's digital transformation, supporting European companies, in particular small and medium enterprises (SMEs), in adequately adopting digital technologies in their business [1–3]. Several services can be provided to foster and speed up this path. Recently, this set of services has been mapped in the Data-based Business–Ecosystem–Skills–Technology (D-BEST) reference model [4]. The model is useful to configure DIH service portfolios and to draw the related customer journeys (CJs) that service providers and technology users supported by DIHs need to go through [5]. However, not only do enterprises need to understand their digital level in this complex transition [6], but DIHs also need support to measure their action in supporting companies. Indeed, DIHs often do not have internally all the competences and services needed by enterprises willing to be digitized [7, 8]. For this reason, DIHs need to collaborate among themselves to better support companies in solving their issues and in overcoming blocking points that could hinder their digitalization path. Being a new type of collaboration, these relationships among DIHs need to be measured to assess their actual performance. However, the measurement of the performance of cross-collaborations among DIHs has not yet been explored, and a tailored method needs to be developed. Among the several approaches already proposed in the literature, the ECOGRAI method [9] is a well-known, consolidated approach to support the elaboration of performance indicators (PIs) in the business (and in particular the manufacturing) context. This paper provides a detailed elaboration on how to use the ECOGRAI method to define cross-collaboration PIs between DIHs. The method was applied to two cross-collaborating DIHs, members of the DIH4CPS project: the Data Cycle Hub (DCH-ITI) and HUB4.0MANUVAL (UPV). For this reason, the paper does not cover all the services belonging to the D-BEST model and focuses only on the specific services provided in the two cross-collaborations analyzed in the research conducted.

The paper is structured as follows. Section 2 presents the adopted research method and the DIHs involved. Section 3 shows the results obtained with the application of the ECOGRAI method. Section 4 discusses the results, and Sect. 5 concludes the paper.

2 Method

2.1 The D-BEST Reference Model

The D-BEST is a reference model useful to configure DIH service portfolios and model collaborative networks in the I4.0 era. It can also serve to define the service pipeline of DIHs and to detect possible collaborations among DIHs and their stakeholders, with the aim of being combined in a pan-European DIH. The model is the final evolution of the original Ecosystem–Technology–Business (ETB) model [10], which was further improved in the MIDIH Project (2020) under the name Ecosystem–Technology–Business–Skills–Data (ETBSD).


The D-BEST reference model is grounded on five main macro-classes of services, constituting the context in which DIHs can act, delivering services to their customers. Among them, Skills and Data have been added recently, since they are considered key facets of the new digitized I4.0 context. They support one of the main roles to be played by DIHs (i.e., raising the awareness of manufacturers' decision-makers concerning the opportunities that digitization can bring to their company). In detail, the five macro-classes are defined as follows. The Ecosystem macro-class is aimed at creating, nurturing, expanding, and connecting the local SME constituency, involving different stakeholders in the SME digital transformation process. The Technology macro-class follows the whole lifecycle of digital technologies with the aim of providing hardware and software services, assets, and solutions to technology providers and technology users. The Business macro-class identifies, models, and sustains viable business models, including fund-raising services. The Skills macro-class both assesses the process/organization and skills maturity of the digitizing companies and supports skill empowerment (through educational and training programs, and also by fostering brokerage aimed at knowledge transfer). The Data macro-class is strategic to fully exploit the potential of digital technologies, following the different phases of the data lifecycle. Grounded on the D-BEST reference model, the D-BEST-based DIH customer journey (CJ) analysis method has been introduced with the aim of identifying typical digital transformation paths of the two main categories of DIH customers: technology users and technology providers [5].
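As a rough illustration of how a D-BEST-based service portfolio could be handled in code (an assumption of this sketch, not something prescribed by the model), a portfolio can be represented as a mapping from macro-class to offered services, and two portfolios can be compared to spot complementary services that suggest a cross-collaboration opportunity. The concrete service names below are invented for the example.

```python
# Minimal sketch: D-BEST macro-classes and two hypothetical DIH portfolios.
D_BEST_MACRO_CLASSES = ["Ecosystem", "Technology", "Business", "Skills", "Data"]

dih_a = {  # e.g., a hub focused on data/AI services (illustrative content)
    "Technology": {"Technology infrastructure provision"},
    "Skills": {"Skills training"},
    "Data": {"Data analytics experimentation"},
}
dih_b = {  # e.g., a hub focused on manufacturing technologies (illustrative content)
    "Technology": {"Technology infrastructure provision", "Robotics test-before-invest"},
    "Skills": {"Skills training", "Skills maturity assessment"},
}

def complementary_services(a, b):
    """Services offered by b but not by a, per macro-class: candidate
    cross-collaboration opportunities."""
    return {
        mc: b.get(mc, set()) - a.get(mc, set())
        for mc in D_BEST_MACRO_CLASSES
        if b.get(mc, set()) - a.get(mc, set())
    }

print(complementary_services(dih_a, dih_b))
```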

2.2 The ECOGRAI Method

In enterprise management systems, performance evaluation can be difficult to address due to the complex environment of such systems. The research literature has produced several methods and techniques for the definition and implementation of performance measurement systems. Reference [11] compared different approaches based on various elements such as their nature, dimensions, and criteria. Among these approaches is ECOGRAI, a well-structured method with explicit guidelines to help the managerial staff in the development of effective PIs in the design and implementation phases. ECOGRAI is a method for designing and implementing PIs in various types of systems in industrial organizations. The method is highly participative and is applied with the full engagement of the actors of the system [12]. The method was developed in the early 1990s at the University of Bordeaux by the research group GRAI, then INTEROP-VLab. Some examples where the method has been used in industrial environments are AIRBUS, SNECMA (now SAFRAN), and AIDIMA. The role of a PI is to evaluate the performance of a system in various domains such as manufacturing, transport, services, etc. It allows the actors of the system to follow the evolution of the system's performance and also to evaluate the actions taken to reach specific objectives in the appropriate time. The ECOGRAI method is decomposed into two main steps (Fig. 1): a design step and an implementation step.


Fig. 1 ECOGRAI main steps

Each step includes several activities.

The design step starts by describing the system in which the PIs will be implemented. This includes the elements, the main relations between them, the actors, the evolution of the system, and the actions which cause this evolution. This activity also includes the definition of the objectives assigned to the system. The second activity of the design step is identifying the drivers which allow the objectives to be reached. A driver is an action (process or function) which allows the system to reach the objectives. During the identification of the drivers, the actors belonging to the system and involved in these activities must be consulted so that they can contribute to the choice and definition of the drivers. If there are multiple drivers, it is necessary to limit their number and to choose those which could have the greatest effect on reaching the objectives. In the third activity of the design step, the PIs are defined, allowing the achievement of the objectives to be measured. A PI is an indicator which evaluates the effect of the driver on the achievement of the objectives.

The implementation step is the step in which the data is collected and processed, allowing the achievement of the objectives to be measured. It is divided into five activities. The first activity of the implementation step is data collection. The purpose of this activity is to identify the raw data needed to measure the PIs. Sometimes a PI may require a large amount of raw data from numerous sources, depending on the system complexity, the frequency of the measurement, the time/period, and the data itself. The data collected is then reported to the management and presented in a suitable format, taking into consideration the evolution of the situations. An important activity of the implementation step is testing and adjusting the PIs. This activity rectifies conflicting indications given by several PIs implemented in the same system; in such cases, it is necessary to perform a detailed analysis of the system to find an explanation. Validating the PIs is split into two main activities. The first is assessing the success of the implementation by validating the achievement of the objective. The second, in case reaching the objective seems impossible, is to revise the initial elaboration of the objectives, drivers, and PIs.
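The Objective–Driver–PI linkage at the heart of the design step can be sketched as simple data structures. The following Python sketch is purely illustrative: ECOGRAI does not prescribe any implementation, and the field names, the target value, and the logic for judging whether an objective is reached are assumptions of this example.

```python
# Illustrative sketch of the ECOGRAI objective/driver/PI triplet as data structures.
from dataclasses import dataclass, field

@dataclass
class PerformanceIndicator:
    name: str
    unit: str
    target: float
    measurements: list = field(default_factory=list)  # raw data collected over time

    def latest(self):
        return self.measurements[-1] if self.measurements else None

    def objective_reached(self):
        # Naive check against a target value; assumed for illustration only.
        return self.latest() is not None and self.latest() >= self.target

@dataclass
class Driver:
    name: str   # action/process that moves the system toward the objective
    pis: list   # PIs evaluating the effect of this driver

@dataclass
class Objective:
    name: str
    drivers: list

# Hypothetical example (names and target are invented).
objective = Objective(
    name="Improve alignment of the training offer with industry needs",
    drivers=[Driver(
        name="Collect feedback from enterprises about training needs",
        pis=[PerformanceIndicator("No. of enterprises attending feedback events",
                                  "count", target=25)],
    )],
)
objective.drivers[0].pis[0].measurements.append(31)  # implementation step: data collection
print(objective.drivers[0].pis[0].objective_reached())  # True
```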


2.3 Selection of the Cross-Collaborating DIHs

To demonstrate how the ECOGRAI method could be used to evaluate cross-collaboration among DIHs, a convenience sample was drawn from the DIH4CPS network. Among its 13 DIH members, we identified those that have or have had a cross-collaboration, and we selected the collaborations that could serve other DIHs as examples to replicate or as inspiration for potential collaboration scenarios. The one selected was the cross-collaboration realized between the DIHs "Data Cycle Hub" and "HUB4.0MANUVAL", because they had implemented two collaboration services covering different, but complementary, D-BEST macro-classes of services: one covering the Skills macro-class, "Skills training", and the other covering the Technology macro-class, "Technology infrastructure provision". The cross-collaboration focused on (a) identifying current and future training trends to improve the DIHs' training offer for companies and (b) combining and improving the infrastructure resources of the DIHs, respectively. Next, the selected cross-collaborating DIHs are briefly introduced.

On the one hand, the Data Cycle Hub (DCH-ITI) [13] is the reference DIH in the Valencia region to promote data, AI, and cyber-security-based innovation, covering the full data value chain. The DCH-ITI is an ecosystem composed of more than 40 members, bringing together all the relevant agents: industry, regional agencies, accelerators, the Chamber of Commerce, education, as well as a group of technological companies. On the other hand, HUB4.0MANUVAL [14] (the Hub 4.0 for Manufacturing Technologies in the Valencian Community) is an initiative born in 2017 within the framework of the I4MS [15] initiative. Founded by a group of public and private entities with diverse capabilities in the fields of robotics, cyber-physical systems (CPS)/IoT, artificial intelligence, and high-performance computing (HPC) systems, it aims to facilitate the transition to the Industry 4.0 paradigm. The partners comprise government, public agencies, universities, research centers, groups of companies, and federations of industrial and professional associations [14]. HUB4.0MANUVAL constitutes a group of more than 12 members whose capabilities are essential to achieve smart specialization components in the region.

3 Results: Measuring DIH Cross-Collaboration Services

The ECOGRAI method is used to identify the PIs for the collaboration between two cross-collaborating DIHs, members of the DIH4CPS project, namely the Data Cycle Hub (DCH-ITI) and HUB4.0MANUVAL (UPV).


3.1 Two Cases: 2 DIHs and 2 Cross-Collaborations

A few years ago, the Government of the Valencian Community started a quest toward a new model of knowledge-based economy in the region. One of the initiatives proposed to facilitate this model shift is Inndromeda [16], a regional alliance that brought together all the technology and research centers of the Valencia region to generate regional synergies based on Key Emerging Technologies and to revamp the competitiveness of the local traditional industry. In this context, and in accordance with the objectives of Inndromeda, a collaboration initiative was launched, including two main objectives of cross-collaboration between DCH-ITI and HUB4.0MANUVAL:

1. Cross-collaboration on up-skilling of training personnel (Skills training): the aim is to identify the real training needs for the digitalization of the Valencian industry and then, once sufficient data has been gathered, to outline the program for the future training offer and design a catalogue of training services better adapted to the real needs.
2. Cross-collaboration on technology infrastructure provision: the objective is to enhance the range and scope of the "Access to infrastructure and technological platforms" service, a "Test-before-invest" service, combining the infrastructure resources of both DIHs. First, both DIHs performed an assessment of their respective technical infrastructures, with the aim of increasing the capacity to train and execute AI machine learning models while deploying simultaneous Big Data experiments. Hence, the second cross-collaboration aimed to upgrade and integrate the existing hardware infrastructure at the DCH-ITI with the HUB4.0MANUVAL high-performance infrastructure managed by UPV.

The ECOGRAI method was used to identify the performance indicators (PIs) of the two initiatives, following the main principle of linking objectives, drivers, and PIs. In the next paragraphs, the application of the ECOGRAI method is described for the two cross-collaboration initiatives.

Cross-collaboration 1: Skills training

General Objective: Identify the current and future training needs of Valencian companies related to Key Enabling Technologies (KETs) that could serve to improve the training offer of the DIHs in line with real needs.

Particular Objectives:
1. SO1: Obtain a sample of the professional training programs offered in the Valencia region.
2. SO2: Identify opportunities to design accurate training content programs that trigger, engage, and support a genuine digitalization transition of the companies.
3. SO3: Define a training catalogue in Key Enabling Technologies (KETs).

Drivers. Actions to reach these specific objectives:
• SO1.A1: Research and data analysis of existing training and educational programs involving EU KETs 4.0.


• SO2.A1: Collect feedback from enterprises about KET training needs.
• SO3.A1: Identify gaps among the skills needed by companies and the current training offer.
• SO3.A2: Define/Update the catalogue of training courses on KETs, to be offered in the future by the universities.

PI. List of tentative PIs to measure the achievement of the objectives:
• PI#1a: Nº of existing training and educational programs involving EU KET 4.0.
• PI#1b: Quality of the programs (excellent, good, medium, not satisfactory).
• PI#2: Nº of events organized to attract enterprises to collect feedback.
• PI#3: Nº of enterprises attending the events.
• PI#4: Nº of KET training needs identified.
• PI#5: Nº of training courses aligned with industry needs.

After the development of several actions:

• Research and data analysis of existing training and educational programs in the Valencia region involving Key Enabling Technologies 4.0 (Aug–Sept 2020).
• Organization of two industrial workshops (one for technology users and another for technology providers) to collect feedback from the demand side (Oct–Nov 2020).
• Definition of a training catalogue in Key Enabling Technologies (Dec 2020).

The PI results (period of progress evaluation: August 2020–December 2020) were:
• PI#1a: Nº of existing training and educational programs involving EU KET 4.0 = 362.
• PI#1b: Quality of the programs = Good.
• PI#2: Nº of events organized to attract enterprises = 2.
• PI#3: Nº of enterprises attending the events = 31.
• PI#4: Nº of KET training needs identified = 23.
• PI#5: Nº of training courses aligned with industry needs = 180.
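The ECOGRAI implementation step requires the collected data to be reported to management in a suitable format. A minimal sketch of such a report, using the values listed above, is shown below; the reporting format itself is an assumption of this example, not part of the method.

```python
# Small sketch: the reported PI values of the skills-training cross-collaboration,
# printed as a simple progress report (format assumed for illustration).
pi_results = {
    "PI#1a No. of existing training programs involving EU KET 4.0": 362,
    "PI#1b Quality of the programs": "Good",
    "PI#2 No. of events organized to attract enterprises": 2,
    "PI#3 No. of enterprises attending the events": 31,
    "PI#4 No. of KET training needs identified": 23,
    "PI#5 No. of training courses aligned with industry needs": 180,
}

def report(results, period="Aug 2020 - Dec 2020"):
    print(f"PI report ({period})")
    for name, value in results.items():
        print(f"  {name}: {value}")

report(pi_results)
```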


Drivers: Actions to reach these specific objectives. • • • •

SO1.A1: Design to deploy GPUs according to the expected demand/use. SO1.A2: Implementation and commissioning of the HPC configuration. SO1.A3: Review and validation of HPC infrastructure configuration. SO2.A1: Development and validation of a Deep Learning prototype demonstration. • SO2.A2: Prototype validation. • SO3.A1: Integration and centralized access point to the HPC infrastructure of both DIHs. • SO3.A2: Validation and certification of the solution. PI List of tentative PIs to measure the achievement of the objectives. • • • • •

PI#1: Nº of upgraded GPUs made available in the infrastructure. PI#2: Nº Deep learning application tests made. PI#3: Nº DL tests performed with different configurations of GPUs / CPUs nodes. PI#4: Nº of bandwidth tests to test DL configurations. PI#5: Nº of communication speed tests for different DL configurations. After the development of several actions:

• Upgrade design from CPUs to GPUs, with prototypes applied in the experimentation facilities (Jan–Jun 2019).
• Deep learning prototype demonstration in the infrastructure for Big Data experimentation and High-Performance Computing for service improvement purposes (Sep–Dec 2019).
• Integration of the use of the HPC experimentation infrastructures of both DIHs for improvement of the services (June 2020).

The PI results (period of progress evaluation: August 2019–June 2020) were:
• PI#1: Nº of upgraded GPUs made available in the infrastructure = 18.
• PI#2: Nº of deep learning application tests made = 7 (measuring the time cost per iteration and the total time).
• PI#3: Nº of DL tests performed with different configurations of GPU/CPU nodes = 9 (measuring performance in images per second processed).
• PI#4: Nº of bandwidth tests to test DL configurations = 27 (tests with different sizes of transmitted data to be processed, measuring bandwidth in GB/s).
• PI#5: Nº of communication speed tests for different DL configurations = 8 (tests measuring time).
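PI#4 relies on bandwidth tests over payloads of different sizes. The snippet below is only an illustrative timing harness, not the DIHs' actual test setup; the transfer function is a stand-in for the real data movement toward the HPC nodes, and the payload sizes are arbitrary.

```python
# Illustrative timing harness for bandwidth measurement (GB/s) over payloads of
# different sizes; `transfer` is a placeholder for the real data movement.
import time

def transfer(payload: bytes) -> None:
    # Placeholder for the real operation; here we just copy the buffer.
    bytes(payload)

def bandwidth_gb_per_s(size_bytes: int, repeats: int = 5) -> float:
    payload = bytes(size_bytes)
    start = time.perf_counter()
    for _ in range(repeats):
        transfer(payload)
    elapsed = time.perf_counter() - start
    return (size_bytes * repeats) / elapsed / 1e9

for size_mb in (16, 64, 256):
    gbps = bandwidth_gb_per_s(size_mb * 1024 * 1024)
    print(f"{size_mb:5d} MB payload: {gbps:.2f} GB/s")
```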

4 Discussion

Applying ECOGRAI as a method to define performance indicators (PIs) through the use of drivers (actions/services), in order to reach the designated objectives derived from the application of the D-BEST model, provided successful results.


For the skills' training cross-collaboration, collecting information about existing training and educational programs in the Valencia region was an essential action in order to analyze the gaps in the available programs with respect to SMEs' needs, especially those involving the 4.0 emerging technologies. For this analysis, the main services of the DIHs involved, related to the D-BEST model, are:

• Technology scouting and identification of emerging technologies: relevant knowledge from these services was brought in to prepare the KETs catalogue.
• Communication and trend watching: invitation of experts to bring their knowledge on the latest technology trends and to share best practices.
• SME and people engagement and brokerage: organization of industrial events and workshops, namely two industrial workshops (one for technology users and another for technology providers) to collect feedback based on their needs.
• Skills strategy development: definition of a roadmap of training courses to cover the needs for new skills through a gap analysis of Industry 4.0 skills.

The cross-collaboration between the two DIHs helped in identifying the needs of the Valencian regional industry. Both DIHs adopted good collaboration practices, from defining the objectives and undertaking the actions to achieve them, to effective coordination and constant communication.

The technology provision cross-collaboration involved several technical drivers (actions/services) so that the objectives of the collaboration could be reached. Some of these technical drivers, related to the D-BEST model, are:

• Provision of platform technology infrastructure: computing and data analytics infrastructure to increase the capacity for experimentation.
• Product qualification and certification: bottleneck detection in the different distributed training configurations to support certifying that the solution has passed functional and performance tests.

In this cross-collaboration, the technical and operative performance of the HPC infrastructure was improved while keeping its accuracy and reliability. A trusted, secure, and reliable environment has been upgraded for demonstration and experimentation activities to allow companies to carry out the development and testing of data-fueled new solutions or projects that require Big Data, artificial intelligence, and HPC. This increases the opportunities to offer a high level of service to the SMEs in the region. The proposed approach provides an interesting set of PIs to measure specific, but common, scenarios in DIHs. The application of the proposed approach in additional cases should provide a wider set of PIs to be used in DIHs.


5 Conclusion

Cross-collaboration between DIHs is very important in interweaving knowledge and technologies from different domains and connecting regional clusters with the pan-European expert pool of DIHs. Evaluating such collaboration will bring additional information to improve the relations between the different actors of the ecosystem and to establish Europe as a world-leading innovator of the fourth Industrial Revolution. Following the ECOGRAI method, the analysis of the two cases of cross-collaboration allowed the definition of the general objectives, the particular objectives, the drivers or actions to reach these objectives, and then the PIs. The ECOGRAI method has proven broad applicability, as it can be applied to different situations and contexts. Among the main limitations of this research, it should be highlighted that only specific services have been covered by the study, since the two cross-collaborations considered in the two pilot applications of the method involve a restricted set of services. Of course, thanks to the high degree of generalizability of the method proposed in the paper, the ECOGRAI method can be applied to all the services belonging to the D-BEST reference model. Finally, while noting the important contributions made by this paper, further research should consider the possibility of structuring a reward system to recognize benchmarks and success stories and to award them inside and outside the DIHs.

References 1. European Commission: Digitizing European Industry. Reaping the full benefits of a Digital Single Market. https://ec.europa.eu/digital-single-market/en/news/communication-digitisingeuropean-industry-reaping-full-benefits-digital-single-market. Last accessed 4 Feb 2020 2. European Commission: Digital Innovation Hubs in Smart Specialisation Strategies. Early lessons from European regions. European Union, Luxembourg (2018) 3. European Commission: European Digital Innovation Hubs in Digital Europe Programme— Draft working document. European Union, Luxembourg (2020) 4. Sassanelli, C., Panetto, H., Guédria, W., Terzi, S., Doumeingts, G.: Towards a reference model for configuring services portfolio of Digital innovation hubs: the ETBSD model. In: CamarinhaMatos, L.M. (ed.) IFIP International Federation for Information Processing 2020 (PRO-VE), pp. 597–607. Springer, Heidelberg (2020). https://doi.org/10.1007/978-3-030-62412-5_49 5. Sassanelli, C., Gusmeroli, S., Terzi, S.: The D-BEST based digital innovation hub customer journeys analysis method: a pilot case. In: Camarinha-Matos, L.M., Boucher, X., Afsarmanesh, H. (eds.) Smart and Sustainable Collaborative Networks 4.0. PRO-VE 2021. IFIP Advances in Information and Communication Technology, vol. 629, pp. 460–470. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85969-5_43 6. Sassanelli, C., Rossi, M., Terzi, S.: Evaluating the smart readiness and maturity of manufacturing companies along the product development process. In: Fortin, C., Rivest, L., Bernard, A., Bouras, A. (eds.) Product Lifecycle Management in the Digital Twin Area, pp. 72–81. Springer, Cham (2019) 7. Crupi, A., Del Sarto, N., Di Minin, A., Gregori, G.L., Lepore, D., Marinelli, L., Spigarelli, F.: The digital transformation of SMEs—a new knowledge broker called the digital innovation hub. J. Knowl. Manag. 24(6), 1263–1288 (2020). https://doi.org/10.1108/JKM-11-2019-0623


8. Asplund, F., Macedo, H.D., Sassanelli, C.: Problematizing the service portfolio of digital innovation hubs. In: Camarinha-Matos, L.M., Boucher, X., Afsarmanesh, H. (eds.) Smart and Sustainable Collaborative Networks 4.0. PRO-VE 2021. IFIP Advances in Information and Communication Technology, vol. 629, pp. 433–440. Springer, Cham (2021). https://doi.org/ 10.1007/978-3-030-85969-5_40 9. Vallespir, B., Ducq, Y., Doumeingts, G.: Enterprise modelling and performance—Part 1: implementation of performance indicators. Int. J. Bus. Perform. Manag. 1(2), 134–153 (1999) 10. Butter, M., Gijsbers, G., Goetheer, A., Karanikolova, K.: Digital innovation hubs and their position in the European, national and regional innovation ecosystems. In: Feldner, D. (ed.) Redesigning Organizations: Concepts for the Connected Society, pp. 45–60. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-27957-8_3 11. Ravelomanantsoa, M., Ducq, Y., Vallespir, B.: State of the art and generic framework for performance indicator system methods. IFAC-PapersOnLine 51(11), 544–551 (2018). https:// doi.org/10.1016/j.ifacol.2018.08.375 12. Doumeingts, G., Clave, F., Ducq, Y.: ECOGRAI—a method to design and to implement performance measurement systems for industrial organizations—concepts and application to the maintenance function. In: Rolstadås, A. (ed.) Benchmarking—Theory and Practice, pp. 350–368. Springer, Boston (1995). https://doi.org/10.1007/978-0-387-34847-6_39 13. Data Cycle Hub: Introduction: https://thedatacyclehub.com/en/about. Last accessed 15 Nov 2021 14. HUB4.0MANUVAL website: https://hub4manuval.es. Last accessed 15 Nov 2021 15. i4ms Homepage: https://i4ms.eu/. Last accessed 15 Nov 2021 16. Inndromeda Homepage: https://www.inndromeda.es/en/. Last accessed 15 Nov 2021

Interoperability in Measuring the Degree of Maturity of Smart Cities

Luis Miguel Pérez, Raul Oltra-Badenes, Juan Vicente Oltra-Gutierrez, and Hermenegildo Gil-Gomez

Abstract Digitization has transformed people's lives, companies, societies, and the surrounding environment that make up the growing urban nuclei known as Smart Cities. This concept is considered a system of many systems, which implies high complexity and makes maturity difficult to measure. Although various tools for measuring the maturity of Smart Cities are currently described in the literature, these methodologies are in most cases updated every year to introduce new indicators, while those considered outdated are discarded, in order to maintain the precision of the measurement of the degree of maturity of Smart Cities. After reviewing the literature on Smart Cities and their evaluation methodologies, a lack of indicators designed to measure the interoperability of digital systems has been identified. In this sense, the main result of this research is a proposal for a group (or category) of indicators (KPIs) for the measurement of interoperability, drawn from a list of indicators present in the literature in high-impact journal articles, screening the reason for their selection and their suitability for measuring the degree of interoperability of companies, institutions, and public–private consortia in a Smart City. This study proposes indicators to measure the degree of interoperability of industrial companies within the fourth Industrial Revolution of comprehensive digitization in Smart Cities.

Keywords Smart City · KPIs · Interoperability · Measurement · Internet of Things (IoT) · Data · Smart Business

1 Introduction

Smart Cities are becoming development hotspots due to the integration of Information and Communications Technology (ICT) as a strategic tool to take advantage of the data generated by stakeholders in the city [1–3] and of all the available resources, looking to solve their challenges at several levels or dimensions.

L. M. Pérez · R. Oltra-Badenes (B) · J. V. Oltra-Gutierrez · H. Gil-Gomez Universitat Politècnica de València, Camí de Vera, S/N, 460224 Valencia, Spain e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_10


A city's smartness is its capacity to bring together all the resources to generate opportunities and solve problems. Bringing resources together implies integrating stakeholders' data. Nevertheless, much of this data is not transmitted among the stakeholders due to interoperability barriers generated by many factors [4]. On top of the interoperability problems themselves, there is an amplifying factor: the still incipient and imprecise measurement of the interoperability problem [2]. A precise measurement is required for a deep understanding of the problem, its diagnosis, and an effective solution. In line with ongoing projects and the growth of cities, an increasing number of measurement instruments are being proposed [1, 2]. Many systems of indicators in the literature attempt to evaluate Smart Cities as a whole, considering their many dimensions, which are necessarily interconnected and exchange information among themselves. Due to various circumstances, they have gaps that should be corrected: in the first place, there are key factors that are not measured, such as interoperability; in the second place, there are selections of cities that are not comparable to each other, a lack of data sources, and other issues [5]. Thus, the constant review of measurement methodologies is necessary. This research aims to propose a set of indicators capable of measuring interoperability within the framework of a system of indicators to measure the degree of maturity of a Smart City. The article is structured as follows: first, the background is presented, obtained from the bibliographic review of the interoperability of companies and the evaluation methodologies of Smart Cities, followed by a review of the methodology used. Then, the results are presented, ending with the conclusions.

2 Background

In this section, three basic concepts of this research are reviewed: interoperability, Smart Cities, and measurement indicators (KPIs).

2.1 Interoperability

According to [6], interoperability is the ability of two or more networks, systems, devices, instruments, applications, or components to exchange information with third parties and reuse it safely and effectively. In this respect, for the European Commission (EC), one of the primary references worldwide in promoting the creation of Smart Cities, interoperability is the ability of organizations to interact to achieve common objectives by sharing information and knowledge through data exchange among their Information and Communications Technology (ICT) systems [6]. According to [4], interoperability is divided into three components:


• Semantic interoperability. It occurs when a system of elements allows the translation of languages from one system to another, and it can occur at the user and device levels.
• Technical interoperability. It refers to the proper communication between two computer systems and has two levels: M2M (machine-to-machine) communication, concerning standard programming languages, and HMI (human–machine interface) communication [7]. In addition, this category is associated with the incompatibility of infrastructures [4].
• Organizational interoperability. It involves policy and decision-making by companies' management boards (decision-makers) regarding the data flow in an organization.

2.2 Smart Cities

The concept of a Smart City is gaining significance due to the rapid development of tools and resources to take advantage of information and technologies to boost economies [1, 2] and facilitate decision-making in the planning, construction, and management of cities. The European Commission defines Smart Cities as places where traditional networks and services are made more efficient with digital solutions for the benefit of residents and companies [7]. On the one hand, according to the bibliography reviewed, among the main current tools used to make organizations (institutions and Industry 4.0 companies) more productive, and which require interoperability to function within Smart Cities, are Cloud Computing and Big Data (including spatial/geographical information [8]). These technologies operate with data generated by stakeholders, which in technical language is known as crowdsourcing [9]. The data is generated by companies, citizens, and public-sector institutions, which store data collected by sensors and provide it through the Internet of Things (IoT) [10], the Cognitive Internet of Things (CIoT) [11], and the Industrial Internet of Things (IIoT) [12], so that it might be re-used by third parties in the city. On the other hand, to complete the interoperability infrastructure, sensors [13] and other devices used to collect information are part of blockchain networks that make transactions secure and of smart grids of clean energy [6]. The number of systems increases each year, introducing new parameters and languages that might bring innovative solutions as well as interoperability problems.

2.3 KPIs

The literature review indicates that most approaches to measuring the performance of cities rely on frameworks of Key Performance Indicators (KPIs) [14]. In consequence of this principle, and driven by the desire for the development of societies inherent in all actors, various systems and methods of performance indicators have been developed in the last decade to characterize a Smart City both from a subjective perspective, evaluating citizens' perceptions, and from an objective perspective.


The assessment systems have been conceived from the perspective of different stakeholders in the city, most of them from a theoretical and academic point of view. Some were promoted from a commercial and industrial perspective to develop and forecast/measure the impact and acceptance of new technological services for communities. The interest of companies is more focused on identifying new business models and making existing ones more efficient and profitable, while public entities seek to make efficient use of the resources and services they provide to citizens and to offer transparency in their procedures. Despite the differences in the conception of these systems, they can be considered complementary. All the stakeholders mentioned above can take advantage of cooperation and the transmission of information; for this, it will be necessary to jointly design and develop channels for interoperability.

2.4 Statement of the Problem

In current groups of indicators for Smart City maturity assessment, there is no sensitivity to interoperability commensurate with its importance. In accordance with this, [2] dedicates a specific paragraph to exposing the lack of interoperability as a necessary element for describing the complexity of Smart Cities: it describes its importance in evaluating the city and the possibilities it offers to create innovative ideas collaboratively, and recognizes this factor as one of the main drawbacks. The authors have found that only five frameworks consider the interoperability factor, and only incipiently or partially [2, 15–18], which is insufficient. There is no reasonable correspondence between the importance of the factor and the measurement solutions offered by the scientific community. In fact, only three of the methodologies identified in this study correspond to publications in indexed journals. In this sense, the ISO 2014 Smart Cities Preliminary Report [8] listed some indicator systems, emphasizing the deficiencies in measuring ICT performance. Many authors agree that vertical data system networks, which experts call Vertical Silos [13, 19, 20] or Fragmented ICT Islands [4], have low interoperability and, consequently, low sharing possibilities. On the other hand, from a social and economic perspective, the problem is amplified by the growth of cities, which multiplies the data to be gathered and processed. According to the World Bank, more than half of the world's population lives in cities, a figure that will continue to grow at a rate of 70 million per year [21]. Another amplifying effect comes from technologies that increase the amount of data. The Cisco Annual Internet Report [22] estimates that in 2023 about two-thirds of the world's population (5.3 billion people) will have Internet connectivity, while about 33% of the connections (14.7 billion connections) will be M2M.


These figures reveal the degree of importance that companies hold in developing societies. In line with the types of interoperability described in Sect. 2.1, and according to [22], the main challenges for Smart Cities revolve around the issues summarized in Table 1.

Table 1 Interoperability issues affecting Smart Cities

Category | Interoperability issues
Semantic interoperability | Lack of consistency in existing standards in relation to semantics
Technical interoperability | The lack of mechanisms and infrastructures to compare and harmonize standardization initiatives among the various categories
Organizational interoperability | The lack of harmonization and coherence in the existing structural solutions

3 Procedure

The objective of the procedure is to identify the specific bibliography that allows the most appropriate indicators to be identified for a more precise characterization of the problems pointed out by the authors in terms of interoperability, as shown in Fig. 1.

Steps of the Procedure. The procedure has three phases. The first phase focused on the selection of the search criteria. The search criteria considered were the impact index and the number of occurrences of the terms "Smart Cit*" AND "Methodology"; "Smart Cit*" AND "Assessment"; "Smart Cit*" AND "Dashboard"; "Evaluation"; "Ranking"; "Indicators"; "KPIs", whose objective was to identify the most representative methodologies, and "Smart Cit*" AND "Interoperability". The second phase aimed to select the complete tools and identify their main shortcomings with respect to the main problems in Smart Cities. Finally, the third and last phase of the analysis consisted of identifying shortcomings in the methodologies and selecting the indicators presented in the results section. The selection criteria can be summarized as follows: the frameworks must be clearly described, include multi-level indicators, and be written in English.
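For illustration, the Boolean search strings of the first phase could be composed programmatically; pairing every secondary term with the base term is an assumption of this sketch about how the queries were built, not something stated in the procedure.

```python
# Minimal sketch: composing the literature-review query strings of the first phase.
base_term = '"Smart Cit*"'
secondary_terms = ["Methodology", "Assessment", "Dashboard", "Evaluation",
                   "Ranking", "Indicators", "KPIs", "Interoperability"]

queries = [f'{base_term} AND "{term}"' for term in secondary_terms]
for q in queries:
    print(q)
```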

4 Results

In the first place, the results obtained from the review of Smart City evaluation methodologies are presented. They show that the most referenced characterization of Smart Cities in the literature was formulated for the first time in 2007, as a result of research at the European level by [23] in the report entitled Smart Cities: Ranking of European Medium-Sized Cities, and was later also used by [24–27].


Fig. 1 Search concepts. (Source Authors’ elaboration)

This first proposal contains six groups of indicators: (1) Smart Economy, (2) Smart People, (3) Smart Governance, (4) Smart Mobility, (5) Smart Environment (which encompasses Smart Energy, proposed by SCBC [28]), and (6) Smart Living (see Fig. 2). Then, over the years, other authors have introduced new dimensions: (7) Smart Infrastructure, (8) Smart R&D, (9) Smart Health, and (10) Smart ICT. In this research, a specific Interoperability group of indicators is proposed to be integrated into a general framework. The proposed framework is shown in Fig. 2, a scheme that summarizes the main categories for grouping the indicators to assess the degree of development of Smart Cities. There is a correlation among these dimensions; a proposal of the theoretical framework of the relationships among the dimensions is presented in Fig. 3. Figure 4 presents the detailed open data flow scheme in a Smart City, deploying the components of the Smart Interoperability dimension, which takes place through the Internet of Things (IoT). The performance of these systems can be measured by selecting indicators that cover the entire macro-system. Based on the literature review, the indicators summarized in Table 2 are proposed.


Fig. 2 Smart Cities categories. (Source Authors’ elaboration)

Fig. 3 Correlation scheme between categories for the evaluation of Smart Cities. (Source Authors’ elaboration)

5 Discussion

There are thousands of indicators in the literature; however, very few of the methodologies include indicators to measure the degree of interoperability between the systems that communicate with each other through the Internet of Things (IoT). One of the objectives of all the systems within Smart Cities must be the bilateral transmission of data for calculation, so that resources can be optimized and processes made more efficient. For this reason, it is proposed to monitor the percentage (%) of transmitted data, differentiating between data transmitted between machines (M2M) and data transmitted between machines and humans (HMI).


Fig. 4 Open data flow scheme. (Source Authors’ elaboration)

Table 2 List of the main indicators considered for the assessment of the degree of interoperability in a Smart City

Interoperability KPIs | Units
Number of researchers in interoperability | No. I
% interoperability specialists | No. E
No. of datasets transmitted A2A | No. A2A
No. of datasets transmitted A2B | No. A2B
No. of datasets transmitted B2C | No. B2C
No. of datasets transmitted A2C | No. A2C
No. of datasets transferred M2M | No. M2M
Resources invested in R&D for interoperability and infrastructure | MM€
No. of organizations investing R&D in interoperability | No. B

Furthermore, differentiation of the type of communication is also suggested: data transmitted A2A (from Administration to Administration), A2B (from Administration to Companies), A2C (Administration to Citizens), and B2C (Business to Citizens). These proposed and differentiated indicators are aligned with the results of [2], which indicate that, for interoperability to take place and for the complexity of the dynamics of cities to be understood, the participation of all actors is necessary.
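As an illustration of how the proposed transmission indicators could be computed (this example is not taken from the paper), the sketch below counts dataset transfers from a hypothetical log in which each record carries the channel (A2A, A2B, A2C, B2C) and the interaction mode (M2M or HMI).

```python
# Illustrative computation of the proposed transmission indicators from a
# hypothetical log of dataset transfers (all records invented for the example).
from collections import Counter

transfers = [  # (channel, interaction mode)
    ("A2A", "M2M"), ("A2B", "M2M"), ("A2C", "HMI"),
    ("B2C", "HMI"), ("A2B", "M2M"), ("A2A", "HMI"),
]

by_channel = Counter(channel for channel, _ in transfers)
by_mode = Counter(mode for _, mode in transfers)
total = len(transfers)

print("Datasets transmitted per channel:", dict(by_channel))
print("Share of M2M transmissions: "
      f"{100 * by_mode['M2M'] / total:.0f}%")
print("Share of HMI transmissions: "
      f"{100 * by_mode['HMI'] / total:.0f}%")
```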


Regarding the role of the stakeholders in the creation of interoperability: in a smart economy, a large part of the resources is devoted to research and innovation in projects related to the interoperability of organizations and computer systems. The smart economy makes the most of information systems for the optimal distribution of resources. In this sense, three indicators are proposed: the number of companies that dedicate resources to fostering interoperability, the resources allocated by governments to encourage interoperability, and the resources allocated by private companies to create interoperability solutions. The objective of organizations should be to measure heterogeneity, try to reduce it through innovation, and aim at standardization. In this sense, the IES-City Framework proposes two points that serve as the axes of interoperability [18]: (1) if everything is standardized, innovation is limited and, on the contrary, (2) if there is no standardization, there can be no interoperability between systems. The conclusion is that the most convenient position is an intermediate point between situations (1) and (2) [8], and to know where that intermediate point lies, it is imperative to measure.

6 Conclusions

Today's industry is directly affected by everything that happens in its environment. Industry 4.0 is an industry based on the use of information, and capturing this information requires achieving the highest possible degree of interoperability. Regarding interoperability, the city stakeholders advance in a heterogeneous way, since they have diverse sources of resources, which means that they must face their interoperability problems differently. In smart economies, a large part of the budget is devoted to training people in areas relevant to the development of cities, such as training grants for employees and productivity [2]. One of the appropriate or recommended measures for the public administrations responsible for these decisions is to encourage all parties involved to create bridges that facilitate interoperability. In this sense, a greater tendency, or prioritization, is perceived in the development of studies and initiatives to create spaces for interoperability between public institutions and citizens than between companies and citizens. Using the proposed indicators, the maturity of the city can be demonstrated with respect to the three types of interoperability. The objectives of the research were reached: a set of indicators was proposed that considers the network of stakeholders involved in the Smart City, the current technologies which make the flow of information through the stakeholders' systems possible, and the global infrastructure for the assessment of the interoperability system. The main limitations of the interoperability evaluation infrastructure are related to the lack of availability of data to perform the diagnosis. In this sense, among the next lines of research, the integration of new indicators is proposed, to the extent that the availability of data related to interoperability increases.


Furthermore, since in an interoperable scenario these indicators could be calculated automatically, a new line of research may focus on the automation of framework computation.


Part III: Digital Twins Analysis and Applications

Analyzing the Decisions Involved in Building a Digital Twin for Predictive Maintenance Hazel M. Carlin, Paul A. Goodall, Robert I. M. Young, and Andrew A. West

Abstract Producing a digital twin (DT) involves many interlinking decisions. Existing research tends to describe the parts of a DT and how they work, but not the decision-making that went into building the DT nor the consideration of alternative design options. There is therefore a need for a decision-support tool to guide developers to create DTs efficiently while meeting functional requirements such as accuracy and interoperability. To work toward creating a DT decision-support tool, this paper presents an analysis of the decisions required to create a predictive maintenance DT for an automotive manufacturer. The decisions at each stage of the build process are identified and represented as inputs, outputs, controls and mechanisms. This allows for a conversion into generalized decisions presented in the form of an IDEF0 diagram. The research is a fundamental component in producing a decision-support tool for DT design and deployment. Keywords Digital Twin · Decision support · Interoperability

1 Introduction The digital twin (DT) concept is seen as an essential part of digitalizing a business. In the realm of predictive maintenance in the manufacturing sector, DTs offer the potential to improve the efficiency and productivity of a plant. The DT, as a virtual duplicate of a system built from a fusion of models and data [1], takes advantage of


networks of distributed sensors that can be attached to machinery in order to enable data-driven predictions of when failures are likely to occur and hence to prevent costly downtime. To produce a DT involves many complex and interlinking decisions, which can have repercussions on downstream activities. Developers must decide whether to create, reuse or modify DT components (i.e., models, simulations, middleware, physical sensors) [2] while considering their impact upon requirements related to factors such as fidelity [3], interoperability [4], computation [5], resource (labor and time) [2], and privacy [3]. Interoperability is a particular consideration when developing DTs due to their highly connected nature, requiring technical and semantic standardization between software components, sensors, end users, and developers [6]. To create DTs efficiently and effectively requires significant understanding of the DT development decision process and cross-domain knowledge of suitable design options, highlighting the need for decision support within this field. To work toward creating the decision-support tool, this paper presents a unique analysis of the decisions that took place while creating a predictive maintenance DT for an automotive manufacturer. The decisions at each stage of the build process are identified and de-constructed into inputs, outputs, controls, and mechanisms, where the controls on the decisions are the DT requirements identified above (i.e., fidelity, interoperability, computation, resource, and privacy). This enables the decisions to be easily presented in the form of an IDEF0 diagram. At the same time, they are generalized to be applicable to any scenario. The work presented is the first step in the overall aim to produce a decision-support tool for DT design and deployment. The paper is organized as follows: Sect. 2 presents the existing literature on DT decision support; Sect. 3 describes the automotive case study; Sect. 4 sets out the decision analysis research, and Sect. 5 provides the conclusions and describes future work.

2 DT Decision-Support Literature There are many examples of DTs reported in the literature [7]. However, the majority of publications are purely descriptions of the finished DTs, showing in detail what they are comprised of and how they work, but not detailing the methodology or decisions that went into building them, see, for example, [8–10]. Others present a methodology (i.e., a description of steps taken) and some reasoning behind their choice of DT components, but they do not discuss alternative options at the different stages of build [11–14]. As a typical example, [13] describes the difficulties in modeling a helicopter rotor assembly and the solution for measuring loads in rotating parts using a digital model and a test bench. Details of software (e.g., SAMCEF MECANO and Simcenter 3D) and modeling methods (e.g., super-elements and sliding joints) are given but not the alternatives that inevitably would have been considered during the project’s evolution.


There are also frameworks showing the components or dimensions of a DT system and its structure [15–17]. However, these do not explain why certain options are chosen over others (e.g., daily data updates may be chosen over hourly data updates) or what steps need to be taken to achieve each component of the DT. There is therefore a need for improved understanding of the steps and decisions involved in building a DT. In particular, a decision-support tool is required to guide DT designers to produce DTs in an efficient and repeatable manner.

3 Case Study: Predictive Maintenance DT for Automotive Manufacturer This case study documents the development of a DT which is to be used to support predictive maintenance algorithm development for overhead gantry loader cable tracks. Overhead gantry loaders are commonly used within the automotive industry to transport parts automatically between manufacturing equipment (e.g., CNC machines) on production lines. Cable tracks are used to house the electrical cabling and pneumatic pipes which power the moving linear gantry loader robots (see Fig. 1). While cable tracks are designed and maintained to be reliable factory components, there are potential failure modes that could result in significant factory downtime and also pose a health and safety risk. Current preventative maintenance activities, undertaken at regular intervals, are time-consuming and labor-intensive; therefore, a novel approach based upon sensor-based predictive maintenance is being considered as an alternative solution to determine the onset of potential failure modes. The approach taken consists of a scaled-down physical test rig and accompanying DT (see Fig. 2) to enable a cost-effective and controlled test environment to develop predictive maintenance algorithms. Wireless Inertial Measurement Units (IMU), force sensors, and cameras are used to monitor the motion of the cable track elements on the physical test rig, while dynamic simulation techniques (i.e., deriving responses due to applied forces) are used to model the motion of the cable track within the virtual model (VM) components of the DT. The VMs of the cable track test rig in this example are used for visualization (of normal and failure modes of operation), simulation (to create relevant training data representative of operational and failure modes), and validation (of a range of predictive maintenance algorithms over a wide set of cable track designs, operating conditions, and potential fault mechanisms). In practice, several virtual models were built as the development process progressed: models of the cable track in two different modeling environments; a model of the full test rig comprising the guide trough, frame, and carriage (see Fig. 2); and a model to produce synthetic training data (i.e., time series arrays of positional data) for the predictive maintenance algorithms. The modeling environments chosen were the Unity 3D gaming engine [18] and MATLAB Simscape [19]. Gaming engines such as Unity are optimized for the fast run-time simulations needed for large-scale failure mechanisms (e.g., track ejection) and have the potential for real-time


Fig. 1 Gantry loader and cable track

Fig. 2 Physical test rig (left) and digital twin (right)

data input, whereas simulation/analysis software such as MATLAB is optimized for mathematical accuracy [20] which is needed to model non-catastrophic faults such as link wear.


4 Working Toward a Decision-Support Tool 4.1 Decisions During Build of the Case Study DT In order to create a decision-support tool, the first step is to determine the decisions a DT designer needs to make. The process used in creating the DT for the case study has been analyzed and key decisions identified. Table 1 lists a selection of the decisions that were made during the creation of the DT for the case study discussed in this paper. Decisions have been grouped into six areas: initial planning, geometry, physics settings, data input/output, evaluate and improve and predictive maintenance (PdM) model training. The column headings in Table 1 align with the labels in an IDEF0 diagram to simplify later analysis: inputs, outputs, controls, mechanism. In the context of decision-making, the “controls” are the constraints that influence the decision, and these are derived from DT requirements identified in the literature (see Sect. 1). The “mechanism” provides the “supporting means” or resources for performing the decision-making activity [21]. Initial planning—An early decision is which modeling environment to use. Many different factors influence this decision, such as the ability of different environments to interoperate with other software (e.g., CAD software) and whether information can be exported in formats that can be readily utilized within other modeling software (e.g., viewing MATLAB results in Unity). Geometry—The geometry can originate from external CAD software (e.g., Siemens NX) or it can be created using primitives in the chosen modeling environment (e.g., Unity). Importing the geometry from external CAD software requires consideration of interoperability concerns. The CAD files need to be in an acceptable format for the modeling environment (e.g., “.fbx” is preferred for Unity [18]). Physics settings—There are many Physics settings that can be modified to represent the performance of cable tracks (e.g., material and friction), but the main decision is which solver type (e.g., variable step solver) to use. The chosen MATLAB solver was selected due to its suitability for problems with many degrees of freedom (i.e., “ode15s” solver). Data input/output—Deciding how to input sensor data into the DT involves assessment of modeling environment capabilities (e.g., acceptance of scripting in Unity) and expertise in setting up a locally hosted server to interpret data packets. Extracting the data from Unity and MATLAB involves conversion into a common format (i.e., “.csv” files) for input into the predictive maintenance algorithms. Evaluate and improve—Once the DT has been built, there are decisions involved in validating its fidelity. In this research, the two different modeling environments (i.e., Unity and MATLAB) were able to be validated against each other, since their outputs were compatible (i.e., both “.csv” files). Typical validation activities included


Table 1 Sample of decisions made during design and build of the case study's DT. (Requirement categories: F = fidelity; I = interoperability; C = computation; R = resource; P = privacy)

Design stage | Activity (decision) | Inputs | Outputs | Controls | Mechanism
Initial planning | Decide which modeling environment to use | Various modeling environments (e.g., Unity) | Choice of modeling environment | Ability to finely tune parameters (F); ability to use scripts (I); ability to import real-time data (I); speed and effort of computation (R, C) | Software manuals
Geometry | Decide how to generate the cable track model(s) | NX, Unity and MATLAB | Workflow | Time constraints (R); CAD file type compatibility (I); detail requirements (F) | Software manuals
Physics settings | Decide on solver type and settings | Unity and MATLAB | Choice of solver type and settings | Model accuracy (behavior of links) (F); computation effort (C) | Software manuals
Data input/output | Decide how to extract data from Unity and MATLAB | Unity and MATLAB | Method to extract data | Data format requirements of downstream models (I); labor and time constraints (R) | Software manuals
Evaluate and improve | Decide how to validate models | Models; test rig | Method for validation of models | Test rig availability (R); compatibility of results (F) | Sample output data
PdM model training | Decide where to get training data from (models, test rig or factory) | Models; test rig | Choice of training data source | Expertise (R); ability to run parallel simulations (C); accuracy (F); security concerns of factory data (P) | Software manuals

comparing position-time plots quantitatively and comparing simulation images qualitatively. The fidelity of the virtual models can also be determined via comparison with the performance of the physical test rig. PdM (Predictive Maintenance) training—Training data for the predictive maintenance algorithm can be obtained from the physical test rig or from simulations of its VM. For the case study, the focus was on generating fault data from the simulations since expertise in this area was available, but if this had not been the case the test rig could have been used instead. The main advantage of generating synthetic data is the speed with which a range of potential fault data can be generated for training PdM algorithms.
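Although the decisions above are described in prose and in Table 1, a decision-support tool would eventually need them in machine-readable form. The following minimal sketch, which is our own illustration rather than part of the reported work, shows one possible encoding of the input/output/control/mechanism decomposition used in Table 1; the class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

# Requirement categories used as controls in Table 1:
# F = fidelity, I = interoperability, C = computation, R = resource, P = privacy.

@dataclass
class Decision:
    """One DT design decision, decomposed IDEF0-style (ICOM)."""
    stage: str                                            # e.g., "Initial planning"
    activity: str                                          # the decision to be made
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)      # constraints, tagged F/I/C/R/P
    mechanisms: List[str] = field(default_factory=list)    # supporting resources

# Illustrative record transcribed from the first row of Table 1.
choose_environment = Decision(
    stage="Initial planning",
    activity="Decide which modeling environment to use",
    inputs=["Various modeling environments (e.g., Unity)"],
    outputs=["Choice of modeling environment"],
    controls=["Ability to finely tune parameters (F)",
              "Ability to use scripts (I)",
              "Ability to import real-time data (I)",
              "Speed and effort of computation (R, C)"],
    mechanisms=["Software manuals"],
)
```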


4.2 Generalized Decisions The range of decisions represented in Table 1 have been re-framed to be applicable to any PdM DT. This involves removing references to specific items in the case study (such as the links and test rig) and evaluating the resulting structure to determine if the decisions can be classified as generic. These initial generalized decisions are represented as an adapted IDEF0 diagram [21], an extract of which is shown in Fig. 3 to show the early decisions in the DT design process. This type of diagram is chosen since it is a recognized method to represent activities and the aspects that affect them, such as constraints and relationships with other activities. The goal is to adapt Table 1, and the generic decisions represented in the IDEF0 diagram as additional predictive maintenance case studies are analyzed. In the IDEF0 diagram (Fig. 3), the decisions are linked to highlight the time sequence order and, where relevant, links are made between early stage outputs and inputs to later decisions. Using this method enables the design of a decisionsupport tool to emerge since the constraints (controls) indicate what influences each decision and therefore why different options may be needed. For example, one of the constraints on “decide whether to use existing models” is “time”. If the time constraints on a given project are particularly tight then a decision-support tool would suggest re-using existing models, provided they are available, relevant, and adaptable.
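To illustrate how such constraint-driven suggestions might operate, a minimal rule sketch is given below. It is a hypothetical example built on the "tight time constraints suggest reusing existing models" rule quoted above; the constraint names and the four-week threshold are invented for illustration.

```python
def suggest_model_strategy(project_constraints: dict) -> str:
    """Toy decision rule: map project constraints to a modeling strategy.

    project_constraints is a hypothetical dict such as
    {"time_available_weeks": 3, "existing_models_available": True,
     "existing_models_adaptable": True}.
    """
    tight_schedule = project_constraints.get("time_available_weeks", 0) < 4
    reusable = (project_constraints.get("existing_models_available", False)
                and project_constraints.get("existing_models_adaptable", False))
    if tight_schedule and reusable:
        return "Reuse and adapt existing models"
    if tight_schedule:
        return "Build a low-fidelity model first, refine later"
    return "Build new models to the required fidelity"


print(suggest_model_strategy({"time_available_weeks": 3,
                              "existing_models_available": True,
                              "existing_models_adaptable": True}))
```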

Fig. 3 Part of the IDEF0 diagram showing the generalized decisions


As the decision-support tool develops, the meanings behind the concepts used will need to be formally defined. This domain knowledge will come from existing reference ontologies as far as possible. A future vision for the tool is to develop beyond a prompt for a manual DT design process toward an automated program where suggestions for the DT features are made.

5 Conclusions and Further Work Research undertaken by the authors on building a DT to support a novel predictive maintenance activity and analyzing the decisions made during the build process has been presented in this paper. The aim of the research is to use these decisions to populate a "DT for predictive maintenance" decision-support tool. Existing research has focused on the constitution of individual DTs and what each part of the DT does, but not the generic design and deployment processes and decisions required to create DTs. By generalizing the decisions used in the case study and presenting them in an IDEF0 diagram, the precursors and consequences of decisions are evident. The network of decisions is complex and needs to be analyzed for completeness, consistency, and correctness since one decision necessitates another, but itself only exists due to the presence of a previous decision. Furthermore, the aspects of the DT being designed and deployed are interlinked. For example, choosing a modeling environment that accepts scripting mechanisms precipitates the decision to create scripts to export data to common file formats. The next stage of this research is to evaluate and enhance the range, detail, consistency, correctness, and completeness of the generic decisions via additional predictive maintenance case studies. Following an update of the IDEF0 diagram demonstrating the range of generalized decisions, a decision-support tool will be developed using a predictive maintenance DT ontology at its core. This will enable a machine-readable, easily accessible source of information and functionality fundamental to the capabilities required for the decision-support tool.

References

1. Wagg, D.J., Worden, K., Barthorpe, R.J., Gardner, P.: Digital twins: state-of-the-art and future directions for modeling and simulation in engineering dynamics applications. ASME J. Risk Uncer. Eng. Syst. Part B: Mech. Eng. 6(3), 030901 (2020). https://doi.org/10.1115/1.4046739
2. Zhang, C., Xu, W., Liu, J., Liu, Z., Zhou, Z., Pham, D.T.: A reconfigurable modeling approach for digital twin-based manufacturing system. Proc. CIRP 83, 118–125 (2019). https://doi.org/10.1016/J.PROCIR.2019.03.141
3. Borth, M., Verriet, J., Muller, G.: Digital twin strategies for SoS: 4 challenges and 4 architecture setups for digital twins of SoS. In: 2019 14th Annual Conference System of Systems Engineering, pp. 164–169. IEEE, Anchorage (2019). https://doi.org/10.1109/SYSOSE.2019.8753860
4. Park, Y., Woo, J., Choi, S.S.: A cloud-based digital twin manufacturing system based on an interoperable data schema for smart manufacturing. Int. J. Comput. Integr. Manuf. 33, 1259–1276 (2020). https://doi.org/10.1080/0951192X.2020.1815850
5. Wright, L., Davidson, S.: How to tell the difference between a model and a digital twin. Adv. Model. Simul. Eng. Sci. 7, 13 (2020). https://doi.org/10.1186/s40323-020-00147-4
6. BSI Standards Publication: Automation systems and integration–Digital twin framework for manufacturing, Part 1: Overview and general principles, Part 2: Reference architecture. BS ISO 23247-1:2021. https://shop.bsigroup.com/products/automation-systems-and-integration-digital-twin-framework-for-manufacturing-overview-and-general-principles/standard. Last accessed 15 Nov 2021
7. Cimino, C., Negri, E., Fumagalli, L.: Review of digital twin applications in manufacturing. Comput. Ind. 113, 103130 (2019). https://doi.org/10.1016/J.COMPIND.2019.103130
8. Tao, F., Zhang, M., Liu, Y., Nee, A.Y.C.: Digital twin driven prognostics and health management for complex equipment. CIRP Ann. 67, 169–172 (2018). https://doi.org/10.1016/J.CIRP.2018.04.055
9. Zhu, Z., Xi, X., Xu, X., Cai, Y.: Digital twin-driven machining process for thin-walled part manufacturing. J. Manuf. Syst. 59, 453–466 (2021). https://doi.org/10.1016/J.JMSY.2021.03.015
10. Fan, Y., Yang, J., Chen, J., Hu, P., Wang, X., Xu, J., Zhou, B.: A digital-twin visualized architecture for flexible manufacturing system. J. Manuf. Syst. 60, 176–201 (2021). https://doi.org/10.1016/J.JMSY.2021.05.010
11. Lin, W.D., Low, M.Y.H.: Concept design of a system architecture for a manufacturing cyber-physical digital twin system. In: IEEE International Conference on Industrial Engineering and Engineering Management, pp. 1320–1324. IEEE, Singapore (2020). https://doi.org/10.1109/IEEM45057.2020.9309795
12. Zhuang, C., Miao, T., Liu, J., Xiong, H.: The connotation of digital twin, and the construction and application method of shop-floor digital twin. Rob. Comput.-Integr. Manuf. 68, 102075 (2021). https://doi.org/10.1016/J.RCIM.2020.102075
13. Guivarch, D., Mermoz, E., Marino, Y., Sartor, M.: Creation of helicopter dynamic systems digital twin using multibody simulations. CIRP Ann. 68, 133–136 (2019). https://doi.org/10.1016/J.CIRP.2019.04.041
14. Aivaliotis, P., Georgoulias, K., Chryssolouris, G.: The use of digital twin for predictive maintenance in manufacturing. Int. J. Comput. Integr. Manuf. 32, 1067–1080 (2019). https://doi.org/10.1080/0951192X.2019.1686173
15. Zhang, C., Zhou, G., He, J., Li, Z., Cheng, W.: A data- and knowledge-driven framework for digital twin manufacturing cell. Proc. CIRP 83, 345–350 (2019). https://doi.org/10.1016/J.PROCIR.2019.04.084
16. Stark, R., Fresemann, C., Lindow, K.: Development and operation of digital twins for technical systems and services. CIRP Ann. 68, 129–132 (2019). https://doi.org/10.1016/J.CIRP.2019.04.024
17. Uhlenkamp, J.F., Hribernik, K., Wellsandt, S., Thoben, K.D.: Digital twin applications: a first systemization of their dimensions. In: Proceedings—2019 IEEE International Conference on Engineering, Technology and Innovation, ICE/ITMC 2019, pp. 1–8. IEEE, Valbonne Sophia-Antipolis (2019). https://doi.org/10.1109/ICE.2019.8792579
18. Unity Technologies: Unity—manual: model file formats. https://docs.unity3d.com/Manual/3D-formats.html. Last accessed 10 Nov 2021
19. Mathworks: Simscape - MATLAB & Simulink. https://uk.mathworks.com/products/simscape.html. Last accessed 10 Nov 2021
20. Erleben, K.: Stable, robust, and versatile multibody dynamics animation. http://image.diku.dk/kenny/download/erleben.05.thesis.pdf. Last accessed 10 Nov 2021
21. Lightsey, B.: Systems Engineering Fundamentals. Defense Acquisition University Press, Virginia (2001)

Digital Twin Concept in Last Mile Delivery and Passenger Transport (A Systematic Literature Review) Maren Schnieder, Chris Hinde, and Andrew West

Abstract The challenges for policymakers caused by the increased mobility demand in cities create a necessity for holistic tools to evaluate solutions in the virtual world before implementing them in reality. The study outlined in this paper presents a systematic literature review on digital twins used in transportation or the supply chain with a focus on the movement of people and goods around a city. The digital twins are compared based on their level of integration. The study concludes that the majority of transport centric digital twins are developed for logistics problems within the factory or the supply chain. Digital twins of the movement of people and goods around cities are comparatively rare. Most of the digital twins identified in the systematic literature review should be reclassified as digital models or digital shadows due to their limited integration. Keywords Digital Twin · Last mile delivery · City logistics · Urban Freight

1 Introduction Providing adequate access to the transportation infrastructure is a challenge for policymakers due to the increasing levels of urbanization [1]. In fact, with the increasing mobility requirements and demand for goods in cities, it is difficult for transport planners to keep providing the same level of service (i.e., parking spaces, delivery times, congestion) [1] without increasing the external effects (e.g., energy usage, pollution) while ensuring a high quality of life. Last mile deliveries have become


more challenging due to the increased customer expectations [2], which reduces the efficiency of last mile delivery due to the high-order fragmentation causing low load factors [3]. Hence, optimized traffic management is a key component of smart cities [4]. Smart cities are advocated as utilizing digital technology as well as Data Analytics to improve the cities [5]. Most major cities have traffic control centers that manage the traffic flow through Intelligent Transportation Systems (ITS) [4]. Solutions to alleviate the problems caused by the movement of goods and people around the city as well as methods to evaluate the solutions are required [3]. Numerous solutions have been researched including optimization algorithms, machine learning, urban consolidation centers, ITS, as well as utilizing emerging technologies such as drones [1, 6]. It is noted that most of these solutions require detailed real-time information to operate as efficiently as possible [6]. Digital twins have the potential of closing the gap between the intention of stakeholders (e.g., efficient and sustainable transportation) and the reality. Digital twins offer a holistic platform that allows decision-makers to test their proposed strategies/solutions on a virtual system before implementing within the real world [1]. Especially the increasing availability of novel Information and Communication Technologies (ICT) enables advanced data collection methods [1] and digital twins can be used as a real-time data-driven, interactive decision support system [1]. The remainder of this paper is organized as follows: First, the scope of the systematic literature review is defined. Then the methodology applied in the systematic literature review including the paper selection process and exclusion criteria are illustrated. Next, the results are presented which includes an overview of research areas as well as a detailed review and comparison of selected papers. Finally, the results are discussed, and conclusions drawn.

2 Research Area Definition and Contribution 2.1 Definition of Digital Twins Digital twins have originally been developed to optimize the design and deployment of manufacturing processes [7]. Digital twins have only existed since the early twenty-first century and are predicted to reach widespread application [8]. Some of the technologies required for digital twins are already reaching market readiness [8]. While there is no unique definition of a digital twin [3], digital twins are commonly defined as a simulation of a physical system to mirror or represent the real world [9] at a suitable fidelity [6]. More recently, stricter definitions of digital twins have been proposed. Kritzinger et al. [10] distinguished between different levels of integration and only consider systems that allow for an automatic dataflow in both directions between the real world and the virtual world as digital twins. Digital shadows only have a manual flow of data between the virtual world and the physical world while the physical-virtual direction is automated. In a digital model, all dataflows are manual.


Fig. 1 Transporting people around a city (a) and supply chain components (b)

Digital twins should have a real-time data link [11]. Otherwise, the quality of decision making is reduced, given that it would be impossible to react to ad hoc incidents in the physical system [12]. Digital twins have been used to simulate, optimize, and test processes virtually, which would lead to time and cost savings in the real world [11]. With the help of digital twins, it is possible to change the behavior of the real system based on the simulations and predictions of the virtual system [13]. The benefits of digital twins are increased visibility, predictability, and support for what-if analysis, as well as potentially offering a better understanding of the physical process [11].
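Since the distinction in [10] rests entirely on which of the two dataflows is automated, it can be captured in a few lines of code. The sketch below is our own rendering of those definitions, not code from the cited works.

```python
def integration_level(physical_to_virtual_automated: bool,
                      virtual_to_physical_automated: bool) -> str:
    """Classify a system following the levels of integration in Kritzinger et al."""
    if physical_to_virtual_automated and virtual_to_physical_automated:
        return "digital twin"      # fully automated data flow in both directions
    if physical_to_virtual_automated:
        return "digital shadow"    # automated only from the physical to the virtual object
    return "digital model"         # no automated data exchange

# Example: a traffic simulation fed manually with historical counts.
print(integration_level(physical_to_virtual_automated=False,
                        virtual_to_physical_automated=False))  # -> digital model
```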

2.2 Scope of the Systematic Literature Review Figure 1 illustrates the components of a transportation system for people in cities and the supply chain. The red lines highlight the research area of concern. The study focuses on how digital twins are used to represent the movement of people and goods around a city and does not focus on production logistics or individual vehicles.

2.3 Research Questions and Contribution The research questions addressed in this paper are as follows: • RQ1: What are common application areas for digital twins concerning transport and logistics? • RQ2: How are digital twins used in last mile delivery and transporting people around cities? • RQ3: What is a common level of integration for the proposed digital twins? The paper addresses the research questions above through a systematic literature review. The paper is focused on digital twins of transporting people and goods


around a city. This specifically excludes digital twins of every aspect of the manufacturing process, including internal transport/production logistics, as they are a common research topic and a literature review about them has recently been published in, e.g., [14]. The systematic literature review provides a detailed overview of a niche research area. It could be argued that such a literature review is premature considering the low number of publications in this research area. However, the aim of the literature review is to highlight this research gap.

3 Methodology A systematic literature review (Fig. 2) has been performed following the Prisma guidelines [15]. The search was conducted between the 4th and 8th of November 2021. The keywords (“Digital Twin” AND “transport”), (“Digital Twin” AND “Logistics”), and (“Digital Twin” AND “Last mile delivery”) have been used to find relevant papers on ScienceDirect (i.e., publishing database) and Scopus (i.e., aggregator database). The search has been limited to these two databases as commonly done by authors [2]. The search has been limited to title, abstract, and author-specific keywords. Only articles written in English have been included. No papers published before 2017 have been found. Relatively broad research terms have been chosen to ensure that all relevant papers are found. The papers included for review had to be focused on the creation of a digital twin that represents the movement of people or goods around the city on roads. Papers focusing on internal transport/logistics, production logistics, mine-to-mill or shop floor logistics/transportation as well as papers focusing on the infrastructure or an individual vehicle (i.e., energy consumption, maintenance, life cycle, financing) have been excluded. Papers addressing only a small aspect of the digital twin (e.g., messaging Protocols or security) have been excluded if the whole system has not been represented in a digital twin. Nineteen overviews of conference proceedings were automatically excluded.
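For transparency, the search strings described above can be written out explicitly. The short sketch below assembles them in a Scopus-style syntax; the TITLE-ABS-KEY and LANGUAGE field codes are an assumption about how the queries were entered, since the paper only lists the keyword pairs.

```python
# Keyword pairs used in the search (title, abstract, and author keywords only).
keyword_pairs = [
    ('"Digital Twin"', '"transport"'),
    ('"Digital Twin"', '"Logistics"'),
    ('"Digital Twin"', '"Last mile delivery"'),
]

# Hypothetical Scopus-style field wrapper; ScienceDirect uses its own query syntax.
queries = [f"TITLE-ABS-KEY({a} AND {b}) AND LANGUAGE(english)" for a, b in keyword_pairs]

for q in queries:
    print(q)
```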

Fig. 2 Structure of the systematic literature review


Fig. 3 Word cloud of all keywords used in all studies (the figure is limited to keywords used more than once and the keywords digital twin and digital twins have been removed)

4 Results 4.1 All Papers The keywords of all papers (included and excluded) are shown in Fig. 3. The majority of papers identified in the literature review are focused on production or internal logistics. Almost twice as many studies were excluded for this reason than relevant studies being included in the study. This highlights both the frequent application of digital twins for manufacturing processes as well as the lack of digital twins of transport systems in cities. A possible reason is the increased difficulty of creating a digital twin of urban areas due to the limited data availability [16].
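A keyword cloud such as Fig. 3 can be reproduced from the exported author keywords. The sketch below shows one plausible way to do so with the wordcloud package; the package choice, the example keyword list, and the filtering of "digital twin"/"digital twins" follow the figure caption but are otherwise our assumptions.

```python
from collections import Counter
from wordcloud import WordCloud  # pip install wordcloud

# author_keywords would be read from the exported Scopus/ScienceDirect records.
author_keywords = ["digital twin", "logistics", "smart city", "digital twin",
                   "last mile delivery", "traffic management", "logistics"]

# Drop the search terms themselves and keep only keywords used more than once.
counts = Counter(k.lower() for k in author_keywords
                 if k.lower() not in {"digital twin", "digital twins"})
frequent = {k: v for k, v in counts.items() if v > 1}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(frequent)
cloud.to_file("keyword_cloud.png")
```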

4.2 Detailed Review of Included Papers Movement of People. Recent studies have created digital twins of the movement of people as well as transportation services for people. Anda et al. [17] proposed a methodology to use mobile phone data to create digital twins. For privacy reasons, it is not possible to gain access to individual mobile phone traces. To overcome this challenge, mobility data (i.e., histograms) were disaggregated into individual-level trip data. These data could be used to create digital twin travelers.


Shchegolova et al. [18] used the commercial software PTV VISSIM to create a digital twin of a road intersection. They simulated the effect of changing the traffic light pattern and adding a lane and concluded that PTV VISSIM can be used to create a digital twin of an intersection. Ritzinger et al. [19] developed a digital twin of a dial-a-ride service for patients. They compared the performance of different algorithms and re-optimization strategies. The advantage of having a digital twin of the highly dynamic dial-a-ride service is the possibility of evaluating novel strategies using the digital twin as to their suitability for deployment [19]. These examples show that there is a possibility of creating digital twins for transport services in cities. At present, most of these studies apply established methodologies to simulate transport in cities and refer to the solutions as digital twins. For example, [17] focused mainly on creating a demand model and [18] designed a traffic simulation of an intersection in PTV VISSIM, as many others have done before.

Real-Time Traffic Management. Traffic management within urban environments is becoming increasingly difficult due to the increasing demand [4]. Hence, adaptable real-time predictive traffic monitoring and management is key to creating smart and sustainable cities [4]. Thejaswini and Rajaraajeswari [4] and Rudskoy et al. [20] did not develop a digital twin. However, they reviewed ITS implementations and proposed a reference architecture of an intelligent transportation system. Tihanyi et al. [21] implemented a camera vision system to map the traffic environment in real time and tested it in a real-world application. Kumar et al. [22] proposed a real-time traffic congestion avoidance system measuring the traffic situation and driver intention. They simulated their algorithm in SUMO [23] using a traffic model from Cologne (i.e., no real-time data). One of their novelties is that their vehicles collect data co-operatively. These studies highlight the benefits of traffic-centric digital twins, or more specifically virtual vehicle (VV) networks, for real-time traffic management. They propose to use sensors on the infrastructure or vehicles to map a digital representation of the real world. This can be used, e.g., to predict driver intention (e.g., [4]) or enable collaboration between vehicles to improve the traffic flow (e.g., [4, 22]). Digital twins have not only been developed to manage private vehicles but also for public transport systems. The digital twin by [24] is one of the more sophisticated digital twins reported in this domain, including real-time data, machine learning, and a dashboard (i.e., visualization) for the operator to manage public transport services. The digital twin combines static data (i.e., infrastructure, passenger demand) and real-time data (i.e., location of buses, bus passenger load). They used, e.g., Kafka [25] for real-time data processing and implemented the predictive models using the Python library Scikit-learn. The digital twin has been applied to a bus line. The work in [7] gave an overview of a proof-of-concept digital twin simulating commuters and public transport vehicles. Their main focus is on the equipment required for the digital twin and the communication protocols. They tested their digital twin on a bus line.

Last Mile Delivery. Gutierrez-Franco et al. [1] proposed a digital twin capable of supporting decision-making for urban distribution. They applied it to Bogotá in


Colombia. The proposed framework combines multiple analytical techniques and includes the following steps: (i) data acquisition (e.g., traffic data, customer behavior), (ii) data mining to find patterns, (iii) forecast (i.e., scenarios, strategic responses), (iv) optimization, and (v) execution, including evaluation of performance through feedback loops. Liu et al. [6] developed a digital twin focused on freight parking management in cities using Thing'in. In contrast with many other authors, [6] implemented their digital twin in a software environment specifically developed to create digital twins. Thing'in allows the individual properties, and the structural and semantic relationships between the digital twin and the physical objects, to be manipulated. Marcucci et al. [3] discussed the advantages and disadvantages of digital twins for urban logistics. The authors reported that digital twins are a useful tool to visualize the complex relationships in city logistics as well as the cause and effect of policy decisions. They argued that digital twins are beneficial for short-term decisions but are not as useful for long-term decisions. Pan et al. [5] reviewed the literature on Smart City logistics. They concluded that digital twins are an important part of smart cities and data are the most significant part of the creation of a digital twin of city logistics. Vallejo et al. [26] conducted an agent-based simulation of the taco supply chain to evaluate CO2 emissions. They represented the simulation in an ontology using Protégé and exported it to the agent-based simulation software called NetLogo.

Others. People and goods not only need to be transported in cities but also within buildings in elevators. The future opportunities and trends in building information modeling are explored in [27]. The full potential of digital twins in the tall building industry is yet to be realized according to [27]. Digital twins could be used to interconnect the physical data of the building, to enhance the construction and operation of buildings [27]. The data gathered (i.e., people flow pattern) could influence maintenance schedules and building management decisions [27].
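To make the closed-loop character of frameworks such as [1] concrete, the following sketch lays out the acquire, mine, forecast, optimize, and execute steps as placeholder functions. It is a schematic reading of the steps listed above, not the authors' implementation; all function bodies and values are invented.

```python
def acquire_data(city):
    """Collect traffic, demand, and customer-behavior data (placeholder)."""
    return {"city": city, "orders": 120, "avg_speed_kmh": 18}

def mine_patterns(data):
    return {"peak_hours": [8, 18], "dense_zones": ["centre"]}

def forecast(patterns):
    return {"expected_orders": 130}

def optimize_routes(forecasted, data):
    return ["route-A", "route-B"]          # stand-in for a routing algorithm

def execute_and_measure(routes):
    return {"on_time_rate": 0.92}          # performance fed back into the next cycle

def one_planning_cycle(city="Bogotá"):
    data = acquire_data(city)
    patterns = mine_patterns(data)
    demand = forecast(patterns)
    routes = optimize_routes(demand, data)
    return execute_and_measure(routes)

print(one_planning_cycle())
```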

4.3 Level of Integration Kritzinger et al. [10] defined three levels of integration: (i) Digital model: No form of automated data exchange between the physical and digital object. Changing the physical object has no immediate effect on the digital object and vice versa. (ii) Digital shadow: Real-time data is fed into the digital object automatically. The digital shadow is not automatically feeding data back into the physical system, and (iii) Digital twin: The data flow is fully integrated in both directions. The papers by [18, 19, 22, 26] simulated a physical system using (historical) datasets but they didn’t include automatic communications between the physical and the virtual system (i.e., real-time (input) data link). Real-time data links between the physical and the virtual system are an expected feature for digital twins [11]. In detail, [22] used an online available traffic simulation scenario of an average day in Cologne as their input. Ritzinger et al. [19] used test instances generated based on a


real dataset. Shchegolova et al. [18] used the hourly traffic flow of an intersection, which has been determined by "monitoring" the intersection. Vallejo et al. [26] used, e.g., an unspecified public carbon footprint estimator and Google Maps for their calculations. Gutierrez-Franco et al. [1] proposed an automated link between the physical and the virtual system in both directions incorporating features such as dynamic routing and feedback loops. However, they implemented the digital twin within a simulation using mainly average or historical data and not real-time data. Campolo et al. [7] focused on messaging protocols for digital twins to track passengers. They used a real-world test case as proof of concept. The current system only includes a one-way data exchange from the system to the stakeholder. The digital twin of Amrani et al. [24] is able to monitor (i.e., represent the current real-world system state), predict (i.e., forecasting), and evaluate (i.e., assess what-if scenarios). In addition to static data sources (e.g., infrastructure, automatic fare collection), they use real-time data of, for example, the status of the public transport vehicles and bus passenger load. The prediction system uses historical data and, for example, a random forest adopted from another study to predict the passenger load long term (i.e., 1 year) and short term (e.g., 15 min). Liu et al. [6] used a software toolkit specifically developed for digital twins, which has the required links and interfaces between the physical and the virtual world. The digital twin is built using four data sources integrated via APIs (i.e., vehicles, network, stops, and destinations). The data sources are linked in the form of nodes and edges in the software Thing'in and Protégé (i.e., to create and generate instances of a transportation ontology). However, it is noted that the majority of digital twins developed for the movement of people and goods around a city do not use real-time data.
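As an illustration of the kind of passenger-load prediction attributed to [24], the sketch below fits a random forest with scikit-learn on synthetic records. Only the use of scikit-learn and a random forest is taken from the source; the features, data, and forecast horizon shown here are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic records: [hour of day, day of week, stop index, headway in minutes].
X = np.column_stack([
    rng.integers(5, 24, 2000),   # hour
    rng.integers(0, 7, 2000),    # weekday
    rng.integers(0, 30, 2000),   # stop
    rng.uniform(4, 20, 2000),    # headway
])
# Invented target: passengers on board, loosely peaking at rush hours.
y = (20 * np.exp(-((X[:, 0] - 8) ** 2) / 8)
     + 15 * np.exp(-((X[:, 0] - 17) ** 2) / 8)
     + rng.normal(0, 2, 2000))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")

# Short-term forecast for one bus stop at 17:00 on a Friday with a 10-minute headway.
print(model.predict([[17, 4, 12, 10.0]]))
```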

4.4 Definition of a Digital Twin Regarding the Transport of People and Goods Around Cities In line with the definition by [10], the authors propose a preliminary classification of digital twins with regard to transport in cities. The definitions of digital model, digital shadow, and digital twin are the same as in [10]. Traffic simulations can be classed under digital traffic management models. These use historical data to evaluate what-if scenarios. Digital traffic management shadows are intelligent traffic management systems that support operators in managing traffic in real time, such as PTV Optima. A digital traffic management twin would automatically optimize the traffic in cities without requiring human input. A digital logistics model simulates last mile delivery. This includes scheduled logistics (e.g., waste collection) and unscheduled logistics (e.g., parcel delivery, on-demand meal delivery). A digital logistics shadow is software that allows logistics


Table 1 Classification of studies

 | Digital model | Digital shadow | Digital twin
Traffic | Shchegolova et al. [18]; Kumar et al. [22] | |
Logistics | Vallejo et al. [26] | Gutierrez-Franco et al. [1] | Liu et al. [6]
MaaS | Ritzinger et al. [19] | Campolo et al. [7]; Amrani et al. [24] |

companies to track and optimize last mile delivery services, such as PTV Route Optimizer. A digital logistics twin would optimize last mile delivery services automatically. The same applies to Mobility as a Service (MaaS) to transport people around the city using, e.g., bike sharing or public transport. In the same way, a digital MaaS model simulates MaaS, a digital MaaS shadow is a real-time interface supporting the optimization of MaaS, and a digital MaaS twin optimizes the MaaS operations automatically. Table 1 shows the classification of the studies in this literature review. The table classifies the papers based on the digital twin created and not based on the authors' proposed solution.

5 Conclusion The contribution of the systematic literature review outlined in this paper is to highlight two main limitations of the state of the art in research on digital twins for transportation systems in cities. First, the majority of transport or logistics-related digital twins have been created to address production logistics or the supply chain issues. Digital twins that focus on the movement of people and goods around a city are comparably rare. Second, there is no standard definition of digital twins within this domain. Some authors state that they have created a digital twin even though the level of integration/interoperation may not be appropriate. Some of the proposed systems might rather be described as digital models or digital shadows. These authors use readily available and tested simulation and optimization methodologies (i.e., routing algorithms, simulation software) to create their “digital twins”. Hence, the difference between a simulation and a digital twin is not evident from the work of these authors. However, the systematic literature review reported in this paper has also identified promising research approaches to integrated digital twins for the transport of people and goods around the city, which can be seen as options for future research. The opportunities for digital twins are vast from real-time traffic management to last mile delivery and public transport management. Acknowledgements The authors gratefully acknowledge the financial support of the Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in Embedded Intelligence under grant reference EP/L014998/1 and Ford Motor Company for their support and input to this research.


References 1. Gutierrez-Franco, E., Mejia-Argueta, C., Rabelo, L.: Data-driven methodology to support longlasting logistics and decision making for urban last-mile operations. Sustainability 13(11), 6230 (2021). https://doi.org/10.3390/su13116230 2. Moshood, T.D., Nawanir, G., Sorooshian, S., Okfalisa, O.: Digital twins driven supply chain visibility within logistics: a new paradigm for future logistics. Appl. Syst. Innov. 4(2), 29 (2021). https://doi.org/10.3390/asi4020029 3. Marcucci, E., Gatta, V., Le Pira, M., Hansson, L., Bråthen, S.: Digital twins: a critical discussion on their potential for supporting policy-making and planning in urban logistics. Sustainability 12(24), 10623 (2020). https://doi.org/10.3390/su122410623 4. Thejaswini, R.S.S.S., Rajaraajeswari, S.: A real-time traffic congestion-avoidance framework for smarter cities. In AIP Conf. Proc. 2039, 020009 (2018). https://doi.org/10.1063/1.5078968 5. Pan, S., Zhou, W., Piramuthu, S., Giannikas, V., Chen, C.: Smart city for sustainable urban freight logistics. Int. J. Prod. Res. 59(7), 2079–2089 (2021). https://doi.org/10.1080/00207543. 2021.1893970 6. Liu, Y., Folz, P., Pan, S., Ramparany, F., Bolle, S., Ballot, E., Coupaye, T.: Digital Twin-Driven Approach for Smart City Logistics: The Case of Freight Parking Management. In: Dolgui, A., Bernard, A., Lemoine, D., von Cieminski, G., Romero, D. (eds.) IFIP International Conference on Advances in Production Management Systems, pp. 237–246. Springer, Cham (2021) 7. Campolo, C., Genovese, G., Molinaro, A., Pizzimenti, B.: Digital twins at the edge to track mobility for MaaS applications. In: Proceedings of the 2020 IEEE/ACM 24th International Symposium on Distributed Simulation and Real Time Applications, pp. 1–6. IEEE, Prague (2020) 8. Schislyaeva, E.R., Kovalenko, E.A.: Innovations in logistics networks on the basis of the digital twin. Acad. Strateg. Manag. J. 20(1S), 1–17 (2021) 9. Defraeye, T., Shrivastava, C., Berry, T., Verboven, P., Onwude, D., Schudel, S., Buhlmann, A., Cronje, P., Rossi, R.M.: Digital twins are coming: will we need them in supply chains of fresh horticultural produce? Trends Food Sci. Technol. 109, 245–258 (2021). https://doi.org/ 10.1016/j.tifs.2021.01.025 10. Kritzinger, W., Karner, M., Traar, G., Henjes, J., Sihn, W.: Digital twin in manufacturing: a categorical literature review and classification. IFAC-PapersOnLine 51(11), 1016–1022 (2018). https://doi.org/10.1016/j.ifacol.2018.08.474 11. Kuehn, W.: Digital twins for decision making in complex production and logistic enterprises. Int. J. Design Nat. Ecodyn. 13(3), 260–271 (2018) 12. Barykin, S.Y., Bochkarev, A.A., Kalinina, O.V., Yadykin, V.K.: Concept for a supply chain digital twin. Int. J. Math. Eng. Manage. Sci. 5(6), 1498–1515 (2020). https://doi.org/10.33889/ IJMEMS.2020.5.6.111 13. Bhatti, G., Mohan, H., Raja Singh, R.: Towards the future of smart electric vehicles: digital twin technology. Renew. Sustain. Energy Rev. 141, 110801 (2021). https://doi.org/10.1016/j. rser.2021.110801 14. Kosacka-Olejnik, M., Kostrzewski, M., Marczewska, M., Mrówczy´nska, B., Pawlewski, P.: How digital twin concept supports internal transport systems?-literature review. Energies 14(16), 4919 (2021). https://doi.org/10.3390/en14164919 15. Page, M.J., McKenzie, J.E., Bossuyt, P.M., Boutron, I., Hoffmann, T.C., Moher, D.: The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372 71 (2021). https://doi.org/10.1136/bmj.n71 16. 
Richter, A., Löwner, M.O., Ebendt, R., Scholz, M.: Towards an integrated urban development considering novel intelligent transportation systems: urban development considering novel transport. Technol. Forecast. Soc. Chang. 155, 119970 (2020). https://doi.org/10.1016/j.tec hfore.2020.119970 17. Anda, C., Ordonez Medina, S.A., Axhausen, K.W.: Synthesising digital twin travellers: individual travel demand from aggregated mobile phone data. Transp. Res. Part C: Emerg. Technol. 128, 103118 (2021). https://doi.org/10.1016/j.trc.2021.103118


18. Shchegolova, N., Talalai, V., Gorshenina, C., Smirnova, D.: Software modeling application for verification of transportation planning engineering hypotheses. In IOP Conference Series: Materials Science and Engineering, vol. 832, pp. 1–6. IPO Publishing, Bristol (2020) 19. Ritzinger, U., Puchinger, J., Rudloff, C., Hartl, R.F.: Comparison of anticipatory algorithms for a dial-a-ride problem. Eur. J. Oper. Res. 301(2), 591–608 (2021). https://doi.org/10.1016/ j.ejor.2021.10.060 20. Rudskoy, A., Ilin, I., Prokhorov, A.: Digital twins in the intelligent transport systems. Transp. Res. Proc. 54, 927–935 (2021). https://doi.org/10.1016/j.trpro.2021.02.152 21. Tihanyi, V., Rovid, A., Remeli, V., Vincze, Z., Szalay, Z.: Towards cooperative perception services for its: Digital twin in the automotive edge cloud. Energies 14(18), 5930 (2021). https://doi.org/10.3390/en14185930 22. Kumar, S.A.P.P., Madhumathi, R., Chelliah, P.R., Tao, L., Wang, S.: A novel digital twincentric approach for driver intention prediction and traffic congestion avoidance. J. Reliable Intell. Environ. 4, 199–209 (2018). https://doi.org/10.1007/s40860-018-0069-y 23. Krajzewicz, D.: Traffic simulation with SUMO-simulation of urban mobility. In: Fundamentals of Traffic Simulation, pp. 269–293. Springer, New York (2010) 24. Amrani, A., Arezki, H., Lellouche, D., Gazeau, V., Fillol, C., Allali, O., Lacroix, T.: Architecture of a public transport supervision system using hybridization models based on real and predictive data. In: Proceedings—Euromicro Conference on Digital System Design, pp. 440–446. IEEE, Ktanj (2020) 25. Kreps, J., Narkhede, N., Rao, J.: Kafka: a distributed messaging system for log processing. https://distributed-computing-musings.com/2022/03/paper-notes-kafka-a-dis tributed-messaging-system-for-log-processing/. Last accessed 07 Nov 2021 26. Vallejo, M.E., Larios, V.M., Magallanes, V.G., Cobian, C., De La Luz Guzman Castaneda, M., Tellez, G.B.: Creating resilience for climate change in smart cities based on the local food supply chain. In: 2021 IEEE International Smart Cities Conference, pp. 1–7. IEEE, Manchester (2021) 27. Jetter, M.: Lift and the city: how elevators reshaped cities. In: Proceedings of the CTBUH 10th World Congress, pp. 66–71 (2019)

Recent Advances of Digital Twin Application in Agri-food Supply Chain Tsega Y. Melesse, Valentina Di Pasquale, and Stefano Riemma

Abstract The world is facing challenges in the loss of agri-food products due to poor management at all levels of the supply chain. Even though technologies have been trialed to minimize the waste of fresh produce from farm to fork, a huge volume of product loss still occurs. The application of the digital twin is becoming one of the promising solutions to improve the management of perishable food items by enhancing visibility. This article aims to present a detailed analysis of work by researchers in the field, focusing on the current advances of digital twin applications in enhancing supply chain interoperability in the agri-food sector. The findings showed that research on the application of digital twin technology in the agri-food supply chain is in its incipient stage; therefore, more efforts are needed to utilize this technology. Keywords Agri-food products · Digital twin · Supply chain

1 Introduction Globally, a high volume of food and agricultural products in good condition for human consumption is wasted along the supply chain due to poor management. In contrast, the population is increasing steadily creating a complex food supply chain with inadequate supply chain infrastructure. According to the reports, an estimated 30% of the food is wasted globally somewhere along the food supply chain [1–3]. Over the last decades, many technologies have been implemented to minimize the quality loss of fresh produce from farm to fork. Nevertheless, nearly 50% of loss still


occurs during different phases of the supply chain, including precooling, packaging, transportation, and storage, due to inefficiencies of the entire supply chain [4, 5]. This issue is dominant in fresh foods that cannot maintain their quality for a long time due to their biological composition [6] and the volatility of weather conditions. Improving the digital interoperability of the agri-food supply chain is a critical step to enhance real-time information and data exchange about product status between stakeholders. The application of the digital twin has recently been attracting the attention of researchers in the field. Its implementation is a key enabler of real-time (or near real-time) monitoring of systems and of the quality evolution of products during the pre-harvest and post-harvest stages of handling. By definition, a digital twin is a virtual representation of both living and nonliving entities and processes [7, 8]. Its applications are largely limited to production monitoring, predictive maintenance, and after-sales services [9]. However, it can also improve the business performance of the supply chain by developing predictive metrics and projections, and by monitoring product quality changes as well as disruptions in the logistics network. The contribution of this article is to evaluate the recent advances in using digital twin-based models in the supply chain, particularly in the agri-food sector.

2 Method To explore the current advancements of digital twin applications in the agri-food supply chain, a bibliographic analysis was conducted on data retrieved from Scopus using the keywords (“digital twin”) AND (“post-harvest” OR “food”). This search returned 43 documents published between 2018 and July 25, 2021, which were handled following the review process described in Fig. 1. VOSviewer software [10, 11] was used to perform co-authorship, citation, and journal analyses for countries, authors, institutions, and keywords. Subsequently, the paper discusses the contents with particular focus on the concepts and applications of the digital twin in the agri-food domain, from single-product monitoring to the whole supply chain network.
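For readers who wish to reproduce this kind of screening step, the short Python sketch below illustrates how a Scopus CSV export could be filtered by year and summarized by document type. It is only a minimal sketch under stated assumptions: the file name scopus_export.csv and the column labels "Year" and "Document Type" are assumptions about the export format and are not part of the original study.

    import csv
    from collections import Counter

    # Assumed Scopus export file and column names (illustrative only).
    with open("scopus_export.csv", newline="", encoding="utf-8") as f:
        records = [row for row in csv.DictReader(f)]

    # Keep documents published from 2018 up to the cut-off year of the search.
    screened = [r for r in records if 2018 <= int(r["Year"]) <= 2021]

    # Summarize the share of each document type, as reported in Sect. 3.1.
    counts = Counter(r["Document Type"] for r in screened)
    total = sum(counts.values())
    for doc_type, n in counts.most_common():
        print(f"{doc_type}: {n} ({100 * n / total:.1f}%)")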

3 Results 3.1 Annual Publication Trend and Paper Classification The global trend of research on digital twins in the agri-food sector has increased steadily over the past 4 years, with data for 2021 still incomplete, as shown in Fig. 2. This can be considered key evidence of the growing interest in implementing digital twins in the area.


Fig. 1 Review process

Fig. 2 Records of publications

According to the data, most contributions (39.5%) are scholarly articles in reputed journals, followed by conference papers (34.9%), reviews (14%), conference reviews (9.3%), and a book chapter (2.3%).

3.2 Contributions of Institutions According to the data source, a total of 93 institutions contributed to the publications on digital twin applications in the agri-food supply chain. Empa—Swiss Federal Laboratories for Materials Science and Technology was the largest contributor in terms of number of publications with 5 papers, followed by Wageningen University & Research and the Russian Academy of Sciences, with 4 and 3 papers, respectively (Fig. 3). These institutions continue to be active in the field, and more research outputs are expected in the future.

3.3 Contribution of Authors Globally, 133 researchers participated in the 43 publications. A total of 23 papers, accounting for 53% of the search results, were contributed by the top 10 authors. Defraeye, T. from Switzerland was the author with the highest number of publications, followed by Verboven, P. from Belgium; the remaining authors in the top list have the same number of publications, contributing two papers each.


Fig. 3 Institutes involved in the research

3.4 Citation Analysis According to the citation analysis, Defraeye [8, 12–14] still has the highest number of citations (33 citations), followed by [8, 12, 14] with citation scores of 29 and 19, respectively. As for the country co-authorship analysis, the USA was at the center of research in the area, followed by Germany and Switzerland with equal contributions. In this analysis, publications originating from 24 countries were examined and, for each country, the total strength of its co-authorship links with the other countries was calculated. In terms of citation numbers, Australia is leading with a total of 107 citations, followed by the Czech Republic and the USA.

3.5 Journals The bibliographic coupling of the journals was examined with a network visualization based on weighted documents, using a minimum threshold of 1 publication per source. Of the 28 sources, IEEE Access, Communications in Computer and Information Science, Food and Bioproducts Processing, ACM International Conference Proceeding Series, and Advances in Biochemical Engineering/Biotechnology are the top 5 contributing sources. In the Scopus search, 38 journals were identified as the data sources of the published papers. For each source, the total strength of its citation links with the other sources was determined, and the top 10 journals by total link strength were analyzed. Thus, IEEE Access has the greatest number of publications (4), with 21 citations and a total link strength of 12. The highest citation score (103) was recorded


from NJAS—Wageningen Journal of Life Sciences, the main scientific platform for research related to agricultural production, food and nutrition security, and natural resource management. On the other hand, Trends in Food Science and Technology is found to be the best journal in terms of total link strength (45).

4 Keyword Analysis Keyword co-occurrence analysis was applied to trace current developments in the research topic. The density map in Fig. 4 was created using VOSviewer, and the result shows that there were 534 keywords in the 43 papers. Some of the keywords that frequently appear in the documents include “digital twin”, “internet of things”, “artificial intelligence”, “machine learning”, “supply chains”, “computer model”, “simulation”, “fruit”, “thermal processing (foods)”, “agriculture”, and “food products”. Different clusters are shown in the figure, and the keyword “digital twin” remains dominant. This keyword has 22 occurrences and a total link strength of 415, which indicates a strong link with the other keywords. The analysis also shows that the occurrence of keywords related to food and agriculture is limited, which implies that the level of application and understanding of the digital twin in the agri-food supply chain is still low.
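To make the co-occurrence metrics concrete, the following Python sketch shows one way the occurrence count and total link strength of a keyword could be computed from per-paper keyword sets. It is an illustrative reconstruction of the standard definitions used by such tools (total link strength = sum of co-occurrence counts with all other keywords), not VOSviewer's actual implementation; the example keyword lists are invented.

    from itertools import combinations
    from collections import Counter

    # Invented example: keyword sets of three hypothetical papers.
    papers = [
        {"digital twin", "internet of things", "supply chains"},
        {"digital twin", "machine learning", "fruit"},
        {"digital twin", "supply chains", "simulation"},
    ]

    occurrences = Counter()          # number of papers mentioning each keyword
    links = Counter()                # co-occurrence count per keyword pair
    for keywords in papers:
        occurrences.update(keywords)
        for a, b in combinations(sorted(keywords), 2):
            links[(a, b)] += 1

    def total_link_strength(keyword):
        # Sum of co-occurrences of `keyword` with every other keyword.
        return sum(n for (a, b), n in links.items() if keyword in (a, b))

    print(occurrences["digital twin"], total_link_strength("digital twin"))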

Fig. 4 Co-occurrence network map of keywords


5 Discussion The findings show that research on the application of digital twins in the food and agricultural area was in its infancy over the study period from 2018 to July 25, 2021. Only 43 published works were found in the Scopus database, cited 234 times in total. The USA is the most dominant contributor of publications in the area, followed by Switzerland, the Netherlands, Belgium, and South Africa. However, the highest citation scores come from Australia, France, and the USA, in decreasing order. The current progress of research activity indicates that the Netherlands is putting considerable effort into digital twins related to the life sciences. In particular, Wageningen University is one of the higher-education institutes supporting the Industry 4.0 context [15], with a good reputation in the fields of agriculture and forestry as well as artificial intelligence. In addition, projects like the European IoF2020 project have shown promising progress in the application of digital twins in the agri-food domain [16]. Overall, it can be expected that more outputs will become available in the coming years. By definition, a “digital twin is a digital representation of merely anything: it can be an object, product or asset” [17]. According to the most agreed definition, it can be a virtual representation of anything with all of its basic elements, including product properties and geometric information such as the shape, size, and structural components of produce [4, 9, 12–14, 18–27]. In the case of fresh agri-food items, it can be defined as a virtual representation of fresh horticultural produce [13, 23, 28]. In this digital era, promising progress has been observed in the use of digital twins in areas like logistics, horticulture, animal farming, food product development, and food processing (Fig. 5).

Fig. 5 Common applications of the digital twin in the agri-food sector


Nowadays, owing to the advancement of enabling tools for Industry 4.0, the identification of food products at retailers using smart technologies is becoming a hot topic of discussion. For instance, an Expert View has described possible scenarios in which IoT technology is used to monitor food product status so that products can be transferred to end-users or put on clearance before losing quality [29]. In the specific case of fruits, the digital twin has been described as an enabling tool that contains all the relevant components and simulates the physical cooling behavior and the associated evolution of the product throughout the supply chain [4, 12–14]. For such items, mechanistic modeling is preferable because it provides the temperature at each point in the fruit. Research by [30] demonstrated the use of digital twins in controlling the quality and shelf life of mangoes during transcontinental transport. In this study, a numerical modeling technique was applied to create digital twins of the three most exported mango varieties to accurately simulate the cooling behavior of real mangoes, using the airflow rate and its temperature as key parameters. Similarly, a study by [14] developed a digital twin using a mechanistic model to monitor fruit quality evolution during refrigerated transportation. In the post-harvest supply chain, digital twin technology is used to monitor and predict the status of fresh produce throughout its life [13, 31]. The prediction capability is highly dependent on the microbiological, physical, biochemical, or physiological state of the fresh produce. This approach can play a meaningful role for perishable items, particularly fruits and vegetables, by enhancing visibility for retailers, exporters, consumers, and other stakeholders on the remaining shelf life of the product in each shipment and storage. Moreover, it enables the prediction of potential problems in the food supply chain and the optimization of food production in farming systems like aquaponics [32, 33]. The First In, First Out picking rule combined with monitoring of the deterioration rate of foods [5, 13] is a commonly used approach to minimize food waste during storage. Replacing this practice with a digital twin solution starts with the creation of a virtual copy of the fresh produce that mimics its microbiological, physical, and biochemical conditions, as well as their changes throughout the supply chain, using measured data. Commonly, the evolution of the product is monitored by measuring temperature, metabolic gas concentration, or relative humidity in real time. This can take the agri-food supply chain one step forward by delivering advance forecasts of the remaining shelf life of products during transportation and storage. The solution also makes it possible to follow the entire life of agricultural products from flower to fork, ensuring product safety [14, 34–41]. However, the implementation of these approaches is still difficult because of the complexity of acquiring data about the environmental conditions that directly influence the shelf life of food products. Besides, system complexity, technology, supply chain disruptions, and societal issues are reported as challenges in the implementation of Industry 4.0 tools [20, 42, 43].
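As a purely illustrative sketch of the kind of shelf-life forecast described above, the Python snippet below propagates a first-order, Arrhenius-type quality decay driven by a measured temperature history. The kinetic parameters and the temperature series are invented placeholders, not values taken from the cited studies, and a real fruit twin would rely on a far richer mechanistic model.

    import math

    # Invented placeholder kinetics for a generic perishable product.
    EA = 60_000.0        # activation energy, J/mol (assumed)
    R = 8.314            # gas constant, J/(mol*K)
    K_REF = 0.05         # decay rate per hour at the reference temperature (assumed)
    T_REF = 278.15       # reference temperature, K (5 deg C)

    def remaining_quality(temps_c, dt_hours=1.0, q0=1.0):
        """Integrate first-order quality loss over an hourly temperature log."""
        q = q0
        for t_c in temps_c:
            t_k = t_c + 273.15
            k = K_REF * math.exp(-EA / R * (1.0 / t_k - 1.0 / T_REF))
            q *= math.exp(-k * dt_hours)
        return q

    # Invented temperature log of one shipment (deg C, hourly readings).
    shipment_log = [4.0, 4.5, 6.0, 12.0, 8.0, 5.0, 4.0]
    print(f"Predicted remaining quality: {remaining_quality(shipment_log):.2f}")

Fed with real sensor data instead of the invented log, the same idea yields the kind of remaining shelf-life estimate that could replace a plain First In, First Out rule.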
At its core, the digital twin of the supply chain is founded on GIS technology. In food supply chain networks, it is an important tool for proactively detecting risks and predicting delivery times and routes based on GIS information. It is an advanced tool that can capture real-time information and create a virtual replica of the natural world on a map, combining various forms of data into a single view that users can access easily. A study by [27] demonstrated the use of GIS-based cold chain management in the transportation and distribution of fresh products under the control of predefined temperatures.


The proposed methodology includes vehicle routing, and different scenarios were tested by considering the fleet size, the distribution centers, the total travel time, the travel distance, etc. This work used offline optimization, but more recent works show improvements in the formulation of delivery routes using real-time data and a multi-objective genetic algorithm optimizer [44]. This is a big step toward managing dynamic incidents in the supply chain that cause food deterioration before products reach end-users. Recently, IoT technologies have proven prominent in the real-time tracking of vehicles and in automatic data acquisition, which can improve the optimization of delivery schedules, process visibility, and order management capability. Coordination is another important value gained from digital twin applications in the supply chain. Research by [45] demonstrated a digital twin framework for supply chain coordination using real-time logistics simulation. A virtual asset was created and simulated for different logistics cases, integrated with a routing application. The work proved the positive role of digital twin implementation in cooperation along the supply chain, using models synchronized by IoT sensors to gain insights and consistently realign with plans. A combination of simulation modeling, optimization, and data analytics makes up the full range of technologies needed to create a supply chain digital twin model [43]. In general, with increasing attention to public health issues, logistical complexity, and high perishability [46], the supply chain of fresh produce has received great attention. Other relevant issues in the agri-food supply chain include demand and price variability, quality, and weather variability [47, 48]. Therefore, optimizing the overall supply chain using digital twin technology is considered a promising step toward improving supply chain integration.
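To give a flavor of how real-time data could feed such route re-planning, the Python sketch below re-runs a simple nearest-neighbour routing heuristic whenever updated travel times arrive. It is only an assumed toy example with invented travel-time matrices; the cited works use far more sophisticated multi-objective genetic algorithm optimizers.

    def nearest_neighbour_route(travel_time, start=0):
        """Greedy route over all stops given a square travel-time matrix (minutes)."""
        unvisited = set(range(len(travel_time))) - {start}
        route, current = [start], start
        while unvisited:
            current = min(unvisited, key=lambda stop: travel_time[current][stop])
            route.append(current)
            unvisited.remove(current)
        return route

    # Invented travel-time matrix: depot (0) plus three retail stops.
    planned = [
        [0, 30, 45, 60],
        [30, 0, 20, 40],
        [45, 20, 0, 25],
        [60, 40, 25, 0],
    ]
    print("Planned route:", nearest_neighbour_route(planned))

    # A real-time update (e.g., congestion between stops 1 and 2) triggers re-planning.
    updated = [row[:] for row in planned]
    updated[1][2] = updated[2][1] = 90
    print("Re-planned route:", nearest_neighbour_route(updated))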

6 Conclusion This review has summarized the current trends in the application of digital twin technology in the agri-food supply chain. This area is not well explored in the literature, and the number of studies remains limited, although many countries and organizations are showing interest in becoming involved. The findings show that research on the application of digital twins in the agri-food supply chain is at an early phase; however, the promising research activities and the progress being made suggest that more outcomes can be expected in the coming years. Owing to the scarcity of literature in the field, only a limited number of papers have directly addressed the potential uses of the digital twin in the agri-food supply chain. Remarkably, many IoT applications in the agriculture and food sector are not described as digital twins. This review provides insights into the trends of research activities related to digital twin applications in the agri-food supply chain, discussing the relevant contents and exploring the concept, product twins, and supply chain twins.


References 1. Lipinski, B., Robertson, K.: Measuring food loss and waste. In: BioCycle. JG Press, Vancouver (2017) 2. European Commission: A digitized web of food. http://ec.europa.eu/newsroom/horizon2020/ document.cfm?doc_id=48443. Last accessed 15 July 2021 3. FAO Sustainable Development Goals. http://www.fao.org/sustainable-development-goals/ind icators/1231/en/. Last accessed 11 July 2021 4. Onwude, D.I., Chen, G., Eke-Emezie, N., Kabutey, A., Khaled, A.Y., Sturm, B.: Recent advances in reducing food losses in the supply chain of fresh agricultural produce. Processes 8(11), 1431 (2020). https://doi.org/10.3390/pr8111431 5. Micale, R., La Scalia, G.: Shelf life-based inventory management policy for RF monitored warehouse. Int. J. RF Technol. 9(3–4), 101–111 (2018). https://doi.org/10.3233/RFT-181794 6. La Scalia, G., Nasca, A., Corona, O., Settanni, L., Micale, R.: An innovative shelf life model based on smart logistic unit for an efficient management of the perishable food supply chain. J. Food Process Eng. 40(1), e12311 (2017). https://doi.org/10.1111/jfpe.12311 7. Van Der Burg, S., Kloppenburg, S., Kok, E.J., Van Der Voort, M.: Digital twins in agri-food: societal and ethical themes and questions for further research. NJAS Impact Agric. Life Sci. 93(1), 98–125 (2021). https://doi.org/10.1080/27685241.2021.1989269. 8. Verboven, P., Defraeye, T., Datta, A.K., Nicolai, B.: Digital twins of food process operations: the next step for food process models. Curr. Opin. Food Sci. 35, 79–87 (2020). https://doi.org/ 10.1016/j.cofs.2020.03.002 9. Melesse, T.Y., Di Pasquale, V., Riemma, S.: Digital twin models in industrial operations: stateof-the-art and future research directions. IET Collaborative Intell. Manuf. 3(1), 37–47 (2021). https://doi.org/10.1049/cim2.12010 10. Wu, H., Tong, L., Wang, Y., Yan, H., Sun, Z.: Bibliometric analysis of global research trends on ultrasound microbubble: a quickly developing field. Front. Pharmacol. 12 (2021). https:// doi.org/10.3389/fphar.2021.646626 11. Ritchie, A., Teufel, S., Robertson, S.: Using terms from citations for IR: some first results. In: European Conference on Information Retrieval, pp. 211–221. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78646-7_21 12. Tagliavini, G., Defraeye, T., Carmeliet, J.: Multiphysics modeling of convective cooling of non-spherical, multi-material fruit to unveil its quality evolution throughout the cold chain. Food Bioprod. Process. 117, 310–320 (2019). https://doi.org/10.1016/j.fbp.2019.07.013 13. Defraeye, T., Shrivastava, C., Berry, T., Verboven, P., Onwude, D., Schudel, S., Rossi, R.M., et al.: Digital twins are coming: will we need them in supply chains of fresh horticultural produce? Trends Food Sci. Technol. 109, 245–258 (2021). https://doi.org/10.31224/osf.io/j8pzs 14. Defraeye, T., Tagliavini, G., Wu, W., Prawiranto, K., Schudel, S., Kerisima, M.A., Bühlmann, A., et al.: Digital twins probe into food cooling and biochemical quality changes for reducing losses in refrigerated supply chains. Resources Conserv. Recycl. 149, 778–794 (2019). https:// doi.org/10.1016/j.resconrec.2019.06.002 15. Catal, C., Tekinerdogan, B.: Aligning education for the life sciences domain to support digitalization and industry 4.0. Proc. Comput. Sci. 158, 99–106 (2019). https://doi.org/10.1016/j. procs.2019.09.032 16. Tekinerdogan, B., Verdouw, C.: Systems architecture design pattern catalog for developing digital twins. Sensors 20(18), 5103 (2020). https://doi.org/10.3390/s20185103 17. 
Marmolejo-Saucedo, J.A., Hurtado-Hernandez, M., Suarez-Valdes, R.: Digital twins in supply chain management: a brief literature review. In: International Conference on Intelligent Computing & Optimization, pp. 653–661. Springer, Cham (2019). https://doi.org/10.1007/ 978-3-030-33585-4_63 18. Neethirajan, S., Kemp, B.: Digital twins in livestock farming. Animals 11(4), 1008 (2021). https://doi.org/10.3390/ani11041008


19. Marmolejo-Saucedo, J.A.: Design and development of digital twins: a case study in supply chains. Mobile Netw. Appl. 25, 2141–2160 (2020). https://doi.org/10.1007/s11036-020-015 57-9 20. Mo, J., Beckett, R.C.: Transdisciplinary system of systems development in the trend to X4. 0. In: Transdisciplinary Engineering for Complex Socio-technical Systems–Real-life Applications, pp. 3–12. IOS Press, Amsterdam (2020). https://doi.org/10.3233/ATDE200055 21. Kharchenko, V., Illiashenko, O., Morozova, O., Sokolov, S.: Combination of digital twin and artificial intelligence in manufacturing using industrial IoT. In: 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT), pp. 196–201. IEEE, Kyiv (2020). https://doi.org/10.1109/DESSERT50317.2020.9125038 22. Pylianidis, C., Osinga, S., Athanasiadis, I.N.: Introducing digital twins to agriculture. Comput. Electron. Agric. 184, 105942 (2021). https://doi.org/10.1016/j.compag.2020.105942 23. Nikitina, M.A., Chernukha, I.M., Lisitsyn, A.B.: About a “digital twin” of a food product. Theory and Practice of Meat Processing 5(1), 4–8 (2020). https://doi.org/10.21323/2414-438X2020-5-1-4-8 24. Scheper, T., Beutel, S., McGuinness, N., Heiden, S., Oldiges, M., Lammers, F., Reardon, K.F.: Digitalization and bioprocessing: promise and challenges. In: Digital Twins, pp. 57–69. Springer, Cham (2020). https://doi.org/10.1007/10_2020_139 25. Glaessgen, E., Stargel, D.: The digital twin paradigm for future NASA and US Air Force vehicles. In: 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, p. 1818. US Gov, Honolulu (2012). https://doi.org/10.2514/6.2012-1818 26. Melesse, T.Y., Di Pasquale, V., Riemma, S.: Digital twin models in industrial operations: a systematic literature review. Proc. Manuf. 42, 267–272 (2020). https://doi.org/10.1016/j.pro mfg.2020.02.084 27. Suraraksa, J., Shin, K.S.: Urban transportation network design for fresh fruit and vegetables using GIS—the case of Bangkok. Appl. Sci. 9(23), 5048 (2019). https://doi.org/10.3390/app 9235048 28. Nikitina, M., Chernukha, I.: Personalized nutrition and “digital twins” of food. Potravinarstvo 14(1), 264–270 (2020). https://doi.org/10.5219/1312 29. Sequeira, N.: How Digital Twins Can Help Retailers Give More to Food Banks. https://www. rfidjournal.com/how-digital-twins-can-help-retailers-give-more-to-food-banks. Last accessed 07 July 2021 30. Pattanaik, S., Jenamani, M.: Numerical analysis of cooling characteristics of Indian mangoes using digital twin. In: IECON 2020 The 46th Annual Conference of the IEEE Industrial Electronics Society, pp. 3095–3101. IEEE, Singapore (2020). https://doi.org/10.1109/IECON4 3393.2020.9254303 31. Digital Twin Corporation. Technologies for reducing agricultural waste. https://www.digitaltw incorporation.com/our-team. last accessed 09 July 2021 32. Ahmed, A., Zulfiqar, S., Ghandar, A., Chen, Y., Hanai, M., Theodoropoulos, G.: Digital twin technology for aquaponics: towards optimizing food production with dynamic data driven application systems. In: Asian Simulation Conference, pp. 3–14. Springer, Singapore (2019). https://doi.org/10.1007/978-981-15-1078-6_1 33. Ulum, M., Ibadillah, A.F., Alfita, R., Aji, K., Rizkyandi, R.: Smart aquaponic system-based Internet of Things (IoT). J. Phys: Conf. Ser. 1211(1), 012047 (2019). https://doi.org/10.1088/ 1742-6596/1211/1/012047 34. Verdouw, C., Tekinerdogan, B., Beulens, A., Wolfert, S.: Digital twins in smart farming. Agric. Syst. 189, 103046 (2021). 
https://doi.org/10.1016/j.agsy.2020.103046 35. Sharma, A., Zanotti, P., Musunur, L.P.: Drive through robotics: robotic automation for last mile distribution of food and essentials during pandemics. IEEE Access 8, 127190–127219 (2020). https://doi.org/10.1109/ACCESS.2020.3007064 36. Pantano, M., Kamps, T., Pizzocaro, S., Pantano, G., Corno, M., Savaresi, S.: Methodology for plant specific cultivation through a plant identification pipeline. In: 2020 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), pp. 298–302. IEEE, Trento (2020). https://doi.org/10.1109/MetroAgriFor50201.2020.9277567


37. Delgado, J.A., Short, N.M., Jr., Roberts, D.P., Vandenberg, B.: Big data analysis for sustainable agriculture on a geospatial cloud framework. Front. Sustain. Food Syst. 3, 54 (2019). https:// doi.org/10.3389/fsufs.2019.00054 38. Eppinger, T., Longwell, G., Mas, P., Goodheart, K., Badiali, U., Aglave, R.: Increase food production efficiency using the executable digital twin (xDT). Chem. Eng. Trans. 87, 37–42 (2021). https://doi.org/10.3303/CET2187007 39. Bottani, E., Vignali, G., Tancredi, G.P.C.: A digital twin model of a pasteurization system for food beverages: Tools and architecture. In: 2020 IEEE International Conference on Engineering, Technology, and Innovation (ICE/ITMC), pp. 1–8. IEEE, Cardiff (2020). https://doi. org/10.1109/ICE/ITMC49519.2020.9198625 40. Hong, Y.K., Stanley, R., Tang, J., Bui, L., Ghandi, A.: Effect of electric field distribution on the heating uniformity of a model ready-to-eat meal in microwave-assisted thermal sterilization using the FDTD method. Foods 10(2), 311 (2021). https://doi.org/10.3390/foods10020311 41. Koulouris, A., Misailidis, N., Petrides, D.: Applications of process and digital twin models for production simulation and scheduling in the manufacturing of food ingredients and products. Food Bioprod. Process. 126, 317–333 (2021). https://doi.org/10.1016/j.fbp.2021.01.016 42. Diaz, R.A.C., Ghita, M., Copot, D., Birs, I.R., Muresan, C., Ionescu, C.: Context aware control systems: an engineering applications perspective. IEEE Access 8, 215550–215569 (2020). https://doi.org/10.1109/ACCESS.2020.3041357 43. Barykin, S.Y., Bochkarev, A.A., Kalinina, O.V., Yadykin, V.K.: Concept for a supply chain digital twin. Int. J. Math. Eng. Manage. Sci. 5, 1498–1515 (2020). https://doi.org/10.33889/ IJMEMS.2020.5.6.111 44. Tsang, Y.P., Wu, C.H., Lam, H.Y., Choy, K.L., Ho, G.T.: Integrating Internet of Things and multi-temperature delivery planning for perishable food E-commerce logistics: a model and application. Int. J. Prod. Res. 59(5), 1534–1556 (2021). https://doi.org/10.1080/00207543. 2020.1841315 45. Lee, D., Lee, S.: Digital twin for supply chain coordination in modular construction. Appl. Sci. 11(13), 5909 (2021). https://doi.org/10.3390/app11135909 46. Ahumada, O., Villalobos, J.R.: Application of planning models in the agri-food supply chain: a review. Eur. J. Oper. Res. 196(1), 1–20 (2009). https://doi.org/10.1016/j.ejor.2008.02.014 47. Salin, V.: Information technology in agri-food supply chains. Int. Food Agribusiness Manage. Rev. 1(3), 329–334 (1998). https://doi.org/10.1016/S1096-7508(99)80003-2 48. Keates, O.: The design and validation of a process data analytics methodology for improving meat and livestock value chains. CEUR-WS 2420, 114–118 (2020)

Improving Supply Chain and Manufacturing Process in Wind Turbine Tower Industry Through Digital Twins María-Luisa Muñoz-Díaz , Alejandro Escudero-Santana , Antonio Lorenzo-Espejo , and Jesús Muñuzuri

Abstract Renewable energies have gained prominence due to global warming and the need to reduce the use of fossil fuels. One of the energy sources that nature offers is wind, the advantages of which are widely known. In this research, a conceptual framework is presented with the aim of maximizing the performance of the wind tower production process in a real factory. This conceptual framework is divided into three levels: physical level, logical level of processes, and plant logic level. The development of each of these levels is supported by Lean Manufacturing techniques. In addition, due to the uncertainty caused by the variability and complexity of the process, the implementation of each level will be directly based on a digital twin. This digital twin will be used to simulate different situations prior to implementing them in the factory. This conceptual framework will provide new planning and sequencing methodologies and algorithms for the dynamic design of layouts in the production processes of large pieces. Keywords Framework · Machine Learning · Scheduling · Layout · Digital Twin

M.-L. Muñoz-Díaz (B) · A. Escudero-Santana · A. Lorenzo-Espejo · J. Muñuzuri Escuela Técnica Superior de Ingeniería. Departamento de Organización y Gestión de Empresas II. Camino de los Descubrimientos, Universidad de Sevilla, s/n. 41092, Seville, Spain e-mail: [email protected] A. Escudero-Santana e-mail: [email protected] A. Lorenzo-Espejo e-mail: [email protected] J. Muñuzuri e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_14


1 Introduction Today, energy is linked to most of the economic and social issues affecting the sustainable development of countries [1]. This, together with the global campaign for renewable energy that has gained prominence due to global warming and the need to reduce fossil fuel use [2], has led to a strong demand for sustainable energy supply. This trend has led to the discovery of, and intensive research into, both clean energy sources and efficient energy management practices [3]. Globally, most countries have committed to doing more to achieve a clean energy transition, with this transition being strongly determined by economic development, technological innovation, and political changes in each nation [4]. In the European Union, one of the key objectives in this context is that this transition toward a total decarbonization of the economy is achieved by 2050 [5]. More specifically, Andalusia is one of the European regions with the largest and oldest participation of its municipalities in the Covenant of Mayors. This Covenant is an initiative by which European towns, cities, and regions voluntarily commit to reducing their CO2 emissions beyond the European Union’s climate targets [6]. Fortunately, nature offers various energy sources such as biomass, wind, wave, solar, hydroelectric, and geothermal energy which, if used, could replace existing energy sources and cover the deficit they would leave behind. In particular, wind energy is a cost-effective and environmentally friendly source of energy that can make a significant contribution to the reduction of growing carbon emissions and, consequently, to the achievement of the various targets set by each region in this transition [7]. The advantages of renewable energy production from wind are widely known. Compared to other non-fossil energy sources, this technology does not require the same level of investment as nuclear, tidal, or thermal energy [8]. Furthermore, investment in this energy has the potential to improve economically depressed rural areas [9]. Therefore, we must acknowledge the fundamental role that research and development plays in this transition process: in energy generation, in the manufacturing of the necessary equipment, in the training of the necessary human resources, and in the search for efficiency in the manufacturing process of the wind towers themselves. This search for maximum performance in the wind tower manufacturing process is the objective of this study. More specifically, the aim is to establish a reference framework for maximizing plant performance through the use of digital twins that gather real-time information from the plant itself. This conceptual framework is divided into three levels: the physical level, the process logic level, and the interoperational logic level. The rest of this document is structured as follows: Sect. 2 explains the wind tower production process, and Sect. 3 presents a brief audit of this process based on the seven wastes of Lean Manufacturing. Section 4 presents the conceptual framework proposed in this document, breaking it down into its different levels: physical level, process logic level, and interoperational logic level.


In Sect. 5, the digital twin concept and its application in this study are presented and, finally, Sect. 6 draws the conclusions of this work.

2 Wind Turbine Tower Manufacturing Industry The production process described in this work focuses on the manufacture of one of the fundamental parts of windmills, the wind towers. The wind towers are the large cylinders or masts that connect the foundations of the windmill with the turbine and the blades. These masts are usually more than 100 m long, which prevents them from being manufactured and transported as a single piece. The manufacturing process for wind towers is, therefore, the process of manufacturing the sections that compose them, which are then transported to the point where they will be assembled and where they will spend the whole of their useful life. The different processes and stages that make up the complete manufacturing process of wind tower sections in a factory are explained below. Throughout the production process, there are three main intermediate products which, from now on, we will call states. The first is the steel plates, with an average thickness of 3 cm, which are the main raw material input to the system. The second is the cans or ferrules, rings made from these steel plates with an average length of 2.8 m and an average diameter of 4.2 m. Finally, the tower sections are composed of approximately 8 cans; this last state corresponds to the final product of the production process before being transported to its place of destination and assembly. Figure 1 shows a diagram of the production process and, marked by circles with a dashed line, the three states mentioned throughout the process.

Fig. 1 Production process performed in the factory under study. The different processes involved, the two main raw material inputs and the points at which the product in process changes from plate to can and from can to section can be seen


In the first station of the production system, the raw steel plates enter and are cut to obtain the desired area. This area will correspond, once the product changes state, to the perimeter and length of the cans. This station is called “Cutting” in Fig. 1. In the second station, “Bevelling”, the edges of the steel plate are processed, preparing them for the subsequent welding joints. In the third station, the steel plate is bent into a cylindrical shape. At the end of this process, called the “Bending” process, the two ends of the steel plate make contact and a spot weld is made to help maintain the curvature of the plate. At this point, the steel plate passes into the can state. In the fourth station, the complete weld joining the two ends of the plate is carried out definitively. This process is called “Longitudinal Welding” as it follows the longitudinal direction of the future section. This is followed by the “Fit-up” process, in which two cylindrical pieces are placed together concentrically and the perfect fit of their perimeters is sought. These cylindrical pieces can be cans or flanges. The flanges are a raw material input to our system, provided directly by another supplier (Fig. 1). A flange will be fitted at the beginning and at the end of each section. These pieces serve as the future connection between the sections in the assembly process at the final destination, so each section will consist of a variable number of cans and exactly 2 flanges, one at each end. At station 6, “Circular Welding” is carried out. In this process, each pair of cylindrical parts corresponding to the same section is welded together along their circular perimeter. This process can be understood as an assembly process in which, once all the pairs belonging to the same tower section have been welded, the piece passes to the section state. At station 7, called “Internal parts and door”, the marking and welding of the internals and the marking, cutting, and welding of the door are carried out in each section. The next three stations correspond to the surface treatments carried out on the section, in this order: blasting, metallizing, and painting (Fig. 1). The next station, called “Assembly of inner parts”, is dedicated to the assembly of the internals previously prepared at station 7 and, finally, station 12 corresponds to “Cleaning and conditioning”, leaving each tower section ready for subsequent shipment.
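As a minimal illustration of how these three states could be represented in the data model behind a plant digital twin, the Python sketch below defines plate, can, and section records and the rule that a section combines several cans with exactly two flanges. The class and field names are assumptions made for this example, not the factory's actual information model.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Plate:
        plate_id: str
        thickness_mm: float = 30.0     # average plate thickness reported: 3 cm

    @dataclass
    class Can:
        can_id: str
        source_plate: Plate
        length_m: float = 2.8          # average can length
        diameter_m: float = 4.2        # average can diameter

    @dataclass
    class Section:
        section_id: str
        cans: List[Can] = field(default_factory=list)
        flanges: int = 2               # exactly two flanges, one at each end

        def is_complete(self, expected_cans: int = 8) -> bool:
            # A section typically combines about eight cans and two flanges.
            return len(self.cans) == expected_cans and self.flanges == 2

    plate = Plate("P-001")
    section = Section("S-001", cans=[Can(f"C-{i}", plate) for i in range(8)])
    print(section.is_complete())       # True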

3 Auditing the Logistic and Manufacturing The first step needed to propose a framework for continuous improvement in wind tower manufacturing industries is to audit the production process as it is currently carried out. In this phase, it is necessary to understand in depth how each process is performed, which and how many operators carry it out, what materials, tools, and equipment are used, what data are being measured, by which methodologies these measurements are taken, and how these data are incorporated into the factory’s database. In parallel, the information collected and available in the company’s Enterprise Resource Planning (ERP) system was analyzed.


This process was carried out following a Lean Manufacturing philosophy. Lean Manufacturing and its culture of continuous improvement in processes and production are widely applied today by both industry and the academic world [10]. The techniques provided by Lean are used to increase productivity, reduce waste, and address the environmental impact of production and logistics processes. Lean defines seven wastes, which are: inventory, either of raw materials, products in process or final products; transportation, unnecessary movements of products during processing; movement, displacements of people or equipment that are not necessary for the processing of the products; waiting, time spent waiting before starting the next step in the production process; over-production, producing more than is actually required; over-processing, either by adding more value to the product than the customer really demands or because the equipment is not suitable for carrying out the required task and, finally, defects, production of parts that do not meet the established requirements [11]. In addition, the Lean Value Stream Mapping tool was also used during this waste audit process. VSM is a simple but effective method used for the illustration and redesign of value streams [12]. This method originated with the Toyota Production System and consists of two main steps that can be performed as many times as necessary. First, an exhaustive analysis of the system in its initial state is carried out, and this information is captured in a diagram or map. During the development of this map, different sources of waste within the system are often discovered, which leads to the second step, the reduction of waste and the improvement of the system. Figure 2 shows the first VSM obtained for the process under study, and the following is an analysis of the main defects detected in the factory under study. In the production process under study, there is no over-production waste, as it is produced to order, but there is over-processing waste. The main over-processing that occurs in this factory is during the fit-up operation, as most of this process is dedicated to correcting the curvature of each pair of cans to make them coincide. This may be due to the bending process not being precise enough or to subsequent changes in the curvature. These post-bending changes are mostly generated during the longitudinal welding process or during the waiting times that the cans spend stored between processes, a problem discussed below. Likewise, both transport and movement waste appear in this plant, some of them being problems that cannot be addressed. This is due to the fact that the production process is carried out in different buildings that require the transport of parts and the movement of staff and equipment between them. Nevertheless, the problem of minimizing transport and movement within the same building can be dealt with. On the other hand, there is a relationship between the three remaining wastes for this production system: inventory, defects, and waiting times. There is an accumulated inventory of both intermediate and final parts. Specifically, the inventory of cans between the different stations is considered the most important, not only because of the space required for the storage of large parts, but also because of the possibility of deformations in the circumference of these parts which, subsequently, would make the fit-up process between each can–can or can–flange pair more difficult. 
Therefore, this leads to defect waste since, although defects do not usually occur during the processes themselves, they can be generated during the waiting times of the intermediate inventory.


Fig. 2 VSM of the initial state of the factory after the preliminary analysis

Finally, we arrive at the waiting waste, the most important one in this production system. Waiting waste occurs mainly due to the existence of a bottleneck in the production process of this factory. This bottleneck is the circular welding process, which is precisely the process in which the state of the product changes from cans to sections. This is what makes these three directly correlated Lean Manufacturing wastes one of the main targets of this action plan. In general, after this waste analysis, the main conclusion was that there is a lot of variability in the factory’s processes, which makes it difficult to control production. Monitoring and understanding the dynamics of the processes in order to standardize them, as well as allocating resources to the processes with the greatest influence on the total chain, was considered critical. Consequently, it was deemed necessary to develop a conceptual framework for monitoring, control, and simulation to support this work. This conceptual framework is presented in the following section.


4 A Conceptual Interoperative Lean Framework in a Wind Tower Factory Based on knowledge of the current process and its possible deficiencies, as well as the particular characteristics of this type of production and logistics system, a conceptual framework is proposed to improve the current process through three-level supervision. This system is designed to coordinate the interoperability between the different processes and actors, given the high degree of uncertainty that exists in them. The proposed framework is divided into three levels of monitoring and improvement: (1) physical level, (2) process logic level, and (3) interoperational logic level. A schematic of the proposal is shown in Fig. 3. Finally, all the collected information is incorporated into a digital twin that allows forecasting through process simulations.

4.1 Physical Level The first level of the proposed framework is related to the direct supervision of the plant itself, i.e., of the production and logistics system. This level includes both standard quality supervision procedures and the implementation of sensorization elements that bring manufacturing closer to the concept of Manufacturing 4.0.

Fig. 3 Diagram of the conceptual framework developed for the production process under study


Sensorization and the implementation of Edge Computing technology will enable the reduction of some of the aforementioned waste through the collection of data and their processing at the edge. A clear example of the advantages of sensorization is the reduction of over-processing and defects in the bending process. In this process, with new sensors and instruments adapted to them, a much less manual process is achieved, avoiding future defects in the resulting circumference and, consequently, the corresponding over-processing in the fit-up activity. In addition, sensorization will bring other benefits at the next levels, as it will provide access to much more and higher-quality data, much of it in real time. This will be useful for predicting processing times or finding patterns in the appearance of defects. This means that, indirectly, sensorization can also be used to avoid other types of waste.
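As a purely illustrative sketch of the kind of edge-side check described here, the Python snippet below flags out-of-tolerance curvature readings from a bending-station sensor before the can is released to fit-up. The sensor readings, target diameter, and tolerance are invented placeholders and do not describe the factory's actual instrumentation.

    TARGET_DIAMETER_M = 4.2    # nominal can diameter (illustrative)
    TOLERANCE_M = 0.01         # assumed acceptable deviation

    def check_curvature(readings_m):
        """Return the deviations that exceed tolerance, evaluated at the edge node."""
        alerts = []
        for position, diameter in enumerate(readings_m):
            deviation = abs(diameter - TARGET_DIAMETER_M)
            if deviation > TOLERANCE_M:
                alerts.append((position, round(deviation, 4)))
        return alerts

    # Invented diameter measurements taken around the circumference of one can.
    sensor_readings = [4.201, 4.198, 4.215, 4.199, 4.185]
    print(check_curvature(sensor_readings))   # positions 2 and 4 breach tolerance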

4.2 Process Logic Level At this level, the existing data in all information systems of the company are analyzed for each of the processes (ERP, sensor databases, etc.). Through the analysis of all sources of information, an aggregated data model is developed that serves as a basis for the search for correlations between different process parameters. In this analysis, the search for correlations is not limited to parameters of the same process or with foreseeable influences: Machine learning techniques are also used for the detection of underlying correlations between parameters. Among other analyses, we can highlight those carried out for the estimation of production times of future orders [13], for the search for correlations between parameters and production times of different processes [14], as well as the search for correlations between defects and characteristics of manufacturing processes prior to these defects, so that this knowledge can be incorporated into the decisions of the manufacturing system.
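As a minimal sketch of the kind of analysis performed at this level, the Python snippet below fits a gradient-boosting regressor that predicts a process lead time from a few order parameters. The feature names and the synthetic training data are assumptions made for illustration; the actual models and data pipeline are the subject of [13, 14].

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic orders: plate thickness (mm), can diameter (m), weld length (m).
    X = rng.uniform([20, 3.5, 10], [40, 4.8, 18], size=(200, 3))
    # Synthetic lead time (hours) with a noisy dependence on the parameters.
    y = 2.0 * X[:, 0] + 5.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 3, 200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    print(f"R^2 on held-out orders: {model.score(X_test, y_test):.2f}")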

4.3 Interoperational Logic Level The last level of this conceptual framework focuses on decision-making that affects the interoperability between all the processes at the plant, with the aim of meeting customer requirements. To this end, the information provided by the two previous levels is used to develop decision-making elements of great value in this type of industry: obtaining dynamic layouts for the plant and planning and sequencing production orders. For the first of these objectives, it is important to know that the wind tower factory under study usually receives large orders and works on the production of the same customer’s order for several months without interruption and without interspersing other orders. This is why, in order to reduce movement and transport waste and, with this, improve production times, the design of the factory layout will be solved periodically depending on the order to be carried out.


On the other hand, for the second objective, one of the outputs of the process logic level of this framework is taken as input: the time estimates of the different processes. These estimates will be used for the planning and sequencing of tasks for each order. One of the most widely explored methods for production scheduling, owing to its rigor, flexibility, and extensive modeling capabilities, is mathematical programming, especially mixed-integer linear programming (MILP) [15]. This type of model is often discarded due to the size of the resulting combinatorial problems and, consequently, the long times needed to solve them. In this case, taking into account the long production times associated with a single wind tower section, the rapid advance in computational capacity, and the great academic interest in these models, MILP models are developed for the station of most interest in the production process. This station is the circular welding station, the bottleneck of the production process and, therefore, the one that sets the pace of the rest of the factory. This situation makes it possible to develop a MILP model that obtains the sequencing and assignment for this single station [16] and, subsequently, to extend this schedule to the rest of the factory. As an alternative to these MILP models, heuristics will be developed to address the same type of problem. We will proceed in a similar way, first obtaining the schedule for the bottleneck station and then for the rest of the processes. Once solved, both the results and the execution times needed to obtain each of the schedules will be analyzed and compared in order to choose the option that best suits the problem.
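To make the single-station MILP idea more tangible, the sketch below formulates a small positional (assignment-based) sequencing model for the circular welding station with the PuLP library, minimizing total completion time. The job durations are invented and the formulation is a generic textbook one, not the model developed in [16].

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

    durations = {"S1": 14.0, "S2": 9.5, "S3": 12.0, "S4": 7.0}   # hours, invented
    jobs = list(durations)
    positions = range(1, len(jobs) + 1)

    prob = LpProblem("circular_welding_sequencing", LpMinimize)
    x = LpVariable.dicts("x", (jobs, positions), cat=LpBinary)    # job j in slot k
    c = LpVariable.dicts("C", positions, lowBound=0)              # completion times
    prob += lpSum(c[k] for k in positions)                        # total completion time

    for j in jobs:                       # each job takes exactly one position
        prob += lpSum(x[j][k] for k in positions) == 1
    for k in positions:                  # each position holds exactly one job
        prob += lpSum(x[j][k] for j in jobs) == 1
    prob += c[1] == lpSum(durations[j] * x[j][1] for j in jobs)
    for k in list(positions)[1:]:        # completion time recursion along the sequence
        prob += c[k] == c[k - 1] + lpSum(durations[j] * x[j][k] for j in jobs)

    prob.solve()
    sequence = [j for k in positions for j in jobs if value(x[j][k]) > 0.5]
    print(sequence, value(prob.objective))

For this objective the optimal sequence is simply the shortest-processing-time order, which gives a quick sanity check of the model before richer constraints (due dates, setups, crane availability) are added.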

5 Digital Twins The digital twins concept dates back to 2003, when Michael W. Grieves first introduced it in his PLM course. However, it was not until 2010 that NASA gave a detailed definition of digital twin that was well received and accepted. This definition reads: “an integrated multiphysics, multiscale simulation of a vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin” [17]. Based on this, researchers from other universities and institutions have proposed other definitions of digital twins. Among them, the following stands out for this work: “A digital copy of a product or a production system, going across the design, preproduction, and production phases and performing real-time optimization” [18]. Taking into account the complexity and variability currently present in the plant and, of course, the economic and logistical effort required to perform a project of this magnitude, creating an environment in which to simulate different situations that could occur in the future is considered necessary. Consequently, this study develops a digital twin for our production system with the aim of achieving perfect integration between the real world and the cyber world. With this, the goal is to replicate the different actions before they are taken to the real factory, reducing uncertainty and checking whether they will give the desired results. This will be put into practice


both with the different planning and sequencing options and with the different layout proposals, simulating them to check the consistency of each of these solutions.
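As an illustrative sketch of how such what-if runs could be set up, the SimPy snippet below simulates tower sections queueing at a single circular welding station under an assumed arrival pattern, so that a candidate plan's effect on waiting can be checked before touching the real plant. All rates and durations are invented; the actual digital twin is far more detailed.

    import random
    import simpy

    random.seed(1)
    WELD_HOURS = 10.0          # assumed mean circular-welding time per section
    ARRIVAL_HOURS = 8.0        # assumed mean inter-arrival time of section kits
    waits = []

    def section(env, name, station):
        arrival = env.now
        with station.request() as req:
            yield req                                  # queue for the bottleneck
            waits.append(env.now - arrival)
            yield env.timeout(random.expovariate(1 / WELD_HOURS))

    def arrivals(env, station):
        i = 0
        while True:
            yield env.timeout(random.expovariate(1 / ARRIVAL_HOURS))
            i += 1
            env.process(section(env, f"section-{i}", station))

    env = simpy.Environment()
    welding = simpy.Resource(env, capacity=1)          # single welding station
    env.process(arrivals(env, welding))
    env.run(until=24 * 30)                             # simulate one month (hours)
    print(f"Average wait at circular welding: {sum(waits) / len(waits):.1f} h")

With these invented rates the station is overloaded, so the growing waiting time makes the bottleneck immediately visible; alternative schedules or layouts would be compared by re-running the model with their parameters.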

6 Conclusion In industrial environments with high uncertainty and variability, process planning can be complex and high risk. Therefore, this article proposes a conceptual model for monitoring and improvement that incorporates a digital twin of the plant in order to carry out future simulations. This model is being implemented in a real manufacturing environment and is currently developed up to level 2 of the conceptual framework. Thanks to the digital twin, it makes it possible to identify the critical process of the factory. The digital twin also serves as a mechanism to assess the robustness of the proposed planning of a project. In the short term, the aim is to advance to the third level of this conceptual model, providing new planning and sequencing methodologies for the manufacture of large parts. Additionally, the development of algorithms for the dynamic design of layouts depending on the manufacturing orders received is foreseen. Acknowledgements This research was cofunded by the European Regional Development Fund (ERDF) by means of the Interreg V-A Spain–Portugal program (POCTEP) 2014–2020, through the CIU3A project, and by the Agency for Innovation and Development of Andalusia (IDEA), by means of the “Open, Singular and Strategic Innovation Leadership” program, through the joint innovation unit project OFFSHOREWIND. The research was also supported by the University of Seville through a grant belonging to its predoctoral training program (VIPPIT-2020-II.2).

References 1. Barhoumi, E.M., Okonkwo, P.C., Zghaibeh, M., Belgacem, I.B., Alkanhal, T.A., Abo-Khalil, A.G., Tlili, I.: Renewable energy resources and workforce case study Saudi Arabia: review and recommendations. J. Therm. Anal. Calorim. 141(1), 221–230 (2020). https://doi.org/10.1007/ s10973-019-09189-2 2. Olalekan Idris, W., Ibrahim, M.Z., Albani, A.: The status of the development of wind energy in Nigeria. Energies 13(23), 6219 (2020). https://doi.org/10.3390/en13236219 3. Klemeš, J.J., Varbanov, P.S., Ocło´n, P., Chin, H.H.: Towards efficient and clean process integration: utilisation of renewable resources and energy-saving technologies. Energies 12(21), 4092 (2019). https://doi.org/10.3390/en12214092 4. Kazimierczuk, A.H.: Wind energy in Kenya: a status and policy framework review. Renew. Sustain. Energy Rev. 107, 434–445 (2019). https://doi.org/10.1016/j.rser.2018.12.061 5. Quintana-Rojo, C., Callejas-Albiñana, F.E., Tarancón, M.A., Martínez-Rodríguez, I.: Econometric studies on the development of renewable energy sources to support the European union 2020–2030 climate and energy framework: a critical appraisal. Sustainability 12(12), 4828 (2020). https://doi.org/10.3390/su12124828 6. Pablo-Romero, M.D.P., Pozo-Barajas, R., Sánchez-Braza, A.: Analyzing the effects of energy action plans on electricity consumption in covenant of mayors signatory municipalities in Andalusia. Energy Policy 99, 12–26 (2016). https://doi.org/10.1016/j.enpol.2016.09.049


7. Noman, F.M., Alkawsi, G.A., Abbas, D., Alkahtani, A.A., Tiong, S.K., Ekanayake, J.: Comprehensive review of wind energy in Malaysia: past, present, and future research trends. IEEE Access 8, 124526–124543 (2020). https://doi.org/10.1109/ACCESS.2020.3006134 8. Álvarez-Farizo, B., Hanley, N.: Using conjoint analysis to quantify public preferences over the environmental impacts of wind farms. An example from Spain. Energy Policy 30(2), 107–116 (2002). https://doi.org/10.1016/S0301-4215(01)00063-5 9. Hanley, N., Nevin, C.: Appraising renewable energy developments in remote communities: the case of the North Assynt Estate, Scotland. Energy Policy 27(9), 527–547 (1999). https://doi. org/10.1016/S0301-4215(99)00023-3 10. Purushothaman, M.B., Seadon, J., Moore, D.: Waste reduction using lean tools in a multicultural environment. J. Clean. Prod. 265, 121681 (2020). https://doi.org/10.1016/j.jclepro.2020. 121681 11. Samolejová, A., Lenort, R., Lampa, M., Sikorova, A.: Specifics of metallurgical industry for implementation of lean principles. Metalurgija 51(3), 373–376 (2012) 12. Haefner, B., Kraemer, A., Stauss, T., Lanza, G.: Quality value stream mapping. Proc. CIRP 17, 254–259 (2014). https://doi.org/10.1016/j.procir.2014.01.093 13. Lorenzo-Espejo, A., Escudero-Santana, A., Muñoz-Díaz, M.L., Robles-Velasco, A.: Machine Learning-based analysis of a wind turbine manufacturing operation. In: Galán, J.M., Díaz-de la Fuente, S., Alonso de Armiño, C., Alcalde-Delgado, R., Lavios, J.J., Herrero, A., Manzanedo, M.A., del Olmo, R. (eds.) 15th International Conference on Industrial Engineering and Industrial Management and XXV Congreso de Ingeniería de Organización, pp. 155–156. Pressbooks, Burgos (2021). 14. Lorenzo-Espejo, A., Escudero-Santana, A., Muñoz-Díaz, M.-L., Guadix, J.: A machinelearning based system for the prediction of the lead times of sequential processes. In: 11th International Conference on Interoperability for Enterprise Systems and Applications (2022). In Press 15. Floudas, C.A., Lin, X.: Mixed integer linear programming in process scheduling: modeling, algorithms, and applications. Ann. Oper. Res. 139(1), 131–162 (2005). https://doi.org/10.1007/ s10479-005-3446-x 16. Muñoz-Díaz, M.-L., Lorenzo-Espejo, A., Escudero-Santana, A., Robles-Velasco, A.: Modelos lineales mixtos para la programación de la producción con una sola etapa: estado del arte. Dirección y Organización (2022). In Press 17. Shafto, M., Rich, M.C., Glaessgen, D.E., Kemp, C., Lemoigne, J., Wang, L.: Modeling, simulation, information technology & processing roadmap, technology area 11. Natl. Aeronaut. Space Adm. 32, 1–38 (2012) 18. Söderberg, R., Wärmefjord, K., Carlson, J.S., Lindkvist, L.: Toward a digital twin for real-time geometry assurance in individualized production. CIRP Ann. 66(1), 137–140 (2017). https:// doi.org/10.1016/j.cirp.2017.04.038

Complementing DT with Enterprise Social Networks: A MCDA-Based Methodology for Cocreation Raúl Rodríguez-Rodríguez , Ramona-Diana Leon , Juan-José Alfaro-Saiz , and María-José Verdecho

Abstract Even though some authors affirm that DT cover the whole product lifecycle, including cocreation practices such as codesign, two main problems appear: interoperability and data sharing restrictions. These two barriers prevent cocreation processes from being carried out efficiently, as the technological space provided is not adequate for exchanging ideas, generating knowledge, and fostering innovation. In this sense, enterprise social networks (ESN) have recently demonstrated their potential for carrying out cocreation activities. This paper therefore presents a MCDA-based methodology to select the most appropriate ESN to support and foster cocreation activities, complementing the implementation of DT in other inter-organizational processes such as manufacturing or delivery, where DT have proved their high level of performance. Keywords Collaborative manufacturing · Knowledge management in networked enterprises · Tools for collaborative decision-making in supply networks

1 Introduction The digital twin (DT) concept was first proposed by Michael Grieves in 2003 [1], based on connecting physical and virtual entities and spaces. This concept has evolved over the years, and many definitions of what a DT is can now be found. In this sense, [2] presented 30 definitions of DT.
R. Rodríguez-Rodríguez (B) · R.-D. Leon · J.-J. Alfaro-Saiz · M.-J. Verdecho Research Center On Production Management and Engineering, Universitat Politècnica de València, C.P.I., Ed. 8B, Acceso L, Camino de Vera SN, 46022 Valencia, Spain e-mail: [email protected] R.-D. Leon e-mail: [email protected] J.-J. Alfaro-Saiz e-mail: [email protected] M.-J. Verdecho e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_15


For example, [3] defined a DT as “a digital copy of a real factory, machine, worker, etc., that is created and can be independently expanded, automatically updated as well as being globally available in real time”. Many of these definitions, and others that can be found in the scientific literature, mention similar concepts, some of which are highlighted next: (i) simulation of processes; (ii) manufacturing/production systems; (iii) virtual and physical worlds; (iv) digital entity/avatar; (v) cyber-physical systems; or (vi) product life cycle. Regarding the latter, even though some authors affirm that DT can be used across the whole product lifecycle and, more concretely, to make decisions about product design [4–6], when it comes to interorganizational contexts and cocreation processes, some well-known problems of DT, such as interoperability and data sharing, appear. The former relates to the ability of organizations to collaborate by exchanging data to achieve some common goals [7], whereas the latter focuses on the problems and barriers that come up when different organizations share data [2]. This paper therefore presents a methodology to complement DT with ESN in the cocreation context, in order to enable and foster cocreation practices, overcoming the problems pointed out. To this end, the paper first presents these problems, associated with DT, interoperability, and data sharing in interorganizational collaborative creation processes; it then describes how and why ESN can be useful and appropriate for complementing DT regarding these cocreation processes; next, the MCDA-based methodology is presented; finally, some future research work and conclusions are highlighted.

2 Background 2.1 DT, Interoperability and Cocreation As mentioned, interoperability is an important drawback when implementing DT within a collaborative context. In fact, some authors point out the key importance of interoperability in creating collaborative activities between organizations, even though they affirm that interoperability in DT mainly focuses on dealing with one-to-one relationships instead of promoting multi-stakeholder communication in which customers and other important stakeholders could also participate in the cocreation process [8]. Interoperability deals with the task of connecting different actors from collaborative organizations (mainly manufacturers, suppliers, distributors, and customers), and there are still some open issues that need further research, mainly related to the definition of both standards and communication protocols, in order to make efficient interoperability possible between DT located in different supply chain actors [9]. In general, it is possible to affirm that interorganizational collaboration and creation practices need better support in terms of interoperability, especially regarding the common virtual space needed to this end, which must go beyond collecting and analyzing data [10].

2.2 DT, Data Sharing, and Cocreation From a data-sharing point of view, DT provide the basis to interchange and analyze real-time data, mainly applying machine learning (ML) algorithms in order to make decisions that improve the efficiency of processes, usually at the production site. In this sense, the development and application of ML across different collaborative entities has intensified the dilemma between the effectiveness of such ML algorithms and the protection of data privacy [11]. Federated learning (FL) is a new ML paradigm that preserves privacy when training a global model constituted by a federation of distributed clients [12]. Some models and frameworks have recently been developed in the manufacturing context, e.g., the FL model for smart product design, based on a classification of FL into horizontal-FL and vertical-FL across three dimensions (product, user, and context); or a hierarchical asynchronous federated learning framework, which divides the data request task and uses the multiple subtasks as the basis for selecting high-quality data nodes, improving both privacy protection and data quality. However, smart manufacturing applications for distributed and collaborative environments under a module-based platform are still needed [13]. The data to be shared in cocreation activities, for instance collaborative product design, have to do with both formal and informal communication, exchange of knowledge, ideas, and debate, generating a climate of trust and coherence between the users. Hence, DT are not the best option, from either an interoperability or a data sharing point of view, to carry out cocreation activities between the main actors of collaborative organizations.
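To make the data-sharing argument concrete, the sketch below illustrates the basic federated averaging idea behind FL: each collaborating organization trains on its own private data and only model parameters are exchanged. It is a minimal, illustrative example with invented data, not the horizontal/vertical or hierarchical FL frameworks cited above.

```python
# Minimal federated averaging (FedAvg) sketch: each collaborating plant trains a
# local linear model on private data and only the model parameters are shared,
# never the raw records. Illustrative only; the cited FL frameworks are richer.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few steps of local gradient descent on a private dataset."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

# Three clients (e.g., plants of collaborating companies) with private data
# drawn from the same underlying relation y = 3*x1 - 2*x2 + noise.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ np.array([3.0, -2.0]) + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server only aggregates parameters (weighted by local sample counts).
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("Federated estimate of the shared model:", np.round(global_w, 2))
```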

2.3 Enterprise Social Networks Hung et al. [14] stated that, within ICT solutions, ESN are the most proficient knowledge sharing tools. They are based on trust and integrity and involve resource sharing, goal congruence, and decision synchronization. Furthermore, because they go beyond the interaction level, they have proved able to strengthen links between both intra- and inter-organizational members [15], serve as knowledge repositories, and deliver analytics that add value to knowledge structure and business performance [16]. ESN thus provide a common technological space and have been successfully implemented in the interorganizational context to foster knowledge exchange practices, generate innovation, and create new products/services collaboratively [17]. In this sense, ESN could complement DT to more successfully cover the whole set of product lifecycle phases, taking responsibility for the cocreation activities that support, for example, collaborative product and service design. This combination of ESN and DT would be highly efficient from a business standpoint: ESN could output more efficient product/service designs, which would translate into better market acceptance rates, while DT applied to the production processes would serve to improve quality levels and reduce waste. However, there are many available ESN, and it is still necessary to define a methodology to select the most appropriate ESN for carrying out inter-organizational cocreation practices. Such a methodology should identify and comparatively assess the different alternatives (ESN) against the most important attributes or criteria identified. This sort of problem can be solved by taking a multi-criteria decision-aid (MCDA) approach, where MCDA techniques are able, based on the experience and judgment of decision-makers, to assess and rank the available ESN. This paper develops an MCDA-based methodology in the next section.

3 A MCDA-Based Methodology Figure 1 presents the four phases of the MCDA-based methodology to select the most appropriate ESN to carry out cocreation activities:

1. Characterization of the problem
• Identification of attributes and sub-attributes (criteria and sub-criteria)
• Identification of available ESN (alternatives)

2. Selection of the MCDA techniques
• MCDA techniques taxonomy
• Criteria to select the most appropriate techniques

3. Application of the selected MCDA techniques
• Fuzzy ANP and COPRAS-G
• Validation with sensitivity analysis

4. Selection of the ESN for cocreation
• Prioritized list of the ESN
• Selection of the ESN for cocreation

Fig. 1 Proposed MCDA-based methodology to select the ESN for cocreation


3.1 Phase 1. Characterization of the Problem In this phase, decision-makers should identify the attributes and sub-attributes that are relevant to ESN, which will be assessed in the next phase. These attributes and sub-attributes come from two different sources: a literature review and experts' opinion. For the problem of this investigation, it is necessary to keep in mind that the ESN sought is to be applied in a multi-stakeholder cocreation process, mainly in the product codesign context. Table 1 shows an initial characterization of the problem in terms of the main attributes and sub-attributes identified [17–20].

Table 1 Attributes and sub-attributes of the ESN for cocreation tasks

Data
• Data aggregation
• Type of sharing information
• Data storing. It allows both local and external (cloud-based) data storing for each user

Technical capabilities
• Functionality. It is easy to implement and incorporates up-to-date functionalities regarding multi-device communication and integration with existing working tools such as company email
• Maintainability. It is easy and inexpensive to maintain through the recommended updates
• Reliability. It is solid and reliable regardless of the number of simultaneous users and the exchange/storage of information in any format (documents, videos, etc.)
• Security. It is secure, and it does not compromise sensitive information either from the ESN itself or from the participating companies

Usability
• Design. It is attractive and motivating from a visual point of view
• Complexity. The level of complexity of use, from a technical perspective, is low/moderate, making it suitable for all types of workers of the company regardless of their computing skills
• User friendliness. It is similar to other well-known popular social networks, which makes it easier to use
• Customization level. It allows users to extend some of the standard capabilities, customizing them through programming

Decision-making
• Sensemaking. It brings together relevant experts to make sense of the provided information (it provides a separate virtual space to interact and make decisions)
• Innovation tools. It provides, directly or indirectly, access to innovation tools such as trend matrices, digital mock-ups, or drawing tools that foster value creation regarding codesign tasks
• Advanced network analytics. It provides specific analyses based on KPIs such as centrality, betweenness, or range, as well as mathematical models, for example the P1 or P1´, to forecast the expected evolution of the ESN. Additionally, it provides real-time analyses such as who is connected with whom, who is the most active user, etc.
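As an illustration of the advanced network analytics sub-attribute, the following sketch computes centrality and betweenness KPIs from a hypothetical log of ESN interactions. The user names and interactions are invented; an actual ESN would expose this data through its own analytics or export facilities.

```python
# Minimal sketch of the "advanced network analytics" sub-attribute: computing
# centrality and betweenness KPIs from a (hypothetical) log of ESN interactions.
import networkx as nx

# Each tuple is one interaction (message, comment, mention) between two users.
interactions = [
    ("designer_A", "supplier_B"), ("designer_A", "customer_C"),
    ("supplier_B", "customer_C"), ("designer_A", "distributor_D"),
    ("customer_C", "distributor_D"), ("designer_A", "supplier_E"),
]

G = nx.Graph()
G.add_edges_from(interactions)

degree = nx.degree_centrality(G)            # who interacts with the most partners
betweenness = nx.betweenness_centrality(G)  # who bridges otherwise separate groups

for user in G.nodes:
    print(f"{user:15s} degree={degree[user]:.2f} betweenness={betweenness[user]:.2f}")

# The "most active user" indicator could simply be the highest-degree node.
most_active = max(degree, key=degree.get)
print("Most central user:", most_active)
```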


Further, it is necessary to identify the available ESN, the alternatives in our study; some of the most popular and widespread ones are the following: Yammer, eXo, Igloo, Zoho, Workvivo, Zimbra, Klasmic, and Xoxoday [20]. The next step is to choose the most appropriate MCDA technique(s) to rank the importance of both the attributes and sub-attributes and then build a ranking of all the ESN chosen for the study. This is done in the next phase of the methodology.

3.2 Phase 2. Selection of the MCDA Techniques Many MCDA techniques have been applied in recent decades to rank alternatives, according to specific attributes (or criteria), in many different sectors and disciplines. In this sense, [21] presented a work in which they identified 56 different MCDA techniques, providing a framework for selecting the most appropriate ones according to the following criteria:
• Available binary relations, which can be of indifference, preference, weak preference, no relationship, or a combination of some of these.
• Linear compensation effect, with no compensation, partial compensation, or total compensation.
• Type of aggregation, which can be with a single criterion, outranked, or mixed.
• Type of preferential information, which can be deterministic, cardinal, nondeterministic, ordinal, or fuzzy.
In order to select one or more techniques to be applied in this research, it is widely accepted that fuzzy approaches within MCDA techniques help to reduce and better handle uncertainty and imprecise values, as they allow a range of values to be provided instead of a single value when performing the pair-wise comparisons between variables [22]. It is then possible to narrow down the MCDA techniques to be applied, as [21] points out, under a fuzzy approach, to the following: Fuzzy AHP, Fuzzy ANP, Fuzzy PROMETHEE I, Fuzzy PROMETHEE II, Fuzzy TOPSIS, and Fuzzy VIKOR. Further, the qualitative and quantitative nature of the attributes and sub-attributes, and their high degree of mutual interaction not only between and among them but also with the alternatives of the problem, make the Fuzzy ANP adequate for selection [23]. The Fuzzy ANP (FANP) is an extension of the ANP [24], where fuzzy logic is used to translate the experts' judgments when comparing the variables of the study. However, the FANP has several problems from a practical point of view, the main ones being the following: (1) it needs complex computation processes due to the high number of fuzzy assessed comparisons; (2) decision-makers are asked to complete large questionnaires, which are highly time consuming. These circumstances often prevent its application, making it advisable to combine FANP with other MCDA techniques, resulting in a hybrid approach that is easier to implement. In this sense, [23] proposes the integration of FANP with COPRAS-G (complex proportional assessment of alternatives with gray relations), which greatly reduces the number of questions during the pair-wise comparisons and is, therefore, appropriate for the large dataset resulting from comparing the four attributes and 14 sub-attributes identified for this decision problem. Further, to assess the robustness of the FANP with COPRAS-G, [23] compared the results achieved with the ones from applying FANP with other gray theory methods (GRA, SAW-G, and TOPSIS-G), carrying out a sensitivity analysis too. Their results pointed out the good performance of FANP with COPRAS-G and its high degree of robustness. Then, since the decision-making problem solved by [23] is similar to the one of this paper, it is recommended to use the FANP with COPRAS-G, checking the robustness of the achieved results by comparing them with the ones from applying FANP not only with GRA but also with F-GRA [25].
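As a rough illustration of the fuzzy weighting step, the sketch below turns triangular fuzzy pair-wise judgments over the four attributes into crisp weights via the fuzzy geometric mean and centroid defuzzification. This is a deliberate simplification: a full FANP would also model interdependencies through a supermatrix, and the judgment values used here are invented.

```python
# Simplified sketch of the fuzzy weighting step. A full FANP builds a supermatrix
# to capture interdependencies; here we only show how triangular fuzzy pair-wise
# judgments over the four attributes could be turned into crisp weights via the
# fuzzy geometric mean and centroid defuzzification. Judgment values are invented.
import numpy as np

attributes = ["data", "technical", "usability", "decision-making"]

def reciprocal(tfn):
    """Reciprocal of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (1 / u, 1 / m, 1 / l)

one = (1.0, 1.0, 1.0)
comparison = [[one] * 4 for _ in range(4)]
judgments = {(0, 1): (1, 2, 3), (0, 2): (2, 3, 4),
             (0, 3): (1, 1, 2), (1, 2): (1, 2, 3),
             (1, 3): (1 / 3, 1 / 2, 1), (2, 3): (1 / 4, 1 / 3, 1 / 2)}
for (i, j), tfn in judgments.items():
    comparison[i][j] = tfn
    comparison[j][i] = reciprocal(tfn)

# Fuzzy geometric mean of each row, then centroid defuzzification and
# normalisation into weights that the ranking step can reuse.
fuzzy_means = []
for row in comparison:
    ls, ms, us = zip(*row)
    fuzzy_means.append((np.prod(ls) ** 0.25, np.prod(ms) ** 0.25, np.prod(us) ** 0.25))

crisp = np.array([(l + m + u) / 3 for l, m, u in fuzzy_means])
weights = crisp / crisp.sum()
for name, w in zip(attributes, weights):
    print(f"{name:16s} weight = {w:.3f}")
```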

3.3 Phase 3. Application of the Selected MCDA Techniques Once the hybrid MCDA approach, FANP with COPRAS-G, has been chosen, it is applied in order to obtain a ranking of the available ESN. The main tasks to carry out are the following (a simplified sketch of the ranking step is given after the list):
• Application of FANP. Development of the pair-wise comparisons and identification of the weights of the attributes.
• Application of COPRAS-G. Calculation of the weights of the alternatives as well as the utility function, obtaining a ranking of the alternatives.
• GRA and F-GRA. Performance of a sensitivity analysis, assessing the robustness of the approach.
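The sketch below illustrates the COPRAS-G ranking task for three hypothetical ESN alternatives, reusing attribute weights of the kind produced by the previous sketch. Ratings are gray (interval) numbers, all four attributes are treated as benefit criteria, and both scores and weights are invented; the full COPRAS-G procedure also handles cost criteria.

```python
# Minimal COPRAS-G ranking sketch for three hypothetical ESN alternatives.
# Ratings are gray numbers [low, high] on the four attributes; all attributes
# are treated as benefit criteria here, a simplification of the general
# COPRAS-G procedure. Scores and weights are invented for illustration.
import numpy as np

weights = np.array([0.35, 0.30, 0.20, 0.15])          # e.g., from the FANP step
alternatives = ["ESN_A", "ESN_B", "ESN_C"]
# shape: (alternatives, criteria, [low, high]) on a 1-10 scale
ratings = np.array([
    [[6, 8], [7, 9], [5, 7], [6, 8]],
    [[8, 9], [5, 7], [7, 9], [4, 6]],
    [[5, 7], [6, 8], [8, 9], [7, 9]],
], dtype=float)

# 1. Normalise each criterion by half the column sum of (low + high).
col_sums = 0.5 * (ratings[:, :, 0] + ratings[:, :, 1]).sum(axis=0)
normalised = ratings / col_sums[None, :, None]

# 2. Weight the normalised gray values.
weighted = normalised * weights[None, :, None]

# 3. Benefit-only case: the relative significance Q_i is the mean of the
#    weighted gray bounds summed over criteria.
Q = 0.5 * (weighted[:, :, 0] + weighted[:, :, 1]).sum(axis=1)

# 4. Utility degree against the best alternative.
utility = 100 * Q / Q.max()
for name, q, n in sorted(zip(alternatives, Q, utility), key=lambda t: -t[1]):
    print(f"{name}: Q = {q:.3f}, utility = {n:.1f}%")
```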

3.4 Phase 4. Selection of the ESN for Cocreation From the application of Phase 3, decision-makers will have a prioritized list of the ESN initially identified in Phase 1, from which the top-ranked one is chosen. It should be pointed out that the FANP easily allows not only attributes/sub-attributes but also alternatives to be added or eliminated, so ESN and/or attributes and sub-attributes can be added or eliminated over time.

4 Conclusions and Further Research Directions This paper has first pointed out the main drawbacks associated with DT when used to foster cocreation practices such as codesign activities: interoperability and data sharing. Such a collaborative creative process demands a technological space where the main stakeholders from the collaborative organizations (suppliers, manufacturers, distributors, or customers) can meet and interact, interchanging ideas and generating knowledge and innovation that would turn into more efficient products/services, a space that can be provided via ESN. The paper has then presented a four-phase MCDA-based methodology to select the most appropriate ESN. The main findings of each of these phases are the following: Phase 1) identification of four attributes (data, technical capabilities, usability, and decision-making) and 14 associated sub-attributes of the ESN for cocreation tasks, and presentation of some available ESN; Phase 2) identification of the Fuzzy ANP in combination with COPRAS-G as the most adequate MCDA technique to rank the available ESN regarding the attributes and sub-attributes; Phase 3) the results achieved with the application of Fuzzy ANP in combination with COPRAS-G could be checked by applying other hybrid MCDA methods: FANP with GRA and FANP with F-GRA; Phase 4) the methodology will offer a prioritized list of the ESN identified in Phase 1. As future research based on this work, it is recommended to: (i) fully carry out phases 3 and 4 in real interorganizational cocreation environments; (ii) based on the results achieved in (i), identify, if necessary, other hybrid MCDA techniques to check the robustness of Fuzzy ANP in combination with COPRAS-G; and (iii) assess the acceptance rate of cocreated products under this ESN approach and, when possible, compare it with the rate achieved under a DT cocreation environment. Acknowledgements The research reported in this paper is supported by the Spanish Agencia Estatal de Investigación for the project “Cadenas de suministro innovadoras y eficientes para productos-servicios altamente personalizados” (INNOPROS), Ref.: PID2019-109894GB-I00.

References
1. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emergent behavior in complex systems. In: Transdisciplinary Perspectives on Complex Systems, pp. 85–113. Springer, Cham (2017)
2. Semeraro, C., Lezoche, M., Panetto, H., Dassisti, M.: Digital twin paradigm: a systematic literature review. Comput. Ind. 130, 103469 (2021). https://doi.org/10.1016/j.compind.2021.103469
3. Brenner, B., Hummel, V.: Digital twin as enabler for an innovative digital shopfloor management system in the ESB logistics learning factory at Reutlingen-University. Proc. Manuf. 9, 198–205 (2017). https://doi.org/10.1016/j.promfg.2017.04.039
4. Schleich, B., Anwer, N., Mathieu, L., Wartzack, S.: Shaping the digital twin for design and production engineering. CIRP Ann. Manuf. Technol. 66(1), 141–144 (2017). https://doi.org/10.1016/j.cirp.2017.04.040
5. Schroeder, G.N., Steinmetz, C., Pereira, C.E., Espindola, D.B.: Digital twin data modeling with automation ML and a communication methodology for data exchange. IFAC-PapersOnLine 49(30), 12–17 (2016). https://doi.org/10.1016/j.ifacol.2016.11.115
6. Zhang, L., Zhou, L., Ren, L., Laili, Y.: Modeling and simulation in intelligent manufacturing. Comput. Ind. 112, 103123 (2019). https://doi.org/10.1016/j.compind.2019.08.004
7. Pereira, L.X., de Freitas Rocha Loures, E., Portela Santos, E.A.: Assessment of supply chain segmentation from an interoperability perspective. Int. J. Logistics Res. Appl. 25(1), 77–100 (2022). https://doi.org/10.1080/13675567.2020.1795821
8. Agostinho, C., Ducq, Y., Zacharewicz, G., Sarraipa, J., Lampathaki, F., Poler, R., Jardim-Goncalves, R.: Towards a sustainable interoperability in networked enterprise information systems: trends of knowledge and model-driven technology. Comput. Ind. 79, 64–76 (2016). https://doi.org/10.1016/j.compind.2015.07.001
9. Platenius-Mohr, M., Malakuti, S., Grüner, S., Schmitt, J., Goldschmidt, T.: File- and API-based interoperability of digital twins by model transformation: an IoT case study using asset administration shell. Future Gener. Comput. Syst. 113, 94–105 (2020). https://doi.org/10.1016/j.future.2020.07.004
10. Khisro, J., Sundberg, H.: Enterprise interoperability development in multi relation collaborations: success factors from the Danish electricity market. Enterprise Inf. Syst. 14(8), 1172–1193 (2020)
11. Yang, Q., Liu, Y., Chen, T., Tong, Y.: Federated machine learning: concept and applications. ACM Trans. Intell. Syst. Technol. 10(2), 1–19 (2019). https://doi.org/10.1145/3298981
12. Li, L., Fan, Y., Tse, M., Lin, K.Y.: A review of applications in federated learning. Comput. Ind. Eng. 149, 106854 (2020)
13. Ang, L., Qiuyu, Y., Boming, X., Qinghua, L.: Privacy-preserving design of smart products through federated learning. CIRP Ann. 70(1), 103–106 (2021). https://doi.org/10.1016/j.cirp.2021.04.022
14. Hung, S.W., Chen, P.C., Chung, C.F.: Gaining or losing? The social capital perspective on supply chain members’ knowledge sharing of green practices. Technol. Anal. Strategic Manage. 26(2), 189–206 (2014). https://doi.org/10.1080/09537325.2013.850475
15. Riemer, K., Stieglitz, S., Meske, C.: From top to bottom: investigating the changing role of hierarchy in enterprise social networks. Bus. Inf. Syst. Eng. 57(3), 197–212 (2015)
16. Aboelmaged, M.: Knowledge sharing through enterprise social network (ESN) systems: motivational drivers and their impact on employees’ productivity. J. Knowl. Manag. 22(2), 362–383 (2018). https://doi.org/10.1108/JKM-05-2017-0188
17. Leon, R.-D., Rodríguez-Rodríguez, R., Gómez-Gasquet, P., Mula, J.: Business process improvement and the knowledge flows that cross a private online social network: an insurance supply chain case. Inf. Process. Manage. 57(4), 102237 (2020). https://doi.org/10.1016/j.ipm.2020.102237
18. Menon, K., Kärkkäinen, H., Wuest, T., Gupta, J.P.: Industrial internet platforms: a conceptual evaluation from a product lifecycle management perspective. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 233(5), 1390–1401 (2018). https://doi.org/10.1177/0954405418760651
19. Maamar, Z., Costantino, G., Petrocchi, M., Martinelli, F.: Business reputation of social networks of web services. Proc. Comput. Sci. 56, 18–25 (2015). https://doi.org/10.1016/j.procs.2015.07.171
20. Enterprise Social Networking Applications Reviews and Ratings. https://www.gartner.com/reviews/market/enterprise-social-networking-applications. Last accessed 20 Jan 2022
21. Watrobski, J., Jankowski, J., Ziemba, P., Karczmarczyk, A., Ziolo, M.: Generalised framework for multi-criteria method selection. Omega 86, 107–124 (2019). https://doi.org/10.1016/j.omega.2018.07.004
22. Yatsalo, B., Radaev, A., Martínez, L.: From MCDA to fuzzy MCDA: presumption of model adequacy or is every fuzzification of an MCDA method justified? Inf. Sci. 587, 371–392 (2022). https://doi.org/10.1016/j.ins.2021.12.051
23. Nguyen, H.T., Dawal, S.T.M., Nukman, Y., Aoyama, H.: A hybrid approach for fuzzy multi-attribute decision making in machine tool selection with consideration of the interactions of attributes. Expert Syst. Appl. 41, 3078–3090 (2014). https://doi.org/10.1016/j.eswa.2013.10.039
24. Saaty, T.L.: Decision Making with Dependence and Feedback: The Analytic Network Process. RWS Publications, Pittsburgh (2001)
25. Olabanji, O.M., Mpofu, K.: Appraisal of conceptual designs: coalescing fuzzy analytic hierarchy process (F-AHP) and fuzzy grey relational analysis (F-GRA). Results Eng. 9, 100194 (2021)

Part IV: Smart Manufacturing in Industry 4.0

Interoperability as a Supporting Principle of Industry 4.0 for Smart Manufacturing Scheduling: A Research Note Julio C. Serrano-Ruiz, Josefa Mula, and Raúl Poler

Abstract The job shop is a production environment that is frequently analyzed and modeled as an isolated cell with little or no interaction with other areas of the production system or the supply chain of which it forms part. For decades, the abstraction on which this endogenous approach is based has provided a profound understanding of the job-shop scheduling problem and the static aspects characterizing it. Nowadays, it is worth highlighting the dynamic and interconnected nature of the contemporary job shop, a space where the design principles and enabling technologies of Industry 4.0 acquire a significant role. This paradigm provides the job shop with new opportunities to improve competitiveness through its digital transformation, but also poses a challenge with risks and barriers. From this interconnected digital job-shop perspective, the efficiency of its operations, including that of the job scheduling process, is critically conditioned by the interoperability between its own production resources and those of the entities in its intra- and supracompany environment that make up the value chain. This article studies the support that interoperability can specifically provide in the job-shop scheduling itinerary toward higher levels of automation, autonomy, and capacity for real-time action. This transformation process is known in the literature as smart manufacturing scheduling. Keywords Industry 4.0 · Job shop · Interoperability · Scheduling · Smart manufacturing scheduling · Digital twin · Zero-Defect Manufacturing



1 Introduction The production planning and control (PPC) concept is basically faced with the task of optimally organizing supply networks and production resources to meet market demand. This task, which is already complex per se, is made more difficult by integrating into PPC models the transitory disturbances to which the dynamism and uncertainty of the environment subject supply and production systems. These disturbances impact PPC systems and spread unequally through processes at their strategic, tactical, and operational decision levels, mitigating or amplifying certain aspects of the disturbances depending on their origin, the system's characteristics, and the peculiarities of the considered PPC process. At the operational decision level, the job-shop scheduling process is affected not only by disturbances from other supply chain areas, such as sourcing failures, changing delivery dates, new orders arriving, and/or existing ones being canceled, but also by those caused in the PPC organization itself at higher decision levels, e.g., changes in order size, cancelation of existing orders, changes in batch sizes or order priority, and by those caused by events occurring locally in the job shop, which can range from simple process interruptions or stockouts to machine breakdowns, tool damage, and detected faulty parts or products [1]. Nor should the effect of the uncertainty of the job-shop environment be underestimated as a source of disruption, e.g., uncertainty in production, maintenance, repair, and/or delivery times [2]. All this, together with the accelerated pace of evolution that characterizes the short-term planning horizon, makes the job-shop scheduling process ideal for applying digital technologies and Industry 4.0 management models, especially those that intend to promote higher levels of automation [3, 4], autonomy [5, 6], and ability for real-time action [5, 7] in this process, to make it robust and resilient to changing circumstances in the short and mid-term. This job-shop scheduling transformation approach, characterized by providing the process with higher levels of automation, autonomy, and real-time action ability, is referred to in the literature as smart manufacturing scheduling (SMS) [8]. Its implementation is based mainly on two instruments: the digital twin (DT)-enabling technology and the Zero-Defects Manufacturing (ZDM) management model. Employing digital twinning uniquely shapes the environments subject to its application. The need for asset virtualization requires interconnectedness, an Industry 4.0 principle that describes how machines, devices, sensors, or people are connected to one another, to a greater or lesser extent, through communication technology solutions [9]; for example, from the classic industrial communication pyramid with AS-i, Profibus, and Ethernet protocols for the device, field, and cell/management levels, respectively, to the Internet of Things (IoT), wireless communication, cloud services, machine-to-machine (M2M) communication, and mobile communications, among others [10]. The communication flow is a critical resource in this paradigm of virtually replicated and widely interconnected assets and processes, where another design principle of Industry 4.0, interoperability, plays an important role. Interoperability allows one system to access the resources of another with efficient data and information exchanges [10] despite differences in language, interfaces, or execution platforms [11].


From a broader managerial perspective, interoperability between companies or organizations is achieved when interaction takes place at the data and application levels. It also transcends these levels and is carried out at the business, process, and service levels [12]. This tactic is oriented to the ZDM model by avoiding errors or defects in information exchange. From this integrative viewpoint, interoperability, along with automation, autonomy, and the ability to act in real time (all of which are design principles of Industry 4.0), is considered to conceptually underpin SMS [10]. DT technology and the ZDM management model are instruments that enable SMS feasibility. They thus collectively form a set of synergistic vectors that contributes to greater scheduling robustness and, therefore, to a more resilient and sustainable job shop when faced with environmental disturbances. This article conceptually analyzes the potential of interoperability as a design principle of Industry 4.0 to effectively support SMS, and its implications, to answer the following research questions: RQ1) Are SMS and interoperability synergistic concepts? RQ2) What consequences can be expected from the synergy between the two? RQ3) In which contexts is synergy possible? The rest of the paper is organized as follows. Section 2 introduces the main concepts presented throughout the research and reviews the state of the art. Section 3 analyzes and discusses the main implications. Finally, Sect. 4 presents conclusions and future research directions.

2 Literature Background A literature search was considered based on the terms “interoperability”, “digital twin”, “scheduling”, “job shop”, and “zero defect”. These concepts are semantically presented in Table 1. As a common criterion, the search was carried out by checking occurrences in the fields “title”, “keyword”, and “abstract” for articles, reviews, conference papers, and conference reviews published in English in the last 10 years in subject areas compatible with the present research. These areas were: engineering; computer science; decision sciences; business, management, and accounting; and multidisciplinary. The term “interoperability” is widely used in the literature, but predominantly so in the ICT, military, and healthcare fields. A search in Scopus yields 62,237 articles. Its first entry dates back to 1959, but it was not until 1991 when annual entries in the literature exceeded 100. The presence of the “digital twin (DT)” in the literature provides a significantly smaller number of results: 5446. The academic interest shown in it, which grew from 2017 onward with more than 100 entries that year, has exponentially increased to the present day with more than 2000 entries in 2021 alone, which is more than one-third of the total amount. Its first result dates back to 1973, albeit with a different semantic approach to that herein considered. It was not until 1991 before the term “DT” was used with the same meaning, and it initially appeared mainly in the aeronautical and aerospace sectors. Of the five above terms, “scheduling” is the most numerous and oldest: 276,652 entries since 1900, with hundreds of entries per year since 1920. It started to appear mainly in the

186

J. C. Serrano-Ruiz et al.

production shop space, with other areas being later incorporated, such as healthcare in the second half of the last century, or ICT at the end of the past century. “Job shop” has been a researched concept since 1953. Forty years later in 1993, the academic shown interest in it resulted in more than 100 entries a year. Currently, there are 11,480 articles about it. Finally, the first entry of the “zero-defect” concept appeared as early as 1961, but has aroused less academic interest than all the previous concepts. It did not exceed 93 entries in 2019, and has totaled 1378 results since it emerged, which is the smallest figure of all. After making the abovementioned individual searches which, of the indicated results, simultaneously contained all five concepts. Subsequently, the results derived from the combination of four and three concepts were also verified with one restriction; one of them always had to be interoperability. Despite the many individual results, no single article in the literature simultaneously combines all five concepts, not even four of them. Of the possible conjunctions of the interoperability concept with two other concepts, only three groups appear with a positive relevant result, and combine it with: (i) “scheduling” and “job shop”: one article; (ii) DT and ZDM: three articles; (iii) scheduling and ZDM: one article. Together, these results total five articles. Apart from these groups of articles, some articles in the literature relate “interoperability” only to “job shop”, and others link “interoperability” with “DT”. Table 1 Concept definition Concept

Definition

Interoperability

System interoperability is a system’s essential feature for it to be able to interact with other systems with no problems or to anticipate them, and their effects when and if necessary [13]. The ability of two or more systems or components to exchange information and to use the information that has been exchanged [14]

Digital twin

The DT is a virtual duplicate of a product, machine or process, or a complete production facility, which contains all the data models and simulations related to its physical realization [15]. A virtual model that is used to simulate the behavior and characteristics of the corresponding physical counterpart in real time [16]

Scheduling

Scheduling is a decision-making process that deals with the allocation of resources to tasks over given time periods. Its goal is to optimize one objective or more. Resources and tasks in an organization can come in many different forms. Resources may be machines in a workshop, airport runways, crews at a construction site, processing units in a computing environment, etc. [17]

Job shop

Machine shop in which each job has its own predetermined route to follow [17]

Zero defect (manufacturing)

Strategy which, by assuming that errors and failures will always exist, focuses on minimizing and detecting them online so that any production output that deviates from specification does not advance to the next step [18]
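To make the scheduling and job-shop definitions in Table 1 concrete, the toy example below assigns start times to the operations of three jobs on two machines using a simple shortest-processing-time-style greedy rule; the instance data and the dispatching rule are chosen purely for illustration.

```python
# Toy job-shop instance: each job has its own machine route, and scheduling
# assigns start times on each machine. A simple SPT-style greedy rule (always
# schedule the shortest remaining candidate operation next) is used here.

jobs = {                       # job: ordered list of (machine, processing time)
    "J1": [("M1", 3), ("M2", 2)],
    "J2": [("M2", 2), ("M1", 4)],
    "J3": [("M1", 2), ("M2", 3)],
}

machine_free = {"M1": 0, "M2": 0}     # when each machine becomes available
job_ready = {j: 0 for j in jobs}      # when each job's next operation may start
next_op = {j: 0 for j in jobs}        # index of the next operation per job
schedule = []

while any(next_op[j] < len(ops) for j, ops in jobs.items()):
    # Candidate operations: the next unscheduled operation of every job.
    candidates = [(j, *jobs[j][next_op[j]]) for j in jobs if next_op[j] < len(jobs[j])]
    job, machine, ptime = min(candidates, key=lambda c: c[2])
    start = max(machine_free[machine], job_ready[job])
    end = start + ptime
    schedule.append((job, machine, start, end))
    machine_free[machine] = end
    job_ready[job] = end
    next_op[job] += 1

for job, machine, start, end in sorted(schedule, key=lambda s: s[2]):
    print(f"{job} on {machine}: start {start}, end {end}")
print("Makespan:", max(end for *_, end in schedule))
```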


Although these groups are quite distant from the joint approach herein proposed, seven additional articles from them were selected for being somewhat relevant. The relation of interoperability with the job-shop scheduling process is the subject of one study by Chou et al. [19]. Those authors present a bioinspired mobile agent-based integrated system for flexible autonomic job-shop scheduling. The proposed system bases its interoperability on conforming to the IEEE FIPA (Foundation for Intelligent Physical Agents) standard so that interoperability can be ensured between the agents of the system and the agents of many active heterogeneous FIPA-compliant platforms. Three articles combine interoperability with DT technology and the ZDM model. Gramegna et al. [20] demonstrate the applicability of a data-driven DT to small- and medium-sized enterprises (SME) and complex manufacturing sectors by integrating process monitoring with advanced data mining and cognitive approaches to predict quality and efficiency versus costs, and to react in real time with the help of a decision support system. For those authors, interoperability is an essential element that characterizes the DT, along with connectivity, traceability, data mining, and cognitive modeling. In this case, interoperability comes from the OPC (Open Platform Communications) standard and the OPC Unified Architecture (OPC UA). The initial model for ZDM investigated and proposed by Lindström et al. [18] from the Industry 4.0 perspective is a further development of this management model. It advances toward its implementation in real cases with a solid mathematical foundation. According to the authors, it contributes to making production processes smarter and more interoperable. It also allows the possibility of reporting subsequent or previous steps in a production process or a value chain, a basis on which integration and interoperability between different systems can be built. The work by Groen et al. [21] proposes a flexible standardization approach called FlexMM for manufacturing processes with materials undergoing complex micro-mechanical evolutions. It consists of a general rule structure to describe both constitutive behavior and its interaction with the subroutines used by a finite element solver. For these authors, good interoperability between solver models and processes is key for achieving a functional DT. The intention is to provide a solution for the interoperability problem posed between material data and their constitutive behavior, which represents an original approach. The article by Ameri et al. [22] addresses the combination of interoperability with the scheduling process and the ZDM management model. These authors particularly address the semantic interoperability problem in industry which, as they see it, is an unresolved issue. So they propose the widespread adoption and implementation of ontologies, which require more systematic and coordinated efforts with the joint participation of industry, academia, and public administrations internationally. They hypothesize that a systematic coordinated ontology development effort will gradually lead to the creation of an ecosystem of interoperable software applications to support access and consistent reasoning throughout a product's lifecycle. Several relevant articles relate interoperability solely to the DT. Leng et al. [23] present a DT-driven manufacturing cyber-physical system (MCPS) for the parallel control of smart workshops in line with mass individualization.


These authors believe that achieving interoperability between the physical and digital worlds of manufacturing systems is a bottleneck. So they model MCPS in such a way that it provides interconnectedness and interoperability which go beyond M2M interactions or the IoT. For Nilsson et al. [24], developments like OPC UA in M2M terms, or the Arrowhead Framework in service-oriented architecture (SOA) terms, support the interoperability vision in Industry 4.0 and the IoT. With their DT-based MCPS model for autonomous manufacturing on smart shop floors, Ding et al. [25] consider that interoperability comes directly from the DT by establishing real-time data synchronization and interoperability channels to improve the interaction between cyber-physical shop floors with tools like distributed Python apps. Vogt et al. [26] obtain the necessary interoperability of a smart production system in a looped DT system that virtually replicates and synchronizes both the product and the production system individually. It is applicable to certain industries, such as the automotive industry for part traceability or production control in flexible smart production systems. Piroumian [27] believes that, to date, the real motivation for creating a DT standard in industry has been solely the need for consistency in the way that objects or processes are modeled and simulated. However, for the time being, information has typically been industry-specific and proprietary, and standardization is absolutely lacking. This results in isolated islands of information that cannot be easily shared, and leaves no interoperability between tools and applications. In another approach, Szejka et al. [28] identify obstacles that prevent semantic interoperability from materializing, owing to the usual heterogeneity of information from many perspectives and their relations in different product manufacturing phases. Their work presents a preliminary method for product manufacturing models to support semantic interoperability across the manufacturing system. It is based on modeling reference ontologies, application ontologies, and semantic rules. Finally, the article by Bloomfield et al. [29] addresses job-shop interoperability by presenting a standard information model. It aims to improve and standardize data exchanges between manufacturing applications throughout a product's life cycle by implementing the Core Manufacturing Simulation Data (CMSD) Standard created by the Simulation Interoperability Standards Organization (SISO). It is worth noting that the related literature reveals certain aspects: (i) no article specifically addresses the role of interoperability of the job-shop scheduling process in a zero-defects context; (ii) only one article deals with interoperability in the job-shop scheduling process context, and only three in the job shop as a general concept; (iii) the relation between interoperability and the ZDM model is scarcely covered in the literature. Only three relevant papers simultaneously consider this composition: two contemplate ZDM to be the effect of an interoperable model; a third understands that the ZDM model can support further interoperability; (iv) automation, autonomy, and real-time action ability are features that are often associated with using DTs rather than interoperable communications. However, it is generally accepted that DTs and interoperability are virtually indissoluble concepts, either because the former entails the latter, or vice versa.


3 Analysis and Discussion The literature on the addressed topic is not plentiful and presents varied approaches. No article that includes it offers an approach that fully coincides with the one put forward in this research work about the role that interoperability, as a design principle of Industry 4.0, plays in a zero-defects context for the job-shop scheduling process. However, some articles provide sufficient partial information to conclude that, taken together, SMS and interoperability synergistically add capabilities. The most visible example is the synergy between interoperability and the DT. Gramegna et al. [20] take interoperability as one of the essential elements that characterize a DT. Designing the proposed platform based on the RAMI4.0 architecture and using the OPC UA communications protocol guarantee interoperability in the automation that their research proposes: the creation and maintenance of specifications that standardize the communication of the acquired process data, the alarm and event logs, and the historical and batch data sent to the business systems of many suppliers, or between production devices, are examples of this. All this takes place in a strategic context oriented toward the zero-defects objective, which is, in this case, the authors' first priority. This is also the view of other researchers. Leng et al. [23] rely on seamless data transfers from their MCPS to the DT. Nilsson et al. [24] believe that developments based on OPC UA, the Arrowhead Framework, and equivalents support the interoperability of both M2M environments and any other environment suitable for adjusting subsymbolic relations, e.g., simulations with the DT, as their use cuts the simulation deployment time and reduces the probability of errors; they are also oriented to ZDM. For Ding et al. [25], their DT-MCPS is the direct guarantor of data synchronization and interoperability between the physical and virtual shop floors, facilitating not only automation, but also autonomy and real-time action ability in production systems. The literature also contains examples of the usefulness and synergy of interoperability in the scheduling process context. Ameri et al. [22] present the confluence of areas, personnel, and applications around this process which, at different levels, share the need for static and dynamic data on resources and availabilities to consult and analyze planned activities, and also to know in real time the deviations or disturbances that impact scheduling optimization. With all this complexity, for these authors a validated common ontological interoperability framework is critical for meeting the goal of creating competitive products for markets with tight time-to-market. Another example is that of Chou et al. [19], for whom interoperability between agents is essential in their flexible job-shop scheduling and who rely on implementing the IEEE FIPA standard to achieve success in this respect. It is worth noting that interoperability and ZDM also demonstrate the ability to synergistically contribute to SMS in the works of Gramegna et al. [20] and Nilsson et al. [24]. However, Ameri et al. [22] go one step further and present a specific use case that focuses on the interoperability challenges within the ZDM framework. It explains how this strategy can benefit from implicit harmonization and integration within an ontological interoperability framework.


Fig. 1 Vectors promoting interoperability in SMS

This will ideally and progressively lead to the creation of an ecosystem with interoperable software applications that enable consistent data access and reasoning throughout a product's life cycle. From what has been discussed so far, it can be deduced that the literature basically identifies three vectors for promoting interoperability in the SMS context: (i) DTs, (ii) standard communications protocols, and (iii) ontologies (Fig. 1). Finally, regarding application spaces, the proposed research tends to use the interoperability concept together with the elements defining SMS in industries that stand out for being intensive in production resources and/or edge technologies, such as the automotive or aeronautical industries. Nevertheless, some authors like Gramegna et al. [20] do not limit their research results to complex manufacturing sectors, but extend them to SMEs. In any case, most works simply do not set a limit on the size or complexity scale of production systems beyond which applying the approach becomes easy or difficult.
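As a minimal illustration of the standard-communications-protocol vector, the sketch below shows an OPC UA client reading a machine status variable and writing back a scheduling decision. It uses the community python-opcua package; the endpoint URL and node identifiers are hypothetical and depend on the address space actually exposed by the server or digital twin.

```python
# Minimal sketch of the "standard communications protocol" vector: an OPC UA
# client that reads a machine status variable and writes back a scheduling
# decision. Uses the community python-opcua package; the endpoint URL and node
# identifiers below are hypothetical.
from opcua import Client

client = Client("opc.tcp://job-shop-server:4840/freeopcua/server/")  # hypothetical endpoint
try:
    client.connect()

    # Read the current status of a machine exposed by the shop-floor digital twin.
    status_node = client.get_node("ns=2;s=Machine1.Status")       # hypothetical node id
    print("Machine1 status:", status_node.get_value())

    # Write the next planned job so the physical asset (or its twin) can react.
    next_job_node = client.get_node("ns=2;s=Machine1.NextJob")    # hypothetical node id
    next_job_node.set_value("JOB-0042")
finally:
    client.disconnect()
```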

4 Concluding Remarks This article analyzes the potential of interoperability as a design principle of Industry 4.0 to support SMS by answering the following questions: (i) whether SMS and interoperability are synergistic concepts (RQ1); (ii) what the consequences of this synergy are (RQ2); and (iii) in which cases the benefit of such a symbiosis can be expected (RQ3). We respectively answer that: (i) it is possible to extract several articles from the literature that provide sufficient partial information to conclude that, taken together, SMS and interoperability synergistically add capabilities; (ii) using DTs, standard communications protocols, and ontologies (the last of which is especially supported by a common ontology framework) promotes interoperability; in turn, interoperability supports automation, autonomy, real-time action ability, and defect avoidance or mitigation and, thus, strengthens SMS; (iii) neither the size nor the complexity scale of production systems constrains a priori the application of the interoperability design principle to support the SMS paradigm. The main benefit of this study is to consolidate the SMS theoretical framework with a new piece of research that adds new insights into interoperability, a capability that is critical for the success of communications between job-shop systems in general and the planning processes at the operational decision level in particular. Interoperability contributes positively thanks to its individual and joint associations with the scheduling process, DT technology, and the ZDM model, and also across a wide spectrum of target environments. The present research also identifies two additional research lines to bridge some identified research gaps. First and foremost, we highlight the need to advance research into interoperability as a design principle and its effects on SMS from the current conceptual framework toward its practical validation. This can be done by initially going through conceptual and mathematical modeling, and then completing validation through prototyping or implementation in a use case. In addition, although the relation between the DT and interoperability has been studied from several perspectives in the literature, very little research relates the ZDM model and interoperability, and it does so with very limited approaches; so the task of studying this relation in depth is also considered relevant. Acknowledgements The research leading to these results received funding from: the European Union H2020 Program with grant agreements No. 825631 “Zero-Defect Manufacturing Platform (ZDMP)” and No. 958205 “Industrial Data Services for Quality Control in Smart Manufacturing (i4Q)”; Grant RTI2018-101344-B-I00 funded by MCIN/AEI/https://doi.org/10.13039/501100011033 and by “ERDF A way of making Europe”; and the “Industrial Production and Logistics Optimization in Industry 4.0” (i4OPT) (Ref. PROMETEO/2021/065) project granted by the Valencian Regional Government (Spain).

References 1. Qiu, Y., Sawhney, R., Zhang, C., Chen, S., Zhang, T., Lisar, V.G., Jiang, K., Ji, W.: Data mining–based disturbances prediction for job shop scheduling. Adv. Mech. Eng. 11(3) (2019). https://doi.org/10.1177/1687814019838178 2. Tao, N., Xu-ping, W.: Study on disruption management strategy of job-shop scheduling problem based on prospect theory. J. Clean. Prod. 194, 174–178 (2018). https://doi.org/10.1016/j.jclepro. 2018.05.139 3. Dreyfus, P.-A., Kyritsis, D., Lee, G.M., Kiritsis, D., von Cieminski, G., Moon, I., Park, J.A.: Framework based on predictive maintenance, zero-defect manufacturing and scheduling under uncertainty tools, to optimize production capacities of high-end quality products. IFIP Adv. Inf. Commun. Technol. 536, 296–303 (2018). https://doi.org/10.1007/978-3-319-99707-0_37


4. Psarommatis, F., Kiritsis, D.: A hybrid decision support system for automating decision making in the event of defects in the era of zero defect manufacturing. J. Ind. Inf. Integr. 26, 100263 (2021). https://doi.org/10.1016/j.jii.2021.100263 5. Feldt, J., Kourouklis, T., Kontny, H., Wagenitz, A., Teti, R., D’Addona, D.M.: Digital twin: revealing potentials of real-time autonomous decisions at a manufacturing company. Proc. CIRP 88, 185–190 (2020). https://doi.org/10.1016/j.procir.2020.05.033 6. Villalonga, A., Negri, E., Biscardo, G., Castano, F., Haber, R.E., Fumagalli, L., Macchi, M.A.: Decision-making framework for dynamic scheduling of cyber-physical production systems based on digital twins. Ann. Rev. Control. 51, 357–373 (2021). https://doi.org/10.1016/j.arc ontrol.2021.04.008 7. Negri, E., Pandhare, V., Cattaneo, L., Singh, J., Macchi, M., Lee, J.: Field-synchronized digital twin framework for production scheduling with uncertainty. J. Intell. Manuf. 32, 1207–1228 (2021). https://doi.org/10.1007/s10845-020-01685-9 8. Serrano-Ruiz, J.C., Mula, J., Poler, R.: Smart manufacturing scheduling: a literature review. J. Manuf. Syst. 61, 265–287 (2021). https://doi.org/10.1016/j.jmsy.2021.09.011 9. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios. In: Proceedings of the 49th Hawaii International Conference on System Sciences (HICSS), pp. 3928–3937. IEEE, Koloa (2016). https://doi.org/10.1109/HICSS.2016.488 10. Serrano-Ruiz, J.C., Mula, J., Peidro, D., Díaz-Madroñero, F.M.: A Metamodel for Digital Planning in the Supply Chain 4.0. In process (2021) 11. Wegner, P.: Interoperability. ACM Comput. Surv. 28, 285–287 (1996). https://doi.org/10.1145/ 234313.234424 12. Chen, D., Daclin, N.: Framework for enterprise interoperability. In: Interoperability for Enterprise Software and Applications, pp. 77–88. Wiley, Hoboken (2006). https://doi.org/10.1002/ 9780470612200.ch6 13. Chapurlat, V., Daclin, N.: System interoperability: definition and proposition of interface model in MBSE context. In: Proceedings of the IFAC Proceedings Volumes, IFAC-PapersOnline, vol. 45, pp. 1523–1528. IFAC, Bucharest (2012). https://doi.org/10.3182/20120523-3-RO-2023. 00174 14. Geraci, A.: IEEE Standard Computer Dictionary: Compilation of IEEE Standard Computer Glossaries. IEEE Press (1991) 15. Todorov, G., Zlatev, B., Kamberov, K.: Digital twin definition based on virtual prototype evolution of an UPS with kinetic battery accumulator. In: Proceedings of the AIP Conference Proceedings, vol. 2333, 110008. AIP Publishing, Washington (2021). https://doi.org/10. 1063/5.0044792 16. Bao, J., Guo, D., Li, J., Zhang, J.: The modelling and operations for the digital twin in the context of manufacturing. Enterprise Inf. Syst. 13, 534–556 (2019). https://doi.org/10.1080/ 17517575.2018.1526324 17. Pinedo, M.: Scheduling. Springer, New York (2012). https://doi.org/10.1007/978-1-46142361-4 18. Lindström, J., Kyösti, P., Birk, W., Lejon, E.: An initial model for zero defect manufacturing. Appl. Sci. 10, 4570 (2020). https://doi.org/10.3390/app10134570 19. Chou, Y.-C., Cao, H., Cheng, H.H.: A bio-inspired mobile agent-based integrated system for flexible autonomic job shop scheduling. J. Manuf. Syst. 32, 752–763 (2013). https://doi.org/ 10.1016/j.jmsy.2013.01.005 20. 
Gramegna, N., Greggio, F., Bonollo, F., Lalic, B., Marjanovic, U., Majstorovic, V., von Cieminski G., Romero, D.: Smart factory competitiveness based on real time monitoring and quality predictive model applied to multi-stages production lines. In: IFIP Advances in Information and Communication Technology 592 IFIP, pp. 185–196 (2020). https://doi.org/10.1007/ 978-3-030-57997-5_22 21. Groen, M., Solhjoo, S., Voncken, R., Post, J., Vakis, A.I.: FlexMM: a standard method for material descriptions in FEM. Adv. Eng. Softw. 148, 102876 (2020). https://doi.org/10.1016/ j.advengsoft.2020.102876


22. Ameri, F., Sormaz, D., Psarommatis, F., Kiritsis, D.: Industrial ontologies for interoperability in agile and resilient manufacturing. Int. J. Prod. Res. 60(2), 420–441 (2021). https://doi.org/ 10.1080/00207543.2021.1987553 23. Leng, J., Zhang, H., Yan, D., Liu, Q., Chen, X., Zhang, D.: Digital twin-driven manufacturing cyber-physical system for parallel controlling of smart workshop. J. Ambient Intell. Humaniz. Comput. 10, 1155–1166 (2019). https://doi.org/10.1007/s12652-018-0881-5 24. Nilsson, J., Sandin, F., Delsing, J.: Interoperability and machine-to-machine translation model with mappings to machine learning tasks. In: Proceedings of the IEEE International Conference on Industrial Informatics, vol. 2019, pp. 284–289. IEEE, Helsinki (2019). https://doi.org/10. 1109/INDIN41052.2019.8972085 25. Ding, K., Chan, F.T.S., Zhang, X., Zhou, G., Zhang, F.: Defining a digital twin-based cyberphysical production system for autonomous manufacturing in smart shop floors. Int. J. Prod. Res. 57, 6315–6334 (2019). https://doi.org/10.1080/00207543.2019.1566661 26. Vogt, A., Schmidt, P.H., Mayer, S., Stark, R.: Production in the loop-the interoperability of digital twins of the product and the production system. Proc. CIRP 99, 561–566 (2021). https:// doi.org/10.1016/j.procir.2021.03.077 27. Piroumian, V.: Digital twins: universal interoperability for the digital age. Computer 54, 61–69 (2021). https://doi.org/10.1109/MC.2020.3032148 28. Szejka, A.L., Canciglieri, O., Jr., Mas, F.: A preliminary method to support the semantic interoperability in Models of Manufacturing (MfM) based on an ontological approach. IFIP Adv. Inf. Commun. Technol. 565, 116–125 (2019). https://doi.org/10.1007/978-3-030-422509_11 29. Bloomfield, R., Mazhari, E., Hawkins, J., Son, Y.-J.: Interoperability of manufacturing applications using the Core Manufacturing Simulation Data (CMSD) standard information model. Comput. Ind. Eng. 62, 1065–1079 (2012). https://doi.org/10.1016/j.cie.2011.12.034

A Deep Reinforcement Learning Approach for Smart Coordination Between Production Planning and Scheduling Pedro Gomez-Gasquet, Andrés Boza, David Pérez Perales, and Ana Esteso

Abstract The hierarchical approach of the production planning and control system proposes to divide decisions into various levels. Some data used in the planning level are based on predictions that anticipate the behavior of the workshop; nevertheless, these predictions can be adjusted at the schedule level. Feedback between both levels would allow better coordination; however, this feedback is not implemented due to interoperability problems and the complexity of the problem. This paper proposes an agent-based system that implements deep reinforcement learning to generate solutions based on artificial intelligence. Keywords Planning · Scheduling · Production · Interoperability · DQN · Agent

1 Introduction Production planning is a complicated task that requires coordination between multiple areas of the organization, and the aggregation and disaggregation of information between various hierarchical levels. Hierarchical production planning divides production planning decisions into subproblems with different time horizons [1]. This decision process is even more complex in collaborative network (CN) contexts.


The decision-making process in production planning may require vertical and horizontal integration of a broad set of systems. Thus, interoperability mechanisms are necessary, providing "the ability to exchange information and to use the information that has been exchanged" [2]. The three aspects of interoperability considered in the EIF [3] are technical, semantic, and organizational. These three aspects should be considered in new proposals for improving the production planning and control process, taking into consideration the new technological context of Industry 4.0 [4] and the new collaborative contexts in supply chains [5]. The application of artificial intelligence (AI) in the production planning process is not alien to the interoperability requirements previously presented. Many things must come together to build and manage AI-infused applications. This requires new tools, platforms, training, and even new job roles [6], which implies technical and syntactic interoperability between tools and platforms, semantic interoperability between the systems involved, and organizational interoperability that connects with the business processes involved. In the production planning context of this proposal, interoperability appears among different systems:

1. Among production planning subsystems. Internal interoperability between planning and scheduling subsystems. In multi-site organizations, planning could be centralized, while scheduling could be carried out in each manufacturing plant. The main flow of information is from the planning subsystem to the scheduling subsystem.
2. Between the production planning system and Industry 4.0 systems. External interoperability with new systems in the field of edge computing and Industry 4.0 for the incorporation of operational data in real time. The main flow of information is from edge computing and Industry 4.0 systems to the production planning system.
3. Between the Enterprise Resource Planning (ERP) system and the production planning system. External interoperability with integrated business management systems (ERP). This interoperability appears when the ERP system does not support the production planning process. The main flow of information is from the ERP systems (operational data and analytical data) to the planning system.
4. Between the production planning system and Manufacturing Execution Systems (MES). External interoperability with MES systems. The main flow of information is from the planning system (operational data: planned manufacturing) to the MES system, but also from the MES systems to the planning system (operational data: real manufacturing performed).

Given the differences in the objectives pursued and the time scales envisaged, the planning and scheduling of production are normally carried out separately and hierarchically [7]. However, decoupling planning and scheduling usually results in suboptimal solutions, showing the need to integrate production planning and scheduling [8]. An option for integrating both problems is to formulate a single optimization model for production planning and scheduling [9]. However, the computational difficulty of solving the scheduling problem (NP-complete class) in this context makes it


necessary to employ Mixed-Integer Programming (MIP) decomposition approaches such as augmented Lagrangian decomposition [10]. The recent application of disciplines such as AI to production management has enabled new iterative methods for integrating production planning and scheduling. These approaches allow not only exchanging information between stages, but also learning about the behavior of the system under different planning and scheduling solutions and taking advantage of this learning. For example, [11] propose a supervised machine learning framework to identify the feasible region of the scheduling problem, [12] identify opportunities in production planning and scheduling using machine learning and data science processes, [13] use deep reinforcement learning to address uncertainties in the production planning and scheduling problem, and [14] use deep reinforcement learning to deal with a scheduling process within the production plan in semiconductor fabrication. Other examples can be found in the literature; however, the use of AI in the integration of production planning and scheduling is still scarce, so research in this area is expected to expand in the near future.

2 Problem Definition This paper addresses the problem of coordination between production planning and scheduling. The hierarchical vision of production planning and scheduling splits the problem into two subproblems, which require mechanisms capable of coordinating them. In this section, the models corresponding to each of these two levels are presented first, and then the key aspects of their coordination are identified.

2.1 Production Planning Problem The following describes the MILP model on which the tactical decision-making level for production planning is based (Table 1). This model plans the production of multiple products over multiple periods under a Make-to-Stock (MTS) strategy, taking into account the existence of a minimum production lot per product, the limited capacity of both production and storage, the possibility of backlogging demand, and the use of overtime, all with the aim of minimizing the costs derived from such operations.

Table 1 Nomenclature for production planning model

Indices
  i         Product (i = 1, ..., I)
  t         Planning period (t = 1, ..., T)

Parameters
  d_it      Demand for product i in period t (units)
  a_lot_it  Anticipated minimum lot size of product i in period t (units)
  a_capp_t  Anticipated production capacity in period t (hours)
  cap       Storage capacity (units)
  a_pr_it   Anticipated productivity of product i in period t (units/hour)
  a_st_i    Anticipated setup time of product i (hours)
  ybd_i     Binary parameter (1 if backlogged demand is permitted and 0 otherwise)
  yot_t     Binary parameter (1 if overtime is permitted and 0 otherwise)
  a_moc_t   Anticipated maximum overtime in period t (hours)
  cp_i      Unit cost of production of product i (€/unit)
  cl_i      Unit cost of launching an order of product i (€/order)
  cb_i      Unit cost of backlogged demand for product i (€/unit·period)
  ci_i      Unit cost of inventorying product i during a period (€/unit·period)
  co_t      Unit cost of working overtime in period t (€/hour)

Decision variables
  X_it      Quantity of product i to be produced in period t (units)
  Inv_it    Quantity of product i in inventory in period t (units)
  SD_it     Served demand of product i in period t (units)
  BD_it     Backlogged demand for product i in period t (units)
  OT_t      Planned overtime for period t (hours)
  YL_it     Binary variable (1 if an order of product i is launched in period t and 0 otherwise)

The objective function aims to minimize the cost of the proposed production plan, calculated as the sum of the cost of producing and inventorying product, hiring overtime, and backlogging demand:

$$\min Z = \sum_{i=1}^{I} \sum_{t=1}^{T} \left( X_{it}\,cp_i + YL_{it}\,cl_i + BD_{it}\,cb_i + Inv_{it}\,ci_i \right) + \sum_{t=1}^{T} OT_t\,yot_t\,co_t \quad (1)$$

Subject to:

$$X_{it} \geq YL_{it} \cdot a\_lot_{it} \quad \forall i,t \quad (2)$$

$$\sum_{i} \left( \frac{X_{it}}{a\_pr_{it}} + YL_{it} \cdot a\_st_i \right) \leq a\_capp_t + yot_t \cdot OT_t \quad \forall t \quad (3)$$

$$OT_t \leq yot_t \cdot a\_moc_t \quad \forall t \quad (4)$$

$$Inv_{it} = Inv_{i,t-1} + X_{it} - SD_{it} \quad \forall i,t \quad (5)$$

$$Inv_{it} \geq 0 \quad \forall i,t \quad (6)$$

$$SD_{it} = d_{it} + ybd_i \cdot BD_{i,t-1} - ybd_i \cdot BD_{it} \quad \forall i,t \quad (7)$$

$$\sum_{i} Inv_{it} \leq cap \quad \forall t \quad (8)$$

$$\sum_{t=1}^{T} SD_{it} \geq \sum_{t=1}^{T} d_{it} \quad \forall i \quad (9)$$

$$X_{it} \leq YL_{it} \cdot M \quad \forall i,t \quad (10)$$

$$X_{it}, Inv_{it}, SD_{it}, BD_{it}, OT_t \ \text{integer}; \quad YL_{it} \ \text{binary} \quad (11)$$

The quantity to be produced per product and period must be higher than the minimum lot size defined for that product (2). The capacity required to produce the planned products may not exceed the available production capacity, taking overtime into account (3). If overtime is allowed, it is limited by a proportion of the production capacity (4). The product produced in a given period can be used to serve the current period's demand, to serve backlogged demand from the previous period, or it can be stored (5). The inventory of any product must be greater than or equal to zero (6). In the case where demand is allowed to be backlogged, the served demand of each product in each period equals its demand plus the demand backlogged in the previous period minus the newly backlogged demand (7). The quantity of products to be stored is limited by the storage capacity (8). The entire demand for each product must be served over the planning horizon (9). Constraint (10) forces the binary variable YL_it to take a value of one when product i is produced in period t and zero when it is not produced in that period; for that, YL_it is multiplied by a very big number (M). Finally, the nature of the decision variables is defined (11). To better model reality, it can be assumed that the cost of deferring demand is very high compared to the rest of the costs. Without loss of generality, it can also be assumed that the minimum lot size (a_lot_it) established for some products could involve quantities large enough that one lot can meet the demand for more than one period.
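As an illustration only, the following sketch instantiates the planning model (1)-(11) with the open-source PuLP library and invented toy data (single aggregated parameter values); it is not the authors' implementation.

```python
# Minimal, illustrative instantiation of the planning model (1)-(11) with PuLP.
# All data values are invented; parameter names follow Table 1.
import pulp

I, T = 2, 3                                   # products, periods
d = {(i, t): 40 for i in range(I) for t in range(T)}       # demand (units)
a_lot, a_st, a_pr = 10, 1.0, 5.0              # min lot, setup (h), productivity (u/h)
a_capp, a_moc, cap = 20.0, 8.0, 500           # capacity (h), max overtime (h), storage (u)
cp, cl, cb, ci, co = 1.0, 5.0, 50.0, 0.5, 2.0  # unit costs
ybd, yot, M = 1, 1, 10**6

m = pulp.LpProblem("production_planning", pulp.LpMinimize)
X   = pulp.LpVariable.dicts("X",   (range(I), range(T)), lowBound=0, cat="Integer")
Inv = pulp.LpVariable.dicts("Inv", (range(I), range(T)), lowBound=0, cat="Integer")
SD  = pulp.LpVariable.dicts("SD",  (range(I), range(T)), lowBound=0, cat="Integer")
BD  = pulp.LpVariable.dicts("BD",  (range(I), range(T)), lowBound=0, cat="Integer")
OT  = pulp.LpVariable.dicts("OT",  range(T), lowBound=0, cat="Integer")
YL  = pulp.LpVariable.dicts("YL",  (range(I), range(T)), cat="Binary")

# Objective (1): production + launching + backlog + inventory + overtime costs
m += (pulp.lpSum(X[i][t]*cp + YL[i][t]*cl + BD[i][t]*cb + Inv[i][t]*ci
                 for i in range(I) for t in range(T))
      + pulp.lpSum(OT[t]*yot*co for t in range(T)))

for t in range(T):
    # (3) capacity with overtime, (4) overtime bound, (8) storage limit
    m += pulp.lpSum(X[i][t]*(1.0/a_pr) + YL[i][t]*a_st for i in range(I)) <= a_capp + yot*OT[t]
    m += OT[t] <= yot*a_moc
    m += pulp.lpSum(Inv[i][t] for i in range(I)) <= cap
    for i in range(I):
        prev_inv = Inv[i][t-1] if t > 0 else 0
        prev_bd = BD[i][t-1] if t > 0 else 0
        m += X[i][t] >= YL[i][t]*a_lot                          # (2) minimum lot
        m += Inv[i][t] == prev_inv + X[i][t] - SD[i][t]         # (5) inventory balance
        m += SD[i][t] == d[i, t] + ybd*prev_bd - ybd*BD[i][t]   # (7) served demand
        m += X[i][t] <= YL[i][t]*M                              # (10) lot launching

for i in range(I):
    # (9) all demand served over the horizon
    m += pulp.lpSum(SD[i][t] for t in range(T)) >= pulp.lpSum(d[i, t] for t in range(T))

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```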


2.2 Production Scheduling Problem For this proposal, it is considered that the production system can be modeled as a permutation flowshop (PFS). The goal of the scheduling level is to find a sequence that optimizes the use of resources. For this, it is proposed to use one of the MILP models proposed by [15], whose objective function is to minimize the Cmax or makespan, under the conditions established by [16]. This model is extended with the following considerations about the parameters:

• x_i is the quantity of product i to be scheduled
• pr_ri is the productivity of product i in machine r (units/hour)
• st_ri is the setup time of product i in machine r (hours)
• The processing time of product i in machine r (hours), T_ri, is always calculated as T_ri = x_i / pr_ri + st_ri, ∀i, r
• capp_r is the production capacity of machine r (hours)
• moc_r is the maximum overtime capacity of machine r (hours)

In addition, the following capacity restrictions are added:

$$C_{ri} \leq capp_r + OT_r \quad \forall r, i \quad (12)$$

$$OT_r \leq moc_r \quad \forall r \quad (13)$$

In (12), the end time (C_ri) of any product scheduled on any machine must be less than or equal to the sum of the production capacity of that machine plus its overtime capacity. In (13), a maximum overtime capacity exists for each machine. With these restrictions, a more realistic relationship is established between the planning and scheduling levels.
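To make the extended scheduling parameters concrete, the following is a minimal, illustrative sketch (invented toy data, not the MILP model of [15]) that computes completion times for a given permutation and checks the capacity restrictions (12)-(13).

```python
# Illustrative evaluation of a permutation-flowshop sequence using the
# extended parameters of Sect. 2.2: T[r][i] = x[i]/pr[r][i] + st[r][i],
# completion times C, and the capacity checks (12)-(13). Toy data only.
def evaluate_sequence(seq, x, pr, st, capp, moc):
    R = len(pr)                                            # number of machines
    T = [[x[i] / pr[r][i] + st[r][i] for i in range(len(x))] for r in range(R)]
    C = [[0.0] * len(seq) for _ in range(R)]               # completion times C[r][position]
    for r in range(R):
        for k, i in enumerate(seq):
            ready_machine = C[r][k - 1] if k > 0 else 0.0  # machine r free after previous job
            ready_job = C[r - 1][k] if r > 0 else 0.0      # job finished on previous machine
            C[r][k] = max(ready_machine, ready_job) + T[r][i]
    makespan = C[-1][-1]
    OT = [max(0.0, C[r][-1] - capp[r]) for r in range(R)]  # overtime needed per machine (12)
    feasible = all(OT[r] <= moc[r] for r in range(R))      # maximum overtime respected (13)
    return makespan, OT, feasible

# Toy data: 2 machines, 3 lots
x = [30, 20, 10]                          # planned quantities (units)
pr = [[5, 4, 6], [6, 5, 5]]               # productivity (units/hour)
st = [[0.5, 0.3, 0.4], [0.2, 0.4, 0.3]]   # setup times (hours)
print(evaluate_sequence([0, 1, 2], x, pr, st, capp=[16, 16], moc=[4, 4]))
```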

2.3 Production Coordination Problem Once the models corresponding to the planning and scheduling levels have been presented, the key aspects of their coordination are identified. To ensure feasibility, the planning problem must anticipate, as accurately as possible, a series of parameters from the scheduling problem, since they are not known with a sufficient level of detail at decision-making time, as is the case, for example, of the product setups (st_ri) and their expected waiting times. In other cases, the anticipated parameters are subject to a certain degree of aggregation; for example, the planning problem does not consider that the products are processed in a flow shop made up of a set of machines, but rather in a single representative machine. The coordination problem starts by solving the production planning problem optimally and communicating the quantities to be produced of the different products downstream to the scheduling problem, which takes them as its main input.


The anticipation mechanism helps to obtain feasible schedules, but the joint planning-scheduling solution may be far from optimal, so a coordination strategy to enhance it should be established. This coordination strategy starts by adjusting the quantities to be produced of one or more products (X_it) with the aim of reducing the cost of the initial plan. These adjustments are based on two main assumptions: backlogged demand causes the highest costs, followed by anticipated production. In each period, starting with the initial one, the coordination strategy selects the most appropriate products to be changed. A period already analyzed is not considered again. These changes to the initial optimal plan (which is suboptimal from the point of view of the joint planning/scheduling problem) are conveniently assessed so that their feasibility is ensured downstream in the scheduling problem.

3 The Reinforcement Learning Model Proposed Reinforcement learning (RL) [17] is a kind of machine learning in which an agent evolves in an environment and learns from its own experience. For a reinforcement learning algorithm to work, there must be a state that represents the environment, and there must be some type of reward function that evaluates how good the agent is at making decisions. One of the most widespread algorithms with the best results in the field of RL is known as Q-Learning [18]. The objective of the algorithm is to iteratively find the optimal Q table, which relates the states with the possible actions and their rewards, through the matrix of Q values. To do this, it uses the Bellman equation:

$$Q(s, a) = r + \gamma \max_{a'} Q(s', a') \quad (14)$$

In 2013, DeepMind published an article on playing Atari with deep reinforcement learning [19]. One of the main differences with Q-Learning was the replacement of the Q matrix by a neural network (NN) that emulates this function. For a more efficient approach, a software agent in which the NN is embedded is usually implemented, known as a Deep Q Network (DQN) agent.
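For illustration, the following minimal sketch applies the update of Eq. (14) in its tabular Q-Learning form on an invented toy environment; in the DQN approach used in this paper the table is replaced by a neural network.

```python
# Tabular Q-Learning sketch of the Bellman update in Eq. (14).
# The 3-state, 2-action toy environment is invented purely for illustration.
import random

n_states, n_actions = 3, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
gamma, alpha, epsilon = 0.9, 0.1, 0.2

def step(s, a):
    """Toy transition and reward function."""
    s_next = (s + a) % n_states
    r = 1.0 if s_next == n_states - 1 else -0.1
    return s_next, r

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = random.randrange(n_actions) if random.random() < epsilon \
            else max(range(n_actions), key=lambda a_: Q[s][a_])
        s_next, r = step(s, a)
        target = r + gamma * max(Q[s_next])      # Eq. (14)
        Q[s][a] += alpha * (target - Q[s][a])    # incremental update toward the target
        s = s_next

print(Q)
```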

3.1 The DQN Agent Proposed This proposal focuses on adjusting the quantities to be produced of one or more products (X_it) of the master production schedule to try to reduce the cost of the initial plan. Specifically, it will try to reduce the cost associated with backlogged demand. In a single iteration, the Q-Learning algorithm is applied using a DQN agent named the backlogged agent.


Fig. 1 DQN backlogged agent

During the iteration, the agent will propose changes, choosing the most appropriate products, period by period, starting with period 1, then period 2, and so on. A period already analyzed will not be considered again. An episode is the set of consecutive periods in which the agent has made modifications to a planned production lot in the initial plan. The interactions between the backlogged agent and the environment are depicted in Fig. 1. The neural network has five hidden layers, and each layer has a number of nodes equal to three times the number of products, since this has turned out to be the most efficient architecture.
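A sketch of such a network is shown below (PyTorch; the layer sizes are those stated above, the input and output dimensions follow the state and action definitions of Sect. 3.2.1, and everything else is an assumption).

```python
# Sketch of the backlogged agent's Q-network: five hidden layers of 3*I nodes,
# input = state vector (1 + 2*I values, see Sect. 3.2.1), output = 2*I Q-values.
import torch
import torch.nn as nn

def build_q_network(n_products: int) -> nn.Sequential:
    state_dim = 1 + 2 * n_products     # Block 1 + Block 2 + Block 3
    action_dim = 2 * n_products        # one Q-value per (T/D, product) action
    hidden = 3 * n_products
    layers, in_dim = [], state_dim
    for _ in range(5):                 # five hidden layers
        layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
        in_dim = hidden
    layers.append(nn.Linear(in_dim, action_dim))
    return nn.Sequential(*layers)

q_net = build_q_network(n_products=4)
q_values = q_net(torch.zeros(1, 1 + 2 * 4))   # forward pass on a dummy state
print(q_values.shape)                          # torch.Size([1, 8])
```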

3.2 The Backlogged Agent The following state diagram (Fig. 2) shows the evolution of the master plan when the backlogged agent tries to improve planning in a situation of backlogged demand (BD). A transition between two states is made when a lot or partial lot is delayed (D) from period t to period t + 1, or when it is transferred (T) from period t + 1 to period t.

Fig. 2 Status graph for backlogged demand context


Each state is defined by a tuple (A, B), such that A reports on the capacity status and B on the master plan status. The capacity statuses are defined in Table 2. The capacity in each period is the one established by the planner, given by Eq. (3), and validated by the scheduler, given by Eq. (12). If the status is Ω, the agent can delay (D) part of a planned lot (X_it), which is resized (X'_it) with respect to the initial quantities of the master plan (X_it), such that X'_it = d_it and X'_{i,t+1} = X_it − d_it is the delayed lot. If the status is F, this option is not possible. If the status is C, it is because the plan already has a gap to place a backlogged order. According to these actions, the status of the master plan can be defined (Table 3). There are six possible initial states (colored background), of which four are already final states (bold circle), either because there is no adjustable capacity or because there is no backlogged demand. It is in the state (Ω, B) where the agent must perform a set of actions D or T to reach one of the final states. Status (C, B) has been included because all options are considered, but it should not be achievable in an optimized plan. With this proposal, the following costs are modified according to the action that the backlogged agent proposes (Table 4).

Table 2 Nomenclature for capacity status
  Ω   Adjustable capacity, i.e., ∃v / Σ_{i=1}^{I} (X_it − d_it) ≥ BD_vt
  F   Not adjustable capacity, i.e., ∄v / Σ_{i=1}^{I} (X_it − d_it) ≥ BD_vt
  C   Adjusted capacity, i.e., ∃v / Σ_{i=1}^{I} X_it / a_pr_it + BD_vt / a_pr_vt ≤ capp_t + yhe_t · HE_t

Table 3 Nomenclature for master plan status
  B | B+   There are products with backlogged demand; if the production of part of a lot has also been delayed due to resizing, the + symbol is added
  N | N+   There are no products with backlogged demand; if the production of part of a lot has also been delayed due to resizing, the + symbol is added

Table 4 Cost variation from the initial master production schedule (MPS)
  T   The part of the backlogged lot that is transferred from period t + 1 to period t is d_{i,t+1}, so the cost variation is:
      - if ∄X_it / X'_it = X_it + d_{i,t+1} → cl_i − cb_i · d_{i,t+1}
      - if ∃X_it / X'_it = X_it + d_{i,t+1} → −cb_i · d_{i,t+1}
  D   The part of the lot that is delayed from period t to period t + 1. Since X'_it = d_it and X'_{i,t+1} = X_it − d_it, and therefore X_it = X'_it + X'_{i,t+1}, the costs vary:
      - if ∄X_{i,t+1} / X''_{i,t+1} = X_{i,t+1} + X'_{i,t+1} → cl_i − ci_i · (X_it − d_it)
      - if ∃X_{i,t+1} / X''_{i,t+1} = X_{i,t+1} + X'_{i,t+1} → −ci_i · (X_it − d_it)


As can be seen, under the hypothesis of a high backlogged-demand cost (a planning model parameter), action T always implies a cost reduction, although, depending on the product, it may be higher or lower; nor does it modify the master plan from period t + 1 onward. Action D, in contrast, involves a cost reduction in period t by reducing inventory, but it could lead to an increase in launch costs in period t + 1. Therefore, the agent must not only decide which actions T and D to carry out, but must also choose the appropriate product to apply them to.
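The cost variations of Table 4 can be summarized in the following illustrative sketch (function and argument names are assumptions, and the data in the calls are invented):

```python
# Sketch of the cost variations in Table 4 for the transfer (T) and delay (D)
# actions applied to product i.
def delta_cost_transfer(lot_exists_in_t: bool, d_next: float,
                        cl_i: float, cb_i: float) -> float:
    """Move the backlogged part d_{i,t+1} from period t+1 to period t."""
    saving = -cb_i * d_next                                  # one period of backlog cost avoided
    return saving + (cl_i if not lot_exists_in_t else 0.0)   # extra launch cost if no lot in t

def delta_cost_delay(lot_exists_in_t1: bool, x_it: float, d_it: float,
                     cl_i: float, ci_i: float) -> float:
    """Delay the part of lot X_it exceeding demand d_it from period t to t+1."""
    saving = -ci_i * (x_it - d_it)                           # one period of inventory cost avoided
    return saving + (cl_i if not lot_exists_in_t1 else 0.0)  # extra launch cost if no lot in t+1

print(delta_cost_transfer(False, d_next=10, cl_i=5.0, cb_i=50.0))    # 5 - 500 = -495.0
print(delta_cost_delay(True, x_it=40, d_it=25, cl_i=5.0, ci_i=0.5))  # -7.5
```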

3.2.1 State Representation and Actions

The state space depicts the information given to the agent as the basis for deciding which action to select. In the RL model for planning/scheduling coordination, the states must correctly reflect the global and local information of the problem, and also indicate the operation progress and current state of the production lots, as well as the state of the capacity. The agent therefore has a status made up of three information blocks:

• Block 1: A single value of 0, 1, or K associated with the F, Ω, and C statuses, respectively, where K is the value of the available capacity.
• Block 2: A list of length equal to the number of products (I) in the plan. Each item in the list is 0 if there is no backlogged quantity (bQ), or the value of the backlogged quantity if there is.
• Block 3: A list of length equal to the number of products (I) in the plan. Each item in the list is 0 if there is no transfer lot, or the value of the transfer lot quantity (tQ) if there is.

As explained and as can be seen in Fig. 1, the agent must decide an action of the type (T, x) or (D, x), where T is transfer, D is delay, and x is the number of the product selected to apply this action. Whenever the agent is requested, it chooses one of I*2 possible actions, as illustrated in the sketch below.
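A minimal sketch of this encoding (names assumed) could be:

```python
# Sketch of the three-block state vector and of the (T, x)/(D, x) action encoding.
def encode_state(capacity_block: float, backlogged, transfers):
    """capacity_block: 0 (F), 1 (Omega) or K (C), as described in Block 1;
    backlogged / transfers: lists of length I with 0 or the corresponding quantity."""
    return [capacity_block] + list(backlogged) + list(transfers)

def decode_action(a: int, n_products: int):
    """Map an index in [0, 2*I) to ('T' | 'D', product)."""
    kind = "T" if a < n_products else "D"
    return kind, a % n_products

state = encode_state(1.0, backlogged=[0, 30, 0], transfers=[0, 0, 15])
print(state)                # [1.0, 0, 30, 0, 0, 0, 15]
print(decode_action(4, 3))  # ('D', 1)
```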

3.2.2 Reward Function

According to Eqs. 14 and 15, the agent must obtain a reward based on the action taken and the new status reached. In this case, it is proposed to use the following reward set:

• If the agent proposes an action that is not allowed, the reward is negative with a value of −10; an example is when the state is (Ω, D+) and the agent proposes (T, x).
• If the agent proposes a permitted action, the reward is negative with a value of −10 if the action involves an increase in cost, and a positive value proportional to the cost variation, within a 0–10 scale, if it involves a reduction.
• There is no extra reward for reaching the end of an episode.

A minimal sketch of this reward computation is given below.
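The following sketch assumes a normalization constant for mapping cost savings onto the 0–10 scale, which is not specified in the text:

```python
# Sketch of the reward set described above; max_saving is an assumed
# normalization constant used to scale savings into the 0-10 range.
def reward(action_allowed: bool, cost_variation: float, max_saving: float) -> float:
    if not action_allowed:
        return -10.0                                   # forbidden action
    if cost_variation >= 0:
        return -10.0                                   # permitted but cost-increasing
    saving = -cost_variation
    return 10.0 * min(saving / max_saving, 1.0)        # proportional, capped at 10

print(reward(True, cost_variation=-495.0, max_saving=1000.0))  # 4.95
print(reward(True, cost_variation=5.0, max_saving=1000.0))     # -10.0
print(reward(False, 0.0, 1000.0))                               # -10.0
```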


4 Conclusion and Further Work This paper has raised a problem of coordination between levels of the production planning and control system with the aim of improving the results of the master production plan. It has been analyzed from the perspective of improving interoperability between different decision-makers with different models and systems, which requires a system that facilitates the flow of information between both levels and provides the experience of an expert through a proposal that includes artificial intelligence. Based on a detailed problem, a strategy has been proposed that allows the production plans to be adjusted based on the real capacity identified in the schedules and that, throughout the planning horizon, reviews them and tries to improve them. Finally, a system based on artificial intelligence is proposed that implements this strategy through the backlogged agent based on deep reinforcement learning, which learns to carry out the appropriate actions, that is, to generate greater overall cost reductions, in a learning or training process that will later help it solve problems of the same type with other data. Future work will include experiments with synthetic and real data that will allow the proposal to be contrasted. In addition, the proposal will be expanded to include new agents for unattended demand and for anticipated production, and to allow agents to implement more general strategies focused on adjusting planning parameters. Acknowledgements Grant NIOTOME Ref. RTI2018-102020-B-I00 funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe.

References

1. Vicens, E., Alemany, M.E., Andres, C., Guarch, J.J.: A design and application methodology for hierarchical production planning decision support systems in an enterprise integration context. Int. J. Prod. Econ. 74(1–3), 5–20 (2001). https://doi.org/10.1016/S0925-5273(01)00103-7
2. IEEE: IEEE Standard Computer Dictionary: Compilation of IEEE Standard Computer Glossaries. IEEE Press, New York (1991)
3. EIF: European Interoperability Framework. White Paper, Brussels (2004)
4. Boza, A., Alarcón, F., Perez, D., Gómez-Gasquet, P.: Industry 4.0 from the supply chain perspective: case study in the food sector. In: Research Anthology on Cross-Industry Challenges of Industry 4.0, pp. 1036–1056. IGI Global, Hershey (2021)
5. Cruz Introini, S., Boza, A., Alemany Díaz, M.D.M.: Traceability in the food supply chain: review of the literature from a technological perspective. Dirección y Organización 64, 50–55 (2018)
6. CompTIA: Artificial Intelligence in Business: Top Considerations Before Implementing AI. www.compTIA.org. Last accessed 2021/10/12
7. Kis, T., Kovács, A.: A cutting plane approach for integrated planning and scheduling. Comput. Oper. Res. 39(2), 320–327 (2012). https://doi.org/10.1016/j.cor.2011.04.006
8. Usuga Cadavid, J.P., Lamouri, S., Grabot, B., Pellerin, R., Fortin, A.: Machine learning applied in production planning and control: a state-of-the-art in the era of industry 4.0. J. Intell. Manuf. 31, 1531–1558 (2020). https://doi.org/10.1007/s10845-019-01531-7
9. Phanden, R.K., Jain, A., Verma, R.: An approach for integration of process planning and scheduling. Int. J. Comput. Integr. Manuf. 26(4), 284–302 (2013). https://doi.org/10.1080/0951192X.2012.684721
10. Li, Z., Ierapetritou, M.G.: Production planning and scheduling integration through augmented Lagrangian optimization. Comput. Chem. Eng. 34(6), 996–1006 (2010). https://doi.org/10.1016/j.compchemeng.2009.11.016
11. Dias, L.S., Ierapetritou, M.G.: Data-driven feasibility analysis for the integration of planning and scheduling problems. Optim. Eng. 20, 1029–1066 (2019). https://doi.org/10.1007/s11081-019-09459-w
12. De Modesti, P.H., Carvalhar Fernandes, E., Borsato, M.: Production planning and scheduling using machine learning and data science processes. In: Säfsten, K., Elgh, F. (eds.) Proceedings of the Swedish Production Symposium, vol. 13, pp. 155–166. IOS Press, Amsterdam (2020)
13. Hubbs, C.D., Li, C., Sahinidis, N.V., Grossmann, I.E., Wassick, J.M.: A deep reinforcement learning approach for chemical production scheduling. Comput. Chem. Eng. 141, 106982 (2020). https://doi.org/10.1016/j.compchemeng.2020.106982
14. Hoon Lee, Y., Lee, S.: Deep reinforcement learning based scheduling within production plan in semiconductor fabrication. Expert Syst. Appl. 191, 116222 (2021). https://doi.org/10.1016/j.eswa.2021.116222
15. Tseng, F.T., Stafford, E.F., Gupta, J.N.D.: An empirical analysis of integer programming formulations for the permutation flowshop. Omega 32(4), 285–293 (2004). https://doi.org/10.1016/j.omega.2003.12.001
16. Conway, R.W., Maxwell, W.L., Miller, L.W.: Theory of Scheduling. Addison-Wesley, Boston (1967)
17. Sutton, R.S., Barto, A.G.: Reinforcement learning: an introduction. IEEE Trans. Neural Netw. 9(5), 1054 (1998). https://doi.org/10.1109/tnn.1998.712192
18. Watkins, C.J.C.H., Dayan, P.: Q-learning. Mach. Learn. 8(3–4), 279–292 (1992). https://doi.org/10.1007/BF00992698
19. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with deep reinforcement learning. In: NIPS Deep Learning Workshop (2013). https://doi.org/10.48550/arXiv.1312.5602


9. Phanden, R.K., Jain, A., Verma, R.: An approach for integration of process planning and scheduling. Int. J. Comput. Integr. Manuf. 26(4), 284–302 (2013). https://doi.org/10.1080/095 1192X.2012.684721 10. Li, Z., Ierapetritou, M.G.: Production planning and scheduling integration through augmented Lagrangian optimization. Comput. Chem. Eng. 34(6), 996–1006 (2010). https://doi.org/10. 1016/j.compchemeng.2009.11.016 11. Dias, L.S., Ierapetritou, M.G.: Data-driven feasibility analysis for the integration of planning and scheduling problems. Optim. Eng. 20, 1029–1066 (2019). https://doi.org/10.1007/s11081019-09459-w 12. De Modesti, P.H., Carvalhar Fernandes, E., Borsato, M.: Production planning and scheduling using machine learning and data science processes. In: Säfsten, K., Elgh, F. (eds.) Proceedings of the Swedish Production Symposium, vol. 13, pp. 155–166. IOS Press, Amsterdam (2020) 13. Hubbs, C.D., Li, C., Sahinidis, N.V., Grossmann, I.E., Wassick, J.M.: A deep reinforcement learning approach for chemical production scheduling. Comput. Chem. Eng. 141, 106982 (2020). https://doi.org/10.1016/j.compchemeng.2020.106982 14. Hoon Lee, Y., Lee, S.: Deep reinforcement learning based scheduling within production plan in semiconductor fabrication. Expert Syst. Appl. 191, 116222 (2021). https://doi.org/10.1016/ j.eswa.2021.116222 15. Tseng, F.T., Stafford, E.F., Gupta, J.N.D.: An empirical analysis of integer programming formulations for the permutation flowshop. Omega 32(4), 285–293 (2004). https://doi.org/10.1016/ j.omega.2003.12.001 16. Conway, R.W., Maxwell, W.L., Miller, L.W.: Theory of Scheduling. Addison-Wesley, Boston (1967) 17. Sutton, R.S., Barto, A.G.: Reinforcement learning: an introduction. IEEE Trans. Neural Netw. 9(5), 1054 (1998). https://doi.org/10.1109/tnn.1998.712192 18. Watkins, C.J.C.H., Dayan, P.: Q-learning. Mach. Learn. 8(3–4), 279–292 (1992). https://doi. org/10.1007/BF00992698 19. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing atari with deep reinforcement learning. In: NIPS Deep Learning Workshop (2013). https://doi.org/10.48550/arXiv.1312.5602

Modeling Automation for Manufacturing Processes to Support Production Monitoring
José Franco, José Ferreira, and Ricardo Jardim-Gonçalves

Abstract Technologies connected to Industry 4.0 have over time provided a major boost in several domains. The Internet of Things has confirmed benefits in several areas, from healthcare, aeronautics, and the automotive industry to entertainment. Currently, the challenges facing industry have forced the hiring of professionals from various areas of knowledge. Technologies related to Industry 4.0, such as sensors and microcontrollers, have been providing support to the various processes that occur during manufacturing in industry. Business process modeling languages, such as Business Process Model and Notation, are good tools to help communication between the different stakeholders who share the same workplace, helping in the process of modeling the entire industrial business. These tools do not have, at least natively, the possibility to program microcontroller-type devices. This paper presents an automation system, in the context of a business process model, for programming microcontroller devices following the Model Driven Architecture paradigm. Keywords Industry 4.0 · Internet of Things · Model driven architecture

1 Introduction Within the scope of Industry 4.0 there are two closely interlinked concepts: the Internet of Things (IoT) and Cyber-Physical Systems (CPS). J. Franco (B) Nova School of Science and Technology, Lisbon, Portugal e-mail: [email protected] J. Ferreira · R. Jardim-Gonçalves UNINOVA, Instituto de Desenvolvimento de Novas Tecnologias, Lisbon, Portugal e-mail: [email protected] R. Jardim-Gonçalves e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_18


IoT is a system of devices that are networked together, whether mechanical or digital, with the ability to transfer data between them without human intervention. IoT is a horizontal concept, since it encourages the joint collection of data between devices and engineering processes or the physical environment. CPS are computational collaboration systems between entities that are in intense connection with the world around them and with the ongoing process, providing and using data through the Internet; for this reason, CPS is a vertical concept, which allows the data collected by IoT devices to be used at higher organizational levels [1]. IoT devices have been used extensively in industry; in this context, microcontrollers are typically present, collecting data from the real world through sensory units. There has been an explosion of increasingly heterogeneous IoT devices featuring diverse programming languages, and the use of such devices to implement measures that help in this digitalization process is sometimes not very intuitive, since their maintenance and programming require the hiring of qualified employees [2]. Business Process Model and Notation (BPMN) diagrams are a simple tool with intuitive graphic elements that facilitate communication between various employees, regardless of their academic skills, providing strategies for implementing measures that can support digitalization. Considering the blades of industrial stone-cutting machines, this kind of blade, when worn, can cause defects in the stone cut. Another consequence is the overheating of the motors as a result of the blades blocking on the stones. In this sense, the use of microcontrollers that acquire signals from sensors installed in specific parts of the cutting machines, to be studied with specialists, could help in detecting vibrations that indicate problems with the blades. Business process models can help in the task of monitoring the values read by the sensors, making it possible in an industrial context to warn operators about the status of the blades.

1.1 Problem Statement About BPMN Connection with IoT Devices Nowadays, there are several platforms, such as Google Cloud IoT and others, that allow the integration and configuration of IoT devices. Through the databases of these platforms it is possible to manage this kind of device; however, the IoT devices need to be programmed in advance. As indicated before, business process models in an industrial environment can facilitate communication between actors in the process, optimizing support. However, this type of tool does not originally have mechanisms to program devices such as microcontrollers. In the scientific literature it is possible to find some contributions that bring business process modeling languages and devices closer together. Some specific extensions to BPMN have been proposed to enable the programming of Wireless Sensor Networks (WSN). Tranquillini et al. [3] developed a compiler that translates the whole network modeling process done in BPMN into machine code, to be introduced in a WSN use case. The main focus of this paper is to present a procedure embedded


in a business model process, namely a Device Task that allows the parameterization and subsequent automatic generation of device code to be inserted into a microcontroller. In this case, it is possible to have access to the device executable code to make changes, instead of having access to the machine code. Specifically, the Model Driven Architecture (MDA) paradigm was used to achieve the goal because it provides layers of abstraction that allow maintaining a separation between system operation aspects and the technological implementation aspects. Regarding the first, in the most abstract layer typically BPMN is used, because it is a high-level language that is understood by different stakeholders facilitating communication. The second, on the other hand, concerns a lower level, i.e., executable code that uses a programming language, thus being closer to a technological concept. This separation ensures that regardless of the users’ academic background, they can generate device executable code through a business model process.

2 Model Driven Architecture According to [4], MDA is a paradigm for defining architectures for software system development methodologies; it describes what kind of artifacts constitute a concrete architecture and how they are related, gathering, for this purpose, a series of guidelines for the construction of these artifacts. This methodology provides an approach for software design and implementation by providing guidelines for structuring specifications, which are expressed in models. MDA is based on models, layers, and transformations between models, and it makes a separation between functional aspects and implementation aspects [5]. In general, a model represents a physical object or some natural or artificial mechanism; it is an abstraction. According to [6], models provide abstractions of a physical system that allow engineers and scientists to reason about the system by focusing on its main parts. A meta-model is another abstraction that aims to enhance the properties and characteristics of a model. To this extent, when considering the features and behaviors provided by modeling languages such as the Unified Modeling Language (UML) and BPMN in order to model problems from various domains, these capabilities are defined by the so-called meta-model of the language, because it appears as the model of the language itself, defining its entire structure, semantics, and constraints [7]. A model conforms to a meta-model when all its elements can be expressed in terms of instances of the corresponding meta-model [8].

2.1 Model Driven Architecture Layers MDA structures a problem in layers and allows maintaining a separation between the technological implementation aspects and the system operation aspects. On


the other hand, it considers the relationships between models and meta-models and the mapping between them [1]. This paradigm uses a three-layer architecture: the Computation-Independent Model (CIM) layer defines organizational aspects of the problem, the Platform-Independent Model (PIM) level defines aspects of the problem logic, and the Platform-Specific Model (PSM) layer defines more technical or implementation aspects [1]. In the context of this paper, the entire BPMN definition, such as task identification, is at the CIM level, since it only represents the requirements that the system must have, hiding the details regarding its structure. The Device Task, despite being a new BPMN element based on a Service Task, has been modified. On the one hand, it allows the user to insert, in an information collection interface, technical or device-specific information that describes a device, such as communication protocols, I/O ports, among others. On the other hand, the Device Task allows this same technical information to be used to support a device code generation mechanism, through a methodology based on the MDA paradigm. Finally, it allows a device to be programmed with the previously generated code. Since this procedure is a refinement of a BPMN element that approaches the device technology, it is at the PIM level. The code generated by this procedure is at the PSM level, because technological aspects are taken into consideration. In particular, looking at Fig. 1, it can be seen that the PSM level encompasses the "Activate Engine" level and the "Execution" level, since these levels concern technological details. The code to be deployed in the device will allow its interconnection to an Engine that runs BPMN diagrams. In the next section this dynamic is discussed in more detail.

Fig. 1 MDA positioning


3 System Architecture It is important to clarify that there are two moments: Design Time and Run Time. The first is related to the modeling of the contemplated problem, using the BPMN language with all its elements. In this phase the Device Task is considered, which allows device parameters to be inserted and, from these, the device code to be generated. The Run Time phase, on the other hand, has the objective of executing BPMN diagrams in an Engine that transforms the processes modeled in the BPMN diagram into actions (such as establishing communication with devices). Imixs project [9] tools were used because the project provides a BPMN modeler, through the Eclipse IDE for Java Developers platform, and an open-source Engine. The Arduino Uno Wi-Fi Rev.2 device was chosen for testing. Observing Fig. 2, in the section Design Time/Run Time Moments, all the important periods of the two phases are depicted. In blue are represented the Design Time elements, which concern the BPMN diagram representation, Device Task identification/parameterization, and device code generation. The elements in orange concern the device programming, and finally, in green, the configuration and communication between Device and Engine. Looking at Fig. 2 in more detail (on the right side), the Design Time and Run Time phases contain modules that are responsible for all the system dynamics. In the first step the Design Time phase takes place, where through the BPMN Modeler module it is possible to create the BPMN diagram, defining which Device Tasks are needed and, in that circumstance, parameterizing the respective device. Next, within the Device Task, the executable device code is generated through the ATL Transformation Module. Then, once the code is generated, it is automatically sent to the device through the Command Line Interface (CLI) Module, putting it into operation. However, it is important that the device does not establish communication with the Engine at this stage; otherwise, communication errors may occur, since the BPMN diagram is not yet in execution. Finally, still in the first step, it is necessary to send the BPMN file created through the BPMN Modeler to the Engine, taking the precaution of not placing it into operation. Regarding the second step, on the Run Time side, the Engine is first started and executes the previously loaded BPMN diagram. In parallel, the Arduino device executes the code that was generated earlier, exchanging information with the Engine, that is, receiving and sending information.

3.1 Parametrization of the Device Task The BPMN Imixs process modeler also has a Plug-In that allows extending the functionalities of the BPMN Tasks, allowing the interconnection of Java classes that are responsible for expanding the functionality of BPMN Tasks. Thus, the strategy used to generate the device code relies on the Model Driven Architecture paradigm,


Fig. 2 Global system architecture in detail

which was materialized with the model transformation language called Atlas Transformation Language (ATL). Regarding the Device Task, initially there is a need to perform parameterization, i.e., the user must enter parameters such as Wi-Fi access settings, Baud Rate settings, input and output pins, and others. Through a Graphical User Interface (GUI) module the user can enter and save this data in the BPMN file, which is an XML file where all the diagram elements, including the Device Task elements, are stored. Thus, it is necessary to obtain a model that describes all the elements of the Device Task from the very beginning. These elements are stored in the BPMN file; in this sense, it is necessary to extract them and then create the required model through a Parameter Extractor module. In this case, through an XSLT transformation, a model is created as an XMI file in which only the elements parameterized by the user appear. In the context of this research, this model is called the Device Task Model, which, as seen in Sect. 2.1, belongs to the PIM level of the MDA paradigm.


The XMI model generated by the XSLT transformation described above is a model in which all the parameters to be indicated in the Device Task are described. These parameters constitute attributes of the model. This model is in accordance with its meta-model, in the sense that all its elements can be expressed as instances of the corresponding meta-model [5]. In particular, the Device Task has characteristics that make it possible to configure it as a new BPMN element, since these characteristics are in accordance with the instances of the BPMN meta-model. Then, a module called the Model Converter module uses the concept of Mappings, through which, according to [5], it is possible to relate the instances belonging to the BPMN meta-model with the corresponding instances in the device code meta-model. In the end, it is possible to build a target model, through a Transformation Model, in which parameters that conform to their corresponding meta-model are defined, in this case the meta-model of the device's programming language (Arduino). In order to substantiate this construction of the target model, i.e., the device code model, the ATL language was used. For this research, the BPMN meta-model (Ecore) of the authors of [10] was considered, while the meta-model from the Arduino Designer project [11] was reused to represent the meta-model (Ecore) of the Arduino device. It was also necessary to complete these meta-models with the features belonging to the Device Task, such as configuration parameters for Wi-Fi networks, analog/digital ports (of the Arduino), and some libraries used in the programming of the Arduino device, among others. All the Ecore meta-models conform to the so-called meta meta-model, which, in the context of the OMG standards, is called the Meta Object Facility (MOF) [8]. The target model of this transformation, i.e., the Device Code Model, is positioned at the PSM level of the MDA paradigm for the reasons mentioned in Sect. 2.1. In conclusion, the Model-to-Model (M2M) transformation cycle is closed. The Model Configurator module applies special formatting to the target model, i.e., the device code model, since before extracting the device code from this model it is necessary to apply an XSLT transformation to it (an XMI file) in order to rearrange the information appropriately before sending the model to the last module. In the last module, i.e., the Device Code Extractor, a Model-to-Text (M2T) transformation is embodied, whose mission is to transform the elements of the Arduino model into code. To this extent, the concept of ATL Queries [12] is used, which allows the device code to be created directly from the device code model, the corrected XMI file from the previous module. Thus, through an ATL Query element, textual elements are generated from the instances defined in the corresponding meta-model, that is, the Arduino's Ecore.
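The following is a schematic Python illustration (not the ATL/XSLT implementation used in this work) of the two ideas described above: extracting the Device Task parameters from the BPMN/XML file and rendering device source code from them (model-to-text). All element, attribute, and template names are invented for the example.

```python
# Schematic illustration of parameter extraction from a BPMN/XML file and
# model-to-text rendering of device code. Not the authors' ATL/XSLT pipeline;
# element names, attributes, and the template are invented.
import xml.etree.ElementTree as ET

BPMN = """<definitions>
  <deviceTask id="acq1" ssid="FactoryWifi" password="secret"
              baudRate="9600" analogPin="A0"/>
</definitions>"""

TEMPLATE = """// generated sketch
#include <WiFiNINA.h>
void setup() {{
  Serial.begin({baudRate});
  WiFi.begin("{ssid}", "{password}");
}}
void loop() {{
  int value = analogRead({analogPin});
  // ... send value to the workflow engine ...
}}
"""

params = ET.fromstring(BPMN).find("deviceTask").attrib   # "Device Task Model" attributes
print(TEMPLATE.format(**params))                          # model-to-text: device code
```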


4 Case Scenario In industrial stone-cutting machines like the one shown in Fig. 3 [13], sensors such as Inertial Measurement Unit (IMU) sensors can be installed to detect vibrations that indicate wear on the blades. The design of a system to detect vibrations in stone-cutting machines by running a BPMN diagram on an Engine was validated in a testing environment. Regarding the Design Time phase, the BPMN diagram of Fig. 4 was drawn. First, it is necessary to clarify that a small electronic circuit was assembled with a Light Emitting Diode (LED) and the IMU sensor installed in the Arduino Uno Wi-Fi Rev.2 itself, which was used for testing; an IMU sensor will also be placed in industrial stone-cutting machines for validation and for collecting data to support the development of the algorithm to detect a worn-out blade (future work), allowing the operator to be warned in real time to change it. Since the Imixs project is the result of constant open-source development, it must be taken into account that the Imixs Engine contains some specifications that force the user to take certain measures, otherwise counterproductive results may be observed when executing the BPMN diagrams at Run Time. This situation is observed in the case of the elements Start event, Start task, and the Start_Event event, which could be combined into a single element. The Engine used is event-driven; thus, considering Fig. 4, the Device Acquisition task points to the event Contact_Device, which allows the Engine to acquire the values read by the X, Y, and Z axes of the accelerometer of the IMU sensor. If the value read by the Y axis is below a certain threshold (set to 0.12 for the use case), then the Gateway Exclusive 6 returns to the Device Acquisition task, repeating the data acquisition process. Otherwise, the Warn Operator task is called, which, through the Event led_Green_on, turns on an LED light mounted in the use case circuit, warning the operator that a vibration was detected; the operator can then fill in a form to send to the project management, advising them of the action taken. It is necessary, still in the Design Time phase, i.e., during the design of the BPMN diagram, to generate the device code and program the device. The Device Acquisition task allows this goal to be achieved through an option in its properties; in this case, only one task of the device type was defined because there is only one device to program and one sensor to monitor.

Fig. 3 Stone cutting machine


Fig. 4 Modeling anomaly detection

4.1 Parametrization of the Device Task The Device Acquisition task allows the device code to be generated using the structure presented in Sect. 3.1. Figure 5 shows the Device Task parameterization GUI. The user must first enter the necessary parameters in order to proceed with the generation of the device code. After the user presses "OK", the code is generated through the modules explained previously in Sect. 3.1, and the mission of the Device Task is then terminated. Moreover, taking into account Fig. 1, the transition from the PIM to the PSM level takes place, because the device is programmed through Arduino-CLI commands. In the next step, it is necessary to load the BPMN file into the Engine, taking care not to start the execution of this diagram, as discussed earlier. Regarding the Run Time phase, even before starting the BPMN execution in the Engine, it is necessary for the Arduino to fully connect to the Wi-Fi. Then, to start the execution of the BPMN diagram, it is necessary to press the green button in Fig. 7; the Arduino then sends a message through the REST service of the Imixs project, thus starting the process. Observing the monitoring page of the tasks being executed by the Engine (Fig. 6), when the Arduino remains static, the Engine alternately performs the Device Acquisition and Vibration Detection tasks. On the other hand, if the user performs a sudden movement on the device, as shown in Fig. 7, it is verified that the green LED light has been turned on, since the


Fig. 5 GUI device task

Fig. 6 Imixs engine task monitoring interface

value reported by the Y axis is 0.39, which is higher than the set threshold value of 0.12 (for sudden-movement situations). In this circumstance, the Arduino sends, through the REST service of the Imixs project, the information that the device has moved suddenly. The gateway then follows the execution flow to the Warn Operator task, which points to the Led_Green_on Event, which in turn sends a message to the Arduino to turn on the LED light (through the message with the value "2", Fig. 7). The operator is informed that an anomalous event has been detected. Finally, the operator can inform the project management department about what has happened by filling out a form. Thus, the execution of the BPMN diagram is concluded.
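The device-side behavior just described can be summarized with the following schematic sketch, expressed in Python for readability; the real implementation is the generated Arduino code, and the stand-in sensor reading, the message fields, and the fake engine reply are simplified assumptions.

```python
# Schematic sketch of the monitoring behavior: read the IMU Y axis, compare it
# with the threshold, notify the workflow engine, and react to its reply.
THRESHOLD_Y = 0.12                         # threshold used in the use case

def read_accelerometer():
    """Stand-in for an IMU (X, Y, Z) reading."""
    return (0.01, 0.39, 0.02)

def monitor_once(send_to_engine):
    x, y, z = read_accelerometer()
    if y <= THRESHOLD_Y:
        return "device acquisition repeated"            # loop back to Device Acquisition
    reply = send_to_engine({"axisY": y, "vibration": True})
    if reply == "2":                                     # engine message: turn the LED on
        return "LED on - operator warned"

# A fake engine standing in for the Imixs REST service:
print(monitor_once(lambda payload: "2"))                 # LED on - operator warned
```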


Fig. 7 Arduino sudden movement and notification window

5 Conclusions and Future Developments This paper proposed a generic formalism based on the ATL model transformation language, which is an implementation solution of the MDA paradigm, since it allows a separation to be kept between technological implementation aspects and system operation aspects. To this extent, the concept of the BPMN Device Task supports the interconnection between business process modeling languages and a heterogeneous set of IoT devices, because, depending on the device to be configured and programmed, it is only necessary to adapt its corresponding meta-model, which describes all the resources it has. Regarding future developments, it is necessary to add support for more IoT device types and to understand, on a case-by-case basis, what these devices require for their configuration and programming. The interconnection of Cloud services may provide extra flexibility and visibility in the management of a set of devices; to this extent, there is still a need to develop a set of libraries that can be included in the developed system to allow this interconnection. Regarding the case scenario presented in Sect. 4, it is necessary to perform a set of studies in factories in order to build an algorithm based on the data generated by the IMU sensor, so as to develop a system that automatically detects when a blade is worn and warns the operator to change it. Acknowledgements The research leading to these results received funding from the European Union H2020 Program under grant agreement No. 872548 "DIH4CPS—Fostering DIHs for Embedding Interoperability in Cyber-Physical Systems of European SMEs".


References

1. Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D.: Advances in Production Management Systems: Innovative and Knowledge-Based Production Management in a Global-Local World, Part I. Springer, Heidelberg (2014)
2. Viar. https://medium.com/@viarbox/how-to-prepare-your-workforce-for-digitalization-andindustry-4-0-b92fb19b6047. Last accessed 07 Dec 2021
3. Tranquillini, S., Spieß, P., Daniel, F., Karnouskos, S., Casati, F., Oertel, N., Mottola, L., Oppermann, F., Picco, G., Römer, K., Voigt, T.: Process-based design and integration of wireless sensor network applications. In: Barros, A., Gal, A., Kindler, E. (eds.) Business Process Management. BPM 2012. Lecture Notes in Computer Science, vol. 7481, pp. 134–149. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32885-5_10
4. Runde, R.K., Stølen, K.: What is model driven architecture? Res. Rep. 304 (2003)
5. Brambilla, M., Cabot, J., Wimmer, M.: Model-Driven Software Engineering in Practice, 2nd edn. Morgan & Claypool, London (2017). https://doi.org/10.2200/s00751ed2v01y201701swe004
6. Brown, A.W.: Model driven architecture: principles and practice. Softw. Syst. Model. 3, 314–327 (2004). https://doi.org/10.1007/s10270-004-0061-2
7. Zacharewicz, G., Daclin, N., Doumeingts, G., Haidar, H.: Model driven interoperability for system engineering. Modelling 1(2), 94–121 (2020). https://doi.org/10.3390/modelling1020007
8. Jouault, F., Allilaire, F., Bézivin, J., Kurtev, I.: ATL: a model transformation tool. Sci. Comput. Program. 72(1–2), 31–39 (2008). https://doi.org/10.1016/j.scico.2007.08.002
9. Imixs Workflow. https://www.imixs.org/. Last accessed 12 Nov 2021
10. Ferreira, J., Lopes, F., Doumeingts, G., Sousa, J., Mendonça, J., Agostinho, C., Jardim-Gonçalves, R.: Empowering SMEs with cyber-physical production systems: from modelling a polishing process of cutlery production to CPPS experimentation. In: Jardim-Gonçalves, R., Sgurev, V., Jotsov, V., Kacprzyk, J. (eds.) Intelligent Systems: Theory, Research and Innovation in Applications, pp. 139–177. Springer, Heidelberg. https://doi.org/10.1007/978-3-030-38704-4_7
11. Arduino Designer. https://github.com/mbats/arduino. Last accessed 15 Sep 2021
12. Eclipse Foundation. https://wiki.eclipse.org/ATL/User_Guide_-_The_ATL_Language. Last accessed 08 Nov 2021
13. Ceigroup. http://www.ceigroup.net/stonecut-line. Last accessed 15 Sep 2021

Interoperable Algorithms as Microservices for Zero-Defects Manufacturing: A Containerization Strategy and Workload Distribution Model Proposal
Miguel Á. Mateo-Casalí, Francisco Fraile, Faustino Alarcón, and Daniel Cubero

Abstract This paper presents a containerization strategy and workload distribution models useful to build and distribute algorithms as microservices in Zero-Defects Manufacturing solutions. The proposed strategy and workload distribution model can be used to build and deploy algorithms that solve specific problems related to defect prediction and detection and process and product optimization. The proposed containerization strategy is rooted in a layered model that decouples the algorithm instantiation and execution, the interfaces to access data services, and the management of the algorithm as a service. The paper also presents different models for building and distributing algorithms as microservices in cloud/edge architectures. Keywords Algorithm as a service · Zero-defect manufacturing · Distributed modeling for interoperability · Containerization · Cloud computing · Edge computing

M. Á. Mateo-Casalí (B) · F. Fraile · F. Alarcón Research Centre on Production Management and Engineering (CIGIP), Universitat Politècnica de València (UPV), Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected] F. Fraile e-mail: [email protected] F. Alarcón e-mail: [email protected] D. Cubero Factor Ingeniería y Decoletaje, S.L., C/Regadors 2 P.L. Campo Anibal, 46530 Puçol, Valencia, Spain e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_19


1 Introduction Economic globalization and increased consumption by society have created a need for companies to optimize and improve production processes. As a result, companies are currently facing a rapid evolution of new technologies applied to industry, which aim to improve production performance. To this end, new ways of obtaining information from the entire production chain, and of using it for production process optimization, are being investigated. The concept of optimization and digitization of industrial processes using technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), or Big Data is consolidating [1]. One of the most important technological advances is cloud computing. This technology provides the possibility of sharing different resources remotely through the network and the Internet so that the deployment of new services and their maintenance can be facilitated. Cloud computing infrastructures offer the possibility of deploying applications and services on resources provided by suppliers. Large companies such as Amazon and Google have used this term to describe services that allow users (either companies or individuals) to store data of any kind, including applications, services, databases, etc. [2]. This development has led to an increase in IoT devices that collect data from different points in the production chain, aiding decision-making with the intention of improving and optimizing production. This number of IoT devices increases the complexity of the systems and strategies needed to implement efficient optimization algorithms that can be used in real time and to achieve adequate interoperability between devices and all the existing platforms within the context of IoT [3]. All this complexity has led to the need to process data where it is created, due to the response delay that can be generated when all the data from users' devices is transferred to the cloud for storage or processing. This model is impractical because it would increase communication latencies when millions of devices are connected to the Internet in the future [4]. In many cases, systems require low-latency interaction with their environment or with users, as any increase in this latency decreases the Quality of Service (QoS) and Quality of Experience (QoE). This is where the concept of the alternative edge computing model is born. Edge computing is defined as the computational processes that take place inside edge devices (IoT devices) with analysis and processing capabilities, such as compact PCs, Raspberry Pi boards, routers, or network gateways. By processing information close to where it was created, latencies are reduced and less bandwidth is consumed [5]. However, the computational capacity of these devices is limited compared to the cloud. Edge computing is characterized by low latency, dense geographical distribution, network context information, location awareness, and proximity [6]. Thanks to the IoT devices found at the edge, the concept of Zero-Defects Manufacturing (ZDM) is gaining importance. This concept seeks to predict possible defects and failures, providing suggestions and improvements to avoid them [7]. In short, ZDM implies the use of algorithms as a tool for the automation and optimization of processes within industry. These algorithms are fundamental when solving

optimization, quality, or time problems, and many of them can be reused in different situations or on different machines. Until now, these algorithms have typically been embedded as libraries hosted in the projects in which they are used and updated according to the needs of the users or of the project itself. Using edge computing, microservices can instead be deployed close to the different elements of the manufacturing chain [8]. Modern software architectures and development rely on build tools that help manage dependencies, run tests, and deploy code to different environments [9]. The microservices architecture is a way of building or splitting applications into a set of smaller services, each of which can run independently or alongside the server. A microservice can be defined as "a cohesive, independent process interacting via messages" [10]. From this point of view, deploying optimization algorithms as microservices brings several advantages:
(a) Simpler integration: each service can be deployed independently, on a potentially different platform and technology stack, and can communicate through mechanisms such as APIs.
(b) Independence from dependencies: because microservices are independent, algorithms are decoupled from other system functions, so they can be updated independently.
(c) Composability and interoperability: microservices support heterogeneous interoperability, so different solutions can be integrated without their dependencies affecting legacy systems [11].
Algorithms can therefore be developed to meet specific needs without affecting the integrity of the system where they are deployed. Each of these microservices has its own instances, configuration, and dependencies. Thanks to containerization, microservices can be implemented while making the most of a system's capabilities and reducing the costs generated by virtualization. Containerization also guarantees consistent operation, since the runtime configuration is the same as in the development environment, and it allows deployment anywhere that supports containers, without any changes to the code. As noted above, the algorithms implemented within the service are accessed through an API. This design improves the portability, isolation, and robustness of the algorithm within the microservice, because all its dependencies are limited to dynamically linked libraries and configuration files within its image. The microservices architecture is closely related to cloud computing, and studies show that tools for the deployment and scaling of microservices can reduce infrastructure costs by 70% [12]. Agility in the microservices architecture heavily depends on fast management operations for containers, such as create, start, and stop [13]. Considering the distribution of the workload, these microservices can run either in the cloud or at the edge; container management systems take care of scaling and load balancing, reducing computational costs at the edge. By packaging algorithms into microservices, they can be deployed independently at different edge points. As already mentioned, these algorithms are fundamental when solving optimization, quality, or timing problems, as many of them can be
used in different situations or on different machines. In this way, the implementation of the zero-defect philosophy is advanced. This paper presents the i4Q containerization strategy together with the Manufacturing Line Reconfiguration Toolkit (i4Q LRT) solution. The i4Q Project aims to provide a complete solution consisting of IoT-based Reliable Industrial Data Services (RIDS), a suite of 22 i4Q Solutions able to manage the huge amount of industrial data coming from cost-effective, smart, small-sized interconnected factory devices to support online manufacturing monitoring and control. Section 2 presents a proposal for the containerization of algorithms as microservices using Docker images. Section 3 describes workload distribution at the edge, looking at the different architectures considered in the i4Q Solutions for life-cycle deployment and testing. Section 4 presents the i4Q LRT solution in more depth, addressing the requirements for the reconfiguration of manufacturing systems, focusing on scalability and functional adaptability, and its proposed implementation in FACTOR, the project's pilot.
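To make the idea of exposing an algorithm through an API concrete, the following minimal sketch shows how an optimization routine could be wrapped as an HTTP microservice with Flask, so that any consumer, whatever its technology stack, can call it. The endpoint name, the payload fields, and the toy "algorithm" are illustrative assumptions and are not part of the i4Q codebase.

```python
# Minimal illustrative sketch: wrapping an optimization routine as a microservice.
# The route, payload schema, and the toy "algorithm" are assumptions for illustration only.
from flask import Flask, jsonify, request

app = Flask(__name__)


def optimize_parameters(readings, target):
    """Toy placeholder: propose a parameter correction from sensor readings."""
    mean = sum(readings) / len(readings)
    return {"correction": target - mean}


@app.route("/run", methods=["POST"])
def run():
    payload = request.get_json()
    result = optimize_parameters(payload["readings"], payload["target"])
    return jsonify(result)


if __name__ == "__main__":
    # Any consumer (MES, dashboard, another service) can now POST JSON to /run.
    app.run(host="0.0.0.0", port=8080)
```

Because the service only exposes an HTTP contract, it can be containerized, replicated, or replaced without touching the systems that call it.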

2 Proposed Containerization Strategy

The i4Q reference architecture adopts a microservices approach [14] in which each algorithm (e.g., for prediction or optimization) is a stand-alone microservice that can be deployed and managed in a platform. The deployment strategy relies on containerization technology, specifically Docker layers, to create a distributable object (a Docker image) built to instantiate and run a specific machine learning model or algorithm that solves a specific problem in the Zero-Defects Manufacturing domain. In a nutshell, the containerization strategy is based on the following steps:
1. Base Image: i4Q provides a base Docker image that implements the interfaces that any algorithm must offer to other components in order to be instantiated and run as a service. This base image also implements clients to access data services, for both batch data and storage services.
2. Layer Template: Developers can use a project template, called a layer template, to develop a new algorithm as a service. The template includes a Python module scaffold named compute unit. The compute unit module exports a compute unit object with two methods, one to instantiate and one to run the compute unit. The project template also includes a Docker file that installs the compute unit Python module and configures the services in the base image to call the instantiate and run methods in the callback functions of the corresponding service endpoints; the Docker file is thus used as an integration layer to execute any algorithm as a service in the platform. The layer template also includes a metadata file (metadata.json) that describes the algorithm's inputs, outputs, and the techniques used, as well as continuous integration and continuous delivery (CI/CD) pipelines to automatically analyze software quality and security requirements and to generate a distributable object
(containing the Docker file, Python modules, and metadata file) that can be deployed in the AI-Analytics runtime.
3. Model Development: Developers can now develop their models and algorithms in Python modules, possibly using external open-source libraries and modules. Provided they supply a compute unit module, the Docker file integration layer takes care of the integration steps needed for the layer to be executed in the AI-Analytics runtime component. Developers need to develop the model, use the compute unit object to instantiate and run it, modify the Docker file to ensure that the required modules are installed in the runtime image, and edit the metadata file to describe the algorithm.
4. Model Deployment: Once developers have finalized a new layer, they can use the provided CI/CD pipeline to build a distributable object that can be deployed in the workload. The distributable object can also be downloaded from the package registry of the development project and deployed manually.
5. Model Management and Monitoring: Finally, the workload provides user interfaces to monitor the status of the service, to create, run, and load-balance different instances of the service to ensure optimal Quality of Service, and to schedule service availability to optimize the use of computational resources.
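As an illustration of step 2, the sketch below suggests what a compute unit module of the kind described above might look like. The class and method names, the placeholder logic, and the metadata fields are assumptions for illustration only; they do not reproduce the actual i4Q project scaffold.

```python
# Hypothetical sketch of a "compute unit" module as described in the layer template.
# Names (ComputeUnit, instantiate, run) and the metadata fields are assumptions,
# not the actual i4Q project scaffold.
import json


class ComputeUnit:
    """Wraps one algorithm so the base image can instantiate and run it as a service."""

    def __init__(self):
        self.model = None

    def instantiate(self, config: dict) -> None:
        # Load or build the model from the configuration passed by the base image.
        self.model = {"threshold": config.get("threshold", 0.5)}

    def run(self, inputs: dict) -> dict:
        # Execute the algorithm on one request and return a JSON-serializable result.
        score = sum(inputs["values"]) / len(inputs["values"])
        return {"anomaly": score > self.model["threshold"], "score": score}


# Object exported to the base image, which calls it from its service endpoints.
compute_unit = ComputeUnit()

# Illustrative metadata, mirroring the role of metadata.json in the template.
METADATA = json.dumps({
    "name": "example-anomaly-score",
    "input": {"values": "list[float]"},
    "output": {"anomaly": "bool", "score": "float"},
    "technique": "threshold on mean value",
})
```

In this scheme the Docker file only has to install the module and wire the two methods to the base image's endpoints, which is what makes the layer reusable across algorithms.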

3 Workload Distribution

Two different architectures are considered for the i4Q Solutions: a cloud architecture, adopted by solutions that do not require functional components to be executed at the edge (close to the data source), and a cloud/edge architecture for those that do require a distributed execution environment. In the cloud, load balancing functions are responsible for the runtime logic that distributes the workload among the available IT assets. This model can be applied within the architecture to the different IT resources required, and it is also commonly used with cloud storage devices, virtual servers, or cloud services. In this sense, the workload architecture model functions as follows: Resource A and Resource B are exact copies of the same resource, and inbound requests from consumers are handled by the load balancer, which forwards each request to the appropriate resource depending on the workload currently handled by each one. The Industrial IoT architecture follows an edge/cloud model. This architecture concentrates device resources at the edge, processing and filtering data at the edge before sending it to the cloud, and it allows scalability by adding new nodes for each of the resources, so that new services can be deployed while latency is reduced.

Considering this industrial architecture, the possibility of deploying microservices at the edge, and the containerization proposal presented in this paper, it is possible to transfer distributable objects (workload images, binary formats to transfer AI models, simulation models, analytical dashboards, etc.) from the code repositories to the workload repositories (package and container registries) that store the distributable objects available on the edge platform. An example is the i4Q LRT solution, which, thanks to this distribution, can export models and build the distributable files that are then hosted and deployed at the edge. Following the containerization model, once a model has been developed, a distributable object can be built and deployed in the workload (Fig. 1). Similarly, this distribution of workloads makes it possible to train an AI model by deploying an algorithm through the i4Q LRT solution, which uses the containerization approach described above, facilitating its build and, finally, its deployment (Fig. 2). The i4Q LRT interface is used to transfer workloads and models to the edge (distributed computing) through the distribution services, orchestrated by the workload distribution functions of the edge. Following the containerization strategy therefore keeps the distributable files with their microservices ready, allowing a tailor-made solution to be offered for different purposes.

Fig. 1 An example of the model build

Fig. 2 An example of the model deployment
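The load-balancing behaviour described above (identical resources behind a balancer) can be pictured with the following minimal sketch. The replica URLs and the simple round-robin policy are illustrative assumptions; in a real deployment this function is performed by the container management system rather than by hand-written code.

```python
# Illustrative sketch of the workload model: identical replicas behind a balancer.
# URLs and the round-robin policy are assumptions for illustration only;
# in practice the container orchestrator performs this function.
import itertools
import requests

REPLICAS = [
    "http://resource-a:8080/run",  # Resource A (edge or cloud instance)
    "http://resource-b:8080/run",  # Resource B, an exact copy of Resource A
]
_next_replica = itertools.cycle(REPLICAS)


def forward(payload: dict) -> dict:
    """Forward one consumer request to the next replica in turn."""
    url = next(_next_replica)
    response = requests.post(url, json=payload, timeout=5)
    response.raise_for_status()
    return response.json()
```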

4 Use Case

There are many aspects of manufacturing system reconfiguration that present important research and practical challenges. These include reconfiguration of the factory layout, configuration of machine routes (the configuration of modular machines or modular processes), and configuration of the production system using parameters [15]. The i4Q project presents the i4Q Manufacturing Line Reconfiguration solution, which addresses the possibility of reconfiguring the parameters of the production line. The i4Q Project aims to provide a complete solution consisting of IoT-based Reliable Industrial Data Services (RIDS), a suite of 22 i4Q Solutions able to manage the huge amount of industrial data coming from cost-effective, smart, small-sized interconnected factory devices to support online manufacturing monitoring and control. The i4Q Framework will guarantee data reliability with functions grouped into five basic capabilities around the data cycle: sensing, communication, computing infrastructure, storage, and analysis and optimization. The i4Q RIDS will include simulation and optimization tools for manufacturing line continuous process qualification, quality diagnosis, reconfiguration, and certification, ensuring high manufacturing efficiency and leading to an integrated approach to Zero-Defect Manufacturing [16].

One of the solutions this paper focuses on is the i4Q Manufacturing Line Reconfiguration Toolkit (i4Q LRT), whose objective is to increase productivity and reduce the effort of manufacturing line reconfiguration through AI. This tool consists of a set of analytical components (e.g., optimization algorithms, machine learning models) that solve known optimization problems in the manufacturing process quality domain by finding the optimal configuration for the modules and parameters of the manufacturing line. Fine-tuning the configuration parameters of machines along the line to improve quality standards or to reduce the manufacturing line set-up time are examples of the problems that i4Q LRT solves for manufacturing companies. i4Q LRT will be developed following the containerization strategy to provide smart decisions that optimize the manufacturing process, ensuring the quality of the product while keeping the Overall Equipment Effectiveness (OEE) optimized for the FACTOR factory. A first layer will provide the whole system of interfaces necessary for the connection, and an integration layer will be added to execute
the algorithms as a service. In the next step, the specific algorithms will be developed to cover the needs of FACTOR. Finally, the CI/CD pipeline will be used to build the distributable object to be deployed in the workload. "FACTOR Ingeniería y Decoletaje, S.L." is an innovation-focused metal fabricator located in Valencia (Spain) and dedicated to precision turning. It intends to enable line reconfiguration technologies in its manufacturing process. The reconfiguration of the manufacturing line is intended to act within the manufacturing process, reconfiguring the line when necessary: adjusting the independent variables changes the dependent variables they influence, which affects the outcome of the final manufacturing process, seeking to optimize and improve, within the expected ranges, the quality of the final product. The main objective is to improve the OEE of FACTOR's production by using intelligent optimization techniques based on the AI algorithms provided by the i4Q Manufacturing Line Reconfiguration Toolkit. The real effectiveness of the factory can be measured by the OEE, a concept introduced by the Lean Manufacturing philosophy to set a realistic value of the capacity of a factory. The OEE is measured as the combination of:
(a) Quality ratio, measured as the number of parts manufactured without defects divided by the total number of produced parts.
(b) Availability, measured as the time the machine has been producing good parts divided by the time the machine is expected to be manufacturing.
(c) Efficiency, measured as the ratio between the theoretical cycle time and the real cycle time.
Currently, FACTOR bases its reconfiguration on human decision making (e.g., if a diameter measurement is larger than expected, the machining tool is changed), which is a source of error, because human judgments are not as accurate as the algorithms'. This ends up in a non-optimized line configuration, reducing the OEE and consequently factory productivity, and resulting in economic losses, productivity disadvantages, and environmental problems due to poorly manufactured parts, energy spent on manufacturing at low productivity rates, and wasted staff time. From FACTOR's point of view, the i4Q LRT solution seeks to optimize the manufacturing process by configuring the variables that interfere in the process and that can be modified by a third party (e.g., the operator). This configuration can be predicted in advance (forecasting) using machine learning (ML) algorithms, anticipating vibrations that can end up in poorly manufactured parts and optimizing the configuration of the machining process to avoid any defective manufactured part.
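Since the OEE combines the three ratios listed above, a short worked computation may help; the function below simply multiplies the three components, and all figures are invented for the example rather than FACTOR data.

```python
# Illustrative OEE computation from the three ratios defined above.
# All figures are invented for the example; they are not FACTOR data.
def oee(good_parts, total_parts, producing_time, planned_time,
        theoretical_cycle, real_cycle):
    quality = good_parts / total_parts            # quality ratio
    availability = producing_time / planned_time  # availability
    efficiency = theoretical_cycle / real_cycle   # efficiency (performance)
    return quality * availability * efficiency


# Example: 950 good parts out of 1000, 7 h producing out of 8 h planned,
# 12 s theoretical cycle time versus 13 s real cycle time.
print(round(oee(950, 1000, 7.0, 8.0, 12.0, 13.0), 3))  # ~0.767
```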

5 Conclusions

Due to the increase in IoT devices and the need to process large amounts of data, edge computing has proven to be a solution that increases the Quality of Service and the Quality of Experience, as it reduces latency and bandwidth consumption when processing data. Thanks to this, different solutions can be applied to predict possible failures and offer improvements, in pursuit of Zero-Defect Manufacturing (ZDM). This paper has presented a strategy for implementing algorithms, whether for prediction or optimization, as stand-alone microservices that can be managed on a platform and that facilitate the interoperability of services at the edge. Considering the industrial architecture and the possibility of deploying microservices, this paper proposes the i4Q LRT solution, developed in the European Project i4Q Industrial Data Services for Quality Control in Smart Manufacturing, to facilitate wide and agile deployment. This solution adopts a containerization strategy so that it can be adapted and integrated into different manufacturing scenarios, for diverse companies and at varying maturity levels. In addition, the company FACTOR, where the integration of the LRT solution will be carried out, has been presented; the objective is to optimize the manufacturing process, ensuring product quality and maintaining the Overall Equipment Effectiveness (OEE). A future line of research will be the implementation of the i4Q LRT solution and further analysis of the indicators defined to evaluate the fulfillment of the objectives and the scope of the improvements.

Acknowledgements The research leading to these results is part of the i4Q project that has received funding from the European Union's Horizon 2020 Research and Innovation Programme, under Grant Agreement No 958205.

References

1. Nazarenko, A.A., Sarraipa, J., Camarinha-Matos, L.M., Grunewald, C., Dorchain, M., Jardim-Goncalves, R.: Analysis of relevant standards for industrial systems to support zero defects manufacturing process. J. Ind. Inf. Integr. 23, 100214 (2021). https://doi.org/10.1016/j.jii.2021.100214
2. Dutta, P., Dutta, P.: Comparative study of cloud services offered by Amazon, Microsoft and Google. Int. J. Trend Sci. Res. Dev. 3(3), 981–985 (2019)
3. Burns, T., Cosgrove, J., Doyle, F.: A review of interoperability standards for industry 4.0. Procedia Manuf. 38, 646–653 (2019). https://doi.org/10.1016/j.promfg.2020.01.083
4. Hong, C.-H., Varghese, B.: Resource management in fog/edge computing: a survey on architectures. ACM Comput. Surv. 52, 1–37 (2019)
5. Mendonca, W.D.F., Assuncao, W.K.G., Estanislau, L.V., Vergilio, S.R., Garcia, A.: Towards a microservices-based product line with multi-objective evolutionary algorithms. In: 2020 IEEE Congress on Evolutionary Computation, CEC 2020—Conference Proceedings, pp. 1–8. IEEE, Glasgow (2020). https://doi.org/10.1109/CEC48606.2020.9185776
6. Hassan, N., Gillani, S., Ahmed, E., Yaqoob, I., Imran, M.: The role of edge computing in Internet of Things. IEEE Commun. Mag. 56, 110–115 (2018). https://doi.org/10.1109/MCOM.2018.1700906
7. Psarommatis, F., May, G., Dreyfus, P.A., Kiritsis, D.: Zero defect manufacturing: state-of-the-art review, shortcomings and future directions in research. Int. J. Prod. Res. 58, 1–17 (2020). https://doi.org/10.1080/00207543.2019.1605228
8. Dai, W., Wang, P., Sun, W., Wu, X., Zhang, H., Vyatkin, V., Yang, G.: Semantic integration of plug-and-play software components for industrial edges based on microservices. IEEE Access 7, 125882–125892 (2019). https://doi.org/10.1109/ACCESS.2019.2938565
9. Ebert, C., Gallardo, G., Hernantes, J., Serrano, N.: DevOps. IEEE Softw. 33, 94–100 (2016). https://doi.org/10.1109/MS.2016.68
10. Dragoni, N., Giallorenzo, S., Lafuente, A.L., Mazzara, M., Montesi, F., Mustafin, R., Safina, L.: Microservices: yesterday, today, and tomorrow. In: Mazzara, M., Meyer, B. (eds.) Present and Ulterior Software Engineering, pp. 195–216. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67425-4_12
11. Vresk, T., Cavrak, I.: Architecture of an interoperable IoT platform based on microservices. In: 39th International Convention on Information and Communication Technology, Electronics and Microelectronics, pp. 1196–1201. IEEE, Opatija (2016). https://doi.org/10.1109/MIPRO.2016.7522321
12. Villamizar, M., Garces, O., Ochoa, L., Castro, H., Salamanca, L., Verano, M., Casallas, R., Gil, S., Valencia, C., Zambrano, A., Lang, M.: Infrastructure cost comparison of running web applications in the cloud using AWS lambda and monolithic and microservice architectures. In: Proceedings 16th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, pp. 179–182. IEEE, Cartagena (2016). https://doi.org/10.1109/CCGrid.2016.37
13. Inagaki, T., Ueda, Y., Ohara, M.: Container management as emerging workload for operating systems. In: Proceedings of the 2016 IEEE International Symposium on Workload Characterization, pp. 65–74. IEEE, Providence (2016). https://doi.org/10.1109/IISWC.2016.7581267
14. Andersson, D., Huang, Z.: Design and evaluation of a software architecture and software deployment strategy. https://odr.chalmers.se/handle/20.500.12380/256056. Last Accessed 17 Sep 2021
15. Youssef, A.M.: Optimal configuration selection for reconfigurable manufacturing systems. Electronic Theses and Dissertations, University of Windsor, p. 7216 (2006)
16. Poler, R., Karakostas, A., Vrochidis, S., Marguglio, A., Galvez-Settier, S., Figueiras, P., Gomez-Gonzalez, A., Mandler, B., Wellsandt, S., Trevino, R., Bassoumi, S., Agostinho, C.: An IoT-based reliable industrial data services for manufacturing quality control. In: 2021 IEEE International Conference on Engineering, Technology and Innovation, pp. 1–8. IEEE, Cardiff (2021). https://doi.org/10.1109/ice/itmc52061.2021.9570203

An Interoperable IoT-Based Application to On-Line Reconfiguration Manufacturing Systems: Deployment in a Real Pilot

Faustino Alarcón, Daniel Cubero, Miguel Á. Mateo-Casalí, and Francisco Fraile

Abstract The use of information to make smart decisions is the main ambition of Industry 4.0. Within the framework offered by this new concept, technologies such as the Internet of Things (IoT) and artificial intelligence (AI) are becoming crucial for the development and modernization of companies. The use of the Industrial Internet of Things (IIoT) enables the monitoring of the physical variables that influence the manufacturing process, whereas the use of AI to process the data, together with technological advances in communication protocols (e.g., OPC UA), enables interaction with the machine to reconfigure the manufacturing line, keeping the process stable through smart and automatic decisions. This paper presents an application, being developed within the i4Q project, for the immediate detection of deviations in the manufacturing parameters and for the online automatic reconfiguration of the process, in order to reduce quality problems, waste, and breakdowns in the machine tools using technologies based on Industry 4.0. Additionally, the application is deployed in a real company as a step before its full implementation.

Keywords Industry 4.0 · Reconfigurable manufacturing systems · Industrial Internet of Things · Artificial intelligence · Interoperability

F. Alarcón (B) · M. Á. Mateo-Casalí · F. Fraile Research Centre on Production Management and Engineering (CIGIP), Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected] M. Á. Mateo-Casalí e-mail: [email protected] F. Fraile e-mail: [email protected] D. Cubero Factor Ingeniería y Decoletaje, S.L., C/Regadors 2 P.L. Campo Aníbal 46530, Puçol, Valencia, Spain e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_20

1 Introduction

(Text in this section is partially reproduced from [17], released under a CC BY license: https://ceur-ws.org/Vol-3214/WS4Paper6.pdf.)

Manufacturing companies must be able to react rapidly and cost-effectively to the market requirements created by the increasing frequency of new product introductions and the shortening of product life-cycles, changes to parts of existing products, changes in government regulations, large fluctuations in product demand and mix, and changes in process technology [1–3]. Manufacturing systems that operate in this context must be able to adapt to these changes and manufacture high-quality products while maintaining the lowest operational cost [3]. But this rise in the number of process changes subjects operators and manufacturing systems to an elevated level of stress and psychological burnout, so that human errors, machine mismatches, and deviations in the manufacturing process parameters are becoming more frequent, leading to failures in product quality. This translates into a continuous appearance of waste, which in turn produces a loss of efficiency, an increase in costs, problems in meeting the delivery dates agreed with customers, and a decrease in the sustainability of the company. Therefore, the immediate detection of quality problems or deviations in process parameters and a rapid reconfiguration of the process to reduce the appearance of waste is becoming a priority issue for modern manufacturing systems [4].

Some basic principles of reconfigurable manufacturing systems (RMS) are [2]: (a) diagnosability, or optimally embedding product quality inspection into the manufacturing system by design; (b) maximization of system productivity by reconfiguring operations and reallocating tasks to machines; and (c) effective maintenance that jointly maximizes machine reliability and system throughput. Dubey et al. [5] point out that, in addition, RMS should be designed for sustainability, and they confirm in their work that the higher the reconfigurability of a manufacturing system, the better its environmental performance. The appearance of new concepts and technologies related to Industry 4.0 [6] offers new opportunities in this area and pushes RMS and modern manufacturing systems into a new era [2]. Industry 4.0 aims to improve the efficiency of manufacturing factories by using the information available about the processes to make smarter decisions, such as the optimal reconfiguration of the manufacturing process. For example, IoT can provide technological enablers (e.g., sensors, databases, apps) to understand the system's behavior and to use artificial intelligence (AI) algorithms to reconfigure the manufacturing system, improving the company's performance by avoiding defects (quality ratio), reducing the number of machine stops (enhancing availability), optimizing the process (efficiency), and, consequently, increasing the sustainability level. In this sense, some recent works address the use of new 4.0 technologies in the field of RMS [7], but there is still a significant lack of guides, methodologies, and tools for the immediate detection of deviations in manufacturing parameters and the automatic reconfiguration of manufacturing processes in real time. In this regard, [2] state in their work that real-time operational decision-making for RMSs is limited in the literature. The main challenge is to understand the complexity of the systems, develop efficient strategies and algorithms that can be used in real time, and achieve suitable interoperability between the huge quantity of devices and the over 300 platforms existing in the context of IoT [8].

This paper presents an application that is being developed within the framework of the European project i4Q Industrial Data Services for Quality Control in Smart Manufacturing (i4Q Project, n.d.) for the immediate detection of deviations in the manufacturing parameters and the online reconfiguration of the process to reduce quality problems, waste, and breakdowns in the machine tools through the use of technologies based on Industry 4.0. Additionally, the proposed application will be applied in a real company ("FACTOR S.L." n.d.) to explore its usefulness in reducing waste and machine stops, optimizing the process, and consequently increasing the sustainability level and improving the company's performance. The remainder of this paper is organized as follows. Section 2 presents a literature review to identify approaches to reconfigurable manufacturing systems for reducing quality failures and breakdowns of the machine tools. Section 3 describes the i4Q Line Reconfiguration Toolkit, a software-based solution that is being developed in the European project i4Q. Section 4 focuses on the analysis of the deployment of the i4Q Solution in the real metal machining use case of FACTOR. Section 5 draws some conclusions and future research lines.

2 Background

RMSs can be adjusted with minimum effort and at low cost to manufacturing environments with a high level of uncertainty [3, 9]. The RMS concept offers a solution to the challenge of rapidly and efficiently ramping up volumes and varieties, as its modular structure reduces the time needed for designing, building, and redesigning the system [10, 11]. Some works that offer interesting literature reviews or important contributions on basic aspects of RMS are mentioned below. Koren et al. [1] propose the first RMS definition. Furthermore, they identify the limitations of dedicated and flexible manufacturing systems and present RMS as a new class of systems that not only combines the high throughput of dedicated manufacturing systems with the flexibility of flexible manufacturing systems but can also react to changes quickly and efficiently. These authors also define System Ramp-Up and enumerate the key characteristics of RMS: modularity, integrability, customization, convertibility, and diagnosability. In later works, [2] add scalability to the list, defined as the capability of modifying production capacity by adding or removing resources and/or changing system components. An overview of manufacturing techniques, their key drivers and enablers, and their impacts, achievements, and limitations is presented by Mehrabi et al. [12]. In the paper of [9], the general requirements of next-generation manufacturing systems are discussed, and the strategies to meet these requirements are considered.
The production paradigms that apply these strategies are also classified, with particular emphasis on the RMS paradigm. Some key issues of RMS design are also discussed, and a critical review of the developments of RMSs is presented. Some more recent works of this type have been presented by Andersen et al. [11], who investigate the prerequisites and barriers for developing RMSs and also review several RMS design frameworks; by [2], who formulate the design and operational principles for RMSs and provide a state-of-the-art review of the design and operations methodologies of RMSs according to these principles; by [7], who presents a structured review of the literature on RMSs, highlighting the application areas as well as the key methodologies and tools and identifying five emerging and promising research streams (one of which links reconfigurable manufacturing to Industry 4.0); and by [3], who present a literature review on the layout design of RMS, addressing specific forms of RMS reconfiguration and focusing on reconfigurable layouts.

On the other hand, some studies offer information regarding the aspects and elements that can be reconfigured in an RMS and the levels at which those elements are structured and organized. In this line, the works of [1, 13] mention two levels of RMS design: system design and reconfigurable machine tools. [9], however, mention three critical issues involved in any type of RMS: (a) architecture design, (b) configuration design, and (c) control design. Koren et al. [1] also structure the design issues around three levels: the system, reconfigurable machine tools, and control for reconfigurable machines in an open architecture. Most of the reviewed works on RMS agree on the importance of the capability of adaptation or reconfiguration of manufacturing systems to address unsteady customer requirements, even though only some of them go deep into the reconfiguration capabilities of the specific components that compose the RMS (i.e., machine tools, devices, or specific machining processes) for zero-defect manufacturing. Some works highlight the need to reconfigure the RMS at the machine tool level in response to changes in workpiece size, part geometry and complexity, production volume and rate, required processes, material properties, etc. [1]. But these works allude to physical or functional reconfiguration rather than to a reconfiguration of behavior (reconfiguration and reprogramming of the machining parameters of the machining assets). This is the case, for example, of [10], who indicate that, to reconfigure a manufacturing system at the machine level, specific changes can be applied to machine axes or spindles can be added (i.e., physical or functional reconfiguration). Only two works [2, 14] have been found that explicitly mention the possibility of reconfiguring a manufacturing system through the programming system of the machines (behavior), along with the other options normally mentioned, such as controlling the physical layout by adding and removing machines and their components, material handling systems, and the configuration of work stations (physical), or even routing, scheduling, and planning (more related to the reconfiguration of the product flow). This issue is important because the focus of the present work is on the reconfiguration of the behavior of the manufacturing system at the machine level.

Regarding the way to obtain data for process and quality control of the manufactured parts, it is worth mentioning that there are approaches in the literature based on online control, such as that of [15], but most of these approaches are scarce
and were published a long time ago, so they do not explore the possibilities offered by the new technological advances of recent years in the Industry 4.0 context. One of the most important advances comes from the synergies obtained from the combination of sensors, Big Data, and AI. Although the use of Expert Systems and AI in the context of RMSs was proposed more than twenty years ago by Mehrabi et al. [12], in the last decade this technology has experienced a rise in its potential thanks to the quantity and quality of data captured by the increasing number of sensors, the storage capabilities offered by technologies such as Big Data and Data Analytics [2], and the improvements in technological interoperability [16]. It therefore seems clear that the effectiveness and efficiency of real-time decision-making in an RMS context can be improved by applying intelligent manufacturing techniques, such as cloud computing, digital manufacturing, and cyber-physical systems [2]. However, the literature review highlights a lack of papers that address the automatic and online reconfiguration of machine tools for quality control and failure avoidance by using IoT, specifically by integrating, in an interoperable environment, the automated machining process, AI, IoT, sensors, Big Data, and Data Analytics.

3 The Manufacturing Line Reconfiguration Toolkit i4Q Solution

The European project Industrial Data Services for Quality Control in Smart Manufacturing (i4Q) (i4Q Project, n.d.) aims to develop software solutions based on AI to be implemented in manufacturing lines, allowing interoperability along the production processes and improving them through intelligent decisions. Universities, research centers, and private companies participate in this project, joining their unique capabilities for the development of innovative solutions applicable to industrial environments. Some of the solutions developed in the i4Q Project are: i4Q Data Quality Guidelines, i4Q Blockchain Traceability of Data, i4Q Trusted Networks with Wireless and Wired Industrial Interfaces, i4Q Cybersecurity Guidelines, i4Q Data Repository, i4Q Services for Data Analytics, i4Q Infrastructure Monitoring, i4Q Digital Twin Simulation Services, i4Q Manufacturing Line Reconfiguration Guidelines, i4Q Manufacturing Line Reconfiguration Toolkit, and i4Q Manufacturing Line Data Certification Procedure (see [4] for more details). One of the most important solutions developed in the i4Q project is the Manufacturing Line Reconfiguration Toolkit (i4Q LRT). This solution aims to propose changes in the configuration parameters of the manufacturing system to achieve improved quality targets through a collection of optimization microservices based on simulations that evaluate different possible scenarios (i4Q Project, n.d.).

3.1 Technical Background of i4Q LRT

(Text in this section is partially reproduced from [17], released under a CC BY license: https://ceur-ws.org/Vol-3214/WS4Paper6.pdf.)

i4Q LRT aims to be a set of tools designed to optimize the manufacturing process using AI algorithms and simulations, comparing feasible scenarios and finding the best configuration of parameters to improve quality standards. The solution can be used in the cloud or deployed on-premise; with sensor data from the manufacturing process as input and operational data as output (e.g., configuration parameters and actuation commands), it optimizes the manufacturing line with the best set of parameters available. i4Q LRT is based on optimization algorithms developed in Python and deployed in containers (e.g., Docker). It consists of both pre-trained models ready to be used and abstract models that are customized by the end-user. One of the main technical challenges for a successful implementation of the solution in the manufacturing line is enabling interconnectivity and interoperability with the assets and devices that take part in the manufacturing process and provide the diverse types of data that can be used to optimize the configuration and improve the decision-making process, whether performed by humans or by AI. The i4Q project website (i4Q Project, n.d.) contains the information related to the scientific background used to develop the solution and details about its components, the reference architecture, the modeling artifacts, and the specific AI algorithms that will be used.

From the point of view of a customer of the i4Q Solution, the expected result is the optimization of the manufacturing process through the configuration of the variables that interfere in the process and can be changed by a third party (e.g., the operator). For example, it is known that the width of an external diameter rises as the tool starts to wear, since the worn tool has a reduced machining range. The diameter can be kept within the tolerance limits by changing the machine offsets, i.e., changing the position of the tool to compensate for the loss of material on the tool. At the same time, tool wear can be predicted by measuring physical variables (e.g., temperature, vibrations); analyzing the trends of these variables can forecast future tool wear and hence an increase in the diameter width. A palliative reconfiguration is described as the process of changing the offsets of the machine, which solves the problem without acting on the root cause. This settles the problem and ensures the quality of the product despite the wear of the tool, and can be considered a first approach to reconfiguration. On the other hand, it is also possible to act on the root cause (e.g., a rise in vibrations) by configuring the independent physical variables that make the vibrations rise; an example could be lowering the machining speed in particular operations to optimize tool wear and extend tool life over time. This configuration can be predicted in advance using machine learning (ML) algorithms, anticipating future excessive vibrations that can end up in poorly manufactured parts and optimizing the configuration of the machining process to avoid any defective manufactured part.

The main goal of the reconfiguration is the simultaneous optimization of the three main factors that determine the Overall Equipment Effectiveness (OEE): (a) quality ratio, (b) availability, and (c) efficiency. The configuration should not be based only on avoiding defective parts (improving the quality ratio) while leaving aside availability and efficiency; it should focus on enhancing the OEE, reaching the point where the three variables that determine the OEE are optimized. The main approach of the RMS is to change the machine parameters to avoid abnormal behavior of the manufacturing process, detecting deviations in the measured physical variables and correcting them before an error can occur, reconfiguring the process and preventing potential errors. A second and equally important approach focuses on the quality of the produced parts: since the geometrical measures of the final product depend on the process status (e.g., the dimension of a diameter depends on the temperature of the tool), the process can be configured and stabilized in a way that ensures the final quality of the produced parts. Combining both approaches, the objective of the RMS is accomplished, enhancing the final value of the OEE and achieving the best configuration to optimize quality ratio, availability, and efficiency.
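The forecasting idea sketched above (anticipating the drift caused by tool wear and correcting the machine offset before a part goes out of tolerance) can be pictured with the following simplified sketch. The data, the linear-trend model, and the offset rule are invented assumptions for illustration; they are not the actual i4Q LRT algorithms.

```python
# Simplified illustration of the reconfiguration idea described above:
# fit a trend to a measured diameter drift and propose an offset correction
# before the tolerance limit is reached. Data and rule are invented examples,
# not the i4Q LRT models.
import numpy as np

# Diameter deviation from nominal (mm) measured on successive parts.
parts = np.arange(10)
deviation = np.array([0.000, 0.002, 0.003, 0.005, 0.006,
                      0.008, 0.009, 0.011, 0.012, 0.014])

# Linear trend of the drift caused by tool wear.
slope, intercept = np.polyfit(parts, deviation, 1)

# Forecast the deviation a few parts ahead and act before the limit is exceeded.
upper_tolerance = 0.020
horizon = 5
forecast = slope * (parts[-1] + horizon) + intercept

if forecast > upper_tolerance:
    # Palliative reconfiguration: shift the tool offset to compensate the wear.
    recommended_offset = -forecast
    print(f"forecast {forecast:.3f} mm exceeds tolerance; offset {recommended_offset:.3f} mm")
else:
    print(f"forecast {forecast:.3f} mm within tolerance")
```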

4 i4Q LRT in a Real Metal Machining Factory: Deployment in the FACTOR Pilot

4.1 FACTOR Pilot Presentation

FACTOR Ingeniería y Decoletaje S.L. ("FACTOR S.L." n.d.) is a manufacturing company located in Valencia (Spain) dedicated to metal manufacturing and precision turning for the most restrictive sectors of the industrial network (e.g., aeronautics, automotive, medical) [17]. Some examples of the products that FACTOR manufactures are dental implants, plungers, and customized screws and nuts. FACTOR participates as a pilot in the i4Q Project to validate the outcomes of the solutions, among which is i4Q LRT, which will enable a rapid reconfiguration of the manufacturing process based on AI algorithms, improving the effectiveness of the factory.

4.2 Technical Background to Deploy i4Q LRT

i4Q LRT will be deployed in the FACTOR factory to provide smart decisions that will optimize the manufacturing process, ensuring the quality of the product while keeping the OEE optimized. Figure 1 shows the data path between the i4Q Solutions and the data acquisition systems of FACTOR. The CNC provider of FACTOR's machining assets ("FANUC" n.d.) has its own data acquisition system (MTLinki), which provides data on different physical variables (e.g., temperature, vibrations) and process variables (e.g., machine status, machining speed). The machining assets are represented by the gear in Fig. 1. The data is stored in MongoDB, and the communication is supported by OPC UA ("OPC" n.d.); the data can be extracted by i4Q LRT or any other i4Q Solution for algorithm training and online reconfiguration of the process. Quality measurements are acquired from digital tools (represented by a microscope in Fig. 1). These data can be used to evaluate the quality status, which is one of the main concerns of the RMS, and quality status data will be considered in online reconfiguration decisions. The i4Q Solutions are shown in the yellow box, including the AI platform that will be used for smart decision-making in FACTOR; cloud and edge deployments are also differentiated. The interaction between the different solutions is shown at a high level, and i4Q LRT is highlighted in the blue boxes. i4Q LRT will communicate with the factory via OPC UA to collect data from the manufacturing process and use it to make smart algorithmic decisions. Standard data storage services will be used, such as MongoDB or SQL. The results of the evaluation will be sent to the factory as recommendations for optimizing the process.

Fig. 1 Information and Communication technologies in FACTOR
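As a rough sketch of this data path, the snippet below reads recent sensor records from MongoDB and writes a recommended parameter back over OPC UA. The connection strings, database and collection names, node identifier, and the placeholder decision rule are invented assumptions; the actual FACTOR/MTLinki naming and the i4Q algorithms are not reproduced here.

```python
# Rough sketch of the data path described above. Connection strings, database,
# collection, and OPC UA node id are invented placeholders, not FACTOR's setup.
from pymongo import MongoClient
from opcua import Client  # python-opcua

# 1. Read recent process data collected by the acquisition system into MongoDB.
mongo = MongoClient("mongodb://edge-server:27017")
records = list(
    mongo["plant"]["spindle_signals"].find({}, {"vibration": 1, "temperature": 1})
                                     .sort("timestamp", -1)
                                     .limit(100)
)

# 2. Derive a recommendation (placeholder logic standing in for the i4Q algorithms).
avg_vibration = sum(r["vibration"] for r in records) / len(records)
recommended_speed = 3000 if avg_vibration < 0.5 else 2500  # rpm, illustrative rule

# 3. Send the recommendation back to the machine controller over OPC UA.
opc = Client("opc.tcp://cnc-gateway:4840")
opc.connect()
try:
    node = opc.get_node("ns=2;s=Machine1.SpindleSpeedSetpoint")
    node.set_value(recommended_speed)
finally:
    opc.disconnect()
```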

4.3 An Example of the Deployment of i4Q LRT in FACTOR

The deployment phase of the i4Q Solutions consists of reviewing the objectives, requirements, and functionalities identified in previous phases, and of making all the preparations for the subsequent implementation of those solutions in a real environment. In the deployment of the LRT solution, the interoperability aspects of the solution with the FACTOR manufacturing system are analyzed first and foremost. Interoperability is therefore a key point for a successful implementation of the Line Reconfiguration Toolkit solution in FACTOR, and it should be approached from two main angles. On the one hand, interoperability is key to ensure effective communication between the management level (high-level information services such as Manufacturing Execution Systems (MES) and Enterprise Resource Planning (ERP)), where decisions based on resources and effectiveness are made, and the factory level, where the manufacturing process is performed. To solve this interoperability issue, standard communication protocols (e.g., OPC UA) are used to enable a traceable flow of data between the management and factory levels. On the other hand, interoperability is needed between the edge devices installed at the machining assets, which capture the real data of the different physical variables that affect the process and provide the real status of its health, and the microservices that execute decisions through AI. Latency and determinism imply an interoperability challenge that will be addressed through the installation of Time Sensitive Networks (TSN) supported by standard communication protocols between manufacturing assets.

The purpose of reconfiguring a manufacturing line is to make changes within the manufacturing process by adjusting the independent variables that influence the dependent variables and ultimately impact the outcome of the process. The reconfiguration process should be conducted with the aim of keeping the dependent variables within the desired ranges, which can be achieved through the proper adjustment of the independent variables. The main objective is to enhance the OEE of FACTOR's production using smart optimization techniques based on the AI algorithms that the i4Q Manufacturing Line Reconfiguration Toolkit will provide. Currently, reconfiguration is based on human decision-making (e.g., if a diameter measurement is getting higher, a tool is changed), which involves a source of error that ends in a non-optimized line configuration, reducing the OEE and therefore the productivity of the factory and resulting in economic losses, productivity disadvantages, and environmental issues due to poorly manufactured parts, the energy used in manufacturing at low productivity rates, and lost personnel time.
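The principle of adjusting an independent variable to keep a dependent variable inside its desired range can be reduced to a simple proportional rule, sketched below. The variable names, gain, and tolerance values are illustrative assumptions rather than the toolkit's actual control logic.

```python
# Illustrative sketch of keeping a dependent variable (measured diameter) inside
# its desired range by adjusting an independent variable (tool offset).
# Limits, gain, and values are assumptions, not the toolkit's actual logic.
def recommend_offset_change(measured_diameter, nominal, half_tolerance, gain=1.0):
    """Return an offset change only when the measurement leaves the desired range."""
    error = measured_diameter - nominal
    if abs(error) <= half_tolerance:
        return 0.0  # within range: no reconfiguration needed
    return -gain * error  # compensate the deviation


# Nominal 10.000 mm, +/-0.010 mm tolerance, part measured at 10.016 mm.
print(round(recommend_offset_change(10.016, 10.000, 0.010), 3))  # -> -0.016
```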

5 Conclusions

Manufacturing line reconfiguration solutions based on AI are one of the focal points of the next steps of Industry 4.0 developments, but the implementation of these solutions must take place under the umbrella of interoperable systems. This paper highlights the challenges for a successful deployment of an RMS based on the recent technological advances related to Industry 4.0, and proposes the i4Q LRT solution, developed in the European Project i4Q Industrial Data Services for Quality Control in Smart Manufacturing (i4Q Project, n.d.), for the real manufacturing facility of FACTOR. The objectives of this approach are the immediate detection of deviations in the manufacturing parameters and the online reconfiguration of the process to reduce quality problems, waste, and breakdowns in the machine tools. The main contributions of this work are, firstly, the identification of a gap in the literature about the automatic and online reconfiguration of machine tool parameters for quality control and failure avoidance using IoT technologies in an interoperable environment; secondly, the presentation and explanation of i4Q LRT and its technical background as a solution to cover the identified gap, together with the main technical challenges of a successful implementation; and finally, an example of the deployment of i4Q LRT in a real metal machining factory, which highlights the importance of taking into account the parameters that influence the OEE as a way to evaluate the improvements obtained with i4Q LRT. A future line of research will be the full implementation of the i4Q LRT solution and further analysis of the indicators defined to evaluate the fulfillment of the objectives and the scope of the improvements.

Acknowledgements The research leading to these results is part of the i4Q project that has received funding from the European Union's Horizon 2020 Research and Innovation Program, under Grant Agreement No 958205.

References

1. Koren, Y., Heisel, U., Jovane, F., Moriwaki, T., Pritschow, G., Ulsoy, G., Van Brussel, H.: Reconfigurable manufacturing systems. CIRP Ann. Manuf. Technol. 48, 527–540 (1999). https://doi.org/10.1016/S0007-8506(07)63232-6
2. Koren, Y., Gu, X., Guo, W.: Reconfigurable manufacturing systems: principles, design, and future trends. Front. Mech. Eng. 13, 121–136 (2018). https://doi.org/10.1007/s11465-018-0483-0
3. Maganha, I., Silva, C., Ferreira, L.M.D.F.: The layout design in reconfigurable manufacturing systems: a literature review. Int. J. Adv. Manuf. Technol. 105, 683–700 (2019). https://doi.org/10.1007/s00170-019-04190-3
4. Poler, R., Karakostas, A., Vrochidis, S., Marguglio, A., Galvez-Settier, S., Figueiras, P., Gomez-Gonzalez, A., Mandler, B., Wellsandt, S., Trevino, R., Bassoumi, S., Agostinho, C.: An IoT-based reliable industrial data services for manufacturing quality control. In: 2021 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), pp. 1–8. IEEE, Cardiff (2021)
5. Dubey, R., Gunasekaran, A., Helo, P., Papadopoulos, T., Childe, S.J., Sahay, B.S.: Explaining the impact of reconfigurable manufacturing systems on environmental performance: the role of top management and organizational culture. J. Clean. Prod. 141, 56–66 (2017). https://doi.org/10.1016/j.jclepro.2016.09.035
6. Pérez, D., Alarcón, F., Boza, A.: Industry 4.0: a classification scheme. In: Viles, E., Ormazábal, M., Lleó, A. (eds.) Closing the Gap Between Practice and Research in Industrial Engineering, pp. 343–350. Springer, Heidelberg (2018)
7. Bortolini, M., Galizia, F.G., Mora, C.: Reconfigurable manufacturing systems: literature review and research trend. J. Manuf. Syst. 49, 93–106 (2018). https://doi.org/10.1016/j.jmsy.2018.09.005
8. Burns, T., Cosgrove, J., Doyle, F.: A review of interoperability standards for industry 4.0. Procedia Manuf. 38, 646–653 (2019). https://doi.org/10.1016/j.promfg.2020.01.083
9. Bi, Z.M., Lang, S.Y.T., Shen, W., Wang, L.: Reconfigurable manufacturing systems: the state of the art. Int. J. Prod. Res. 46, 967–992 (2008). https://doi.org/10.1080/00207540600905646
10. Koren, Y., Shpitalni, M.: Design of reconfigurable manufacturing systems. J. Manuf. Syst. 29, 130–141 (2010). https://doi.org/10.1016/j.jmsy.2011.01.001
11. Andersen, A.L., Nielsen, K., Brunoe, T.D.: Prerequisites and barriers for the development of reconfigurable manufacturing systems for high speed ramp-up. Procedia CIRP 51, 7–12 (2016). https://doi.org/10.1016/j.procir.2016.05.043
12. Mehrabi, M.G., Ulsoy, A.G., Koren, Y.: Reconfigurable manufacturing systems: key to future manufacturing. J. Intell. Manuf. 11, 403–419 (2000). https://doi.org/10.1023/A:1008930403506
13. Yelles-Chaouche, A.R., Gurevsky, E., Brahimi, N., Dolgui, A.: Reconfigurable manufacturing systems from an optimisation perspective: a focused review of literature. Int. J. Prod. Res. 59, 6400–6418 (2021). https://doi.org/10.1080/00207543.2020.1813913
14. Garbie, I.H.: An analytical technique to model and assess sustainable development index in manufacturing enterprises. Int. J. Prod. Res. 52, 4876–4915 (2014). https://doi.org/10.1080/00207543.2014.893066
15. Mehrabi, M.G., Ulsoy, A.G., Koren, Y.: Reconfigurable manufacturing systems and their enabling technologies. Int. J. Manuf. Technol. Manage. 1, 114–131 (2000). https://doi.org/10.1504/IJMTM.2000.001330
16. Daclin, N., Mallek, S.: Capturing and structuring interoperability requirements: a framework for interoperability requirements. In: Mertins, K., Bénaben, F., Poler, R., Bourrières, J.-P. (eds.) Interoperability for Enterprise Software and Applications (I-ESA'14), pp. 239–249. Springer, Heidelberg (2014)
17. Cubero, D., Andres, B., Alarcón, F., Mateo-Casali, M.A., Fraile, F.: Toolkit conceptualization for the manufacturing process reconfiguration of a machining components enterprise. CEUR Workshop Proc. 3214 (2022). http://ceur-ws.org/Vol-3214/WS4Paper6.pdf

Part V: Standards, Ontologies and Semantics

New Ways of Using Standards for Semantic Interoperability Toward Integration of Data and Models in Industry

Yves Keraron, Jean-Charles Leclerc, Claude Fauconnet, Nicolas Chauvat, and Martin Zelm

Abstract Recent European H2020 projects and clusters, Joint Industrial Projects in industry, and advanced standards from Standardization Development Organizations converge toward new ways of using standards to enable the integration of data and applications, and thus new ways of working all along the products' and plants' lifecycle and ecosystem. In this paper, we describe innovative means developed by TotalEnergies to address the lack of interoperability of the data produced all along the lifecycle of an asset. These means result in a TotalEnergies Semantic Framework, which aims at interfacing, according to generic principles, source data with reference standards, in order to provide internal and external users with data they can process in their own applications to support their processes.

Keywords Enterprise innovation and standardization · Digitization methodology · Semantic interoperability · Data modeling · Digital twin · Semantic web · Linked data

Y. Keraron ISADEUS, 20 Avenue Des Muses, 44470 Carquefou, France e-mail: [email protected] J.-C. Leclerc (B) TotalEnergies, Tour Coupole, 2 Place Jean Millier, 92400 Courbevoie, France e-mail: [email protected] C. Fauconnet SousLeSens, 2 Rue Louis Combes, 33000 Bordeaux, France N. Chauvat Logilab, 104 Bd Auguste Blanqui, 75013 Paris, France e-mail: [email protected] M. Zelm Interop-VLab, 21 Rue Montoyer, 1000 Brussels, Belgium © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_21

1 Introduction

Industrial companies have a huge amount of data that cannot be retrieved and used easily because it is locked in: not computable, stored on paper, or kept in proprietary formats. TotalEnergies has experimented with a standards-based approach to solve these issues in an innovative way. New types of standards are necessary for industries to succeed in their digital transition. Efforts are being carried out in ISO/IEC to provide computable, model-based standards, e.g., the ISO 15926 series [1] and SMART standards [2]. European coordination projects support the integration of innovation and standards for the industry of the future, e.g., projects such as Ontocommons [3] or StandICT [4]. Professional organizations and clusters in different industrial ecosystems are publishing roadmaps and conducting Joint Industrial Projects to develop the framework needed for standards development and use in the digital era. The conceptual framework we draft in this paper is driven by a Systems Engineering mindset; the methodology is based on ontologies, to make data interpretable by all stakeholders of the ecosystem thanks to the linking to standards; and open-source tools based on semantic web technological standards are used for the implementation. This paper is organized as follows: in Sect. 2, we describe the industrial context and motivations; in Sect. 3, we present the necessary shared conceptual framework to organize and manage information on an industrial asset; in Sect. 4, we describe the TotalEnergies Semantic Framework, supported by the development of a set of open-source tools based on W3C standards to match the methodological process requirements; in the last section, we open the discussion on social challenges and conclude.

2 Context

2.1 Innovation and Standardization

Standardization and innovation may at first seem contradictory. Efforts have been carried out to raise awareness among researchers and innovators of the importance of a standardization strategy in the development of game-changing products and services. As shown in Fig. 1, innovation and standardization are in a dynamic relationship with industry competitiveness, and we need to consider the impact of the digital technological environment on these dynamics.
Standardization of data and models sharpens this problem at a time when advanced Information and Communication Technologies, such as semantic web standards and technologies, offer new opportunities for re-modeling industrial ecosystems to improve their economic balance between fulfilled needs and used resources.


Fig. 1 Standardization–industry competitiveness–innovation in a digital environment

This approach of combining knowledge and technology contributes to building a more sustainable and inclusive future and to addressing major societal challenges such as climate change.
The approach follows the concrete industrial Oil and Gas ecosystem and is based on the principles that made the success of the web. The initial proposal of Tim Berners-Lee [5] deserves a careful reading. It is striking how his analysis of the availability of information in an ecosystem, and his proposition, fit ecosystems other than CERN, such as industrial ones. The solution consists of using hypertext and insists on the separation of the data storage systems from the data display systems and on the interface between them. In the Oil and Gas ecosystem, and beyond it in the process industries, standards and the linking to standards play a key role in this interfacing, to leverage the value of existing data and to support the use and creation of knowledge for decision making in the collective processes of industrial engineering.

2.2 Industrial Context

The current way of working is often one-dimensional and fragmented: actors struggle to traverse different viewpoints such as operations, maintenance, and engineering, and aligning Reference Data Libraries from various standards and normalization groups remains challenging in terms of harmonization. In real life, we meet complexity and need to work with several hierarchical data structures that serve the needs of different perspectives in order to represent our industrial assets in a precise manner all along the lifecycle.


The industry of the future (Digital Factory, IEC CDD,¹ AAS² of RAMI 4.0,³ …) needs a holistic plan and a common methodological framework to accelerate the alignment of an Industrial Reference Data Foundation. We need to agree on and develop a governance model to enable the digital transformation. This implies structuring and digitalizing information and standards in the same way, to avoid a dispersion of initiatives. The coupling of the ISO 15926-14 [6] and ISO/IEC 81346 [7] series of standards is considered to offer a reference backbone adapted to structuring and correctly linking together the data of multi-energy assets.

3 The Necessary Shared Conceptual Framework

We have already addressed the need for a shared conceptual framework for data interoperability in the domain of maintenance [8]. This conceptual framework shall fit the formal ontology of industrial engineering, that is, of systems engineering, as a foundation for other domain ontologies [9]. The approach is focused on the composability of the different parts of a complex system, seen as a dynamic whole, open to legacy and future models, based on existing, in-the-making, or future standards. This is the best guarantee to bring sustainable solutions to long-standing interoperability issues and to the current inability to exploit the entire actionable legacy knowledge of a company.

3.1 ISO/IEC 81346 and Systems Engineering

Our approach is based on existing, proven, cross-domain standards such as ISO/IEC 81346, the reference designation system standard [7], which introduces the concept of aspect, a key to a common approach for integrating different perspectives. ISO/IEC 81346 brings very helpful principles for building a complex system, that is, for building it from heterogeneous parts and bringing them together in a common framework. It fits with a systems engineering approach and is the pillar of the asset Information Management Framework (IMF) [10] of the READI JIP project [11], which aims at digitalizing the requirements of the standards applicable to the Oil and Gas exploration sector in Norway.

¹ IEC CDD: International Electrotechnical Commission Common Data Dictionary.
² AAS: Asset Administration Shell.
³ RAMI 4.0: Reference Architectural Model for Industry 4.0.


ISO/IEC 81346 offers the necessary and sufficient framework to support a systems engineering methodology, to provide linked structures for the management of all the information produced in the lifecycle of an asset, and to specify the data models and standards to be used in different contexts. This framework opens the door to the benefits brought by early verification of the conformity of a design with its requirements through early trustworthy digital twins, which will be further completed and used in the operation phases with connection to the streams of data from the sensors of the physical observable asset, associated with data analytics.
There are three primary aspects of an object according to ISO/IEC 81346-1 [7], namely: (i) the function aspect, used to highlight the functional relations among the components of the system; (ii) the product aspect, used to highlight the constructional relations of the components of the system; and (iii) the location aspect, used to highlight the spatial relations among the components of the system. These aspects are supported by ISO/CD TR 15926-14, the data model adapted to OWL 2 Direct Semantics [6]. Data standards are used for specific contexts, e.g., ISO 14224 for reliability and maintenance data.
ISO/IEC 81346-1 [7] also brings principles of modularity and configuration management, enabling the reuse of modules and the follow-up of modifications during the lifecycle, a critical topic of configuration management. It benefits from decades of application in various industries, such as power generation, and supports the integration of existing designation systems. This flexibility and scalability make the standard easier to implement in industry and able to integrate existing sets of data. ISO/IEC 81346-10 [12], in its new version published in 2022, specifies the reference designation systems for objects of energy production systems.
ISO 15926-14 reconciles the needs of the whole ecosystem and supports its coherence, because applying the W3C recommendations supports a self-regulating, self-developing, and self-reinforcing information system with common constraints.
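To make the aspect idea concrete, the following minimal sketch records reference designations of a single object under the three aspects. The class layout and designation strings are invented for the example; the single-character prefixes used here are the ones conventionally associated with the three aspects, but none of this is taken from the TSF tooling itself.

```python
from dataclasses import dataclass, field

# Illustrative prefixes for the three primary aspects of ISO/IEC 81346-1:
# "=" function aspect, "-" product aspect, "+" location aspect.
ASPECT_PREFIXES = {"function": "=", "product": "-", "location": "+"}

@dataclass
class DesignatedObject:
    """A single asset object carrying one reference designation per aspect."""
    name: str
    designations: dict = field(default_factory=dict)  # aspect -> designation

    def designate(self, aspect: str, code: str) -> None:
        prefix = ASPECT_PREFIXES[aspect]  # fails fast on unknown aspects
        self.designations[aspect] = prefix + code

# Hypothetical low-voltage motor, seen from the three aspects.
motor = DesignatedObject("LV motor 01")
motor.designate("function", "P1.M1")   # its role in the pumping function
motor.designate("product", "M01")      # the constructional item installed
motor.designate("location", "A1.R2")   # where it sits on the plant

print(motor.designations)
# {'function': '=P1.M1', 'product': '-M01', 'location': '+A1.R2'}
```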

3.2 Application in the Context of an Oil and Gas Pilot

Contributions to normalization bodies and groups such as IOGP⁴-DISC,⁵ JIP33, and JIP36 CFIHOS⁶ are unique opportunities to cross several knowledge domains and to experience a methodology and its applicability together, by developing a non-competitive learning space in which to experiment jointly on real asset data.

⁴ IOGP: International Oil and Gas Producers.
⁵ DISC: Digitalization and Information Standards Subcommittee.
⁶ CFIHOS: Capital Facilities Information Hand Over Specification.


In IOGP (2020–2021), we conducted a Low-Voltage Motor Full Digital-Twin Pilot of standard modeling and digitalization in partnership with the JIP⁷ READI.⁸ It makes it possible to structure fragments of information step by step, in such a way that each step provides immediate value without the need for a huge upfront investment. Finally, we believe that a common language across our disciplines and métiers is of key importance; this standard language creates unambiguous understanding among all stakeholders, internally and externally. This semantization layer, described in Sect. 4, is key to obtaining adequate data quality, in a trustable and reliable way, for data analytics and further reuse in other applications.

4 TotalEnergies Approach of a Standard Semantic Framework: A Methodology Supported by SousLeSensVocables Tools

4.1 TotalEnergies Semantic Framework (TSF) Foundations

Relational models and databases hardly manage complex and evolving data and face efficiency challenges when trying to interconnect data, because of the "spaghetti plate" effect. The semantic web approach is the most convenient way to implement standards through ontologies:
• Capability to manage complexity with the concept of aspects;
• Agility to evolve, compared to the rigidity of SQL models.
Implementing the TSF process implies that the standards used are expressed as so-called controlled vocabularies that ensure disambiguated semantics both for human communication and for machine processing. For controlled vocabularies we use W3C standards such as SKOS [13] for thesauri and OWL [14] for ontologies, both having a graph structure. In addition, semantic web technologies define the Linked Data principles, which provide, for each atomic resource, a Uniform Resource Identifier (URI) and possibly a URL (Uniform Resource Locator), which can also be a hyperlink to a web site that hosts the resource's identity card.
The TSF methodology and tools are designed to align enterprise data on standards using semantic web technologies and are guided by several principles:
• Transparency and auditability of all treatments (no black-box effect);
• Usability by business experts in a collaborative process;
• An iterative and cumulative process delivering both short-term and patrimonial value.

⁷ JIP: Joint Industry Project.
⁸ READI: REquirement Asset Digital lifecycle Information.
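As a concrete illustration of the controlled-vocabulary layer described in Sect. 4.1, the sketch below builds a tiny SKOS concept scheme with rdflib and mints URIs for its concepts. The namespace, concepts, and labels are invented for the example and are not taken from the TSF vocabularies.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import SKOS, RDF

# Hypothetical namespace; a real deployment would use a governed enterprise namespace.
EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

scheme = EX["EquipmentScheme"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))

# Two concepts, each identified by a URI and described with SKOS labels.
pump = EX["CentrifugalPump"]
g.add((pump, RDF.type, SKOS.Concept))
g.add((pump, SKOS.prefLabel, Literal("centrifugal pump", lang="en")))
g.add((pump, SKOS.altLabel, Literal("pompe centrifuge", lang="fr")))
g.add((pump, SKOS.inScheme, scheme))

motor = EX["ElectricMotor"]
g.add((motor, RDF.type, SKOS.Concept))
g.add((motor, SKOS.prefLabel, Literal("electric motor", lang="en")))
g.add((motor, SKOS.related, pump))  # purely illustrative association
g.add((motor, SKOS.inScheme, scheme))

print(g.serialize(format="turtle"))
```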


Standards have historically been expressed in .pdf format, including tables and text. For some years now, some pioneering standards have been natively expressed or translated in OWL. TSF uses ontologies natively expressed in OWL by their editor or, when necessary, builds an OWL ontology from the native format, .csv or even .pdf tables: SousLeSensVocables [15] manages mainly graphs and URIs at all levels.
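A minimal sketch of this kind of lifting, assuming a two-column CSV (class label, parent label) as the native input; the file name, column layout, and target namespace are assumptions made for the example, not the actual TSF import format.

```python
import csv
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/standard/")  # hypothetical target namespace

def to_class_uri(label: str):
    """Mint a class URI from a label (naive CamelCase slug for the example)."""
    return EX["".join(part.capitalize() for part in label.split())]

g = Graph()
g.bind("owl", OWL)
g.bind("ex", EX)

# Assumed CSV layout: label,parent_label  (parent may be empty).
with open("standard_classes.csv", newline="") as f:
    for row in csv.DictReader(f):
        cls = to_class_uri(row["label"])
        g.add((cls, RDF.type, OWL.Class))
        g.add((cls, RDFS.label, Literal(row["label"], lang="en")))
        if row.get("parent_label"):
            g.add((cls, RDFS.subClassOf, to_class_uri(row["parent_label"])))

g.serialize("standard.ttl", format="turtle")
```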

4.2 The TotalEnergies Semantic Framework Processes Supported by SousLeSensVocables (SLSV)

The most valuable outcomes of the TSF process consist in producing standardized knowledge graphs: the foundations for digital twins and business domain ontologies. This provides a framework for both data and knowledge governance, as shown in Fig. 2. SLSV is a set of experimental tools that we developed and continuously improve to implement this process. These tools can be used separately to explore each step of the process as well as to orchestrate the generation of standardized knowledge graphs. The implementation of the TSF processes faces different challenges, as described in Fig. 3.

Fig. 2 TSF framework to manage asset lifecycle stages with standards, ontologies, and graph management tools


Fig. 3 TSF process challenges in transforming traditional datasets into knowledge graphs

4.3 Standards TBOX Discovery and Comparison Based on Label Similarity

To reach the TSF goals and choose the most convenient classes among all available standards, one needs to explore and compare the standards in their different aspects. This requires an interactive and visual environment. In the SousLeSensVocables toolbox, shown in Fig. 4, this ontology exploration task is mainly carried out with the Lineage tool. It allows users to navigate graphically inside the ontologies' taxonomies, open and close node branches, search and compare classes and properties by their labels, explore linked nodes, and edit all metadata. The tools also allow creating and modifying nodes and relations in a controlled process. These technical functions help to evaluate:
• The completeness and precision of the concepts (called here classes);
• The quality of the taxonomies organizing classes in hierarchical trees of concepts;
• The richness of the predefined properties describing the intrinsic semantic links between classes;
• The richness of the metadata associated with classes: definitions, synonyms, links to other standards.

Fig. 4 SousLeSensVocables tools and their functional perimeter


4.4 ABOX Data Mapping to Standard TBOX

To convert tabular data into knowledge graphs and set up the enterprise ontology models, the first step is to map the words contained in the data to classes of the standard using labels. Theoretically, the content of a table column should be semantically consistent, so mapping the whole column to a standard class should be enough. Unfortunately, this is not always the case, because the semantics and content of relational data models are much less precise and more ambiguous than the fine and smart semantics of the standards. So, for table columns containing terms, it is necessary to analyze the semantic context of each value while mapping it to the relevant standard classes. This task is achieved in SLSV using the Standardizer tool coupled with the KGMappings tool, schematized in Fig. 5. The Standardizer tool extracts distinct values from table columns containing terms (not numbers) and tries to align them with the labels (names) of classes from several standards. For terms without an exact match, the Standardizer tool offers a search engine to find fuzzy matches in the standards, or even search capabilities to perform a manual mapping. It should be noted that the effort to finalize the vocabulary mapping will be drastically reduced for future data integrations belonging to the same business domain.
The smartest standard ontologies contain not only class taxonomies but also properties and constraints that link classes together by design. Using these predefined links between reference classes (technically, domains, ranges, and restrictions), it is possible to infer many triples of the knowledge graph just by using the transitive link

Fig. 5 Using transitivity between data, classes, and ontology constraints to generate efficient and consistent knowledge graphs


Fig. 6 From tabular data to standardized knowledge graph generation and exploitation

between data typed to classes, the classes themselves being linked in the ontology TBOX [16] by some properties. This mechanism ensures the consistency of the knowledge graph and allows a processable quality control between the requirements as defined in the standards and the reality of the data.
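A toy sketch of the term-to-class alignment step, assuming that only class labels are available to match against; the labels, the similarity threshold, and the use of difflib are illustrative choices, not the Standardizer's actual matching algorithm.

```python
import difflib

# Hypothetical label index: class URI -> preferred label (from the standard TBOX).
CLASS_LABELS = {
    "ex:CentrifugalPump": "centrifugal pump",
    "ex:ElectricMotor": "electric motor",
    "ex:PressureTransmitter": "pressure transmitter",
}

def map_term(term: str, threshold: float = 0.75):
    """Return (class_uri, kind) for a table value, or (None, 'unmatched')."""
    normalized = term.strip().lower()
    # 1. Exact match on labels.
    for uri, label in CLASS_LABELS.items():
        if normalized == label:
            return uri, "exact"
    # 2. Fuzzy match: best label above the similarity threshold.
    best_uri, best_label = max(
        CLASS_LABELS.items(),
        key=lambda kv: difflib.SequenceMatcher(None, normalized, kv[1]).ratio(),
    )
    score = difflib.SequenceMatcher(None, normalized, best_label).ratio()
    if score >= threshold:
        return best_uri, f"fuzzy ({score:.2f})"
    return None, "unmatched"  # left for manual mapping by a business expert

for value in ["Electric motor", "centrifugal pmp", "flow meter"]:
    print(value, "->", map_term(value))
```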

4.5 Knowledge Graph Construction and Management

Once the mapping is achieved, it is time to generate the knowledge graph. This task is done automatically by the KGgenerator tool, which takes the data, the mappings, and the dictionaries and produces sets of triples, the concrete form of the knowledge graph. SLSV also prototypes the KGbrowser tool, which allows navigating through the knowledge graphs and performing complex queries on them, combining graph traversal algorithms and automatically generated SPARQL queries, as presented in Fig. 6.
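The sketch below shows, under the same illustrative assumptions as the previous snippets, how mapped rows could be turned into typed individuals and then queried with SPARQL; it is a minimal stand-in for the KGgenerator/KGbrowser pipeline, not its actual implementation.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/standard/")   # hypothetical TBOX namespace
DATA = Namespace("http://example.org/data/")     # hypothetical ABOX namespace

# Rows already mapped to standard classes (output of the mapping step).
mapped_rows = [
    {"tag": "P-101-M", "class": EX.ElectricMotor, "power_kW": 15.0},
    {"tag": "P-101", "class": EX.CentrifugalPump, "power_kW": None},
]

g = Graph()
for row in mapped_rows:
    individual = DATA[row["tag"].replace("-", "_")]
    g.add((individual, RDF.type, row["class"]))       # data typed to a standard class
    g.add((individual, RDFS.label, Literal(row["tag"])))
    if row["power_kW"] is not None:
        g.add((individual, EX.ratedPowerKW, Literal(row["power_kW"])))

# An automatically generated SPARQL query: list every individual of a given class.
query = """
PREFIX ex: <http://example.org/standard/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?label WHERE {
    ?item a ex:ElectricMotor ;
          rdfs:label ?label .
}
"""
for item, label in g.query(query):
    print(item, label)
```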

5 Discussion and Conclusion

The development and use of the TotalEnergies Semantic Framework are an innovation that faces different challenges. The technical ones should be overcome through projects proving the value of the methodology and of the chosen technology, and its scalability to the large data volumes of a real asset. We observe that the main remaining challenge is a social one. Innovation needs support, trust, and protection from skepticism to grow progressively and safely. Thus, the social aspect appears to be key to motivating teams to carry out the efforts required for a successful digital transition.
Our main motivation for developing a methodology and tools that manage industry standards, semantic web technologies, and data together is that this is probably the most efficient and promising way to link complex knowledge and data for existing assets. This shapes future information systems, such as digital twins, in a context of interoperability and helps to


support the structuring of patrimonial information and knowledge capture in a coherent way. This change of mindset supports a progressive participation of the stakeholders in connecting our data and taxonomies to higher-level common conceptual nodes. We believe that this is the key to removing bottlenecks in our fragmented information systems. A new generation of tools, adapted to industrial requirements in terms of functionality and performance, is being developed; these tools are interoperable thanks to their standards compliance. This approach needs further co-development and cooperation of the internal and external actors through collective intelligence processes to bring coherence and resilience to the industrial ecosystems.

References
1. ISO 15926-1:2004, Industrial automation systems and integration—integration of life-cycle data for process plants including oil and gas production facilities—Part 1: Overview and fundamental principles
2. What's next in Standards and Standards Publishing at ISO and IEC. https://www.typefi.com/standards-symposium-2021/whats-next-in-standards-publishing-iso-iec/. Last Accessed 13 Oct 2021
3. Ontocommons, Ontology-Driven Data Documentation for Industry Commons. https://ontocommons.eu. Last Accessed 13 Oct 2021
4. StandICT. https://www.standict.eu. Last Accessed 03 Oct 2021
5. Information Management: A Proposal, the original document of Tim Berners-Lee, March 1989. https://www.w3.org/History/1989/proposal.html. Last Accessed 27 Sep 2021
6. ISO/CD TR 15926-14: Industrial automation systems and integration—integration of life-cycle data for process plants including oil and gas production facilities—Part 14: Data model adapted for OWL2 Direct Semantics
7. ISO/IEC 81346-1:2009—Industrial systems, installations and equipment and industrial products—Structuring principles and reference designations—Part 1: Basic rules
8. Keraron, Y., Despujols, A.: Maintenance terminology standards: some issues and the need of a shared framework for interoperability. I-ESA 2020, Interoperability for Maintenance Workshop, November 2021
9. Lu, J., Keraron, Y., Cameron, D., Smith, B., Kiritsis, B.: Systems Engineering as the foundation for industrial domain ontologies. SemWeb.Pro 2021. Last Accessed 17 Mar 2021
10. READI JIP—IMF, Asset Information Model Framework. https://readi-jip.org/asset-information-modelling-framework. Last Accessed 11 Sep 2021
11. READI JIP—Shaping the Future of Digital Requirements and Information Flow in the Oil and Gas Value Chain. https://readi-jip.org. Last Accessed 17 Sep 2021
12. ISO/TS 81346-10:2015—Industrial systems, installations and equipment and industrial products—Structuring principles and reference designations—Part 10: Power plants
13. W3C SKOS—Simple Knowledge Organization System. https://www.w3.org/TR/skos-reference/. Last Accessed 23 Sep 2021
14. W3C OWL 2—Web Ontology Language. https://www.w3.org/TR/owl2-primer/. Last Accessed 22 Sep 2021
15. SousLeSensVocables under MIT license. https://github.com/souslesens/souslesensVocables. Last Accessed 22 Sep 2021
16. TBox and ABox reasoning in expressive description logics. https://www.aaai.org/Papers/Workshops/1996/WS-96-05/WS96-05-004.pdf. Last Accessed 02 Oct 2021

Business Context-Based Quality Measures for Data Exchange Standards Usage Specification
Elena Jelisic, Nenad Ivezic, Boonserm Kulvatunyou, Scott Nieman, and Zoran Marjanovic

Abstract Standards-based methods for data exchange are key for Business-to-Business (B2B) integration. However, in the case of small businesses, there are significant barriers to utilizing these methods. One of the reasons is that the standards are large, making their use very difficult. The most recent attempt to resolve this issue was the first OAGIS Express Pack version, which is defined as a Minimal Viable Product (MVP) of the OAGIS standard. Yet, there is no existing approach to assess the quality of the OAGIS Express Pack usage specification. This is important because such a quality measurement would give actionable feedback on whether the standard (or a portion of it) meets the integration requirements. These measurements could help reduce standards-development time, standards-adoption time, and integration-testing time, while increasing the overall effectiveness of the integration process. This paper introduces two quality measures, illustrates their measurement, and shows how to interpret the results.

Keywords Data exchange standard · Quality measures · Usage specification · Implementation guideline · Digitalization · Small and medium enterprise

E. Jelisic (B) · N. Ivezic · B. Kulvatunyou
Systems Integration Division, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
e-mail: [email protected]
N. Ivezic
e-mail: [email protected]
B. Kulvatunyou
e-mail: [email protected]
E. Jelisic · Z. Marjanovic
Faculty of Organizational Sciences, University of Belgrade, 11000 Belgrade, Serbia
e-mail: [email protected]
S. Nieman
Land O'Lakes, Arden Hills, MN, USA
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_22


1 Introduction

The Small Business Administration (SBA) Office of Advocacy cites that small businesses account for roughly 40% or more of the Gross Domestic Product of the United States, contributing extensively to job growth and innovation [1]. Furthermore, small businesses comprise 99.9% of all U.S. businesses. Key digital, supply chain-related sectors of interest include Manufacturing (44%), Food Services (60%), Agriculture (85%), and Technical/Scientific (57%) [1]. Despite the large numbers and perceived impact, there are significant barriers for small businesses to get connected. Their most common challenges include (1) the lack of skilled technical resources, especially in rural areas; (2) the salary and consulting cost of these resources; (3) large standards, which make integrations more difficult; (4) the lack of software applications with usable application programming interfaces (APIs) enabling data exchange; (5) vendors of software applications more interested in "locking in" customers to their products and in reliance on professional services; (6) the high cost of integration tools that enable these capabilities; and (7) the lack of broadband Internet in rural areas.
In order for small businesses to leverage data exchange standards (DESs) for integration, in many cases a consensus on how to use those standards is still needed. Such a consensus is usually included in an implementation guideline [2], which is a refinement of a DES. In addition, tools to support the management of these usage specifications have historically been lacking. Consequently, users often resort to heterogeneous spreadsheets. Interpretation of certain DESs, due to their flexibility and coverage, requires highly skilled professionals with substantial standards maintenance and implementation experience.
The Score tool, developed by the Open Applications Group Inc. (OAGi) [3] and the National Institute of Standards and Technology (NIST) [4], was designed to speed up the development of DESs and their usage specifications [5, 6]. Score is the first and only open-source tool based on the ISO Core Components Technical Specification (CCTS) metamodel. CCTS provides both the required research and the operational support for enterprise-integration architects, business analysts, integration developers, and standards architects. The tool was used to advance research on life-cycle management [6, 7] and quality improvement of DESs [8]. Additionally, it has been used to develop new releases of the Open Applications Group Integration Specification (OAGIS) standard [9]. Most recently, Score was employed to define a Minimal Viable Product (MVP) of the OAGIS DES for use by Small and Medium Enterprises [10]. The first OAGIS Express Pack version was released in March 2021. The OAGIS Express Pack reflects "the requirements gathered from SMEs over many years in an 80–20 principle (i.e., 80% of the users need only 20% of the product) approach" [7].
Currently, however, the Score platform does not measure whether a candidate DES usage specification meets implementation requirements. Failures at the enterprise level, caused by DES usage failures, can be very costly and time-consuming to fix. Data-exchange errors at the production level can result in scrapping entire product batches. Without such a


measurement mechanism, candidate DESs must undergo a lengthy testing and validation process. This testing could be greatly reduced with effective initial measurements. Usage specifications and quality measurements would help reduce DES usage-development time, standards-adoption time, and integration-testing time while increasing the overall effectiveness of DESs. The main contribution of this paper is to introduce usage specification quality measures, propose how to measure them, and discuss their interpretation.
The structure of the paper is as follows. Section 2 provides the necessary background. Section 3 introduces the proposed quality measures and gives instructions for their measurement. Section 4 uses the OAGIS Express Pack to illustrate the quality measures. Section 5 discusses the results of the quality measures and proposes future research directions. Finally, Sect. 6 concludes the paper.

2 Background

The Core Components Technical Specification (CCTS) is an ISO-approved, implementation-neutral, meta-model standard that improves the practice of developing and using DESs [11]. CCTS introduces two types of data modeling components: Core Components (CCs), as DES building blocks, and Business Information Entities (BIEs), as DES usage specifications. CCs are conceptual data-model components, while BIEs are logical components that restrict the underlying CCs to a specific Business Context [8]. Business Context is a concept introduced by CCTS to describe the integration use case(s) a BIE captures, using a directed acyclic graph. Each Business Context is described by a set of Business Context categories that have an assigned list of values. UN/CEFACT proposed eight Business Context categories, but one can choose one's own list of Business Context categories beyond those proposed; most often such unique categories relate to business processes. Besides OAGIS [12], the CC part of CCTS has been adopted by several DESs such as UBL [13] and NIEM [14].
Effective business context is another important concept [2]. Keeping in mind that each BIE has an associated Business Context, the Effective business context of a BIE is calculated to determine whether the BIE is relevant for a requested integration use case. It is calculated as the intersection between the BIE's assigned Business Context and the Business Context of the requested integration use case (i.e., the requested Business Context). The intersection is determined for each employed Business Context category. If the intersection for any category is an empty set, the Effective business context is resolved as null, and the component is treated as not relevant for the requested integration use case. Otherwise, if the Effective business context is not null, the component is treated as relevant (details can be found in [15]).
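A small sketch of the Effective business context intersection described above, with Business Contexts represented as dictionaries mapping each category to a set of admissible values; the category names and values echo the paper's later example, while the data structure itself is an assumption made for illustration.

```python
def effective_business_context(assigned: dict, requested: dict):
    """Intersect two Business Contexts category by category.

    Assumes both contexts employ the same Business Context categories.
    Returns None (component not relevant) if any category yields an empty
    intersection; otherwise returns the category-wise intersection.
    """
    effective = {}
    for category, requested_values in requested.items():
        common = assigned.get(category, set()) & requested_values
        if not common:
            return None  # empty intersection for a category -> not relevant
        effective[category] = common
    return effective

# BIE's assigned Business Context (two targeted use cases folded together).
assigned = {
    "Size of organization": {"S", "M"},
    "Item type": {"Finished goods"},
    "Geo-political location": {"USA"},
}
# Requested Business Context for a new integration use case.
requested = {
    "Size of organization": {"S"},
    "Item type": {"Finished goods"},
    "Geo-political location": {"USA"},
}

print(effective_business_context(assigned, requested))
# {'Size of organization': {'S'}, 'Item type': {'Finished goods'}, 'Geo-political location': {'USA'}}
```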


3 DES Usage Specification Quality Measures

During the DES usage specification development process, only a limited list of integration use cases can be accounted for. Otherwise, the process would become inefficient, and the usage specification would become difficult for developers to use. Each integration use case is specified by a combination of values that are valid for each employed Business Context category. This paper proposes a measurement methodology to predict the performance of candidate DES usage specifications. The main technical idea is to develop a CCTS Business Context-based measurement method that advances the notion of standards performance given a set of requirements.
Contextual information is important for successful DES-based integration. Contextual information indicates meta-data about the integration. Examples of contextual information include the overall objective of the integration workflow, the data objects associated with a workflow task, and the specific industry and country in which the task will be performed. Contextual information narrows the semantics and value domains of the DES components. Similarly, it explicates the semantics of the integration and exchange requirements (e.g., message structure, value domains). Two quality measures are suggested to support the assessment of a DES usage specification:
• Completeness of coverage—measures how completely the standard covers the data exchange requirements.
• Effectiveness—measures how focused and compact the standard is in meeting the requirements.

3.1 Completeness of Coverage

The Completeness of coverage measure indicates the difference between the Business Contexts of the projected and targeted integration use cases. Targeted integration use cases are those that were originally accounted for, while projected ones are those that we hope the DES usage specification will be able to cover. If there is an identified difference between these two Business Contexts (e.g., the Business Context of the projected integration use cases is wider), this indicates that the scope of the projected integration use cases is not covered entirely. For example, some required DES components might be missing from the usage specification, or the value domains might be too restricted. This situation would lead to the conclusion that the content of the DES usage specification should be revised and analyzed to identify the potentially missing components. Another scenario is that the projected use cases are narrower. This would lead to the conclusion that components of the DES usage specification are too general and need to be refined. However, this scenario is neglected here and left for future work, since it requires a more detailed analysis.
The numerical result of Completeness of coverage informs us about the portion of projected integration use cases that are likely to be covered entirely by the existing DES usage specification. This measure may be interpreted to assess the


quality of the structure of the DES usage specification. To measure the Completeness of coverage of a DES usage specification, two scopes have to be identified. The first one, the Targeted scope (TS), is the union of the N integration use cases that were accounted for when the DES usage was specified:

TS = ∪ IUC_i, i = 1, …, N.    (1)

The second one, the Projected scope (PS), identifies the M projected integration use cases for which we hope the developed DES usage specification will be applicable:

PS = ∪ IUC_j, j = 1, …, M.    (2)

The ratio between those two scopes gives us the Completeness of coverage. The Number of intersecting integration use cases is the number of integration use cases that can be found in both the TS and the PS:

Completeness of coverage = Number of intersecting integration use cases / M.    (3)

3.2 Effectiveness

The Effectiveness measure involves two classical parts: precision and recall [16]. To assess the Effectiveness of the DES usage specification, we go through the following steps. First, we select a list of arbitrary test integration use cases from the TS. Second, for each test integration use case, we calculate the Effective business context (see Sect. 2). Third, we calculate the precision and recall as follows:

Precision = Number of true relevant components / (Number of true relevant components + Number of false relevant components).    (4)

Recall = Number of true relevant components / Total number of components needed for the targeted use case.    (5)

The precision rate denotes the capability of a DES usage specification to identify only components that are relevant for the targeted integration use cases. On the other hand, the recall rate denotes the capability of the DES usage specification to identify all components that are needed for the same use cases. Low precision and recall rates indicate that the DES usage specification is not informative enough to support recognition of all relevant components. This situation would indicate that the contextualization should be revised and improved. This measure may be interpreted to assess the quality of the contextualization of the DES usage specification.
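A compact sketch of the three computations, using the figures from the paper's OAGIS Express Pack example later in Sect. 4 (2 of 8 projected use cases intersect the targeted scope; 11 of 12 retrieved components are truly relevant; all 11 needed components are retrieved). The set-based representation of use cases and the shortened use-case labels are assumptions made for illustration.

```python
def completeness_of_coverage(targeted: set, projected: set) -> float:
    """Share of projected integration use cases already present in the targeted scope."""
    return len(targeted & projected) / len(projected)

def precision(true_relevant: int, false_relevant: int) -> float:
    return true_relevant / (true_relevant + false_relevant)

def recall(true_relevant: int, needed: int) -> float:
    return true_relevant / needed

# Shorthand labels for the use cases of Tables 2 and 3 (size-item-country-industry-PO).
targeted = {"S-FG-USA-Feed-PO", "M-FG-USA-Feed-PO"}            # Table 2
projected = targeted | {
    "S-FG-CAN-Feed-PO", "M-FG-CAN-Feed-PO",
    "S-RM-USA-Feed-PO", "M-RM-USA-Feed-PO",
    "S-RM-CAN-Feed-PO", "M-RM-CAN-Feed-PO",
}                                                               # Table 3

print(completeness_of_coverage(targeted, projected))  # 0.25
print(precision(true_relevant=11, false_relevant=1))  # ~0.92
print(recall(true_relevant=11, needed=11))            # 1.0
```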


4 OAGIS Express Pack Use Case

Starting with a B2B process definition, which communicates the scope of a use case, one selects a schema to support a specific information exchange in the process. Let us assume that the PurchaseOrder schema from the OAGIS Express Pack [17] has been identified. Figure 1 outlines the schema that will be used to illustrate the proposed measures.

Fig. 1 An example OAGIS Express Pack PurchaseOrder schema

4.1 Business Context Knowledge Base

For this paper, we employ five Business Context categories (see Table 1). Figure 2 shows a portion of the Business Context knowledge base for the OAGIS Express Pack, comprising the identified categories and schemes. Gray nodes denote the TS, while red rounded-rectangle nodes denote the PS. As emphasized in Sect. 3, each integration use case is specified by valid combinations of values for each employed Business Context category. That means that if Canada, hypothetically, did not have organizations of size S, then this combination of values for the employed Business Context categories (i.e., this integration use case) would not be valid. This does not necessarily have to be a general rule; instead, it can encode applied domain business rules (e.g., OAGIS Express Pack business rules). For this paper, we do not consider such rules, so all combinations are valid.

Table 1 Business context definition

Business context category    Business context schemes
Size of organization         Size list
Item type                    Products and services
Industry                     ISIC [18]
Geo-political location       Countries
Business process             Order-to-cash


Fig. 2 OAGIS Express Pack business context knowledge base

Table 2 Targeted scope

Number    Targeted integration use cases
1         S–Finished goods–USA–Animal feeds¹–PurchaseOrder
2         M–Finished goods–USA–Animal feeds–PurchaseOrder

4.2 OAGIS Express Pack Quality Measures' Results

In this section, we present the measurement and results of the proposed DES usage specification quality measures.
Completeness of coverage. To measure the Completeness of coverage, we first determine the intersection between the TS and the PS. The list of targeted integration use cases is presented in Table 2, while the list of projected integration use cases is presented in Table 3. The number of intersecting integration use cases is 2, and consequently the result for the Completeness of coverage is:

Completeness of coverage = 2 / 8 = 0.25.    (6)

To illustrate the intended interpretation of Completeness of coverage (i.e., the portion of the projected integration use cases that are likely to be covered entirely by the existing DES usage specification, as stated in Sect. 3.1), we analyze the PurchaseOrder requirements for a new use case that is included in the given projected integration

¹ Abbreviated from Manufacture of prepared animal feeds.


Table 3 Projected scope

Number    Projected integration use cases
1         S–Finished goods–USA–Animal feeds–PurchaseOrder
2         M–Finished goods–USA–Animal feeds–PurchaseOrder
3         S–Finished goods–Canada–Animal feeds–PurchaseOrder
4         M–Finished goods–Canada–Animal feeds–PurchaseOrder
5         S–Raw material–USA–Animal feeds–PurchaseOrder
6         M–Raw material–USA–Animal feeds–PurchaseOrder
7         S–Raw material–Canada–Animal feeds–PurchaseOrder
8         M–Raw material–Canada–Animal feeds–PurchaseOrder

use cases and differs from the targeted integration use cases. For this paper, we hypothesize the PurchaseOrder requirements of some Company A whose business can be described by integration use case No. 7 from Table 3. Figure 3 shows the PurchaseOrder requirements for Company A. If we compare Company A's requirements with the OAGIS Express Pack PurchaseOrder schema, it is obvious that five components are missing. Those components are underlined in Fig. 3. This is possible and likely to occur, since this use case was not covered by the targeted integration use cases. Yet, for very homogeneous application domains, where new requirements do not result in BIEs that are very different from the existing BIEs, the measure may be too pessimistic.

Fig. 3 Projected scope–Company A PurchaseOrder requirements

Effectiveness. To measure the Effectiveness, the first step is to choose a set of test integration use cases from the TS. Let us assume one such example test case is Company B, which declared that its business can be described by integration use case No. 1 from Table 2. Company B's PurchaseOrder requirements are presented in Fig. 4. The next step is to calculate the Effective business context for the OAGIS Express Pack components from the PurchaseOrder schema.


Fig. 4 Targeted scope–Company B PurchaseOrder requirements

For this example, the required Business Context is Company B's (i.e., integration use case No. 1 from Table 2). All components from the OAGIS Express Pack PurchaseOrder schema have the same assigned Business Context, which is defined by the list of identified integration use cases for the TS (see Table 2). Further, the Effective business context is calculated as described in Sect. 2. Since the calculation of the Effective business context for this example is trivial, the details are omitted. According to the Effective business context calculation, all components from the OAGIS Express Pack PurchaseOrder schema are valid for Company B. However, according to Company B's PurchaseOrder requirements presented in Fig. 4, the countrySubDivision component is not relevant. In summary, for this simplified test integration use case, 11 components are true relevant and 1 component is false relevant. Consequently, the precision and recall rates are:

Precision = 11 / 12 = 0.92.    (7)

Recall = 11 / 11 = 1.    (8)

5 Discussion and Future Work

The proposed quality measures are envisioned to be used as guidelines for more effective DES usage specification development, standards adoption, and integration testing. If the projected scope is significantly different from the targeted scope, it is not realistic to expect the Completeness of coverage result to be close to 1. The DES usage specification development team should agree on acceptable results. In the example shown in this paper, that result was notably low. However, this measure only indicates the probability that the DES usage specification will miss components needed for the projected integration use cases. In practice, this does not have to be the case, especially if the targeted and projected scopes are close enough. With this in mind, it would be useful to introduce an additional quality measure that determines the similarity between the targeted and projected scopes. We believe that such


a measure would give more precise indications about the quality of the structure of the DES usage specification. In addition, for the measurement of Completeness of coverage, only missing BIEs are discussed. In other words, the assumption is that no usage specification (BIEs) has been defined for some existing DES components (CCs). A separate problem arises if the DES does not contain the needed component at all (i.e., the component is missing at the CC level). This issue should be addressed in future work.
In the example shown in this paper, Effectiveness gave notably good results. Such results are surprising, since all components from the OAGIS Express Pack PurchaseOrder schema have the same assigned Business Context. In reality, this would not be the case, since not all components are relevant for all targeted integration use cases. Although this functionality is not currently supported in Score, such variability of the components' assigned Business Contexts would indeed require reliable measures of the quality of the contextualization of the DES usage specification. This remains an area of ongoing research.

6 Conclusion

The paper proposes a CCTS Business Context-based measurement method that advances the notion of standards performance given a set of integration requirements. Two quality measures were introduced that can be used to assess the quality of a DES usage specification. The paper employs the OAGIS Express Pack PurchaseOrder schema to illustrate the measurements and to interpret the results. Although these quality measures would be useful, certain caveats have been identified. Future research will propose enhancements to tackle those issues.

7 Disclaimer

Any mention of commercial products is for information only; it does not imply recommendation or endorsement by NIST.

References
1. Small Businesses Generate 44 Percent Of U.S. Economic Activity. https://advocacy.sba.gov/2019/01/30/small-businesses-generate-44-percent-of-u-s-economic-activity/. Last Accessed 07 Oct 2021
2. Novakovic, D.: Business Context Aware Core Components Modeling. https://publik.tuwien.ac.at/. Last Accessed 28 Sep 2021
3. The Open Applications Group Inc. https://oagi.org. Last Accessed 21 Oct 2021


4. The National Institute of Standards and Technology (NIST). https://www.nist.gov. Last Accessed 21 Oct 2021
5. Jelisic, E., Ivezic, N., Kulvatunyou, B., Nieman, S., Oh, H., Anicic, N., Marjanovic, Z.: Knowledge representation for hierarchical and interconnected business contexts. In: Archimede, B., Ducq, Y., Young, B., Karray, H. (eds.) Enterprise Interoperability IX, pp. 17–20. Springer, Heidelberg (2020)
6. Kulvatunyou, B. (Serm), Oh, H., Ivezic, N., Nieman, S.T.: Standards-based semantic integration of manufacturing information: past, present, and future. J. Manuf. Syst. 52, 184–197 (2019). https://doi.org/10.1016/j.jmsy.2019.07.003
7. Ivezic, N., Kulvatunyou, B., Jelisic, E., Oh, H., Frechette, S., Srinivasan, V.: A novel data standards platform using the ISO core components technical specification. In: Proceedings of the ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference IDETC/CIE 2021, V002T02A063. The American Society of Mechanical Engineers, New York (2021). https://doi.org/10.1115/DETC2021-68067
8. Jelisic, E., Ivezic, N., Kulvatunyou, B., Anicic, N., Marjanovic, Z.: A business-context-based approach for message standards use—a validation study. In: Welzer, T., Eder, J., Podgorelec, V., Wrembel, R., Ivanovic, M., Gamper, J., Morzy, M., Tzouramanis, T., Darmont, J., Latific, A.K. (eds.) New Trends in Databases and Information Systems. ADBIS 2019. Communications in Computer and Information Science, vol. 1064, pp. 337–349. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-30278-8_3
9. OAGi Releases OAGIS 10.7.1 and Score 2.1. https://oagi.org/NewsEvents/NewsandArticles/OAGISReleases1071andScore21/tabid/323/Default.aspx. Last Accessed 27 Oct 2021
10. Small & Medium Enterprise Working Group. https://oagi.org/OAGISWorkingGroups/WorkingGroups/tabid/149/Default.aspx. Last Accessed 27 Oct 2021
11. Core Components Technical Specification CCTS, version 3.0. https://unece.org/DAM/cefact/codesfortrade/CCTS/CCTS-Version3.pdf. Last Accessed 17 Mar 2021
12. OAGIS 9 Naming and Design Rules Standard. https://oagi.org/Portals/0/Downloads/ResourceDownloads/OAGIS_90_NDR.pdf. Last Accessed 15 Apr 2021
13. UBL Conformance to ebXML CCTS ISO/TS 15000-5:2005. http://docs.oasis-open.org/ubl/UBL-conformance-to-CCTS/v1.0/UBL-conformance-to-CCTS-v1.0.html. Last Accessed 10 Oct 2021
14. National Information Exchange Model Naming and Design Rules. https://niem.github.io/NIEM-NDR/v5.0/niem-ndr.html. Last Accessed 19 May 2021
15. Jelisic, E., Ivezic, N., Kulvatunyou, B., Nieman, S., Oh, H., Babarogic, S., Marjanovic, Z.: Towards inter-operable enterprise systems—graph-based validation of a context-driven approach for message profiling. In: IFIP International Conference on Advances in Production Management Systems, pp. 197–205. Springer, Cham (2020)
16. Machine Learning Crash Course. https://developers.google.com/machine-learning/crash-course. Last Accessed 27 Nov 2021
17. OAGIS—Express Pack Presentation. https://oagi.org/Resources/TrainingVideos/tabid/180/Resources/TrainingVideos/ExpressPacks/tabid/322/Default.aspx. Last Accessed 27 Nov 2021
18. International Standard Industrial Classification of All Economic Activities (ISIC) Rev. 4. https://ilostat.ilo.org/resources/concepts-and-definitions/classification-economic-activities/. Last Accessed 9 Sep 2021

An Ontology of Industrial Work Varieties
Antonio De Nicola and Maria Luisa Villani

Abstract Industry 4.0 requires increased digitalization and automation of industrial processes. Achieving a better understanding of these processes is a precondition that is still hindered by several factors. Workers hiding the individual know-how they have gained in the company, to avoid losing the power of holding unique knowledge, and the lack of agreement among different workers on how work practices are actually performed are just two frequent examples of such obstacles. To support process comprehension, we present an upper ontology for modeling industrial work varieties, named the Work-As-x (WAx) ontology. The aim is to shed light on the different varieties of work knowledge and on how these are converted between agents within a cyber-socio-technical system, such as an industry. The WAx ontology has been conceived to consider and better manage the different perspectives on the actual industrial processes, such as the Work-As-Imagined held by blunt-end operators and the Work-As-Done by sharp-end operators. The ontology extends the Suggested Upper Merged Ontology (SUMO) to guarantee a rigorous semantic basis. Finally, we discuss how the WAx ontology can be used to semantically annotate different repositories of industrial process representations for the purpose of their analysis.

Keywords Ontology · Industry 4.0 · Cyber-socio-technical system · Business process

A. De Nicola (B) · M. L. Villani
ENEA—Centro Ricerche Casaccia, Via Anguillarese 301, 00123 Rome, Italy
e-mail: [email protected]
M. L. Villani
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_23

1 Introduction

Industry 4.0 is a driving force toward innovation, and the corresponding transition is completely changing the production landscape. On one side, it puts a lot of pressure on the production system due to the requirement of short development


periods, individualization of product demand, increased flexibility of production processes, decentralization of decision making, and sustainability. On the other side, it offers many opportunities, such as increased automation and mechanization, digitalization and networking, and the miniaturization of computational devices [1].
Industry 4.0 falls into the category of cyber-socio-technical systems (CSTSs), namely socio-technical systems that include interconnected cyber technical artifacts [2]. These systems will enable Industry 4.0, but, even if the maturity of cyber-physical technologies already allows for the automation of production processes, there is a fuzzy area at the interface between the human and cyber-physical worlds that still needs to be better investigated. Most of the related problems that need to be addressed, and that could even lead to safety flaws and incidents, are due to the unpredictability of human behavior in stressful situations and to the diversity of individual backgrounds, competences, and experiences, which gives rise to a lack of shared understanding of work practices. Indeed, the transition toward Industry 4.0 and the corresponding innovation advancement require a better understanding of industrial processes. As anticipated, achieving it is a precondition hindered by several factors, mostly due to the current human involvement in work practices. Examples of these barriers are, for instance, people hiding individual know-how to avoid losing the power of holding unique knowledge, and a lack of agreement on how work practices are actually performed in the workplace [3].
Taking into account the different perceptions of agents on the same process requires introducing novel modeling methods to characterize them. For instance, Work-As-Imagined (WAI) and Work-As-Done (WAD) refer, respectively, to how someone thinks the work is executed and how it is actually done. A relevant contribution to modeling work varieties was given by [4], who classified work according to four types, which include the above-mentioned WAI and WAD plus Work-As-Prescribed and Work-As-Disclosed, referring, respectively, to how work is formally specified and to how someone talks or writes about it.
Here, we present the Work-As-X (WAx) ontology, an upper ontology of industrial work varieties, shedding light on the different types of work representations and on how these are converted between agents within a cyber-socio-technical system, such as an industry. The WAx ontology is derived from the WAx conceptual framework [2]. It models the different perspectives on the actual industrial processes, such as the Work-As-Imagined held by blunt-end operators and the Work-As-Done by sharp-end operators. It extends the Suggested Upper Merged Ontology (SUMO) [5] to guarantee a rigorous semantics and interoperability with other ontologies and industrial standards. Finally, we describe how the ontology can be used to semantically annotate different repositories of industrial processes for the purpose of their analysis.
The novelty of the proposed approach is the creation of a semantic backbone for the management of distributed work knowledge [6], allowing the focus to shift from individual process elements to entire process representations under various lenses. With such a view, the development of our ontology lays the ground for a systemic analysis of the work organization not limited to process automation and monitoring


by means of workflow models. The aim is to support modeling that enables complexity science-based methods devoted to the resilience assessment of cyber-socio-technical systems [7].
The rest of the paper is organized as follows. Section 2 presents related work on business process ontologies. The WAx framework is briefly recalled in Sect. 3, whereas Sect. 4 describes the WAx ontology. A possible usage in an industrial context is discussed in Sect. 5. Finally, Sect. 6 presents some conclusions.

2 Related Work

Adding semantics to business processes aims at enhancing business process management systems with increased automated functionalities for specifying, implementing, executing, validating, and monitoring business processes [8]. Existing works focus on adding a semantic layer either to the behavioral model of a business process [9] or to the business process entities of the model, for the purpose of increasing automation [8]. Among the works on semantic business process management, the Business Process Abstract Language (BPAL) ontological framework adds semantics to business process entities such as activities, decisions, and roles [10, 11]. Along this line, [12] propose the BPMN ontology to reason on semantically annotated processes. De Medeiros et al. [13] present how semantics can be used in practice to support business process monitoring. In detail, they identify and describe five different phases of the business process monitoring lifecycle where semantics can play a role: observe, evaluate, detect, diagnose, and resolve. In [14], the authors show how semantic lifting of business processes [15] can be used to improve process mining.
Our work can be thought of as complementary to the above works, as we propose to use semantics at a higher level, where the semantically enriched entities are the whole processes rather than their constituents. As such, our method is independent of specific process modeling languages and/or the log events used for process mining [16], and the ontology can also be used to link different representations (models or free text) of the same process, as well as process models built from monitoring logs. More generally, this approach aims at improving the understanding of processes and of how knowledge about them is generated and transferred by the different agents in a complex cyber-socio-technical system such as an industry. Our work builds on the SECI model of [17], devoted to organizational knowledge management, where the view of knowledge-as-a-flow is developed. We characterized processes as work varieties, extended the knowledge dynamics, and added a semantic layer to the model.


3 The WAx Framework

The WAx framework is a conceptual framework that provides a systematic structure to the variety of industrial process perspectives. We summarize here its main features, presented in [2]. The most important concepts of the WAx framework are work varieties and knowledge. Cyber-socio-technical systems include a large amount of distributed knowledge, embodied in operators, embedded in technology, and outlined in organizational structures. The framework allows tracing the processes of creation and loss, amplification, transfer, and analysis of such knowledge. The basic elements of the WAx framework are the knowledge structure, the knowledge entities, and the knowledge dynamics.
The knowledge structure consists of three elements: (i) levels, (ii) knowledge types, and (iii) agency. Levels represent the abstraction layer where knowledge is located. There are three possible layers: the Cyber-Socio-Technical (CST) world, CSTS primary knowledge, and CSTS analysis knowledge. The CST world is the ground level, representing the real world where knowledge is actually used to achieve a functional purpose. At this level, system operations are performed by humans, organizations, and technical and cyber artifacts. The first level is named CSTS primary knowledge. Here, knowledge about industrial processes is explicitly or tacitly created by blunt-end or sharp-end actors. The top layer is named CSTS analysis knowledge. At this level, knowledge of the CSTS is managed for the purpose of analysis.
Knowledge types represent the form of knowledge, which can be either explicit or tacit. Examples of explicit knowledge are manuals of procedures, workflow models, and computer programs. Tacit knowledge is everything that is not explicitly shared in a formal and systematic language, such as beliefs and intuitions. Agency represents the agents who own the knowledge. Those that are close to the process are the sharp-end operators, those distant are the blunt-end operators, and those that act metacognitively, for instance to assess the system performance, are the analysts. The model proposed in the WAx framework has a holistic and fractal perspective. Accordingly, systems are formed by organizations, in turn formed by teams, people, and artifacts, each one possessing agency features from time to time. Moreover, the same individuals may take on different agency roles depending on the context.
Knowledge entities are the declinations of work varieties (Work-As-x), which give the framework its name. They are:
• Work-As-Imagined (WAI) represents the mental models concerning the activities related to human work; the work implied by WAI is ideal in terms of potentiality (i.e., how we imagine the present, past, and future work to be performed) as well as in terms of belief (i.e., how we imagine the various activities to be performed, but also how we believe we perform ours).


• Work-As-Prescribed (WAP) encompasses all perspectives of work within the organization as it is formalized in terms of procedures, checklists, standards, and task descriptions.
• Work-As-Normative (WAN) encompasses all norms external to the organization, in different degrees of formalization: laws, rules, international standards, safety procedures, and technical standards.
• Work-As-Done (WAD) is the activity actually carried out in the working environment (i.e., within the CST world). The WAD is only partially accessible. Since it addresses a dynamic, unstable, and unpredictable reality, this variety of work can be different from what is imagined or prescribed or from what follows the norms.
• Work-As-Disclosed (WADI) represents what the system's various agents consciously or unconsciously tell about their work. What is disclosed is mostly what they want to convey as a specific message to a specific audience. In a more or less deliberate way, the WADI is influenced by the interaction with the audience. In any case, a part of this communication eludes the will of the agent, transmitting additional side signals beyond the mere direct message. Therefore, the WADI always possesses a rich informational content.
• Work-As-Observed (WAO) refers to the mental model related to an observation of work. This can be distorted as much by the mental model of the observer as by that of the observed.
The most relevant knowledge entities are the WAI and the WAD. The former represents an idealized version of human work, the latter the work as it actually occurs in the ever-changing and resource-constrained operating conditions. Both are de facto unattainable, but we can consider their best proxy measures, respectively, the view of the process from the perspective of blunt-end operators and that from the perspective of sharp-end operators.
Knowledge flows from one knowledge entity to another and, in doing so, it can become tacit or explicit (i.e., move from the sharp end to the blunt end and vice versa), be reified, or become the object of an analysis. Such transfers happen through so-called foundational knowledge conversion activities, inspired by those identified by [17]. They are: socialization (tacit-to-tacit, different agents), introspection (tacit-to-tacit, same agent), externalization (tacit-to-explicit), combination (explicit-to-explicit), internalization (explicit-to-tacit), conceptualization (action-to-tacit), and reification (tacit-to-action) [2]. Knowledge conversion activities are driven by a combination of knowledge conversion drivers, which can be accidental, when knowledge conversion is affected by information losses, misunderstandings, and subjective interpretations, or deliberative, when they follow the Efficiency-Thoroughness Trade-Off (ETTO) principle [18].
The knowledge structure, the knowledge entities, and the knowledge dynamics are all elements of the knowledge model (see [2]), which shows how knowledge flows between the various CSTS knowledge entities, with their respective agents.
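As a quick illustration of the knowledge dynamics just listed, the sketch below encodes the seven foundational knowledge conversion activities as a lookup table from activity to (source form, target form). The enum names and the tuple layout are an illustrative modeling choice, not part of the WAx ontology itself.

```python
from enum import Enum

class Form(Enum):
    TACIT = "tacit"
    EXPLICIT = "explicit"
    ACTION = "action"   # work performed in the CST world

# Foundational knowledge conversion activities and the forms they connect,
# following the WAx framework (socialization and introspection differ only
# in whether the source and target agents are the same).
CONVERSIONS = {
    "socialization":     (Form.TACIT, Form.TACIT),       # different agents
    "introspection":     (Form.TACIT, Form.TACIT),       # same agent
    "externalization":   (Form.TACIT, Form.EXPLICIT),
    "combination":       (Form.EXPLICIT, Form.EXPLICIT),
    "internalization":   (Form.EXPLICIT, Form.TACIT),
    "conceptualization": (Form.ACTION, Form.TACIT),
    "reification":       (Form.TACIT, Form.ACTION),
}

def activities_from(source: Form):
    """List the conversion activities that start from a given knowledge form."""
    return [name for name, (src, _dst) in CONVERSIONS.items() if src == source]

print(activities_from(Form.TACIT))
# ['socialization', 'introspection', 'externalization', 'reification']
```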


4 The WAx Ontology

The goal of the WAx ontology is to provide the WAx framework with a rigorous semantics. To this end, we built an upper ontology representing the main concepts of the framework, which can easily be extended to include domain- and application-specific concepts. Figure 1 shows the main concepts of the WAx ontology as a UML class diagram. A process (e.g., the maintenance of an electric arc furnace) belongs to a system (e.g., an industry in the iron and steel sector). It relates to subsystems, which can be physical objects (e.g., an electric arc furnace) or agents (e.g., the furnace operator). A knowledge entity, such as the WAI of a sharp-end operator (WAI-SO), refers to a process and pertains to an agent. The latter is characterized by an aggregation level which, following the fractal nature of the WAx framework, could be, for instance, individual or team, and by an agent role, which could be a sharp-end or blunt-end operator or an analyst. The knowledge entity has a knowledge form, which could be tacit (for instance, a mental model about the functioning of the furnace that is not disclosed to an external analyst) or explicit (for instance, a written procedure about how to check that the furnace is still working properly). The level, the knowledge form, and the agent role are parts of the WAx framework knowledge structure. A knowledge entity can be subject to a lack of information quality. This could motivate a knowledge conversion driver related to a foundational knowledge conversion activity, which has two different knowledge entities as source and target. An example of a driver is the ETTO communication driver, which could represent the situation in which someone deliberately hides something about a process in order to keep individual know-how inside an organization. A foundational knowledge conversion activity can be influenced by a knowledge entity. This happens, for instance, when the Work-As-Observed of a blunt-end operator, which is a conceptualization of the Work-As-Done, is influenced by the Work-As-Imagined of the same operator.

To ensure the quality of the WAx ontology, we connected its most relevant concepts to the SUMO foundational ontology. This contributes to increasing both the semantic quality of the ontology, by linking it to a well-defined semantic model, and the social quality, which is guaranteed by the large community of practitioners behind SUMO. In the following we list the most relevant WAx ontology concepts with their definitions and with the SUMO concepts they are linked to.

System. A regularly interacting or interdependent group of items forming a unified whole, such as a group of devices or artificial objects or an organization forming a network especially for distributing something or serving a common purpose [19]. It is a specialization of the SUMO physical entity. Examples of systems are cyber-physical systems, such as a supervisory control and data acquisition (SCADA) system, or socio-technical systems, such as an airport or a hospital.

Subsystem. A system that is part of a larger system [19]. It is a specialization of the SUMO physical system. Examples of subsystems are agents, such as robots or employees, and physical objects, such as hammers or screwdrivers.


Fig. 1 WAx ontology depicted as a UML class diagram

Process. A series of actions or operations conducing to an end [19]. It is a specialization of the SUMO process. An example is a packaging process.

Knowledge Entity. The knowledge entities represent different knowledge dimensions, i.e., what the knowledge refers to. It is a specialization of the SUMO procedure. Examples of knowledge entities are the Work-As-Done, the Work-As-Imagined, and the Work-As-Disclosed.

Level. The levels are part of the knowledge structure and represent where knowledge is located. It is a specialization of the SUMO internal attribute. An example of a level is the CSTS primary knowledge level, which refers to where knowledge of the CST system is explicitly or tacitly created, shared, integrated, and applied by blunt-end or sharp-end agents.

Knowledge Form. A characterization of knowledge, which could be either explicit or tacit. It is a specialization of the SUMO internal attribute.

Foundational Knowledge Conversion Activity. Foundational knowledge conversion activities enable knowledge conversion between two knowledge entities; they are largely inspired by [17]. It is a specialization of the SUMO process task. Examples are conceptualization, internalization, and socialization.

Knowledge Conversion Driver. The explanatory factor connected to knowledge development, i.e., it explains how knowledge is converted. It is a specialization of the SUMO proposition. An example is the ETTO process driver, representing the ETTO motivating factor necessary to achieve a well-defined objective in specific contextual conditions.


Fig. 2 Screenshot of the Protégé ontology management system with the WAx ontology

Lack of Information Quality. A feature of a knowledge entity that describes the result of objective or subjective assessments on the lack of quality of its information content. It is a specialization of the SUMO internal attribute. Examples are lack of accessibility and lack of completeness.

Agent Role. The role played by an agent in a process. It is a specialization of the SUMO social role. Examples are blunt-end and sharp-end operators.

Knowledge Structure. A conceptual artifact that is made of three constituent elements: (i) levels, representing where knowledge is located, (ii) knowledge types, i.e., in which form (how) knowledge is preserved, and (iii) agency, i.e., the agent who is responsible for the knowledge. It is a specialization of the SUMO model.

Figure 2 shows a screenshot of the WAx ontology in the Protégé ontology management system.
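The following sketch suggests, under stated assumptions, how a fragment of the concepts listed above could be declared with the rdflib library. The namespace URIs, the SUMO placeholder IRIs, and the property names refersTo and pertainsTo (taken from the prose description of Fig. 1) are assumptions made for illustration; the sketch does not reproduce the published ontology file.

```python
# Minimal sketch of a fragment of the WAx upper ontology using rdflib.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

WAX = Namespace("http://example.org/wax#")    # assumed namespace
SUMO = Namespace("http://example.org/sumo#")  # placeholder for SUMO IRIs

g = Graph()
g.bind("wax", WAX)

# Core classes named in the text
for cls in ("System", "Subsystem", "Process", "KnowledgeEntity", "Level",
            "KnowledgeForm", "FoundationalKnowledgeConversionActivity",
            "KnowledgeConversionDriver", "LackOfInformationQuality",
            "AgentRole", "KnowledgeStructure"):
    g.add((WAX[cls], RDF.type, OWL.Class))

# Example of linking a WAx concept to its SUMO counterpart
g.add((WAX.Process, RDFS.subClassOf, SUMO.Process))

# A knowledge entity "refers to" a process and "pertains to" an agent
g.add((WAX.refersTo, RDF.type, OWL.ObjectProperty))
g.add((WAX.refersTo, RDFS.domain, WAX.KnowledgeEntity))
g.add((WAX.refersTo, RDFS.range, WAX.Process))
g.add((WAX.pertainsTo, RDF.type, OWL.ObjectProperty))
g.add((WAX.pertainsTo, RDFS.domain, WAX.KnowledgeEntity))

print(g.serialize(format="turtle"))
```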

5 Use of the WAx Ontology

The WAx ontology has been conceived to semantically enrich industrial business processes, to allow process interoperability, and to enable a holistic analysis of a business process repository. For instance, the diversity of process perspectives can be studied for the purpose of increasing organizational resilience [20] by discovering possible safety flaws [21].


Figure 3 concerns the iron and steel sector and shows an example of semantic lifting of a business process repository, which allows associating a meaning with the contained processes. The repository gathers two business process models, which are annotated, respectively, by the ontology concepts Maintenance of electric arc furnace (plant manager perspective) and Maintenance of electric arc furnace (furnace operator perspective). The former is a specialization of WADI-BO and the latter of WADI-SO. Both these knowledge entities refer to the same process, Maintenance of electric arc furnace, but pertain to two different agents: the plant manager, playing the role of blunt-end operator, and the furnace operator, playing the role of sharp-end operator. We observe that the knowledge entity pertaining to the plant manager has an additional activity (for instance, related to a check of the instruments used during the maintenance activity) with respect to the one pertaining to the sharp-end operator.

Fig. 3 Example of semantic lifting of a business process repository through the WAx ontology
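A minimal sketch of the semantic lifting scenario of Fig. 3, again with rdflib, is shown below. The individual names, the WADI_BO/WADI_SO class identifiers, and the reuse of the refersTo/pertainsTo properties are illustrative assumptions consistent with the text, not the actual annotations produced by the authors.

```python
# Illustrative annotation of two process models (plant manager vs. furnace
# operator perspective) referring to the same process.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

WAX = Namespace("http://example.org/wax#")   # assumed namespace
EX = Namespace("http://example.org/steel#")  # assumed namespace for the case

g = Graph()

# Specializations of WADI for the blunt-end and sharp-end perspectives
g.add((EX.MaintenanceEAF_PlantManagerView, RDF.type, OWL.Class))
g.add((EX.MaintenanceEAF_PlantManagerView, RDFS.subClassOf, WAX.WADI_BO))
g.add((EX.MaintenanceEAF_FurnaceOperatorView, RDF.type, OWL.Class))
g.add((EX.MaintenanceEAF_FurnaceOperatorView, RDFS.subClassOf, WAX.WADI_SO))

# Both knowledge entities refer to the same process but pertain to different agents
g.add((EX.pmModel, RDF.type, EX.MaintenanceEAF_PlantManagerView))
g.add((EX.foModel, RDF.type, EX.MaintenanceEAF_FurnaceOperatorView))
for model in (EX.pmModel, EX.foModel):
    g.add((model, WAX.refersTo, EX.MaintenanceOfElectricArcFurnace))
g.add((EX.pmModel, WAX.pertainsTo, EX.PlantManager))
g.add((EX.foModel, WAX.pertainsTo, EX.FurnaceOperator))

print(g.serialize(format="turtle"))
```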


6 Conclusion

Enhanced business process management plays a crucial role in developing the Industry 4.0 paradigm. It has recently attracted the interest of several researchers working on advancing the corresponding scientific field. However, most of the existing semantic modeling approaches aim at defining the inner constituents of a process and disregard how the same process is viewed by different agents in an organization. Here, we adopt a novel approach to fill this gap, focusing instead on the actual varieties according to which work can be modeled and/or disclosed by different agents and is indeed performed by them. For organizing such knowledge in an information system, we propose the WAx ontology, an ontological upper model extending the SUMO ontology. The ontology was tested in the H(CS)2I international project and was validated by safety practitioners, who found it beneficial for their safety analysis activities. The proposed ontology is independent of specific process modeling languages and/or log events used for process mining and has the potential to contribute to a harmonization of the different representations and terminology within the industrial community. Thus, from a pragmatic perspective, the WAx ontology could lead to enhancing ERP systems with a more in-depth management of process knowledge.

Acknowledgements This work is supported by the H(CS)2I project, which is partially funded by the Italian National Institute for Insurance against Accidents at Work (INAIL) and the Institution of Occupational Safety and Health (IOSH) under the 2018 SAFeRA EU funding scheme.

References

1. Lasi, H., Fettke, P., Kemper, H.G., Feld, T., Hoffmann, M.: Industry 4.0. Bus. Inf. Syst. Eng. 6(4), 239–242 (2014). https://doi.org/10.1007/s12599-014-0334-4
2. Patriarca, R., Falegnami, A., Costantino, F., Di Gravio, G., De Nicola, A., Villani, M.L.: WAx: an integrated conceptual framework for the analysis of cyber-socio-technical systems. Saf. Sci. 136, 105142 (2021). https://doi.org/10.1016/j.ssci.2020.105142
3. Gagné, M., Tian, A.W., Soo, C., Zhang, B., Ho, K.S., Hosszu, K.: Different motivations for knowledge sharing and hiding: the role of motivating work design. J. Organ. Behav. 40(7), 783–799 (2019). https://doi.org/10.1002/job.2364
4. Moppett, I.K., Shorrock, S.T.: Working out wrong-side blocks. Anaesthesia 73, 407–420 (2014). https://doi.org/10.1111/anae.14165
5. Niles, I., Pease, A.: Towards a standard upper ontology. In: Proceedings of the International Conference on Formal Ontology in Information Systems, vol. 2001, pp. 2–9 (2001). https://doi.org/10.1145/505168.505170
6. Hutchins, E.: The distributed cognition perspective on human interaction. In: Enfield, N.J., Levinson, S.C. (eds.) Roots of Human Sociality, pp. 375–398. Routledge, London (2020). https://doi.org/10.4324/9781003135517-19
7. Hollnagel, E.: FRAM: The Functional Resonance Analysis Method—Modelling Complex Socio-technical Systems. CRC Press, Boca Raton (2012)
8. Semeraro, G., Basile, P., Basili, R., De Gemmis, M., Ghidini, C., Lenzerini, M., Lops, P., Moschitti, A., Musto, C., Narducci, F., Pipitone, A.: Semantic technologies for industry: from knowledge modeling and integration to intelligent applications. Intell. Artif. 7(2), 125–137 (2013). https://doi.org/10.3233/IA-130054
9. Weber, I., Hoffmann, J., Mendling, J.: Semantic business process validation. In: Proceedings of the 3rd International Workshop on Semantic Business Process Management (SBPM'08), CEUR-WS Proceedings, vol. 472 (2008)
10. De Nicola, A., Lezoche, M., Missikoff, M.: An ontological approach to business process modeling. In: Proceedings of the 3rd Indian International Conference on Artificial Intelligence (2007)
11. De Nicola, A., Missikoff, M., Smith, F.: Towards a method for business process and informal business rules compliance. J. Softw. Evol. Process 24(3), 341–360 (2012). https://doi.org/10.1002/smr.553
12. Di Francescomarino, C., Ghidini, C., Rospocher, M., Serafini, L., Tonella, P.: Reasoning on semantically annotated processes. In: International Conference on Service-Oriented Computing Proceedings, pp. 132–146. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89652-4_13
13. De Medeiros, A.A., Pedrinaci, C., Van der Aalst, W.M., Domingue, J., Song, M., Rozinat, A., Norton, B., Cabral, L.: An outlook on semantic business process mining and monitoring. In: OTM Confederated International Conferences 2007, LNCS, vol. 4806, pp. 1244–1255. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-76890-6_52
14. Azzini, A., Braghin, C., Damiani, E., Zavatarelli, F.: Using semantic lifting for improving process mining: a data loss prevention system case study. CEUR-WS Proc. 1027, 62–73 (2013)
15. De Nicola, A., Di Mascio, T., Lezoche, M., Taglino, F.: Semantic lifting of business process models. In: 2008 12th Enterprise Distributed Object Computing Conference Workshops, pp. 120–126. IEEE, Munich (2008). https://doi.org/10.1109/EDOCW.2008.55
16. Bloemen, V., van Zelst, S., van der Aalst, W., van Dongen, B., van de Pol, J.: Aligning observed and modelled behaviour by maximizing synchronous moves and using milestones. Inf. Syst. 103, 101456 (2022). https://doi.org/10.1016/j.is.2019.101456
17. Nonaka, I., Toyama, R., Konno, N.: SECI, Ba and leadership: a unified model of dynamic knowledge creation. Long Range Plan. 33, 5–34 (2000). https://doi.org/10.1016/S0024-6301(99)00115-6
18. Hollnagel, E.: The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong. Ashgate, Farnham (2009)
19. Merriam-Webster: https://www.merriam-webster.com. Last accessed 05 Dec 2021
20. Patriarca, R., Bergström, J., Di Gravio, G., Costantino, F.: Resilience engineering: current status of the research and future challenges. Saf. Sci. 102, 79–100 (2018). https://doi.org/10.1016/j.ssci.2017.10.005
21. Gattola, V., Patriarca, R., Tomasi, G., Tronci, M.: Functional resonance in industrial operations: a case study in a manufacturing plant. IFAC-PapersOnLine 51(11), 927–932 (2018). https://doi.org/10.1016/j.ifacol.2018.08.489

Combining a Domain Ontology and MDD Technologies to Create and Validate Business Process Models

Nemury Silega, Manuel Noguera, Yuri I. Rogozov, Vyacheslav S. Lapshin, and Anton A. Dyadin

Abstract Business process models are a primary artifact in the software development process. However, they are usually described only in terms of activities and events. This leads to incomplete business process descriptions, because relevant domain information may not be included. Moreover, business processes are commonly represented by means of semiformal notations, which makes their validation difficult. In short, despite the relevance of business process models, they are usually incomplete or contain errors. The adoption of formal languages is a suitable alternative to validate and analyze business process descriptions. In that sense, ontologies are a formal language based on description logics that has been widely adopted to represent and analyze knowledge of diverse domains. Nevertheless, this language could be difficult for some of those involved in describing business processes. The adoption of a Model-Driven Development (MDD) approach could make it possible to combine formal and semiformal models to represent and validate business process descriptions. Therefore, this paper aims to introduce an approach based on MDD and ontologies to describe, validate, and analyze business processes. In this approach, the processes are described in a graphical notation which can be easily understood by all participants in the process description. This process description is then transformed into an ontology, and some additional domain information is added. This new ontological model can be semantically validated with the support of reasoners. A case study to demonstrate the applicability of the approach is provided. Keywords Business process model · Model-driven development · Ontology and domain model

N. Silega (B) · Y. I. Rogozov · V. S. Lapshin · A. A. Dyadin Department of System Analysis and Telecommunications, Southern Federal University, 347900 Taganrog, Russia e-mail: [email protected] M. Noguera Departamento de Lenguajes y Sistemas Informáticos, Universidad de Granada, Pdta. Daniel Saucedo Aranda S/N, 18071 Granada, Spain © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_24


1 Introduction

Currently, companies should constantly strive to improve their efficiency and flexibility to face diverse scenarios. Business process models are a useful means to analyze the behavior of these key indicators. Since different types of actors usually participate in the process description, it is convenient to use a notation that is easy to comprehend for all participants. BPMN, the UML Activity diagram, and IDEF are three of the most popular notations to represent business processes. BPMN has become the de facto standard to represent business process models because it provides a graphical notation that is easy to understand by all participants in the process description [1]. Although BPMN includes a wide set of constructors to describe processes, some specific domain information cannot be represented. This fact hinders the semantic validation of the business descriptions; for example, in a process, certain types of activities may have a precedence dependency. However, this type of restriction cannot be validated in a BPMN model. On the other hand, business process models are a primary artifact during the first stages of the software development process [2, 3]. Therefore, the validation of these models is essential to prevent their errors from spreading to the next stages of the software life cycle [2, 4]. However, it is not easy to carry out semantic validation with semiformal notations. Likewise, some domain business entities which could be considered to generate system entities are not included in the process descriptions.

The adoption of formal languages to represent business processes is a suitable alternative to deal with the aforementioned issues. In that sense, ontologies are a formal language based on description logics which has been extensively exploited to represent and analyze knowledge in several domains [5–7]. An ontology can represent the behavioral information of the process, the domain information, and domain-specific restrictions. With all this information in the same model, a multidimensional validation of the process description can be carried out. However, ontologies could be a difficult language for some of those involved in describing business processes. The adoption of a Model-Driven Development (MDD) approach may be a solution to exploit the advantages of graphical notations and formal languages [8–11]. Model Driven Architecture (MDA) is a paradigm for software development that has received the attention of researchers and software developers. MDA structures the software development process in terms of the creation and transformation of models at different abstraction levels. MDA distinguishes between three types of models: Computation Independent Models (CIMs), Platform Independent Models (PIMs), and Platform Specific Models (PSMs). As models increase the abstraction level on enterprise software systems, they improve the communication, understanding, and analysis between software developers [12].

This paper aims to introduce an approach based on MDD and ontologies to describe, validate, and analyze business processes. In this approach, the processes are described in a graphical notation that can be understood by all participants in the process description. Then this process is transformed into an ontology and


some additional domain-specific information is added. Since ontologies are a formal language, this new ontological model can be semantically validated with the support of reasoners. This ontological model may be an appropriate source model to generate system models, because it includes the required domain-specific information and has been semantically validated. Semantic validation prevents the errors in the process models from spreading to the next stages of the software life cycle. In this ontological model, both the static and dynamic dimensions of the business are represented.

The structure of the paper is as follows. In Sect. 2, we analyze related work. In Sect. 3, our approach is introduced. In Sect. 4, we describe the application of our proposal in the domain of enterprise management systems. Finally, Sect. 5 presents the conclusions.

2 Related Work

Several approaches foster business process descriptions as a suitable model to represent the business of companies as well as a required model during the first stages of the software life cycle [2, 3]. However, the validation of these models usually does not receive enough attention [2, 4]. Some prominent approaches to validate business process models have been presented [13]. Nevertheless, these approaches are based on formal languages which are difficult to understand for certain participants in the process descriptions. Hence, the combination of formal and semiformal models may be considered an attractive alternative [14, 15]. The lack of domain-specific information is another key gap in process descriptions. To deal with this issue, some approaches have combined domain models with process descriptions [15, 16]. Laaz et al. [15] propose a domain ontology to represent the CIM level in the domain of e-Health management systems. Nevertheless, this approach does not consider a dynamic view at the CIM level. The authors of [16] describe an interesting approach which combines domain ontologies and business process descriptions to generate Interaction Flow Modeling Language (IFML) models. In spite of the contribution of this proposal, they do not exploit the ontological model to validate the business process description. In our approach, we combine domain ontologies and process descriptions to semantically validate the business process descriptions and obtain a model with the required information to generate system models.


3 Combining the Domain Ontology and MDD Technologies to Create and Validate Business Process Models

Figure 1 depicts an overall representation of our approach. Three models compose the business view: the domain ontology, the business process model, and the integrated ontology. The domain ontology is a static view of the business that represents the main business concepts (or objects) and their relationships. The business process model is a dynamic view which shows the flow of activities to achieve the business goals. The third model is an integration of the static and dynamic views. In this ontology, the relationships between the activities and the other domain concepts are represented.

Separation of concerns is one of the characteristics of this approach. Changes in the activities flow will be considered in the business process model, whilst changes in the domain concepts will be considered in the domain ontology. Both types of changes will be updated in the integrated ontology. Standardization is another characteristic of this approach. We adopted standard languages to represent each model. The business process models are represented by means of BPMN, whilst the ontologies are represented by means of the Web Ontology Language (OWL). As we mentioned in the introduction, BPMN is the de facto standard to describe business process models, whilst OWL is one of the most significant languages in the semantic web domain. Standardization allows different agents to read and update these models. The third main characteristic of this approach is automation. This characteristic is possible due to the previous one: since standard languages have been adopted, it is possible to use transformation tools to generate some models from others. In this particular case, we have created transformation rules in ATL to automatically generate the integrated ontology.

Fig. 1 Overall representation of our proposal


Representing both static and behavioral information in the same model helps to carry out a semantic validation. Since ontologies are a formal language, a reasoner can automatically check the consistency of the models and provide some useful inferences for the system analysts and designers. On the other hand, after validating this ontology with the support of reasoners, it can be the source to generate different system design models. From this ontology, both behavioral and static system models can be generated. Therefore, these models will be consistent.

4 Application in the Domain of Enterprise Management Systems

The development of enterprise management systems is a suitable scenario in which to apply our approach. In this type of system, a high number of interrelated business processes are implemented.

4.1 Domain Ontology

The first stage of our approach is to describe the main domain concepts. Taking into account information from the literature of this domain [17–20] and the knowledge of domain specialists, we identified the following concepts: patrimonial elements, accounts, assets, liabilities, etc. In addition, we identified some concepts that are not specific to the companies but that they have to use to carry out their processes. For example, a company does not control the currency exchange rate, but this information may be required to carry out some operations in the company. Hence, these concepts also have to be considered in the domain description. We refer to this type of concept as domain elements. These concepts were also represented in the domain ontology. We developed the ontology following the methodology of [19]. This methodology defines the following steps: determine the domain and scope of the ontology, consider reusing existing ontologies, enumerate important terms in the ontology, define the classes and the class hierarchy, define the properties (called relationships or slots) of the classes, define facets and/or restrictions on slots or relationships, and define instances. After carrying out the steps of the methodology, a domain ontology was obtained. Some of the main classes of the ontology are PatrimonialElement, DomainElement, Account, Asset, and Liability. In Sect. 4.3 we explain how these concepts are related to the process elements.
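As an illustration of this first stage, a fragment of such a domain ontology could be declared as follows with rdflib. The namespace and the subclass relations (e.g., whether Asset and Liability specialize PatrimonialElement) are assumptions, since only the class names are given in the text.

```python
# Sketch of the domain ontology fragment (illustrative, not the authors' OWL file).
from rdflib import Graph, Namespace, RDF, RDFS, OWL

DOM = Namespace("http://example.org/ems-domain#")  # assumed namespace
g = Graph()

# Main classes named in the text
for cls in ("PatrimonialElement", "DomainElement", "Account", "Asset", "Liability"):
    g.add((DOM[cls], RDF.type, OWL.Class))

# Assumed hierarchy: patrimonial elements subsume assets and liabilities
g.add((DOM.Asset, RDFS.subClassOf, DOM.PatrimonialElement))      # assumption
g.add((DOM.Liability, RDFS.subClassOf, DOM.PatrimonialElement))  # assumption

# A concept the company does not control but must use, e.g. the exchange rate
g.add((DOM.CurrencyExchangeRate, RDF.type, OWL.Class))
g.add((DOM.CurrencyExchangeRate, RDFS.subClassOf, DOM.DomainElement))

print(g.serialize(format="turtle"))
```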


Fig. 2 BPMN diagram of the process for Settle an advanced payment

4.2 Business Process Description

In the second stage of our approach, the business processes are described by means of BPMN. In reality the business processes are integrated, but it is a good practice to describe them separately. To illustrate this step, we describe the process Settle an advanced payment. Figure 2 shows the BPMN diagram of this process.

4.3 Integrated Ontology

In the third stage of our approach, the domain ontology and the business process descriptions are integrated into a single ontology. The BPMN model is automatically transformed into the ontology. Then the elements of the business process model are related to the elements of the domain ontology. To represent a process in an ontology, we created classes for the main concepts of a process. Hence, some of the main classes are Activity, Gateway, and Event. Furthermore, we created the abstract class FlowElement to subsume the other three classes. To carry out the transformation of the BPMN model into an ontology, specific ATL transformation rules were specified. Table 1 shows three of the transformation rules. For example, the first rule specifies that tasks in the BPMN model will be transformed into instances of the class Activity in the ontology. Figure 3 shows an excerpt of this rule in the ATL language, and Fig. 4 shows the instances of the class Activity that were generated after executing this transformation rule.


Table 1 Transformation rules

No. | BPM concept | O-BPM concept
1 | Activity | • Instances of the classes Activity and Step • Statement to relate the step and the activity • Statements to specify the activities flow (through the properties followsTo and IsFollowedBy)
2 | Event | Instance of the class Event
3 | Gateway | Instance of the class Gateway

Fig. 3 ATL transformation rule
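To make the intent of rule 1 concrete, the following Python sketch is a simplified stand-in for the ATL rule (the actual implementation is the ATL excerpt of Fig. 3): it creates one Activity individual per BPMN task and chains them with followsTo/isFollowedBy assertions. The namespace, the individual-naming scheme, the lower-camel-case property names, and the use of rdflib are our own assumptions.

```python
# Simplified, illustrative stand-in for transformation rule 1 of Table 1.
from rdflib import Graph, Namespace, RDF, OWL

OBPM = Namespace("http://example.org/o-bpm#")  # assumed namespace
g = Graph()

# Declare the two flow properties and their inverse relation
g.add((OBPM.followsTo, RDF.type, OWL.ObjectProperty))
g.add((OBPM.isFollowedBy, RDF.type, OWL.ObjectProperty))
g.add((OBPM.followsTo, OWL.inverseOf, OBPM.isFollowedBy))

def transform_tasks(task_names):
    """Create one Activity individual per BPMN task and chain them with followsTo."""
    individuals = [OBPM[name] for name in task_names]
    for ind in individuals:
        g.add((ind, RDF.type, OBPM.Activity))
    for current, nxt in zip(individuals, individuals[1:]):
        g.add((nxt, OBPM.followsTo, current))     # the later activity follows the earlier one
        g.add((current, OBPM.isFollowedBy, nxt))  # explicit inverse assertion
    return individuals

# Activity names taken from the example in the text
transform_tasks(["ConsultCurrencyExchange", "SettleAdvancedPayment",
                 "AccountAdvancedPayment"])
print(g.serialize(format="turtle"))
```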

To relate the concepts of the process to the domain-specific concepts, we defined several object properties. Object properties are another key component of an ontology: in OWL, they make it possible to represent binary relations between two individuals. For example, we defined the object property ReferenceTo to capture the relationship between instances of the class Activity and instances of the class


Fig. 4 Views of the ontology: a instances of the class activity that were generated after executing the transformation rule 1; b object property assertions of the activity SettleAdvancedPayment

PatrimonialElement. Figure 4b illustrates that the activity SettleAdvancedPayment ReferenceTo the domain element AdvancedPayment. We also defined object properties to represent the activities flow. For example, the object property FollowsTo represents the relationship between two consecutive FlowElements. Figure 4b shows that the activity SettleAdvancedPayment FollowsTo the activity ConsultCurrencyExchange. In OWL, object properties can have an inverse property; for example, IsFollowedBy is the inverse property of FollowsTo. Therefore, if SettleAdvancedPayment FollowsTo the activity ConsultCurrencyExchange, then ConsultCurrencyExchange IsFollowedBy SettleAdvancedPayment. These axioms are automatically generated in the ontology with the transformation rule of Fig. 3. Once the ontology has been developed, a reasoner can analyze the model. First, the consistency of the model is validated; then some inferences can be obtained. Checking whether the order of the activities complies with the domain restrictions is one of the validation tasks. For example, in this domain some types of activities cannot be carried out after some other specific activities. In this example we have defined that, once accounted, an advanced payment cannot be deleted. Therefore, it is useful to check that this restriction is satisfied in the model. Figure 5a shows the rule, specified in SWRL, to check this restriction. To illustrate the application of this rule, we specified that the activity DeleteAdvancedPayment (i.e., an activity consisting of deleting a payment) followed the activity AccountAdvancedPayment (i.e., an accounting activity). Figure 5b, c show that the reasoner inferred that these activities have precedence problems. The property HasPrecedenceProblem was declared symmetric. In addition to this example, several specifications to check the correctness and completeness of the process description in the ontology were included. For example, some specific activities may be required for some types of processes. This type of restriction was also modeled in our ontology.


Fig. 5 a Rule to detect precedence problems, b precedence problem detected for activity DeleteAdvancedPayment, and c precedence problem detected for activity AccountAdvancedPayment
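For readers without a reasoner at hand, the following sketch expresses the same precedence restriction procedurally; it is our own simplified rendering of the check, not the SWRL rule of Fig. 5a or the reasoner-based implementation, and only the activity names are taken from the example.

```python
# Illustrative precedence check: once an advanced payment has been accounted,
# it must not be deleted. A reasoner performs this over the ontology; here the
# same constraint is checked over an ordered activity sequence.

def precedence_problems(sequence, must_not_follow):
    """Return (earlier, later) pairs where 'later' wrongly occurs after 'earlier'."""
    problems = []
    for earlier, later in must_not_follow:
        if earlier in sequence and later in sequence:
            if sequence.index(later) > sequence.index(earlier):
                problems.append((earlier, later))
    return problems

rule = [("AccountAdvancedPayment", "DeleteAdvancedPayment")]
flow = ["ConsultCurrencyExchange", "SettleAdvancedPayment",
        "AccountAdvancedPayment", "DeleteAdvancedPayment"]
print(precedence_problems(flow, rule))
# -> [('AccountAdvancedPayment', 'DeleteAdvancedPayment')]
```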

5 Conclusions

Business process modeling can take advantage of ontology-driven engineering to ensure model consistency and the detection of errors in early stages of software construction. In this paper, we have presented an approach based on MDD and ontologies to support process modeling. An example of its application to the domain of enterprise management systems was presented. The method makes use of ATL transformations for translating BPMN models into an ontological model. Separation of concerns, standardization, and automation are three of the main characteristics of our approach. Representing both static and behavioral information in the same model helps to carry out a semantic validation. Since ontologies are a formal language, a reasoner can automatically check the consistency of the models and provide some useful inferences for the system analysts and designers. On the other hand, after validating this ontology with the support of reasoners, it can be an adequate source to generate different system design models. From this ontology, both behavioral and static system models can be generated. Therefore, these models will be consistent.

Acknowledgements This research was funded by the grants of the Russian Science Foundation, grant number RSF 22-21-00670, https://rscf.ru/project/22-21-00670/.

References 1. OMG: Business Process Model and Notation (BPMN), V.2.0. 2011 formal/03 Jan 2009. http:// www.omg.org/spec/BPMN/2.0. Last accessed 01 Nov 2021 2. de Oca, I.M.M., Snoeck, M., Reijers, H.A., Rodríguez-Morffi, A.: A systematic literature review of studies on business process modeling quality. Inf. Softw. Technol. 58, 187–205 (2014). https://doi.org/10.1016/j.infsof.2014.07.011 3. Sánchez-González, L., García, F., Ruiz, F., Mendling, J.: Quality indicators for business process models from a gateway complexity perspective. Inf. Softw. Technol. 54(11), 1159–1174 (2012). https://doi.org/10.1016/j.infsof.2012.05.001


4. Mendling, J.: Empirical studies in process model verification. In: Jensen, K., van der Aalst, W.M.P. (eds.) Transactions on Petri Nets and Other Models of Concurrency II, pp. 208–224. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00899-3_12 5. Palmer, C., Urwin, E.N., Pinazo-Sánchez, J.M., Cid, F.S., Rodríguez, E.P., Pajkovska-Goceva, S., Young, R.I.M.: Reference ontologies to support the development of global production network systems. Comput. Ind. 77, 48–60 (2016). https://doi.org/10.1016/j.compind.2015. 11.002 6. Bouzidi, R., De Nicola, A., Nader, F., Chalal, R.: OntoGamif: a modular ontology for integrated gamification. Appl. Ontol. 14(3), 215–249 (2019). https://doi.org/10.3233/AO-190212 7. De Nicola, A., Melchiori, M., Villani, M.L.: Creative design of emergency management scenarios driven by semantics: an application to smart cities. Inf. Syst. 81, 21–48 (2019). https://doi.org/10.1016/j.is.2018.10.005 8. Benaben, F., Pingaud, H.: The MISE project: a first experience in mediation information system engineering. In: Information Systems: People, Organizations, Institutions, and Technologies, pp. 399–406. Springer, Cham (2010). https://doi.org/10.1007/978-3-7908-2148-2_46 9. Castro, J., Lucena, M., Silva, C., Alencar, F., Santos, E., Pimentel, J.: Changing attitudes towards the generation of architectural models. J. Syst. Softw. 85(3), 463–479 (2012). https:// doi.org/10.1016/j.jss.2011.05.047 10. Singh, Y., Sood, M.: The impact of the computational independent model for enterprise information system development. Int. J. Comput. Appl. 11(8), 21–26 (2010). https://doi.org/10. 5120/1602-2153 11. Harbouche, A., Erradi, M., Mokhtari, A.: An MDE approach to derive system component behavior. Int. J. Adv. Sci. Technol. 53, 41–60 (2013) 12. OMG: MDA Guide Version 1.0.1 (2003) 13. Fisteus, J.A.: Definición de un modelo para la verificación formal de procesos de negocio. Doctoral dissertation. Universidad Carlos III de Madrid (2005) 14. Li, Z., Zhou, X., Ye, Z.: A formalization model transformation approach on workflow automatic execution from CIM level to PIM level. Int. J. Soft. Eng. Knowl. Eng. 29(09), 1179–1217 (2019). https://doi.org/10.1142/S0218194019500372 15. Laaz, N., Wakil, K., Gotti, Z., Gotti, S., Mbarki, S.: Integrating domain ontologies in an MDAbased development process of e-health management systems at the CIM level. In: International Conference on Advanced Intelligent Systems for Sustainable Development, pp. 213–223. Springer, Berlin (2019). https://doi.org/10.1007/978-3-030-36664-3_25 16. Laaz, N., Kharmoum, N., Mbarki, S.: Combining domain ontologies and BPMN models at the CIM level to generate IFML models. Proc. Comput. Sci. 170, 851–856 (2020). https://doi.org/ 10.1016/j.procs.2020.03.145 17. Palanivelu, D.V.: Accounting for Management. Laxmi Publications, New Delhi (2012) 18. Kralik, L., Dumbrav˘a, M.: Analysis model of the company’s patrimonial elements. Rev. Rom. Stat. Supl. Trim II 141 (2012) 19. Burja, V., Burja, C.: Patrimonial resources’ management and effects on the economic value added. Ann. Univ. Apulensis Ser. Oecon. 12(2), 608–615 (2010). https://doi.org/10.29302/oec onomica.2010.12.2.12 20. Flo¸stoiu, S.: The role and place of accounting information in the decision-making system. Int. Conf. Knowl. Based Organ. 25(2), 46–51. https://doi.org/10.2478/kbo-2019-0055

A New Polyglot ArchiMate Hypermodel Extended to Graph Related Technologies

Nicolas Figay, David Tchoffa, Parisa Ghodous, Abderrahman El Mhamedi, and Kouami Seli Apedome

Abstract This paper describes an approach for extending the polyglot ArchiMate hypermodel for interoperability to graph technologies. This extension addresses the issue of advanced interactive graph visualization applied to modular models of composite systems. The proposed approach, which can be applied to any kind of modeling language having the same characteristics as ArchiMate, is here specifically applied to this standardized open language, which is considered a key enabler of enterprise interoperability and can then be used for enterprise digital twins. After describing interoperability of enterprise applications and how ArchiMate enables interoperability, this paper points out, from a state of the art related to extended hypermodels for interoperability, the inadequacy of graph technologies as a branch of a hypermodel that has to describe composite models of complex systems. Consequently, the proposed innovative approach consists in extending legacy hypermodels for ArchiMate with compound graph technologies supporting interactive visualization of nested nodes. An experimental platform was developed in order to demonstrate both the interoperability achieved with the proposed approach and its suitability for producing composite enterprise models with the expected interactive nested-graph visualization features.

N. Figay (B) Airbus Defence and Space, Elancourt, France e-mail: [email protected] D. Tchoffa · P. Ghodous · A. El Mhamedi University of Lyon I, Lyon, France e-mail: [email protected] P. Ghodous e-mail: [email protected] A. El Mhamedi e-mail: [email protected] K. S. Apedome Ottawa University, Ottawa, Canada e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_25


Keywords Distributed modeling for interoperability · Meta-modeling for interoperability · Methods and tools for interoperability

1 Introduction

Enterprise architecture is unavoidable to manage change and complexity, and ArchiMate is a language for describing architectures. Our paper describes an approach for extending the polyglot ArchiMate hypermodel for interoperability to graph technologies. Part 1 of this paper consequently describes interoperability of enterprise applications and how ArchiMate enables interoperability. Part 2 states the problem of distributed architectural representations in a networked organization and points out, from the state of the art related to extended hypermodels for interoperability, the addressed problem, i.e., the inadequacy of graph technologies as a branch of a hypermodel that has to describe composite models of complex systems. Part 3 describes the suggested solution: a polyglot ArchiMate hypermodel extended to compound graph technologies, with the associated approach and experimental platform.

2 ArchiMate and Interoperability

2.1 Enterprise Interoperability

The Enterprise Applications Interoperability research community has existed for a long time. Following the IDEAS European research roadmap [1], an Interoperability Network of Excellence was created which still exists, Interop Vlab [2]. Its concern is the effective support of operations for the competitiveness of enterprises, ensuring secured collaboration within networks of organizational cells distributed across enterprises. It requires sharing various informational assets managed in various enterprise repositories (Processes, Resources, Products, Clients …) and in the enterprise applications supporting the business processes. Whether for each organizational cell or at the whole network level, when willing to prepare and build continuous operational interoperability, interoperability must consider the business, applicative, and technological layers simultaneously and consistently [3]. Links must be established with the motivation and strategies of the enterprise, which have to be aligned within a coopetition context, i.e., a collaboration or cooperation of circumstance or opportunity between different economic actors who, moreover, are competitors. The considered networks therefore have to be easily reconfigurable, so that each member of the network is able to share with the other partners, ensuring continuous interoperability despite changes, for an effective collaboration. However, access to strategic information which concerns them and which they do not wish to share must be prevented. Due to the continuously growing


pace of change and complexity of considered organizational systems, software products, and Information and Communication technologies (ICT), which are more and more interconnected and interdependent, there is also need to adopt a systemic and holistic approach, considering multiple viewpoints of concerned stakeholders and addressing problems at the accurate levels of granularity. Indeed, we are facing complex systems of systems requiring multi-scale approaches, similar patterns being applicable at each scale. Interoperability enablers considered since [1] are enterprise modeling, ontology, model driven approach, and service oriented architecture. Within a dynamic business ecosystem, [4] considers that ability to adopt a relevant set of consistent open standards is key for achieving a high level of maturity in terms of interoperability. Standards, being languages or exchange protocols, concern all the layers considered by enterprise architecture. They have to be open and mature, i.e., be implemented by various solutions relying on different technologies. They concern enterprise architecture itself, and associated practices and capabilities which have also to be interoperable, relying on same enablers: open standards, ontology, model driven approach, and service oriented architecture. Finally, these capabilities are themselves part of the organizations and subject for enterprise modeling.

2.2 ArchiMate If considering previous section, ArchiMate appears as perfect candidate for preparing and building operational interoperability. Indeed, ArchiMate, an Open Group (OG) open standard [6], specifies a visual enterprise architecture modeling language, supporting holistic description, analysis and visualization of the architecture of an enterprise as the working place where operations occur and as the execution system performing the business activities by running business processes. It supports cross business domains description in an unambiguous way. Unlike many other Domain Specific Language, it isn’t based on a specific modeling framework, such as the Object Management Group’s Meta Object Facilities. Consequently it can be used with various platforms and technologies1 : diagramming tools visualization software generating drawings from text mind map solutions, ArchiMate specific modeling platforms or Unified Modeling Language (UML) modeling platforms supporting creation and usage of UML profiles. It is also an Architecture Description Language, as defined by ISO 42010 [11]: each ArchiMate diagram is a view on an architecture model, governed by a viewpoint which give the purpose of the diagram for the concerned stakeholders, and indicates the modeling constructs which may be used. It is also a multi-layered language. A viewpoint can be dedicated to a single layer, or inter-layer. All modeling constructs other than relationships belongs to one and only one layer: business, application, technology. As reflecting the viewpoints of various 1

E.g., respectively are OG’s Microsoft Visio Stencils [7], PlantUML [8], Xmind [9], the open source and free ArchiMate modeling platform of Refs. Archi [10], Modelio ArchiMate Plugin [11].


kind of architects and stakeholders which each have their own specific specialized languages, standardized or not, the number of modeling construct is restricted to the minimum required one, in order to ensure interoperability with these languages, which can extend ArchiMate subsets they are about by mean of stereotypes. OG also specified the Open Exchange Format for ArchiMate (OEF) [12]. As it supports the planning of a continuous transformation of the enterprise, it also contributes to the establishment of a continuous interoperability. In order to prevent result research of this paper to be applicable to a single specific language, it is important to point out the generic characteristics of ArchiMate which will make result applicable to any other language. (1) visual and graph oriented modeling language, distinguishing mono-typed entities and oriented binary relationships with as source and target entity or relationship (2) distinction of model element and visual elements, each visual element belonging to a single view and referencing the model element it represents (3) all model and visual elements, views, viewpoints, and layers have identifiers (4) composition relationship (or equivalent) is supported by language between entities, which can be optionally specialized with specializations constituting a hierarchy (5) specification provides an authorized relationships matrix. However, selecting enterprise modeling language which have not characteristics required for enterprise interoperability will make it inaccurate for preparing and building continuous operational interoperability. Complementary qualitative informations concerning ArchiMate which will allow to check results achieved with demonstrator through Fig. 1: ArchiMate provides 62 modeling constructs for typing model entities, which are all given a definition and a specific icon, a layer it belongs to plus eventually a shape, and 11 kind of relationships, with definition and associated icon and arc symbol (start and end symbol, style of the line) (cf. [5]).

3 Distributed Architectural Representations in a Networked Organization

3.1 Problematic

In a dynamic network of enterprises in a coopetition context, architecting an enterprise is a local, private process, run by each networked enterprise's architects. If and when ArchiMate is used, it usually relies on heterogeneous ArchiMate applications realized with various heterogeneous technologies. Problems arise when the ArchiMate solutions used rely on realization languages with different expressivity, in particular when having to deal with nested representations, such as boxes in diagrams, document elements in XML, packages in UML, or folders (as in Archi or file management systems). This may concern models of composite systems, relying on composition relationships or diagrams. This is used as soon as dealing


Fig. 1 Test model imported in ArchiMateCG

with complex systems of systems. This applies to modular models of systems which are physically partitioned, which concerns most modeling platforms, whether UML-based, with packages and modules, or not (e.g., folders for Archi). While nested visual representations are quite useful and important for efficient cognition of composite models and visual analysis, or mandatory for the physical breakdown of a model, the usage of nesting may come with some issues in terms of semantics, as nestings reflect relationships which are not reified. It is consequently not possible to attach properties or metadata indicating the type of relation a nesting represents. If relying on semantic graphs with only reified relationships, most graph technologies can only represent flat models, which are not suited when having to deal with the representation of composite models: the underlying meta-models are not able to reflect composition. A similar issue exists with semantic graphs supported by semantic web technologies such as the Web Ontology Language (OWL). Object properties used for representing relationships are reified by means of links, which are references from a source object to a target object. The source object, link, and target object are captured by means of a triple, so no properties can be attached to the link. Even if some patterns exist for representing binary oriented relationships with properties in OWL, the language itself and the associated technologies do not explicitly support the concept of relation. The same holds for composition and aggregation. So distributed knowledge database systems relying on semantic web technologies have some problems when having to deal with ArchiMate relationships and with compositions.


Some knowledge-based technologies rely not on object properties and binary oriented relationships, but on n-ary relationships. With such technologies, it is also not possible to formalize links as references from one object to another. As a consequence, some technological silos exist between platforms which should be candidates for being used as enterprise repositories, if one is willing to rely on ArchiMate as the common language between the organizations.

3.2 State of the Art and of the Practices A very large scientific literature exist which address interoperability relying on the model driven approach as defined by Object Management Group. However it is not suited if willing an approach which doesn’t require to master this framework, which is quite complex and not consistently implemented between modeling solution providers of this domain. Models can be exchanged easily except when they are modular or relying on profile defined specifically by one vendor. Diagrams remain too difficult to exchange. Multi-representation of a model relying on different modeling language was proposed by [13] considering UML, XML Schema, and OWL, named hypermodel. However it didn’t addressed properly the OWL part, and it was mainly considering flat UML class diagrams. [14] proposes to extend the concept for dealing with interoperability, ensuring semantic preservation and preventing data loss when having to exchange models between heterogeneous applications. Our proposed approach allows to extend dynamical the number of languages used for representation of a model, providing new branches to the hypermodel. Considered languages are EXPRESS, OWL, and UML. The identified limitations concerned models of composite systems we want to address. Considering ArchiMate model with context of networked organizations in manufacturing domain, literature proposes mainly mapping to OWL deriving only object properties from ArchiMate relations (e.g., [15]), without considering moving models between heterogeneous platforms in terms of underlying realization languages. [16] proposes an approach for modeling in ArchiMate over UML 2 or SysML, bringing the ability of these languages for producing modular models of composite systems to ArchiMate, without changing language. Proposed modeling over UML2 relies on usage of multi-typed instance specifications, for the alignment with OWL modeling, and facilitating an ArchiMate model to move between UML and OWL modeling platforms. However technologies based on graph and network theories are not considered as target for ArchiMate modeling. The issue of visual representation including nested visual representation from a flat graph are also not considered by literature dedicated to ArchiMate modeling platforms interoperability. Resulting from research activities with numerous publications, [17] is a software platform for visualizing complex networks and integrating these with any type of attribute data. It has been developed in order to support various domains, including bioinformatics, social network analysis, or semantic web. It combines advanced


visualization of compound graphs with application programming interfaces (APIs) based on graph and network theories, allowing to produced nested representations from flat graph models. [18] addresses its combined usage with ArchiMate, motivated by analytical needs. Pipelines of Pipelines are defined in order to extract data and to specify the graphs to be define with associated visualizations means in order to response to given analysis needs. Compound visualization is not considered and the focus is not on interoperability. [19] import Information Technologies infrastructure definition from ArchiMate model, in addition to Business Process Definition Notation (BPMN) models within Cytoscape, in order to perform some model aggregation. The approach is more process centric, and doesn’t consider neither the issue of flat graph nor the establishment of interoperability of enterprise architecture capabilities.

3.3 The Addressed Problem: ArchiMate Hypermodel Encompassing Compound Graph Related Technologies

The problem addressed in this paper, which according to the state of the art has not been addressed so far, is to make it possible to derive from an ArchiMate model a graph representation which can be used for producing nested visualizations of the various composition relationships, but also of the composite data structures of these models. It should support networked organizations' architects, taking advantage of emerging graph-related technologies for the analysis of networked organizations in order to prepare, build, realize, and monitor continuous operational interoperability.

4 Proposed Solution: Polyglot ArchiMate Hypermodel Extended to Graph Related Technologies

4.1 Principles and Proposed Approach

The principles include first those defined for the extended hypermodel for interoperability. (1) Annotations must be provided on each representation relying on a given language, in order to ensure that it will be possible, from this representation, to derive the other representations relying on the other languages considered by the hypermodel. (2) Each representation must consider the intended usage of the language it relies on. For the considered ArchiMate hypermodel, applying principle 2 means: (1) OWL intended usage includes (a) reasoning relying on Description Logic or rule engines and (b) data aggregation relying on semantic mappings between languages. (2) UML intended usage includes (a) modular description of a composite system supporting system development, which supports constraint checking, and (b) computer-aided activities such as document generation, simulation, validation, verification, and monitoring.


(3) Compound graph intended usage, as implemented by [17] or similar solutions, includes (a) usage of advanced visualization techniques to visually and interactively explore the architecture of the networked organizations, being able to expand or collapse one or all the compound nodes, in combination with automatic layouts, and (b) the ability to use algorithms based on graph and network theories for aggregating various enterprise models from different sources and analyzing the resulting network of enterprises, e.g., based on clustering, identification of the shortest path, etc. Doing so, the proposed representation of ArchiMate models over various languages is driven by value, avoiding any artificial data mapping that prevents the effective use of the produced data artifacts.

Concerning the issue related to the representation of hierarchies, understanding how [17] deals with compound graphs is needed. Let us consider a node N1 contained in a node N2. This is captured by a particular property of the contained node N1, "parent", which contains the identifier of the containing node, N2. It does not change the graph structure manipulated by the algorithms, but visualization algorithms take advantage of this field in order to produce nested representations. For each hierarchy of interest which is to be visualized with nested boxes, we can derive a specific compound graph representation of ArchiMate (let us call it ArchiMateCG). In order to remain aligned with [4], an ArchiMateCG can be defined according to a viewpoint which includes information about the considered hierarchy and the purpose of the representation. Applying principle 1 leads to using a specific property, "parentType", which contains the types constituting the hierarchy. This approach is suited for the representation of any hierarchy, whether it comes from a composition relationship or from a composite data structure. Combined with the expand/collapse functionalities, it allows exploring several levels of decomposition of multi-scale models without particular drawing effort, as drawing with layouts is automated.
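The following sketch illustrates, under stated assumptions, how the "parent"/"parentType" idea can be applied: it derives Cytoscape.js-style compound-graph elements from a flat ArchiMate-like node and edge list by turning composition relationships into parent fields. The element layout mirrors the Cytoscape.js elements JSON; the node identifiers, the simplified field set, and the treatment of parentType as a single relationship type are illustrative assumptions, not the actual ArchiMateCG format.

```python
# Minimal sketch: flat ArchiMate-like graph -> compound-graph elements.
import json

nodes = [
    {"id": "acme",    "type": "BusinessActor"},
    {"id": "dept-a",  "type": "BusinessActor"},
    {"id": "billing", "type": "ApplicationComponent"},
]
edges = [
    # in ArchiMate composition, the source is the whole and the target the part
    {"id": "r1", "type": "Composition", "source": "acme",    "target": "dept-a"},
    {"id": "r2", "type": "Serving",     "source": "billing", "target": "dept-a"},
]

def to_compound_elements(nodes, edges, hierarchy_relation="Composition"):
    """Build Cytoscape-style elements; edges of the chosen type become parent fields."""
    parent_of = {e["target"]: e["source"] for e in edges
                 if e["type"] == hierarchy_relation}
    elements = []
    for n in nodes:
        data = {"id": n["id"], "type": n["type"]}
        if n["id"] in parent_of:
            data["parent"] = parent_of[n["id"]]        # drives nested visualization
            data["parentType"] = hierarchy_relation    # records the hierarchy's type
        elements.append({"data": data})
    # keep the non-hierarchical relationships as ordinary edges
    for e in edges:
        if e["type"] != hierarchy_relation:
            elements.append({"data": e})
    return elements

print(json.dumps(to_compound_elements(nodes, edges), indent=2))
```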

4.2 Validating the Approach The adopted approach was derived from [20], which proposes a way to assess Product Lifecycle Management interoperability standards and test their implementation for dynamic networked organizations. The needs are similar; the main difference is that the system of interest is not the same. The proposed approach consisted in first building an experimental lab, including (1) one solution platform for each language used to produce a representation and (2) a set of executable import/export descriptions relying on what the selected platforms provide, and then (3) iteratively experimenting with a chain of transformations in order to move a complete model from platform to platform, validating the conditions for not losing any information. As no product exists today for ArchiMateCG models, a prototype was created from [17] in an agile way, focusing on the proof of the value it brings in terms of usage for networked organizations' architects (with a particular focus on preparing and building continuous operational interoperability). In order to validate and test interoperability, in terms of ArchiMate models moving from one platform to another without data loss, we defined a test data model
containing all the ArchiMate language modeling constructs, plus complementary concepts suited for defining hierarchies of use for the architects: views, viewpoints, folders, and packages. In order to illustrate organization breakdown, extensions were created for the "Business Actor" and "Work Package" ArchiMate constructs. The realization of the ArchiMateCG viewer followed the proposed approach, typing each node and edge with the ArchiMate type corresponding to the ArchiMate language construct, extended with some other useful ones. ArchiMate icons, relying on the visual language specification, were used as background images for the nodes of the graph. A parent property was created to reflect the hierarchy to be captured, with an indication of the typed property the parent is derived from. An exporter was created for Archi, based on the jArchi JavaScript plugin, in order to export ArchiMateCG graphs. An exporter from the ArchiMateCG viewer and an importer for Archi were created in order to generate an Archi model from an ArchiMateCG graph.
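As an illustration of the "no data loss" condition tested with these exporters and importers, the hedged sketch below compares an ArchiMateCG graph before and after a round trip between platforms; the CgElement shape and the check itself are our own simplification, not the prototype's actual code.

```typescript
// Sketch of the round-trip completeness check: after exporting an Archi model to an
// ArchiMateCG graph and importing it back, every element must reappear with the same
// type and hierarchy annotations.
interface CgElement {
  id: string;
  type: string;        // ArchiMate construct or complementary concept (view, folder, ...)
  parent?: string;     // containment, as used by the compound graph viewer
  parentType?: string; // hierarchy the containment was derived from
}

function roundTripIsLossless(before: CgElement[], after: CgElement[]): boolean {
  if (before.length !== after.length) return false;
  const byId = new Map(after.map((e) => [e.id, e]));
  return before.every((e) => {
    const r = byId.get(e.id);
    return r !== undefined && r.type === e.type &&
           r.parent === e.parent && r.parentType === e.parentType;
  });
}
```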

4.3 The Performed Experimentation and Results The steps were the following: (1) a model was created with Archi, containing all ArchiMate language constructs plus complementary ones such as model, view, viewpoint, package, and layer, together with the extensions of Business Actor and Work Package; (2) the export script was launched in order to produce a JSON graph as expected by the ArchiMateCG viewer, with model, layers, specialization, drawing, and meta groups as the hierarchy; (3) the model was opened and all the nodes were expanded or collapsed automatically, with automated drawing and layout application for ArchiMateCGs. Figure 1 shows the result: all ArchiMate constructs, extended with the complementary ones, are represented. The defined hierarchy is reflected by nested boxes, i.e., by nodes with a parent property.

4.4 Assessment of the Results The performed experimentation demonstrated the following: (1) ArchiMateCG makes it possible to represent an ArchiMate model as an interactive compound graph, allowing the different levels of composition to be navigated visually and bringing value in terms of cognition of complex systems; (2) data exchange can be performed without data loss for the considered constructs; (3) the hypermodel for interoperability approach and the associated interoperability testing work, and were applied to extend the previously defined ArchiMate hypermodels. The approach should be applicable to many languages other than ArchiMate, or to a combination of languages, as long as they have the generic characteristics defined in part 2.2. However, some ongoing prototyping and experimentation work remains to be finalized for reverse exchange and the full demonstration of bidirectional interoperability, but also for using such models as a way to realize digital twins of the enterprises.
As the solution relies on rich web technologies with bidirectional interaction between web clients and servers, nothing prevents replacing the data import flow with a real-time flow.

5 Conclusion and Future Work After presenting the needs for preparing and building continuous operational interoperability for emerging networked organizations that have to collaborate in a coopetition context, we highlighted how ArchiMate could contribute as an enabler, provided the associated enterprise modeling capabilities of the enterprises themselves are made interoperable. While showing that some approaches have already been proposed for moving ArchiMate models between heterogeneous platforms, based on the hypermodel for interoperability concept, the state of the art also pointed out an issue that was not addressed and which is addressed by this paper: the ability to capture composite models in graphs that can be visualized as interactive compound graphs with automated drawing. An approach for solving this issue was described, which extends the legacy polyglot ArchiMate hypermodels with compound graph advanced visualization technologies. The approach comes with prototyping and experimentation which already demonstrated valuable results and which need to be continued for the full demonstration of bidirectional interoperability. Future research perspectives concern the assessment of the exchange coverage of connected graphs, but also the ability to deal with nodes drawn on the borders of the nested boxes, in order to extend the approach to the visual representation of system elements that are part of the boundary of a system, such as interfaces, ports, or connectors. Further research will also concern new advanced visual features for improved interaction with ArchiMateCG.

References
1. European Commission: Enterprise Interoperability Research Roadmap Final Version (Version 4.0), 31 July 2006. https://cordis.europa.eu/docs/projects/cnect/5/001945/080/publishing/readmore/ei-roadmap-final-en.pdf. Last accessed 12 Oct 2021
2. Interop Vlab Website. http://interop-vlab.eu/. Last accessed 11 Oct 2021
3. Chen, D.: Enterprise interoperability framework. In: Proceedings of INTEROP'06, Open Interop Workshop on Enterprise Modelling and Ontologies for Interoperability (2006)
4. eHealth Interoperability Framework v1.1. https://developer.digitalhealth.gov.au/specifications/ehealth-foundations/ep-1020-2012 (2007). Last accessed 09 Nov 2021
5. Open Group ArchiMate. https://pubs.opengroup.org/architecture/archimate3-doc/. Last accessed 25 Sept 2021
6. Open Group ArchiMate Vision Stencils. https://publications.opengroup.org/i163. Last accessed 25 Sept 2021
7. PlantUML ArchiMate Diagram. https://plantuml.com/fr/archimate-diagram. Last accessed 25 Sept 2021
8. Figay, N.: Does it Makes Sense Using ArchiMate with XMind. https://www.linkedin.com/pulse/does-make-sense-using-archimate-xmind-dr-nicolas-figay/. Last accessed 25 Sept 2021
9. Archi Web Site. https://www.archimatetool.com/. Last accessed 25 Sept 2021


10. ArchiMate Modelio Plugin Web Site. https://www.modeliosoft.com/fr/downloads/plugins/archimate/archimate-modelio-4-0-x.html. Last accessed 25 Sept 2021
11. ISO/IEC 42010:2011: Systems and Software Engineering—Architecture Description. https://www.iso.org/standard/50508.html. Last accessed 27 Sept 2021
12. The Open Group ArchiMate Model Exchange File Format Web Site. https://www.opengroup.org/open-group-archimate-model-exchange-file-format. Last accessed 27 Sept 2021
13. Carlson, D.A.: Semantic Models for XML Schema with UML Tooling (2006)
14. Figay, N., Ghodous, P.: Extended hypermodel for interoperability within the virtual enterprise. In: SITIS 2009, pp. 393–400. Marrakesh, Morocco (2009). https://doi.org/10.1109/SITIS.2009.68
15. Bakhshadeh, M., Morais, A., Caetano, A., Borbinha, J.: Ontology transformation of enterprise architecture models. In: 5th Doctoral Conference on Computing, Electrical and Industrial Systems, pp. 55–62. Costa de Caparica, Portugal (2014). https://doi.org/10.1007/978-3-642-54734-8_7
16. Tchoffa, D., Figay, N., Ghodous, P., Exposito, E., Apedome, K.S., El Mhamedi, A.: Dynamic manufacturing network—from flat semantic graphs to composite models. Int. J. Prod. Res. 57(20), 6569–6578 (2019). https://doi.org/10.1080/00207543.2019.1570375
17. Franz, M., Lopes, C.T., Huck, G., Dong, Y., Sumer, O., Bader, G.D.: Cytoscape.js: a graph theory library for visualisation and analysis. Bioinformatics 32(2), 309–311 (2016). https://doi.org/10.1093/bioinformatics/btv557
18. Naranjo, D., Sánchez, M., Villalobos, J.: Towards a unified and modular approach for visual analysis of enterprise models. In: 2014 IEEE 18th International Enterprise Distributed Object Computing Conference Workshops and Demonstrations, pp. 77–86 (2014). https://doi.org/10.1109/EDOCW.2014.20
19. Seyffarth, T., Raschke, K.: BCIT: A Tool to Recommend Compliant Business Processes Based on Process Adaption. BPM (2020)
20. Tchoffa, D., Figay, N., Ghodous, P., Panetto, H., El Mhamedi, A.: Alignment of the product lifecycle management federated interoperability framework with internet of things and virtual manufacturing. Comput. Ind. 130, 103466 (2021). https://doi.org/10.1016/j.compind.2021.103466

Normalized City Analytics Based on a Semantic Interoperability Process Tiago F. Pereira, Nuno Soares, Mariana Pinto, Carlos E. Salgado, Ana Lima, and Ricardo J. Machado

Abstract The City Catalyst project aims to improve urban management, implementing the concept of sustainability and enabling semantic interoperability in Smart Cities. This paper therefore presents a contextualization of Smart Cities and an overview of the gaps that persist in terms of semantic interoperability. An introduction to ontologies is also provided. The NGSI-LD model is presented first at the level of its general information reference model and then, more specifically, at the ontology layer. Finally, scenarios were built taking into account the context of the project and, through these, tables were prepared with the terms that constitute the glossary of terminologies. The work culminates with the mapping of the NGSI-LD model using Neo4j, which will serve as a basis for Smart City systems to communicate with each other. Keywords Data model · Semantic data model · Ontologies · Semantic interoperability · Smart cities

T. F. Pereira (B) · N. Soares · M. Pinto · C. E. Salgado · A. Lima · R. J. Machado Center for Computer Graphics, Campus Azurém, Edf. 14, 4800-058 Guimarães, Portugal e-mail: [email protected] M. Pinto e-mail: [email protected] C. E. Salgado e-mail: [email protected] A. Lima e-mail: [email protected] R. J. Machado e-mail: [email protected] R. J. Machado University of Minho, 4800-058 Guimarães, Portugal © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 R. Rodríguez-Rodríguez et al. (eds.), Enterprise Interoperability X, Proceedings of the I-ESA Conferences 11, https://doi.org/10.1007/978-3-031-24771-2_26


1 Introduction Currently, there are still some gaps related to the lack of semantic interoperability, in line with what happens in many organizations in different sectors. Here we refer mainly to the excessive use of verbal communication to transmit information between departments, the excessive use of Excel spreadsheets as a way to store information about the management and planning of the organizational process, and the manual export of application logs that are used in other applications, among other aspects. This lack of interoperability results, in many cases, in delays in the work process and in a lack of communication/knowledge about what the other organizational areas are developing. All these interdependencies associated with the phenomenon of the new industrial revolution and Smart Cities will therefore have a strong impact on business processes, as they shift the perspective from a centralized to a decentralized paradigm. This will require the widespread adoption of machine and system interoperability, not only within the same production site or city neighborhood, but also across the entire ecosystem. This paper describes our effort to deliver an ontological model as a basis for the system to contain semantic and structured information, ensuring easy and fast information search. The next sections document this path. Starting by establishing some theoretical background, Sect. 2 justifies the need to address the problem, positions the work within the scope of the reference models and architectures for Smart Cities, and elicits and describes a set of potential data models to perform the assignment. Section 3 describes the studied data models and Sect. 4 addresses the definition of the ontological model based on the NGSI-LD framework. Subsequently, Sect. 5 demonstrates how the construction of the final ontological model occurs. Finally, Sect. 6 finishes with some observations on the research project as well as some suggestions for future research.

2 Project Scope Cities are currently faced with a very diverse set of challenges, resulting from the complexity of the urban experience of citizens, who are active in different segments. The ambition of the City Catalyst project [1] is, therefore, to address these challenges through research, development, and validation, in a real context, of innovative technological solutions and services that promote integrated, more efficient, and effective urban management and catalyze innovation and sustainable development through specific contributions to the implementation and interoperability of urban platforms. In the context of the present project, it is proposed to implement and validate an Auto Machine Learning (AutoML) platform, with different analytical capabilities (batch, stream, and AutoML algorithms), based on a data lake and different modules integrated as microservices, available as a service to be used in Smart Sustainable Cities
in the areas of governance/sustainability, energy, and mobility. This platform will be based on a data aggregator solution for urban platforms, to be developed, that will allow the integration, processing, and distribution of data from the platforms of the various stakeholders [1]. To guarantee the solution's replicability and scalability, it is necessary to choose and use an open-standard data model, to define and develop an ontological model, and to agree on a normalized city information model with which the existing platforms comply, providing context-sensitive access to their standardized data and facilitating the work of data analysis and artificial intelligence/machine learning (AI/ML) tools.

3 Data Models The infrastructures of a city are usually developed and delivered through autonomous vertical systems. As such, there is still no consensus about the language and architectural principles to be used in them. A unifying standard for Smart City reference architectures would ease real-world deployment, bringing notable advantages such as interoperability assurance, integration facilitation, reuse, and risk reduction, among others [1]. Although there is no single reference model that can be applied to a Smart City, several models have been developed to understand and conceptualize Smart Cities as a whole, aiming to establish their definition, scope, objectives, benefits, and architectures. (This and the two following paragraphs are adapted from [1]. © 2020, Springer. Reused with permission.) In a previous work developed in our research center, the authors [1] analyzed the existing Smart City modeling approaches, contributing to the definition of the components that must be part of such models and providing an aligned template for a Reference Model for Smart Cities. To align the identified Smart City dimensions, and drawing on the TOGAF architecture domains and the OSI reference model, the authors applied a grouping and categorization process that resulted in a set of dimensions organized by the categories human, institutional, city-component, and environmental, supported by a five-layer service-providing ICT framework, which composes the proposed reference model for smart cities presented in Fig. 1. Technology is presented at the bottom of the model, and each level provides services to the immediately preceding level. The exception, and the reason for being a vertical and all-encompassing level in opposition to the others, is ICT Lvl V, responsible for supporting and providing common services for all the remaining ICT levels. The non-technological dimensions are based directly on ICT Lvl A, consuming its services; this is the point of interaction of the city stakeholders with the entire ICT component. Having already defined the scope within the reference architectures for smart cities, that is, the level, detail, and interactions that the intended data model has to ensure, we proceeded to list some of the models that can fulfill the role of the Data and Knowledge Layer (ICT Lvl B) [1] in the reference architectures mentioned above.
Fig. 1 An aligned reference model for smart cities. Adapted from [1], © 2020, Springer. Reused with permission

Data models play a crucial role in defining the unified representation formats and semantics that will be used by applications both to consume and to publish data. oneM2M [2] is the global standards initiative for machine-to-machine communications. Three layers make up its model: the application layer, the common services layer, and the underlying network services layer. RESTful APIs allow manipulation of all oneM2M resources. oneM2M has established an Internet of Things (IoT) platform that is interoperable with a variety of networks and systems. The Open Mobile Alliance's Lightweight M2M [3] is a communication protocol designed to establish a fast client–server standard for machine-to-machine services. LwM2M is frequently used in conjunction with the Constrained Application Protocol (CoAP) and enables users to accomplish tasks, run diagnostics and applications, and manage remote IoT embedded devices. The LwM2M specification provides APIs for device setup, monitoring, communication, statistics, security, firmware update, and server provisioning. The Internet Protocol for Smart Objects (IPSO) Alliance intends to provide a standard design pattern and a data model that can facilitate high-level interoperability between IoT devices and software applications. The Open Mobile Alliance (OMA) united with the IPSO Alliance to establish OMA SpecWorks. IPSO Smart Objects from OMA SpecWorks offer a structure for defining device objects, which are collections of resources that a device gathers, saves, and makes available to other applications. Four components comprise the IPSO Smart Objects data model [4]: object representation, data types, operations, and content formats.


The Semantic Sensor Network (SSN) ontology [5] is used to define sensors and their observations, as well as procedures, features of interest, samples used, observed properties, and actuators. SSN follows a horizontal and vertical modularization design, with a lightweight but autonomous core ontology named Sensor, Observation, Sample, and Actuator (SOSA) for its basic classes and properties. Next Generation Service Interface–Linked Data (NGSI-LD) is a data model and application programming interface for publishing, querying, and subscribing to context data. The NGSI-LD protocol has been standardized by the European Telecommunications Standards Institute (ETSI). Context information is represented in the NGSI-LD data model [6] by entities with attributes and relationships. Based on the Resource Description Framework (RDF) and the semantic web framework, the semantics are clearly established. This framework employs JSON-LD, which enables each entity and relation to have a unique Internationalized Resource Identifier (IRI) as an identifier, hence enabling the export of the related data as linked data sets. Scorpio, an NGSI-LD-compliant context broker developed by NEC Laboratories Europe and NEC Technologies India, implements the whole NGSI-LD API. After studying these models, it was understood that NGSI-LD would be the most convenient data model for the project. NGSI-LD presents itself as the most suitable data model to support linked data, allowing better integration of different semantics and ontologies. On a practical level, this readiness regarding linked data will reveal its benefits when integrating the different data sources from the cities while standardizing them in a common idiom. Also, as it is a scalable solution, it will allow the connection of new data sources over time, which will arise as cities enable more domains to be smart. Finally, it is the one being adopted for developing the architecture of the Big Data analytics instance, which follows the SynchroniCity initiatives [7], which themselves developed the Minimal Interoperability Mechanisms from the NGSI and NGSI-LD standards. NGSI-LD bootstraps integration at different levels, which are of the utmost importance to the City Catalyst project.

4 The Ontological Model Definition The main objective of this section is to define the ontological model and the normalized data models for semantic interoperability, as well as the implementation and validation of the semantic engine for searching the data and services in the solution [8]. The NGSI-LD Information Model specifies the structure of context information that an NGSI-LD system must support. It is defined at two levels: the foundation classes, which correspond to the Core Meta-model, and the Cross-Domain Ontology. The former is a formal description of the "property graph" paradigm [9]. The latter is a set of generic, cross-domain classes designed to eliminate conflicting or redundant definitions of the same classes in each domain-specific ontology. Domain-specific ontologies or vocabularies can be created below these two levels. For example, the SAREF ontology ETSI TS 103 264 [10] can be translated to the NGSI-LD Information Model, making this Context Information Management API specification useful for smart home applications [11]. Built on the NGSI-LD Meta-model, the NGSI-LD Cross-Domain Ontology level defines a set of generic entities, relationships, and attributes that serve as common terms for domain-specific models, addressing the general temporal and structural description of physical systems across domains. It establishes a foundation for the treatment of temporality, mobility, system states, and system composition [6]. The NGSI-LD model makes it easy to create models of real-world entities, relationships, and properties; the model is therefore expressive enough to connect other existing models, using JSON-LD, a JSON-based serialization format for linked data. Each entity developed from the information model has an associated JSON-LD file and context. The main advantage of JSON-LD is that it offers the capability of expanding JSON terms to URIs, so that vocabularies can be used to define terms unambiguously [12] (Fig. 2).

Fig. 2 NGSI-LD information model. Adapted from [12], © 2019, ETSI. Reused with permission
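As an illustration of these concepts, the sketch below shows what a single NGSI-LD entity could look like in JSON-LD form, with one Property, one Relationship, and an @context expanding the short names to URIs; the entity type, URNs, and attribute names are assumptions made for this sketch, not project deliverables (only the ETSI core context URL is taken from the specification).

```typescript
// Illustrative NGSI-LD entity serialized as JSON-LD, written here as a TypeScript object.
const airQualityObserved = {
  id: 'urn:ngsi-ld:AirQualityObserved:guimaraes-001',
  type: 'AirQualityObserved',
  no2: {                      // Property: a measured value with its observation time
    type: 'Property',
    value: 22,
    observedAt: '2022-03-01T10:00:00Z',
  },
  refDevice: {                // Relationship: link to the entity of the producing sensor
    type: 'Relationship',
    object: 'urn:ngsi-ld:Device:lamp-post-017',
  },
  '@context': [
    // a project/domain context mapping 'no2', 'refDevice', ... to full URIs would be added here
    'https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld',
  ],
};
```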

5 Design of the Smart Cities Semantic Model The definition of the semantic model aims to support the semantic interoperability layer. A method for building semantic models is used, supporting a unified lexicon and glossary, as well as the relationships between their terms, for the different domains of the cities (Fig. 3). The integration layer will include the design and implementation of a semantic model. The semantic model (and its mapping rules) will be designed based on data extraction and the subsequent centralization of information in order to unify the terminology of the data catalogues. With this semantic model it will also be possible to identify the relationships between data from different sources and the mapping rules that will facilitate the definition of queries to an ontological database [13].


Fig. 3 The process used to develop the semantic component of smart cities

The development of scenarios is one of the instruments used for decreasing uncertainty. Four scenarios were defined for different cities: Porto, Vila Nova de Famalicão, Aveiro, and Guimarães. For each one, a description was produced that makes it possible to understand the focus of action in each activity area and to build the semantic model. To connect all the terms, a table was built with the terminologies present in the different use cases that were described, including the NGSI-LD model. Initially, we must focus on the analysis of the cities' internal processes and activities, as well as the actors that carry them out. During the Information Source Characterization phase, use cases based on the Unified Modeling Language (UML) were prioritized, as they allow for a holistic view of the city and, in collaboration with the demonstrator, for determining which regions should be prioritized for ontological implementation. During the Data Model Specification step, the analysis must then be targeted toward the existing data model or reference architecture within the domain of interest. If one is found, it is examined and documented at the same time as scenarios are created, to ensure that the model's ontological layer covers the project's intervention areas. If a stakeholder defines a requirement, a data model appropriate for the needs is created, which is then used as a starting point to create an ontological database schema populated according to the client's specifications and terminology. In the last phase, Ontological Mapping, we use the ontology information to integrate a visualization tool. With this tool, filters may be applied, the database can be edited, and new terms and relations can be added in a more user-friendly way [14].

5.1 Characterization of Information Sources In the "Characterization of Information Sources" phase, it is first necessary to characterize the actors as well as the tasks in which each one participates. This description helps to understand in detail the field of action as well as the details of the tasks that are performed. Afterwards, we proceed to the identification and construction of a glossary of terminologies. Each terminology glossary identifies a specific domain and, as such, should be constructed through an analysis in the context of the project, in order to detail the identified terms through a description. These terms, as previously mentioned, result from the intersection between the "Use Case" phase and the analysis of the environment and context of the city. This terminology analysis and cataloging should be carried out in as much detail as necessary, and it can be deepened or extended at any time in a future iteration of the method. For the cataloging of terminologies, a previously defined structure must be followed, as shown in Table 1. For a better understanding and contextualization of the identified terms, a table-like structure is further defined at this stage, where each term is characterized by an ID that identifies the term, the name of the term, a detailed description of the identified term, and a Source property that identifies the sources of the term (e.g., City of Guimarães, City of Aveiro, City of Porto, City of Famalicão, or the NGSI-LD data model). Synonyms for each terminology and its dependencies are also identified.

Table 1 General terminology identification (excerpt)

ID    Terminology               UC
TMI1  Citizen                   Guimarães
TMI2  Sensor                    Guimarães
TMI3  Sustainability indicator  Guimarães
TMI4  Parameter                 Aveiro
TMI5  Smart lamp posts          Aveiro
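A minimal sketch of one record of this catalogue is given below; the field names are our own rendering of the structure just described (ID, name, description, source, synonyms, dependencies), and the values are illustrative, taken loosely from Table 1.

```typescript
// Sketch of a terminology catalogue record; field and value names are illustrative assumptions.
interface TerminologyRecord {
  id: string;             // e.g. "TMI2"
  terminology: string;    // the name of the term
  description: string;    // detailed description produced during cataloguing
  source: string;         // e.g. "City of Guimarães" or "NGSI-LD data model"
  synonyms: string[];     // alternative names found across use cases
  dependencies: string[]; // ids of related terminologies
}

const sensor: TerminologyRecord = {
  id: 'TMI2',
  terminology: 'Sensor',
  description: 'Device deployed in the city that produces observations', // assumed wording
  source: 'City of Guimarães',
  synonyms: ['Device'],    // assumed synonym
  dependencies: ['TMI5'],  // e.g. sensors hosted on smart lamp posts (assumed)
};
```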

5.2 Data Model Specification In this data model specification phase, the previously identified terminologies are used; the main objective of this phase of the method is to map the terminologies through a UML class diagram. When mapping the terminologies, we must associate with each of them the corresponding synonyms that were listed in the previous phase. Thus, to elaborate the mapping, we follow the design rules of a class diagram composed of: • Class: the class itself; this element is used when we want to visually represent the class in the diagram. • Association: connector without tips—a type of relationship used between classes. It is applicable to classes that are independent (they live without depending on each other), but that at some point in the ontology may have some conceptual relationship.
• Generalization: inheritance—connector with an arrow at one end—a type of relationship where the generalized class (at the "arrowhead" of the connector) provides resources to the specialized class (the heir). Everything the parent (generalized) class has, the child (specialized) class will have as well; this is the concept of property inheritance. • Composition: connector with a filled "diamond" at the tip—a type of relationship where the composite class depends on other classes to exist. For example, the "Car" class has a composition with the "Motor" class; without the "Motor" class, the "Car" class cannot functionally exist. • Aggregation: connector with a hollow "diamond" at the tip—a type of relationship where the aggregate class uses other classes but can exist without them. For example, the "Car" class has an aggregation with the "Roof" class; without the "Roof", the "Car" class can still exist. The difference between these relationship kinds is illustrated in the sketch after Fig. 4. As mentioned before, taking into account the mapping of the class diagram, we now focus on the entities that compose it. Thus, an analysis and description of each of the previously identified classes is performed to allow the definition of the schema for the semantic database (Fig. 4).

Fig. 4 Mapping of the class diagram
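The following minimal sketch, reusing the Car/Motor/Roof example above, illustrates in code how generalization, composition, and aggregation differ; it is an illustration of the UML semantics, not part of the project's models.

```typescript
// Illustration of the three relationship kinds when read as code.
class Vehicle {}
class Motor {}
class Roof {}

// Generalization (inheritance): Car receives everything the generalized Vehicle class offers.
class Car extends Vehicle {
  // Composition: the Car cannot functionally exist without its Motor,
  // so the part is created and owned by the whole.
  private motor: Motor = new Motor();

  // Aggregation: a Roof may be attached, but the Car exists without it.
  constructor(public roof?: Roof) {
    super();
  }
}

// Association: two independent classes that may simply reference each other.
class Driver {
  drives?: Car;
}
```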


5.3 Ontological Mapping Taking into account all the entities, attributes, and methods identified in the UML class diagram, we modeled in Neo4j the ontological database schema to be used by the CityCatalyst system. This schema translates the semantic model of the ontological component. Each node refers to a terminology previously identified in the terminology glossary, and its color is associated with a label, which in this case indicates the respective hierarchical level. In the center, the project name "CityCatalyst" is shown in red; in green, the municipalities where the project will be applied; in blue, the NGSI-LD data model; and, finally, in orange, all the terminologies raised in the course of the project and associated with smart cities. In addition to all this information, there are also relationships between the identified nodes. These relationships are intended to represent the way each of them communicates, as well as the dependencies between them. Each of the nodes represented here contains unique information that characterizes it (Fig. 5). This information can be represented by attributes with a parameter type, which can be, for example, string, boolean, date, etc. Thus, it is also possible to unify the form and type of data collection for each of the attributes. This information allows us to store the respective data of each of the entities. It is in this way, based on all this information and these relationships, that interoperability between the different applications becomes viable.

Fig. 5 Schema for the semantic database mapped in Neo4J
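As an illustration of how such a schema could be populated, the hedged sketch below uses the Neo4j JavaScript driver to create a few of the nodes and relationships described above; labels, relationship types, property names, and connection details are illustrative assumptions, not the project's actual database.

```typescript
// Sketch: seeding part of a CityCatalyst-like schema with the Neo4j driver.
import neo4j from 'neo4j-driver';

const driver = neo4j.driver('bolt://localhost:7687', neo4j.auth.basic('neo4j', 'password'));

async function seedSchema(): Promise<void> {
  const session = driver.session();
  try {
    await session.run(`
      MERGE (p:Project {name: 'CityCatalyst'})
      MERGE (c:Municipality {name: 'Guimarães'})
      MERGE (m:DataModel {name: 'NGSI-LD'})
      MERGE (t:Terminology {id: 'TMI2', name: 'Sensor', valueType: 'string'})
      MERGE (c)-[:PART_OF]->(p)
      MERGE (t)-[:RAISED_IN]->(c)
      MERGE (t)-[:MAPPED_TO]->(m)
    `);
  } finally {
    await session.close();
    await driver.close();
  }
}

seedSchema().catch(console.error);
```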


6 Conclusion The new cycle brought by the so-called Smart Cities will be characterized by greater autonomy, intelligence in the activities or devices responsible for decision making, integration of all external agents that interact in the same ecosystem, integration with all payment and commercial transaction services, and transparency in the traceability of processes. In order to effectively confront the difficulties of today's competition, identify new business prospects, and deliver better customer service, firms must practice semantic interoperability. Taking into account that transport is responsible for 20% of a city's energy consumption, investments in intelligent transport systems show promising results in terms of gas emission reduction and energy savings, and, for this reason, several studies in the area of smart mobility are related to sustainable thinking. Thus, the objective throughout this project is to interconnect the cities, standardizing the terminologies used in each of them based on the semantic model presented here. This will allow data to be stored in a data lake, centralizing the large volumes of data produced by the different municipalities. Portugal will be able to create a milestone in terms of standardization by having four cities compatible with the standard through several SDKs developed and made openly available. Acknowledgements This work was carried out within the project "CityCatalyst", reference POCI/LISBOA-01-0247-FEDER-046119, co-funded by Fundo Europeu de Desenvolvimento Regional (FEDER), through Portugal 2020 (P2020).

References
1. Soares, N., Monteiro, P., Duarte, F.J., Machado, R.J.: A unified reference model for smart cities. In: Santos, H., Pereira, G., Budde, M., Lopes, S., Nikolic, P. (eds.) Science and Technologies for Smart Cities. SmartCity 360 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 323, pp. 162–180. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51005-3_16
2. Wu, C.-W., Lin, F.J., Wang, C.-H., Chang, N.: OneM2M-based IoT protocol integration. In: 2017 IEEE Conference on Standards for Communications and Networking, pp. 252–257. IEEE, Helsinki (2017). https://doi.org/10.1109/CSCN.2017.8088630
3. Rao, S., Chendanda, D., Deshpande, C., Lakkundi, V.: Implementing LWM2M in constrained IoT devices. In: 2015 IEEE Conference on Wireless Sensors, pp. 52–57. IEEE, Melaka (2015). https://doi.org/10.1109/ICWISE.2015.7380353
4. Jimenez, J., Koster, M., Tschofenig, H.: IPSO Smart Objects. https://omaspecworks.org/develop-with-oma-specworks/ipso-smart-objects/. Last accessed 13 Nov 2021
5. W3C: Semantic Sensor Network Ontology. https://www.w3.org/TR/vocab-ssn/. Last accessed 13 Dec 2021
6. Privat, G.: Guidelines for Modelling with NGSI-LD. ETSI White Paper, vol. 42 (2021)
7. Synchronicity. https://synchronicity-iot.eu/. Last accessed 11 Dec 2021


8. SI I&DT Copromoção: Sistema de Incentivos à Investigação e Desenvolvimento Tecnológico, vol. 12 (2015)
9. Robinson, I., Webber, J., Eifrem, E.: Graph Databases: New Opportunities for Connected Data. O'Reilly, Farnham (2015)
10. ETSI: TS 103 264—V3.1.1—SmartM2M; Smart Applications; Reference Ontology and oneM2M Mapping, vol. 1, pp. 1–25 (2020)
11. Bees, D., Frost, L., Bauer, M., Fisher, M., Li, W.: NGSI-LD API: for Context Information Management. ETSI, Antipolis (2019)
12. ETSI: GS CIM 009—V1.1.1—Context Information Management (CIM) NGSI-LD API, vol. 1, pp. 1–159 (2019)
13. Rocha, B.: LGeoSIM: um Modelo Semântico de Dados para Cidades Inteligentes. Universidade Federal do Rio Grande do Norte, Natal (2020)
14. Pereira, T.F., Matta, A., Mayea, C.M., Pereira, F., Monroy, N., Jorge, J., Rosa, T., Salgado, C.E., Lima, A., Machado, R.J., Magalhães, L., Adão, T., Guevara López, M.Á., Gonzalez, D.G.: A web-based voice interaction framework proposal for enhancing information systems user experience. Proc. Comput. Sci. 196, 235–244 (2021). https://doi.org/10.1016/j.procs.2021.12.010

Author Index

A Abdi, Mustapha Kamel, 49 Ahmad, Adeel, 49 Alarcón, Faustino, 219, 229 Alfaro-Saiz, Juan-José, 87, 171 Apedome, Kouami Seli, 289 Ariño, María José Núñez, 37

B Basson, Henri, 49 Bouneffa, Mourad, 49 Boza, Andrés, 63, 195

C Carlin, Hazel M., 125 Chauvat, Nicolas, 243 Cherif, Chahira, 49 Costa-Soria, Cristobal, 99 Cubero, Daniel, 219, 229

D De Nicola, Antonio, 267 Deshmukh, Rohit A., 37 Doumeingts, Guy, 99 Dyadin, Anton A., 279

E El Mhamedi, Abderrahman, 289 Escudero-Santana, Alejandro, 25, 159 Esteso, Ana, 63, 195

F Fauconnet, Claude, 243 Ferreira, José, 207 Figay, Nicolas, 289 Fraile, Francisco, 219, 229 Franco, José, 207

G Gering, Patrick, 3 Ghodous, Parisa, 289 Gil-Gomez, Hermenegildo, 111 Gomez-Gasquet, Pedro, 195 Goodall, Paul A., 125 Guadix, José, 25

H Haidar, Hezam, 99 Hinde, Chris, 135

I Ioannidis, Dimosthenis, 37 Ivezic, Nenad, 255

J Jardim-Gonçalves, Ricardo, 207 Jelisic, Elena, 255 Jochem, Roland, 13

K Keraron, Yves, 243 Knothe, Thomas, 3



Kulvatunyou, Boonserm, 255

L Lapshin, Vyacheslav S., 279 Leclerc, Jean-Charles, 243 Leon, Ramona-Diana, 87, 171 Lima, Ana, 301 Lorenzo-Espejo, Antonio, 25, 159

M Machado, Ricardo J., 301 Maiza, Mohammed, 49 Marde, Sanjana Kishor, 13 Marjanovic, Zoran, 255 Mateo-Casalí, Miguel Á., 219, 229 Matsuda, Michiko, 75 Mayer, Jan, 13 Melesse, Tsega Y., 147 Muñoz-Díaz, María-Luisa, 25, 159 Muñuzuri, Jesús, 159 Mula, Josefa, 183

N Nieman, Scott, 255 Nishi, Tatsushi, 75 Nizamis, Alexandros, 37 Noguera, Manuel, 279 Nowak-Meitinger, Anna M., 13

O Oltra-Badenes, Raul, 111 Oltra-Gutierrez, Juan Vicente, 111 Ortiz Bas, Angel, 63, 99

P Pasquale Di, Valentina, 147 Perales, David Pérez, 195 Pereira, Tiago F., 301 Pérez, Luis Miguel, 111

Pinto, Mariana, 301 Poler, Raúl, 183

R Riemma, Stefano, 147 Rodríguez, María Ángeles, 63 Rodríguez-Rodríguez, Raúl, 87, 171 Rogozov, Yuri I., 279

S Salgado, Carlos E., 301 Sassanelli, Claudio, 99 Schneider, Alexander, 37 Schnieder, Maren, 135 Scholz, Julia-Anne, 3 Serrano-Ruiz, Julio C., 183 Silega, Nemury, 279 Soares, Nuno, 301

T Tchoffa, David, 289 Tzovaras, Dimitrios, 37

V Vafeiadis, Thanasis, 37 Valencia, Fernando Gigante, 37 Verdecho, María-José, 171 Villani, Maria Luisa, 267

W West, Andrew A., 125, 135

Y Young, Robert I. M., 125

Z Zelm, Martin, 243