Artificial Intelligence and Industrial Applications: Smart Operation Management [1st ed.] 9783030511852, 9783030511869

This book gathers the refereed proceedings of the Artificial Intelligence and Industrial Applications conference (A2IA’2020), the first edition of an annual international conference organized by ENSAM-Meknes at Moulay Ismail University, Morocco.


English Pages XX, 442 [445] Year 2021


Table of contents:
Front Matter ....Pages i-xx
A Fuzzy Ontology Based Approach to Support Product Eco-Design (Chaimae Abadi, Imad Manssour, Asmae Abadi)....Pages 1-13
A Genetic-Based SVM Approach for Quality Data Classification (Wahb Zouhri, Hamideh Rostami, Lazhar Homri, Jean-Yves Dantan)....Pages 15-31
Towards a Platform to Implement an Intelligent and Predictive Maintenance in the Context of Industry 4.0 (El Mehdi Bourezza, Ahmed Mousrij)....Pages 33-44
Towards a Prediction Analysis in an Industrial Context (Ilham Battas, Ridouane Oulhiq, Hicham Behja, Laurent Deshayes)....Pages 45-57
Methodology for Implementation of Industry 4.0 Technologies in Supply Chain for SMEs (Hafsa El-kaime, Saad Lissane Elhaq)....Pages 59-76
A Deep Reinforcement Learning (DRL) Decision Model for Heating Process Parameters Identification in Automotive Glass Manufacturing (Choumicha El Mazgualdi, Tawfik Masrour, Ibtissam El Hassani, Abdelmoula Khdoudi)....Pages 77-87
Analytic Hierarchy Process (AHP) for Supply Chain 4.0 Risks Management (Kamar Zekhnini, Anass Cherrafi, Imane Bouhaddou, Youssef Benghabrit)....Pages 89-102
SmartDFRelevance: A Holonic Agent Based System for Engineering Industrial Projects in Concurrent Engineering Context (Abla Chaouni Benabdellah, Imane Bouhaddou, Asmaa Benghabrit)....Pages 103-123
A Cyber-Physical Warehouse Management System Architecture in an Industry 4.0 Context (Mariam Moufaddal, Asmaa Benghabrit, Imane Bouhaddou)....Pages 125-148
PLM and Smart Technologies for Product and Supply Chain Design (Oulfa Labbi, Abdeslam Ahmadi)....Pages 149-160
Production Systems Simulation Considering Non-productive Times and Human Factors (Ismail Taleb, Alain Etienne, Ali Siadat)....Pages 161-172
Distributed and Embedded System to Control Traffic Collision Based on Artificial Intelligence (Tarik Hajji, Tawfik Masrour, Mohammed Ouazzani Jamil, Zakaria Iathriouan, Sanaa Faquir, Elmiloud Jaara)....Pages 173-183
The Emergence and Decision Support in Complex System with Fuzzy Logic Control (Mohammed Chennoufi, Fatima Bendella)....Pages 185-198
Markov Decision Processes with Discounted Costs over a Finite Horizon: Action Elimination (Abdellatif Semmouri, Mostafa Jourhmane)....Pages 199-213
Robust Adaptive Fuzzy Path Tracking Control of Differential Wheeled Mobile Robot (Brahim Moudoud, Hicham Aissaoui, Mohammed Diany)....Pages 215-226
Deep Learning Approach for Automated Guided Vehicle System (Mohamed Rhazzaf, Tawfik Masrour)....Pages 227-237
Path Planning Using Particle Swarm Optimization and Fuzzy Logic (Ahmed Oultiligh, Hassan Ayad, Abdeljalil Elkari, Mostafa Mjahed)....Pages 239-251
Prediction of Robot Localization States Using Hidden Markov Models (Jaouad Boudnaya, Amine Haytoumi, Omar Eddayer, Abdelhak Mkhida)....Pages 253-262
A New Approach for Multi-agent Reinforcement Learning (Elmehdi Amhraoui, Tawfik Masrour)....Pages 263-275
Recommender System for Most Relevant K Pick-Up Points (Ayoub Berdeddouch, Ali Yahyaouy, Younès Bennani, Rosanna Verde)....Pages 277-289
Feature Detection and Tracking for Visual Effects: Augmented Reality and Video Stabilization (Houssam Halmaoui, Abdelkrim Haqiq)....Pages 291-311
Spectral Image Recognition Using Artificial Dynamic Neural Network in Information Resonance Mode (Ivan Peleshchak, Roman Peleshchak, Vasyl Lytvyn, Jan Kopka, Mariusz Wrzesien, Janusz Korniak et al.)....Pages 313-322
U-Net Based Model for Obfuscated Human Faces Reconstruction (Elmoukhtar Zemmouri, Youssef Yakoubi, Mohammed Douimi, Adil Saadi)....Pages 323-337
A Machine Learning Assistant for Choosing Operators and Tuning Their Parameters in Image Processing Tasks (Issam Qaffou)....Pages 339-350
Convergence and Parameters Setting of Continuous Hopfield Neural Networks Applied to Image Restoration Problem (Joudar Nour-eddine, Ramchoun Hassan, Zakariae En-Naimani, Ettaouil Mohamed)....Pages 351-369
The IBN BATTOUTA Air Traffic Control Corpus with Real Life ADS-B and METAR Data (Kasttet Mohammed Saïd, Lyhyaoui Abdelouahid)....Pages 371-384
Weed Recognition System for Low-Land Rice Precision Farming Using Deep Learning Approach (Olayemi Mikail Olaniyi, Emmanuel Daniya, Ibrahim Mohammed Abdullahi, Jibril Abdullahi Bala, Esther Ayobami Olanrewaju)....Pages 385-402
A Comparative Study Between Mixture and Stepwise Regression to Model the Parameters of the Composting Process (Echarrafi Khadija, Ben Abbou Mohamed, Taleb Mustapha, Rais Zakia, Ibtissam El Hassani, El Haji Mounia)....Pages 403-413
Deep Learning Based Sponge Gourd Diseases Recognition for Commercial Cultivation in Bangladesh (Tahmina Tashrif Mim, Md. Helal Sheikh, Sadia Chowdhury, Roksana Akter, Md. Abbas Ali Khan, Md. Tarek Habib)....Pages 415-427
Exploitation of Vegetation Indices and Random Forest for Cartography of Rosemary Cover: Application to Gourrama Region, Morocco (Hassan Chafik, Mohamed Berrada)....Pages 429-440
Back Matter ....Pages 441-442

Advances in Intelligent Systems and Computing 1193

Tawfik Masrour Anass Cherrafi Ibtissam El Hassani   Editors

Artificial Intelligence and Industrial Applications Smart Operation Management

Advances in Intelligent Systems and Computing Volume 1193

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/11156

Tawfik Masrour · Anass Cherrafi · Ibtissam El Hassani



Editors

Artificial Intelligence and Industrial Applications Smart Operation Management


Editors Tawfik Masrour Laboratory of Mathematical Modeling, Simulation and Smart Systems (L2M3S), Department of Mathematics and Computer Science, Artificial Intelligence for Engineering Sciences Team (IASI) National Graduate School for Arts and Crafts, My Ismail University Meknes, Morocco

Anass Cherrafi Laboratory of Mathematical Modeling, Simulation and Smart Systems (L2M3S), Department of Industrial and Manufacturing Engineering, Modeling and Optimization of Industrial Systems Team (MOSI) National Graduate School for Arts and Crafts, My Ismail University Meknes, Morocco

Ibtissam El Hassani Laboratory of Mathematical Modeling, Simulation and Smart Systems (L2M3S), Department of Industrial and Manufacturing Engineering, Artificial Intelligence for Engineering Sciences Team (IASI) National Graduate School for Arts and Crafts, My Ismail University Meknes, Morocco

ISSN 2194-5357 ISSN 2194-5365 (electronic) Advances in Intelligent Systems and Computing ISBN 978-3-030-51185-2 ISBN 978-3-030-51186-9 (eBook) https://doi.org/10.1007/978-3-030-51186-9 © Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Organization

General Chair
Tawfik Masrour, Department of Mathematics and Computer Science, Artificial Intelligence for Engineering Sciences Team (IASI), Laboratory of Mathematical Modeling, Simulation and Smart Systems (L2M3S), ENSAM, My Ismail University, 50500 Meknes, Morocco. [email protected] [email protected]

Co-chair
Vincenzo Piuri, Department of Computer Science, University of Milan, via Celoria 18, 20133 Milano (MI), Italy

Keynote Speakers
Amal El Fallah Seghrouchni, Sorbonne University, Paris, France
Mawa Chafii, ENSEA, CY Paris University, Paris, France
Abdellatif Benabdellah, University of Le Havre, France
Ali Siadat, Arts et Métiers Paris Tech Metz, France
Jiju Antony, Heriot-Watt University, Edinburgh, Scotland
Andrew Kusiak, University of Iowa, USA


TPC Chairs
Alexandre Dolgui, France
Jose Arturo Garza-Reyes, UK
Abby Ghobadian, UK
Kannan Govindan, Denmark
Janusz Kacprzyk, Poland
Vikas Kumar, UK
Ali Siadat, France

International Scientific Committee Aalaoui Zinab, Morocco Abawajy Jemal H., Australia Aboulaich Rajae, Morocco Aghezzaf El-Houssaine, Belgium Ahmadi Abdeslam, Morocco Ait Moussa Abdelaziz, Morocco Akhrif Iatimad, Morocco Aksasse Brahim, Morocco Al-Mubaid Hisham, USA Alali Abdelhakim, Morocco Alami Hassani Aicha, Morocco Ali Ahad, USA Ali Siadat, France Allahverdi Ali, Kuwait Aly Ayman A., Saudi Arabia Arbaoui abdelaziz, Morocco Azizi Abdelmalek, Morocco Azzeddine MAZROUI, Morocco Babai Mohamed Zied, France Badie Kambiz, Iran Balas Valentina Emilia, Romania Bansal Jagdish Chand, India Batouche Mohamed Chawki, Saudi Arabia Behja hicham, Morocco Belhadi Amine, Morocco Ben Abdllah Mohammed, Morocco Benabbou Rajaa, Morocco Benaissa Mounir, Tunisia Benghabrit Asmaa, Morocco Benoussa Rachid, Morocco


Berrada Mohamed, Morocco Bouhaddou Imane, Morocco Brouri Adil, Morocco Buccafurri Francesco, Italy Carrabs Francesco, Italy Castillo Oscar, Mexico Cerulli Raffaele, Italy Chaouni Benabdellah Abla, Morocco Chaouni Benabdellah Naoual, Morocco Charkaoui Abdelkabir, Morocco Chbihi Louhdi Mohammed Reda, Morocco Cherrafi Anass, Morocco Ciaramella Angelo, Italy Ciasullo Maria Vincenza, Italy D’Ambrosio Ciriaco, Italy Daoudi El-Mostafa, Morocco De Mello Rodrigo Fernandes, Brazil Deep Kusum, India Dolgui Alexandre, France Ducange Pietro, Italy El Akili Charaf, Morocco El Haddadi Anass, Morocco El Hammoumi Mohammed, Morocco Ibtissam El Hassani, Morocco El Jai Mustapha, Morocco El Jasouli Sidi Yasser, Belgium El Mazzroui Abb Elaziz, Morocco El Mghouchi Youness, Morocco El Ossmani Mustapha, Morocco Elbaz Jamal, Morocco Elfezazi Said, Morocco Elmazroui Azz Addin, Morocco ES-SBAI Najia, Morocco Ettifouri El Hassane, Morocco Ezziyyani Mostafa, Morocco Faquir Sanaa, Morocco Fassi Fihri Abdelkader Fiore Ugo, Italy Fouad Mohammed Amine, Morocco Gabli mohamed, Morocco Gaga Ahmed, Morocco Gao Xiao-Zhi, Finland Garza-Reyes Jose Arturo, UK Ghobadian Abby, UK Giuseppe Stecca, Italy


Govindan Kannan, Denmark Grabot Bernard, France Hajji Tarik, Morocco Hamzane Ibrahim, Morocco Harchli Fidaa, Morocco Hasnaoui Moulay Lahcen, Morocco Herrera-Viedma Enrique, Spain Itahriouan Zakaria, Morocco Jaara El Miloud, Morocco Jaouad Kharbach, Morocco Jawab Fouad, Morocco Kacprzyk Janusz, Poland Kaya Sid Ali Kamel, Morocco Khadija Bouzaachane, Morocco Khireddine Mohamed Salah, Morocco Khrbach Jawad, Morocco Kodad Mohssin, Morocco Krause Paul, UK Kumar Vikas, UK Laaroussi Ahmed, Morocco Lagrioui Ahmed, Morocco Lasri Larbi, Morocco Lazraq Aziz, Morocco Lebbar Maria, Morocco Leung Henry, Canada Manssouri Imad, Morocco Marcelloni Francesco, Italy Massoud Hassania, Morocco Medarhri Ibtissam, Morocco Mkhida Abdelhak, Morocco Mohiuddin Muhammad, Canada Moumen Aniss, Morocco Moussi Mohamed, Morocco Najib khalid, Morocco Nee Andrew Y. C., Morocco Nfaoui Elhabib, Morocco Nguyen Ngoc Thanh, Poland Nouari Mohammed, France Noureddine Boutammachte, Morocco Novák Vilém, Czech Ouazzani Jamil, Morocco Ouerdi Noura, Morocco Oztemel Ercan, Turkey Palmieri Francesco, Italy Pesch Erwin, Germany


Pincemin Sandrine, France Rachidi Youssef, Morocco Rahmani Amir Masoud, Iran Raiconi Andrea, Italy Rocha-Lona Luis, Mexico Saadi Adil, Morocco Sabor Jalal, Morocco Sachenko Anatoliy, Ukraine Sael Nawal, Morocco Saidou Noureddine, Morocco Sekkat Souhail, Morocco Senhaji Salwa, Morocco Serrhini Simohammed, Morocco Sheta Alaa, USA Siarry Patrick, France Soulhi Aziz, Morocco Staiano Antonino, Italy Tahiri Ahmed, Morocco Tarnowska Katarzyna, USA Tyshchenko Oleksii K., Czech Tzung-Pei Hong, Taiwan Zemmouri Elmoukhtar, Morocco Zéraï Mourad, Tunisia

Local Organizing Committee
Abou El Majd Badr, Med V University
Ahmadi Abdessalam, ENSAM
Benabbou Rajaa, ENSEM
Benghabrit Asmaa, ENSMR
Benghabrit Youssef, ENSAM
Bouayad Aboubakr, ENSAM
Bouhaddou Imane, ENSAM
Chaira Abdellatif, Fac Sci, Meknes
Cherrafi Anass, ENSAM
Ibtissam El Hassani, ENSAM
Hajji Tarij, ENSAM
Masrour Tawfik, ENSAM
Najib Khalid, ENSMR
Saadi Adil, ENSAM
Sekkat Souhail, ENSAM
Zemmouri El Moukhtar, ENSAM


Publication Chairs
Badr Abou El Majd, Morocco
Ibtissam El Hassani, Morocco
Ercan Oztemel, Turkey

Poster Chairs
Benabbou Rajaa, Morocco
Souhail Sekkat, Morocco

Registration Chairs
Abdessalam Ahmadi, Morocco
Adil Saadi, Morocco
Jalal Sabor, Morocco

Web Chairs
Ibtissam El Hassani, Morocco
Zemmouri El Moukhtar, Morocco

Public Relations Chairs
Abla Benabdellah Chaouni, Morocco
Asmaa Benghabrit, Morocco
Youssef Benghabrit, Morocco
Imane Bouhaddou, Morocco
Said Ettaqi, Morocco

Industrial Session Chairs
Anass Cherrafi, Morocco
Souhail Sekkat, Morocco


Ph.D. Organizing Committee
Amhraoui ElMehdi, Morocco
Benabdellah Chaouni Abla, Morocco
Eddamiri Siham, Morocco
El Mazgualdi Choumicha, Morocco
El Mekaoui Fatima Zahra, Morocco
Fattah Zakaria, Morocco
Hadi Hajar, Morocco
Jounaidi Ilyass, Morocco
Khdoudi Abdelmoula, Morocco
Moufaddal Meryam, Morocco
Raaidi Safaa, Morocco
Rhazzaf Mohamed, Morocco
Zekhnini Kamar, Morocco


Preface

It is fairly obvious that our world is uncertain, complex, and ambiguous. This is true today more than ever, and tomorrow will be more challenging still. Artificial Intelligence (AI) can improve responses to major challenges facing our communities, from economic and industrial development to health care and disease containment. Nevertheless, Artificial Intelligence is still a growing and improving technology, and there is a need for more creative studies and research led by both academics and practitioners. Artificial Intelligence and Industrial Applications—A2IA’2020, the first edition of an annual international conference organized by the ENSAM—Meknes at Moulay Ismail University, intends to contribute to this common great goal. It aims to offer a platform for experts, researchers, academics, and industrial practitioners working in artificial intelligence and its different applications to discuss problems and solutions, concepts and theories, and to map out directions for future research. The connections between institutions and individuals working in this field have to keep growing, and this must have a positive impact on the productivity and effectiveness of research. The main topics of the conference were as follows:

• Smart Operation Management
• Artificial Intelligence: Algorithms and Techniques
• Artificial Intelligence for Information and System Security in Industry
• Artificial Intelligence for Energy
• Artificial Intelligence for Agriculture
• Artificial Intelligence for Healthcare
• Other Applications of Artificial Intelligence

For the A2IA’2020 conference proceedings, 141 papers were received from around the world. A total of 58 papers were selected for presentation and publication. In order to maintain a high level of quality, a blind peer review process was performed by a large international panel of qualified experts in the conference topic areas. Each submission received at least two reviews, and several received up to five. The papers were evaluated on their relevance to A2IA’2020 tracks and topics, scientific correctness, and clarity of presentation. The papers were organized in two parts:

• Artificial Intelligence and Industrial Applications: Smart Operation Management (Volume 1), in Advances in Intelligent Systems and Computing.
• Artificial Intelligence and Industrial Applications: Artificial Intelligence Techniques for Cyber-Physical, Digital Twin Systems and Engineering Applications (Volume 2), in Lecture Notes in Networks and Systems.

We hope that our readers will discover valuable new ideas and insights. Lastly, we would like to express our thanks to all contributors to this book, including those whose papers were not included. We would also like to extend our thanks to all members of the Program Committee and reviewers, who helped us with their expertise and valuable time. We are tremendously grateful for the professional and organizational support from Moulay Ismail University. Finally, our heartfelt thanks go especially to Springer Nature.

Meknes, Morocco

Tawfik Masrour Anass Cherrafi Ibtissam El Hassani

Contents

A Fuzzy Ontology Based Approach to Support Product Eco-Design . . . 1
Chaimae Abadi, Imad Manssour, and Asmae Abadi

A Genetic-Based SVM Approach for Quality Data Classification . . . 15
Wahb Zouhri, Hamideh Rostami, Lazhar Homri, and Jean-Yves Dantan

Towards a Platform to Implement an Intelligent and Predictive Maintenance in the Context of Industry 4.0 . . . 33
El Mehdi Bourezza and Ahmed Mousrij

Towards a Prediction Analysis in an Industrial Context . . . 45
Ilham Battas, Ridouane Oulhiq, Hicham Behja, and Laurent Deshayes

Methodology for Implementation of Industry 4.0 Technologies in Supply Chain for SMEs . . . 59
Hafsa El-kaime and Saad Lissane Elhaq

A Deep Reinforcement Learning (DRL) Decision Model for Heating Process Parameters Identification in Automotive Glass Manufacturing . . . 77
Choumicha El Mazgualdi, Tawfik Masrour, Ibtissam El Hassani, and Abdelmoula Khdoudi

Analytic Hierarchy Process (AHP) for Supply Chain 4.0 Risks Management . . . 89
Kamar Zekhnini, Anass Cherrafi, Imane Bouhaddou, and Youssef Benghabrit

SmartDFRelevance: A Holonic Agent Based System for Engineering Industrial Projects in Concurrent Engineering Context . . . 103
Abla Chaouni Benabdellah, Imane Bouhaddou, and Asmaa Benghabrit

A Cyber-Physical Warehouse Management System Architecture in an Industry 4.0 Context . . . 125
Mariam Moufaddal, Asmaa Benghabrit, and Imane Bouhaddou

PLM and Smart Technologies for Product and Supply Chain Design . . . 149
Oulfa Labbi and Abdeslam Ahmadi

Production Systems Simulation Considering Non-productive Times and Human Factors . . . 161
Ismail Taleb, Alain Etienne, and Ali Siadat

Distributed and Embedded System to Control Traffic Collision Based on Artificial Intelligence . . . 173
Tarik Hajji, Tawfik Masrour, Mohammed Ouazzani Jamil, Zakaria Iathriouan, Sanaa Faquir, and Elmiloud Jaara

The Emergence and Decision Support in Complex System with Fuzzy Logic Control . . . 185
Mohammed Chennoufi and Fatima Bendella

Markov Decision Processes with Discounted Costs over a Finite Horizon: Action Elimination . . . 199
Abdellatif Semmouri and Mostafa Jourhmane

Robust Adaptive Fuzzy Path Tracking Control of Differential Wheeled Mobile Robot . . . 215
Brahim Moudoud, Hicham Aissaoui, and Mohammed Diany

Deep Learning Approach for Automated Guided Vehicle System . . . 227
Mohamed Rhazzaf and Tawfik Masrour

Path Planning Using Particle Swarm Optimization and Fuzzy Logic . . . 239
Ahmed Oultiligh, Hassan Ayad, Abdeljalil Elkari, and Mostafa Mjahed

Prediction of Robot Localization States Using Hidden Markov Models . . . 253
Jaouad Boudnaya, Amine Haytoumi, Omar Eddayer, and Abdelhak Mkhida

A New Approach for Multi-agent Reinforcement Learning . . . 263
Elmehdi Amhraoui and Tawfik Masrour

Recommender System for Most Relevant K Pick-Up Points . . . 277
Ayoub Berdeddouch, Ali Yahyaouy, Younès Bennani, and Rosanna Verde

Feature Detection and Tracking for Visual Effects: Augmented Reality and Video Stabilization . . . 291
Houssam Halmaoui and Abdelkrim Haqiq


Spectral Image Recognition Using Artificial Dynamic Neural Network in Information Resonance Mode . . . 313
Ivan Peleshchak, Roman Peleshchak, Vasyl Lytvyn, Jan Kopka, Mariusz Wrzesien, Janusz Korniak, Janusz Kolbusz, and Pawel Rozycki

U-Net Based Model for Obfuscated Human Faces Reconstruction . . . 323
Elmoukhtar Zemmouri, Youssef Yakoubi, Mohammed Douimi, and Adil Saadi

A Machine Learning Assistant for Choosing Operators and Tuning Their Parameters in Image Processing Tasks . . . 339
Issam Qaffou

Convergence and Parameters Setting of Continuous Hopfield Neural Networks Applied to Image Restoration Problem . . . 351
Joudar Nour-eddine, Ramchoun Hassan, Zakariae En-Naimani, and Ettaouil Mohamed

The IBN BATTOUTA Air Traffic Control Corpus with Real Life ADS-B and METAR Data . . . 371
Kasttet Mohammed Saïd and Lyhyaoui Abdelouahid

Weed Recognition System for Low-Land Rice Precision Farming Using Deep Learning Approach . . . 385
Olayemi Mikail Olaniyi, Emmanuel Daniya, Ibrahim Mohammed Abdullahi, Jibril Abdullahi Bala, and Esther Ayobami Olanrewaju

A Comparative Study Between Mixture and Stepwise Regression to Model the Parameters of the Composting Process . . . 403
Echarrafi Khadija, Ben Abbou Mohamed, Taleb Mustapha, Rais Zakia, Ibtissam El Hassani, and El Haji Mounia

Deep Learning Based Sponge Gourd Diseases Recognition for Commercial Cultivation in Bangladesh . . . 415
Tahmina Tashrif Mim, Md. Helal Sheikh, Sadia Chowdhury, Roksana Akter, Md. Abbas Ali Khan, and Md. Tarek Habib

Exploitation of Vegetation Indices and Random Forest for Cartography of Rosemary Cover: Application to Gourrama Region, Morocco . . . 429
Hassan Chafik and Mohamed Berrada

Author Index . . . 441

About the Editors

Dr. Tawfik Masrour is a Professor of Applied Mathematics and Artificial Intelligence at the National High School for the Arts and Crafts (ENSAM-Meknes), My Ismaïl University (UMI), and a member of the Research Team Artificial Intelligence for Engineering Sciences (AIES) and the Laboratory of Mathematical Modeling, Simulation and Smart Systems (L2M3S). He graduated from Mohammed V University, Rabat, with an MSc degree in Applied Mathematics and Numerical Analysis, and from the Jacques-Louis Lions Laboratory, Pierre and Marie Curie University, Paris, with an M.A.S. (DEA) in Applied Mathematics, Numerical Analysis and Computer Science. He obtained his Ph.D. in Mathematics and Informatics from École des Ponts ParisTech (ENPC), Paris, France. His research interests include Control Theory and Artificial Intelligence. Email: [email protected].

Dr. Anass Cherrafi is an Assistant Professor at the Department of Industrial Engineering, ENSAM-Meknes, Moulay Ismail University, Morocco. He holds a Ph.D. in Industrial Engineering. He has about 7 years of industry and teaching experience. He has published a number of articles in leading international journals and conferences. He has participated as Guest Editor for special issues in various international journals. His research interests include Industry 4.0, green manufacturing, Lean Six Sigma, integrated management systems, and supply chain management.


Dr. Ibtissam El Hassani is a Professor of Industrial and Logistic Engineering at the National High School for the Arts and Crafts (ENSAM-Meknès), My Ismaïl University (UMI), and Head of the Research Team Artificial Intelligence for Engineering Sciences (AIES). She graduated as an industrial engineer and then obtained her Ph.D. in Computer Engineering and Systems Supervision at the Center for Doctoral Studies at My Ismail University. Her fields of interest are applications of artificial intelligence in industrial engineering, especially in Lean Manufacturing, Six Sigma and Continuous Improvement, production systems, quality, management systems, health and security at work, supply chain management, industrial risk management, and work measurement. Email: [email protected].

A Fuzzy Ontology Based Approach to Support Product Eco-Design

Chaimae Abadi, Imad Manssour, and Asmae Abadi

Abstract Nowadays, eco-products have become a real need worldwide due to growing awareness of the importance of preserving the environment. Consequently, companies face new requirements that must be taken into consideration from the design process onward. It is in this context that a new structured approach is developed in this paper. It is based on fuzzy ontologies, which make the comparison of several eco-design alternatives and the selection of the best one automatic and simple. The use of this artificial intelligence tool also allows fuzzy and uncertain information to be handled and considered in the design process, which makes eco-design studies more efficient and precise. A case study is presented in order to demonstrate the efficacy of the proposed methodology.

Keywords Artificial intelligence · Ontology · Fuzzy ontologies · Eco-design · Structured approach

C. Abadi (B) · I. Manssour
Laboratory of Mechanics, Mechatronics and Command, Team of Electrical Energy, Maintenance and Innovation, Ecole Nationale Supérieure d’Arts et Métiers, Moulay Ismail University, B.P. 15290 El Mansour, Meknes, Morocco
e-mail: [email protected]

A. Abadi
INSA, Euro-Mediterranean University of Fez, Fes, Morocco

© Springer Nature Switzerland AG 2021
T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_1

1 Introduction

Producing eco-products has become one of the main objectives of most companies because of the need to protect the environment and natural resources. This requirement should therefore be taken into consideration from the earliest stages of the product life cycle. It is in this context that eco-design has appeared, and different definitions and approaches to this concept have been developed. Actually, eco-design aims to lead the world towards a sustainable future [1].

Platcheck defines eco-design as “a holistic view in that, starting from the moment we know the environmental problems and its causes, we begin to influence the conception, the materials selection, the production, the use, the reuse, the recycling and final disposition of industrial products” [2]. In addition, according to Ihobe, the eco-design concept means that the environment is taken into account when making decisions during the product development process, as an additional factor to those traditionally considered [3].

A variety of eco-design tools have been proposed in the literature; they are generally classified into four categories: guidelines/standards, checklists, comparative tools, and analytical methods. The most widely used eco-design method is LCA (life cycle assessment). This method consists in supplying quantitative data on the environmental impact of the studied product along its life cycle, from the extraction and production of materials to the end of its life, using different environmental indicators [4]. In order to rank the impacts caused by a product, ABC analysis is used. The electronic product environmental assessment tool (EPEAT) is also used for rating and ranking: it rates life cycle issues in order to suggest design changes for eco-labeling. Other eco-design tools exist as well, such as: the eco-design indicator tool, which builds the life cycle profile of a product for the measurement of its impact; the MET matrix (material, energy and toxicity), which permits the identification of the environmental load of the materials, energy and toxicity of products; and the tool for environmentally sound product innovation (TESPI), which allows the determination of the features of greener products [5].

Contrary to the majority of previous eco-design approaches, which focus on the analytic side of the problem, the approach proposed in this paper aims to automate this phase using the characteristics and capacities of fuzzy ontologies as a newly emerging artificial intelligence tool.
To do so, a review of fuzzy ontology research is presented in Sect. 2. Then, the proposed approach and its contributions are developed. Finally, a case study is presented in order to show the efficacy of the developed approach.

2 Fuzzy Ontologies as an Emerging Artificial Intelligence Tool

An ontology is defined as an explicit and formal specification of the concepts, individuals and relationships which exist in some area of interest. It is created by defining axioms that describe the properties of these entities [6]. Ontologies are often used for knowledge representation, but, according to [7], their role differs from one context to another. Ontologies present different advantages; however, they also have a few limitations. Originally, ontologies were based on crisp description logics, which makes them unable to deal with imprecise information [8, 9]. As a solution to this problem, fuzzy ontologies appeared.

A Fuzzy Ontology Based Approach to Support Product Eco-Design


By referring to [10, 11], a fuzzy ontology is defined as a quintuple OF = {I, C, R, F, A}, where I is a set of individuals, C is a set of concepts, R is a set of relations, F is a set of fuzzy relations and A is a set of axioms. Consequently, a fuzzy ontology can be seen as a generalization of a crisp ontology, since it considers fuzzy membership degrees of relations and is intuitively more expressive [12]. Through fuzzy description logics (DLs), and especially fuzzy DL reasoners, fuzzy ontologies have become able to express fuzzy relations and to manage fuzzy and uncertain information [13]. Hence, they are used in many applications, for example robotics [14], image interpretation [15], ambient intelligence [16], information retrieval [17] and, of course, decision making [18].
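The quintuple definition above can be illustrated with a minimal data structure. This is an illustrative sketch only: the concept, relation, and individual names are hypothetical, and a real fuzzy ontology would be encoded in OWL 2 and processed with a fuzzy DL reasoner such as FuzzyDL.

```python
from dataclasses import dataclass, field

@dataclass
class FuzzyOntology:
    """OF = {I, C, R, F, A}: individuals, concepts, crisp relations,
    fuzzy relations (with membership degrees in [0, 1]), and axioms."""
    individuals: set = field(default_factory=set)        # I
    concepts: set = field(default_factory=set)           # C
    relations: dict = field(default_factory=dict)        # R: name -> set of pairs
    fuzzy_relations: dict = field(default_factory=dict)  # F: name -> {pair: degree}
    axioms: list = field(default_factory=list)           # A

# Hypothetical example: a material belonging to a fuzzy concept to degree 0.7
of = FuzzyOntology()
of.individuals.add("Material_1")
of.concepts.add("EcoFriendlyMaterial")
of.fuzzy_relations["isInstanceOf"] = {("Material_1", "EcoFriendlyMaterial"): 0.7}
```

The membership degree attached to each fuzzy relation is what distinguishes F from the crisp relation set R.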

3 The Proposed Fuzzy Ontology-Based Approach

In order to support designers in the eco-design process of a new industrial eco-product, a new approach is presented in this paper. The proposed approach is based on the fuzzy ontology concept. As noted in the previous section, this artificial intelligence tool mainly allows the automation of decision making; thus, the selection of the most appropriate alternative in an eco-design problem can be done in an automated way.

3.1 Description of the Proposed Approach

The developed approach concerns in particular the eco-design of a new industrial product. As shown in the flowchart of Fig. 1, the proposed approach is composed of the following steps:

Step 1: Defining the decision eco-criteria and their weights according to each member of the design team.
Step 2: Collecting the design team members' performances for each alternative with respect to each criterion, using linguistic variables. This step is done in four stages.
• Step 2.1: The first stage is to collect the design team members' performances in the pre-manufacturing phase of the studied product. In this stage, each member should take into consideration three criteria: the material extraction of the studied product, design for environment, and the material processing.
• Step 2.2: In the second stage, the purpose is to collect the design team members' performances in the manufacturing phase of the studied product according to three criteria: the production energy used, the hazardous waste produced and the renewable energy used.


Fig. 1 The proposed fuzzy ontology based approach to support the product eco-design process


• Step 2.3: Collecting the design team members' performances in the use phase of the studied product is the third stage of the second step. In this stage, three criteria are taken into account: emissions, functionality and hazardous waste generated.
• Step 2.4: The last stage is to collect the design team members' performances in the post-use phase of the studied product. In this stage, each member should consider three criteria: the recyclability, the remanufacturability and the redesign of the eco-product.
Step 3: Encoding in the fuzzy ontology the membership functions of the linguistic values that the designers have used to weight the alternatives. This operation is based on the data types described in the case study presented in this paper.
Step 4: Encoding the design team members' performances in the fuzzy ontology.
Step 5: Computing the relative rank value of each eco-design alternative for each member of the design team, on the basis of weighted-sum concepts and using the fuzzy DL reasoner.
Step 6: Computing the final rank value of each eco-design alternative using the same method as in the previous step.
Step 7: Selecting the best eco-design alternative by querying the fuzzy ontology: it is the alternative which maximizes the satisfiability degree of the fuzzy concept FinalRank_i.
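Steps 5 and 6 both rely on the same weighted-sum aggregation: for a given designer, the relative rank of an alternative is the weighted sum of its (defuzzified) criterion scores, and the final rank is the weighted sum of the relative ranks over the designers. A minimal sketch of this aggregation, using plain floats where the reasoner actually operates on fuzzy numbers (the weight and score values below are illustrative):

```python
def weighted_sum(weights, values):
    """Weighted-sum aggregation used for both relative and final ranks."""
    return sum(w * v for w, v in zip(weights, values))

# Illustrative example: one designer's criterion weights (summing to 1)
# and that designer's defuzzified scores for one alternative over C1..C4
criterion_weights = [0.14, 0.24, 0.31, 0.31]
scores = [4, 4, 5, 2]
relative_rank = weighted_sum(criterion_weights, scores)  # one entry of Step 5
```

Step 6 applies `weighted_sum` again, this time over the designers' relative ranks with their importance degrees as weights.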

3.2 The Role of the Fuzzy Ontology in the Proposed Approach

In the developed approach, the fuzzy ontology plays a key role in automating the eco-design process. It gathers all the collaborators around a common model and thus allows information to be exchanged between members and applications without difficulty. In addition, it facilitates understanding and provides semantic interoperability within multidisciplinary teams; these two functions are ensured through the formalization of semantics. Moreover, beyond its knowledge representation ability, the fuzzy ontology permits the computation of the final ranking of the different eco-design alternatives. We can thus conclude that the proposed approach uses the fuzzy ontology as an evaluation tool for the different eco-design alternatives. Table 1 presents the characteristics of fuzzy ontologies that ensure the functions mentioned above.


Table 1 The role of the fuzzy ontology in the proposed approach

Expressiveness – Through this feature, a unified knowledge model is developed. It contains crisp information and imprecise data, using fuzzy relations and membership functions. Thus, the majority of semantic interoperability problems and of designers' conflicts are resolved.

Inference – Through the key role played by the fuzzy DL reasoner FuzzyDL [13], the inference capacities of the fuzzy ontology are used in the different steps of the developed approach as follows:
– Knowledge retrieval: by modeling eco-design criteria in OWL 2, it is possible to query the fuzzy ontology and to benefit from previous similar projects
– Deduction of new eco-design alternatives by combining several technical solutions
– Time gain and facilitation of the MCDM process by clustering the evaluated alternatives into pertinent, consistent groups
– Automation of the computation of the relative and final ranks of the eco-design alternatives

Storage – This characteristic enables the capitalization and storage of all the data and knowledge manipulated and generated within the project, in particular during the eco-design phase. Thus, the design information and results can be reused in other eco-design projects.

4 Implementation of the Proposed Approach: Case Study

In this section, an industrial case study is developed to illustrate the implementation of the proposed eco-design ontology-based approach. An industrial company specialized in the manufacturing of washing machines wishes to choose the most relevant eco-concept design solution from a set of five pre-evaluated eco-friendly alternatives. To do so, a design team composed of three experts specialized in environmental impact product studies (D1, D2 and D3) has been formed to evaluate the five eco-product conceptual alternatives. In their environmental impact evaluations, we consider that the experts use four common eco-criteria. Each criterion represents the expected environmental impact of the eco-product in a specific phase of its life cycle. We denote the eco-criteria as follows (Fig. 2):

• C1: Pre-manufacturing
• C2: Manufacturing
• C3: Use
• C4: Post-use.


Fig. 2 The considered eco-design MCDM problem

Table 2 The importance degree of each designer in the team and the weight assigned by each designer to each criterion

Designer            D1     D2     D3
Importance degree   0.32   0.36   0.32
C1 weight           0.14   0.18   0.35
C2 weight           0.24   0.41   0.39
C3 weight           0.31   0.19   0.12
C4 weight           0.31   0.22   0.14

The first inputs of our decision-making problem are the importance degree of each designer in the team and the weight assigned by each designer to each criterion. Table 2 summarizes these data. Then, the design team members have expressed their performance on each eco-design alternative with respect to each criterion using the six linguistic variables described in Fig. 3. The next step in our ontology-based approach was

Fig. 3 The linguistic variables used for alternatives rating and their corresponding membership functions

Table 3 The corresponding fuzzy numbers for each linguistic variable

Linguistic variable   Corresponding fuzzy number
VP                    Left-shoulder (1, 2)
P                     Triangular (1, 2, 3)
F                     Triangular (2, 3, 4)
G                     Triangular (3, 4, 5)
VG                    Triangular (4, 5, 6)
E                     Right-shoulder (5, 6)

to exploit the fuzzy ontology expressivity in order to directly convert the linguistic variables into fuzzy numbers, as in Table 3. To do so, we encoded six data types in our fuzzy ontology and associated with each of them a specific annotation property [13, 18]. The role of the latter is to define, for each linguistic variable, the parameters of its corresponding fuzzy membership function. An example of the annotation properties defined in the fuzzy ontology for the linguistic variables "Excellent" and "Fair" is given in Fig. 4. The ontology editor used for the implementation is Protégé 5.0. Then, the decision-making matrix has been constructed, as represented in Table 4: each designer Dk has expressed his performance for each eco-design alternative EDi with respect to each eco-criterion Cj.
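The three membership-function shapes of Table 3 (triangular, left-shoulder, right-shoulder) can be sketched as plain functions. This mirrors the fuzzyDL data types conceptually; it is not the OWL 2 encoding itself.

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def left_shoulder(x, a, b):
    """Membership 1 up to a, decreasing linearly to 0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def right_shoulder(x, a, b):
    """Membership 0 up to a, increasing linearly to 1 at b."""
    return 1.0 - left_shoulder(x, a, b)

# Table 3 encodings, e.g. "Fair" = Triangular(2, 3, 4), "Excellent" = Right-shoulder(5, 6)
fair = lambda x: triangular(x, 2, 3, 4)
excellent = lambda x: right_shoulder(x, 5, 6)
```

For instance, a score of 3 is fully "Fair" (membership 1), while a score of 5.5 is "Excellent" only to degree 0.5.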

Fig. 4 Example of the annotation properties defining the linguistic variables "Excellent" and "Fair"


Table 4 The eco-design alternatives evaluation according to the design team members

Eco-criterion   Eco-design alternative   D1   D2   D3
C1              ED1                      G    E    VG
                ED2                      P    F    F
                ED3                      F    G    VP
                ED4                      VG   VG   P
                ED5                      E    P    E
C2              ED1                      G    P    VG
                ED2                      VG   F    F
                ED3                      F    G    VP
                ED4                      VP   VG   P
                ED5                      P    E    E
C3              ED1                      VG   VP   G
                ED2                      F    P    VG
                ED3                      VP   F    F
                ED4                      P    G    VP
                ED5                      E    VG   P
C4              ED1                      P    G    VP
                ED2                      F    VG   P
                ED3                      G    F    F
                ED4                      VG   VP   G
                ED5                      E    P    VG

In order to encode this information in the fuzzy ontology, a data property 'hasScore' has been defined. In our case study, for example, designer 3 has judged eco-design alternative 5 according to criterion 2 as "Excellent". To encode this performance, we defined a new concept entitled Performance352 as follows: Performance352 = ∃ hasScore.Excellent, where Excellent is the data type previously defined to encode the linguistic variable "Excellent" as the right-shoulder fuzzy number (5, 6). The same methodology has been used to encode all the other performances. The next step is to calculate the relative and final ranks of the different eco-design alternatives. To do so, we defined a concept denoted RelativeRank. Then, for each eco-design alternative EDi and each designer Dk, a sub-concept RelativeRank_ki has been defined and annotated as a weighted-sum concept [13], as in Fig. 5. The last stage of our proposed methodology was to compute the final rank value of each eco-design alternative; the same encoding steps have been performed.


Fig. 5 The annotation property defining the concept RelativeRank13

In fact, we defined for each eco-design alternative EDi a specific weighted-sum sub-concept denoted FinalRank_i, which takes into account the relative ranks calculated before and the importance degree of each design team member. An example of the fuzzy annotation defined to characterize the final rank of eco-design alternative 1 is given in Fig. 6. Finally, using the fuzzy DL reasoner FuzzyDL [13], we deduce the best eco-design alternative: it is the concept design solution which maximizes the satisfiability degree of the fuzzy concept FinalRank_i. Table 5 summarizes the final rank values of the five alternatives for our case study. We conclude that the ranking of the five eco-design alternatives is ED5 > ED1 > ED2 > ED4 > ED3, and the optimal eco-design alternative that the design team needs to extend in the detailed design phase is ED* = ED5.
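The case study can also be re-computed outside the reasoner. The sketch below replaces the fuzzy DL machinery with a simple crisp defuzzification of the linguistic variables (VP = 1, ..., E = 6), so the scores are only approximations of Table 5; with this simplification the two-level weighted sums nevertheless reproduce the published ranking ED5 > ED1 > ED2 > ED4 > ED3.

```python
# Crisp stand-ins for the fuzzy numbers of Table 3
score = {"VP": 1, "P": 2, "F": 3, "G": 4, "VG": 5, "E": 6}

importance = {"D1": 0.32, "D2": 0.36, "D3": 0.32}   # Table 2
weights = {                                          # per-designer C1..C4 weights (Table 2)
    "D1": [0.14, 0.24, 0.31, 0.31],
    "D2": [0.18, 0.41, 0.19, 0.22],
    "D3": [0.35, 0.39, 0.12, 0.14],
}
# Table 4 rearranged: ratings[alternative][designer] = [C1, C2, C3, C4]
ratings = {
    "ED1": {"D1": ["G", "G", "VG", "P"],  "D2": ["E", "P", "VP", "G"],  "D3": ["VG", "VG", "G", "VP"]},
    "ED2": {"D1": ["P", "VG", "F", "F"],  "D2": ["F", "F", "P", "VG"],  "D3": ["F", "F", "VG", "P"]},
    "ED3": {"D1": ["F", "F", "VP", "G"],  "D2": ["G", "G", "F", "F"],   "D3": ["VP", "VP", "F", "F"]},
    "ED4": {"D1": ["VG", "VP", "P", "VG"],"D2": ["VG", "VG", "G", "VP"],"D3": ["P", "P", "VP", "G"]},
    "ED5": {"D1": ["E", "P", "E", "E"],   "D2": ["P", "E", "VG", "P"],  "D3": ["E", "E", "P", "VG"]},
}

def relative_rank(ed, dk):
    """Step 5: weighted sum of one designer's scores over the criteria."""
    return sum(w * score[v] for w, v in zip(weights[dk], ratings[ed][dk]))

def final_rank(ed):
    """Step 6: weighted sum of the relative ranks over the designers."""
    return sum(importance[dk] * relative_rank(ed, dk) for dk in importance)

ranking = sorted(ratings, key=final_rank, reverse=True)
# ranking -> ['ED5', 'ED1', 'ED2', 'ED4', 'ED3'], matching the published order
```

With this crisp approximation the final scores of ED1 to ED4 happen to coincide with Table 5, while ED5 (the winner either way) differs slightly, as the paper's value is produced by the fuzzy reasoner rather than by crisp arithmetic.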


Fig. 6 Example of the annotation property defining the concepts FinalRank

Table 5 The final rank value of each eco-design alternative

Eco-design alternative   Final score
ED1                      3.6324
ED2                      3.2308
ED3                      2.6396
ED4                      3.1012
ED5                      4.5812

5 Conclusion

In this paper, we propose a new fuzzy ontology-based approach to support designers in the eco-design process. The use of this artificial intelligence tool allows us, at the same time, to encode the subjective performances of designers and to calculate in an automated way the final rank of each eco-design alternative. The proposed approach retains the advantages of classical ontologies and, in addition, allows modeling of and reasoning on the fuzzy and imprecise information


manipulated in the eco-design process. To demonstrate the feasibility of the proposed approach, a case study was developed, using the ontology editor Protégé 5.0, to choose the best eco-product design from several alternatives based on different eco-design criteria. This work demonstrates that fuzzy ontologies can be used as an efficient tool to support the decision-making process in the different stages of product eco-design. The potential of fuzzy ontologies to support other stages of the product development process, such as the manufacturing, control and recycling phases, will therefore be investigated in our future work.

References

1. Meadows, D.H., Meadows, D., Randers, J., et al.: Los límites del crecimiento: informe al Club de Roma sobre el predicamento de la humanidad. Fondo de Cultura Económica (1972)
2. Platcheck, E.R., Schaeffer, L., Kindlein, W., Jr., Candido, L.: Methodology of ecodesign for the development of more sustainable electro-electronic equipment. J. Clean. Prod. 16, 75–86 (2008)
3. IHOBE: Manual práctico de ecodiseño: operativa de implantación en siete pasos (2000)
4. Germani, M., Dufrene, M., Mandolini, M., Marconi, M., Zwolinski, P.: Integrated software platform for green engineering design and product sustainability. In: Re-engineering Manufacturing for Sustainability, pp. 87–92. Springer, Singapore (2013)
5. Singh, P.K., Sarkar, P.: Eco-design approaches for developing eco-friendly products: a review. In: Advances in Industrial and Production Engineering, pp. 185–192. Springer, Singapore (2019)
6. Mädche, A., Staab, S., Studer, R.: Handbook on ontologies. In: Handbook on Ontologies, International Handbooks on Information Systems, pp. 173–190. Springer (2004)
7. Mezei, J., Wikström, R., Carlsson, C.: Aggregating linguistic expert knowledge in type-2 fuzzy ontologies. Appl. Soft Comput. 35, 911–920 (2015)
8. Morente-Molinera, J.A., Pérez, I.J., Chiclana, F., Herrera-Viedma, E.: A novel group decision making method to overcome the Web 2.0 challenges. In: 2015 IEEE International Conference on Systems, Man, and Cybernetics, pp. 2233–2238. IEEE (2015)
9. Lukasiewicz, T., Straccia, U.: Managing uncertainty and vagueness in description logics for the semantic web. Web Semant. Sci. Serv. Agents World Wide Web 6, 291–308 (2008)
10. Baader, F., Calvanese, D., McGuinness, D., Patel-Schneider, P., Nardi, D.: The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press (2003)
11. Calegari, S., Ciucci, D.: Fuzzy ontology, fuzzy description logics and fuzzy-OWL. In: International Workshop on Fuzzy Logic and Applications, pp. 118–126. Springer, Berlin, Heidelberg (2007)
12. Carlsson, C., Brunelli, M., Mezei, J.: Decision making with a fuzzy ontology. Soft Comput. 16, 1143–1152 (2012)
13. Bobillo, F., Straccia, U.: The fuzzy ontology reasoner fuzzyDL. Knowl.-Based Syst. 95, 12–34 (2016)
14. Eich, M., Hartanto, R., Kasperski, S., Natarajan, S., Wollenberg, J.: Towards coordinated multirobot missions for lunar sample collection in an unknown environment. J. Field Robot. 31, 35–74 (2014)
15. Dasiopoulou, S., Kompatsiaris, I., Strintzis, M.G.: Investigating fuzzy DLs-based reasoning in semantic image analysis. Multimed. Tools Appl. 49, 167–194 (2010)
16. Rodríguez, N.D., Cuéllar, M.P., Lilius, J., Calvo-Flores, M.D.: A fuzzy ontology for semantic modelling and recognition of human behaviour. Knowl.-Based Syst. 66, 46–60 (2014)


17. Calegari, S., Sanchez, E.: Object-fuzzy concept network: an enrichment of ontologies in semantic information retrieval. J. Am. Soc. Inform. Sci. Technol. 59, 2171–2185 (2008)
18. Abadi, A., Ben-Azza, H., Sekkat, S., Zemmouri, E.M.: A fuzzy ontology-based approach for multi-criteria decision-making: a use case in supplier selection. Congrès International du Génie Industriel et du Management des Systèmes, Meknes (2017)

A Genetic-Based SVM Approach for Quality Data Classification Wahb Zouhri, Hamideh Rostami, Lazhar Homri, and Jean-Yves Dantan

Abstract With the emergence of Industry 4.0 and the big data era, many organizations have had recourse to data-based approaches for quality management. One of the main aims of data-based approaches in manufacturing industries is quality classification. Classification methods provide many solutions to quality problems, such as defect detection and conformity prediction. In that context, this paper identifies a suitable technique (support vector machine) for quality data classification and proposes an appropriate approach to optimize its performance. The proposed approach is tested on a chemical manufacturing dataset and a rolling process dataset in order to evaluate its efficiency.

Keywords Quality management · Quality 4.0 · GA-SVM · Robustness

1 Introduction

Industry 4.0, also known as the fourth industrial revolution, is characterized by the application of information and communication technologies to industry, creating a network connection that allows communication between production systems, components, and people. This network connection aims to meet the requirement for unique and customized products while maintaining equivalent costs,

W. Zouhri (B) · L. Homri · J.-Y. Dantan
Arts et Metiers Institute of Technology, Université de Lorraine, LCFC, HESAM University, 57070 Metz, France
e-mail: [email protected]
L. Homri e-mail: [email protected]
J.-Y. Dantan e-mail: [email protected]

H. Rostami
Data Science and Engineering Group, ASML, De Run 6501, 5504 DR Veldhoven, The Netherlands
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_2


despite the low production volumes generated. These forms of connection make it possible to gather new masses of data on the network and, thus, new knowledge [1]. To extract valuable knowledge, industries working with large amounts of data aim to discover hidden patterns and insights and to make decisions using methods such as statistics, predictive analytics, and machine learning. One of the main industries benefiting from data analytics capabilities is manufacturing. Manufacturing companies are turning to data-based approaches to improve their standards in product and process design and management in order to remain competitive. The extracted knowledge is used within manufacturing systems for many purposes, such as maintenance, supply-chain management, cost estimation, demand forecasting, material requirements, enterprise resource planning, process optimization, scheduling, and quality control [2]. Quality 4.0 represents an opportunity to use those data-based approaches to improve quality within manufacturing systems, by predicting failures and then informing what action to take to change the outcome. The purposes of data-based approaches for quality management can be summed up in three main categories [3]:

• To describe the production process in order to identify the different factors that directly influence the quality.
• To predict the quality using predictive models, which allows finding the optimal settings that guarantee the best quality.
• To identify defective products from non-defective products using classification tools.

Various manufacturing industries have exploited classification tools to improve quality. This choice is justified by the ability of these tools to provide a set of solutions to quality problems, such as defect detection, extraction of optimal adjustment ranges, and conformity prediction [2].
El Attar and Hamery [4] designed a learning system using the ID3 classifier to automatically integrate knowledge into an expert system to guide helicopter blade repair. The C4.5 method was used in a hard disk drive production company to identify the best settings, which increased production quality by 12% [5]. Decision trees have been tested within a chip manufacturing industry to help identify the causes of defects, make decisions immediately and, eventually, reduce the cycle time required to solve quality problems [6]. Decision trees were also used to define the number of kanbans in a dynamic JIT (just-in-time) factory; this number is important for the efficient operation of a JIT production system. The results showed that decision tree classifiers represent a practical approach, with particular capabilities for dynamically adjusting the number of kanbans [7, 8]. Hou et al. [9] integrated neural networks for monitoring the manufacturing process and diagnosing failures: a neural network was used to classify quality defects such as uneven thicknesses, and to determine the cause-and-effect relationship between parameters such as process temperature and output quality measurements. Cho and Leu [10] presented neural network applications for monitoring and


controlling manufacturing processes such as machining, semiconductor manufacturing, and injection molding processes, while shedding light on the influence of this tool in improving the quality of these processes. Neural networks have also been used for monitoring and controlling the induction hardening process to improve the hardness of metal products [11, 12]. A quality approach based on the exploitation of data using neural networks was proposed by Thomas et al. [13] to classify quality results in order to avoid the occurrence of defects. The approach consists first in designing different neural networks and then choosing the one that guarantees the best results. This approach was tested in a paint industry, and it did improve product quality by sending warnings when there was a risk of defects, or by stopping production if one of the parameters was set to an outlier value. Bahlmann et al. [14] implemented a five-step approach for classifying textile seams: the seams are located in grayscale images and then, using neural networks and the undulations around the seam, the seam quality is classified on a scale from 1 to 5. This approach made it possible to classify the quality of a seam within one second with an accuracy of 80%, far less than the time required by an expert, estimated at 30 s. Gryllias and Antoniadis [15] proposed a two-stage hybrid approach for the automated diagnosis of defective rolling element bearings: a support vector machine (SVM) model is first trained using simulation data; then, vibration measurements from the machine under condition monitoring are imported and processed directly by the already trained SVM. Fernández-Francos et al. [16] presented an automatic method for bearing fault detection and diagnosis.
The method uses the information contained in the vibration signals and a one-class ν-SVM to discriminate between normal and faulty conditions. A weighted support vector machine (WSVM) was proposed by Xanthopoulos and Razzaghi [17] for automated process monitoring and early fault diagnosis. Diao et al. [18] propose a quality control approach based on improving the most influential factors. The approach begins by identifying the factors that lead to quality problems, known as dominant factors (DFs), using a principal component analysis (PCA). Then, a quality prediction model to improve the DFs is proposed based on the SVM; an additional weight is introduced into the SVM model to increase its quality prediction accuracy. Thus, the quality of the product can be guaranteed by controlling the dominant factors. This approach was tested on a coating process, and the results showed the feasibility of the methodology, which allowed better control of the parameters and, consequently, an improvement of the quality of this process. Baccarini et al. [19] used vibration as a basis for diagnosing induction motor faults. Four SVM-based models were developed to classify the three recurring defects, using the "one against all" approach, which identifies whether the vibrations emitted by the engine indicate one of the three defects or not. Similarly, Jegadeeshwaran and Sugumaran [20] proposed an approach for diagnosing hydraulic brake system failure using vibration signals and classification methods. The C4.5 decision tree was used to select the relevant parameters; classification was then performed using two types


of SVMs with different kernels, of which the one based on the radial basis function (RBF) ensured better accuracy.

The short review above shows the great potential of the different classifiers for improving quality within manufacturing industries. The support vector machine was selected as the technique to study among the different classifiers, due to its ease of use and its ability to handle large-dimensional datasets with different features. In addition, since quality data are often not linearly separable, SVMs use so-called kernel functions to classify non-linear data [2]. However, the selection of the optimal kernel function is challenging: some results may be bad not because the data is noisy or the learning algorithm is weak, but because of a bad selection of the kernel space and of the parameters that define it. Hence, an approach should be followed in order to find the kernel function that guarantees the best predictive performance. To this end, an approach based on a genetic algorithm (GA-SVM) is proposed to optimize the performance of the SVM. Genetic algorithm based approaches rely on bio-inspired operators such as mutation, crossover and selection to evolve toward high-quality solutions. This evolution usually starts from a population of randomly generated individuals that undergo recombination in order to produce better generations and, thus, better solutions [21]. In the SVM case, the genetic algorithm aims to find the best kernel space with the optimal predictive performance. Once defined, the robustness of the kernel space is assessed by evaluating how measurement uncertainties affect its performance. Accordingly, after introducing the different applications of classification methods within manufacturing industries in Sect. 1, a detailed description of the developed approach is addressed in Sect. 2, as well as the results obtained by applying it to the studied datasets. Finally, Sect. 3 discusses the different results and presents some concluding remarks.
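The kernel functions considered in the rest of the paper (polynomial, sigmoid, RBF; see Table 1 in Sect. 2) can be written directly from their mathematical expressions. A minimal pure-Python sketch, with the parameter names α, r, d and σ following the paper's notation:

```python
import math

def dot(x, y):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, alpha, r, d):
    """(alpha * x.y + r) ** d"""
    return (alpha * dot(x, y) + r) ** d

def sigmoid_kernel(x, y, alpha, r):
    """tanh(alpha * x.y + r)"""
    return math.tanh(alpha * dot(x, y) + r)

def rbf_kernel(x, y, sigma):
    """exp(-||x - y||^2 / (2 * sigma^2))"""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

# A kernel measures similarity: identical points give the RBF maximum of 1
k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0], sigma=0.5)  # -> 1.0
```

Choosing among these functions, and tuning C, r, α, d and σ, is precisely the search space the genetic algorithm of Sect. 2 explores.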

2 GA-SVM: Genetic-Based SVM Approach for Quality Data Classification

It has been shown that evolutionary algorithms such as genetic algorithms or evolution strategies are efficient and robust approaches to solve a wide range of optimization problems [22]. Accordingly, to optimize the SVM's predictive performance, different evolutionary algorithms have been used, such as the genetic algorithm [23], particle swarm optimization [24], the artificial bee colony algorithm [25], and the firefly algorithm [26]. Among these, the genetic algorithm (GA) has been widely used to solve optimization problems; consequently, in this paper, a genetic-based SVM model (GA-SVM) is developed to find the best kernel function as well as its parameters, in order to achieve the SVM's highest accuracy. This section describes the design of the proposed algorithm for classifying quality data.


Fig. 1 GA-SVM approach for quality data classification—a graphical description

In the following, the different parameters and steps of the proposed genetic-based SVM approach are presented. The genetic representation and the initialization of the first population are discussed first. After that, the fitness function is described; then, the selection, crossover, and mutation techniques used in the approach are presented in detail. The approach is evaluated through an application on two datasets, a chemical manufacturing dataset and a rolling process dataset, and the robustness of the classification results with regard to measurement uncertainties is then assessed (Fig. 1).

2.1 Genetic Representation and Initialization

One of the most critical decisions that significantly affect the performance of a genetic algorithm is choosing the representation of the solutions. It has been observed that an unsuitable genetic representation can lead to poor performance of a GA; hence, choosing a proper genetic representation is necessary for its success. The representation of the solution must be complete and must contain the information needed to represent a solution to the problem; otherwise, the search will be either larger or poorer than necessary. In this paper, in order to define a proper genetic representation, the parameters to optimize first need to be identified. Table 1 lists the kernel functions and the parameters to be tuned.

Table 1 Kernel functions and their parameters

Kernel       Expression                   Parameters to tune   Range of variation
Polynomial   (α x^T y + r)^d              Penalty C            [1, 1000]
                                          Constant r           [−49, 50]
                                          Slope α              [1, 50]
                                          Degree d             [1, 10]
Sigmoid      tanh(α x^T y + r)            Penalty C            [1, 1000]
                                          Constant r           [−49, 50]
                                          Slope α              [1, 50]
RBF          exp(−‖x − y‖² / (2σ²))       Penalty C            [1, 1000]
                                          Parameter σ          [0.001, 10]

Since different kernel functions have different parameters, the representation of the solution varies for each kernel. In this paper, an integer representation is used, where each parameter is represented by a set of genes depending on its range of variation. Consequently, the representation of each kernel function, and the links that map the genotypes to the phenotypes, are shown in Fig. 2. Once the three representations are defined, a first population needs to be initialized. The first population should be diverse to avoid premature convergence; at the same time, its size should not be too large, as this can slow down the algorithm [27]. Many methods can be found in the literature to initialize the population in a GA, including random generation, structured generation, and a combination of both. Random generation was applied in this paper, as it has been observed that a randomly generated population leads to optimality thanks to its diversity [27].

A0 [0-9]

A1 [0-9]

A2 [0-9]

A3 [0-9]

A4 [0-9]

A5 [0-4]

A6 [0-9]

A7 [0-9] A7+1 d

Solution

A0.100+A1.10+A2+1 C

A3.10+A4-49 r

A5.10+A6+1 α

Sigmoid Kernel Variation ranges

A0 [0-9]

A3 [0-9]

A5 [0-4]

Mapping

Mapping Solution

A2 [0-9]

A4 [0-9]

A6 [0-9]

A0.100+A1.10+A2+1

A3.10+A4-49

A5.10+A6+1

C

r

α

Solution RBF Kernel Variation ranges

A1 [0-9]

A0

A1

A2

A3

A4

A5

A6

[0-9]

[0-9]

[0-9]

[0-9]

[0-9]

[0-9]

[0-9]

A0.100+A1.10+A2+1

A3+A4.0,1+A5.0,01+(A6+1).0,001

C

σ

Fig. 2 Genetic representations of Kernels functions
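As an illustration, the genotype-to-phenotype mappings shown in Fig. 2 can be written as small decoding functions (a sketch using our own names; the paper itself provides no code):

```python
def decode_polynomial(genes):
    """Map an 8-gene integer chromosome [A0..A7] to (C, r, alpha, d).

    A0..A4, A6, A7 are in [0, 9]; A5 is in [0, 4].
    """
    C = genes[0] * 100 + genes[1] * 10 + genes[2] + 1   # [1, 1000]
    r = genes[3] * 10 + genes[4] - 49                   # [-49, 50]
    alpha = genes[5] * 10 + genes[6] + 1                # [1, 50]
    d = genes[7] + 1                                    # [1, 10]
    return C, r, alpha, d


def decode_rbf(genes):
    """Map a 7-gene chromosome [A0..A6] (all genes in [0, 9]) to (C, sigma)."""
    C = genes[0] * 100 + genes[1] * 10 + genes[2] + 1   # [1, 1000]
    sigma = (genes[3] + genes[4] * 0.1 + genes[5] * 0.01
             + (genes[6] + 1) * 0.001)                  # [0.001, 10.0]
    return C, sigma
```

For example, the all-zero chromosome decodes to the lower bound of every range (C = 1, r = −49, α = 1, d = 1), and the all-maximum chromosome to the upper bounds.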

A Genetic-Based SVM Approach for Quality Data Classification


2.2 GA-SVM Parameters

• Fitness evaluation

In any GA, it must be possible to evaluate how good a potential solution is compared to the other solutions. To do so, a fitness function is defined. The fitness values are then used to select the individuals (parents) on which crossover and mutation operations will be applied. In the proposed approach, the fitness value of a potential solution is obtained as follows:
1. Split the dataset into training (train), validation (dev), and test (test) sets.
2. Train the SVM model using one potential solution (chromosome).
3. Take the SVM accuracy on the validation set as the fitness value of that solution.

• Selection operation

Selection in a genetic algorithm consists of choosing individuals (parents) for reproduction. Parent selection is very important to the convergence rate of the GA, as good parents generate fitter solutions [28]. In this paper, a 3-way tournament selection is used: three individuals are drawn at random from the population, and the one with the best accuracy on the validation set becomes a parent. This method was chosen for its simplicity, its efficiency in both parallel and non-parallel architectures, and because it does not require sorting the population [29].

• Crossover operator

Through this operator, individuals (parents) share information by crossing their genotypes to create better individuals (children). Three types of crossover are mainly used in the literature: one-point, two-point, and uniform crossover [30]. Uniform crossover was adopted in this work: each gene is treated separately, and a coin is flipped per gene to decide whether it is exchanged between the two offspring. This operator does not require a crossover-rate parameter, and it avoids exchanging genes that belong to different parameters (e.g., C genes with σ genes), since genes are only ever swapped position-wise. Figure 3 depicts the concept of the uniform crossover.

• Mutation operator

Mutation may be defined as a small random modification of the chromosome that yields a new solution. It introduces diversity into the genetic population and helps escape local minima [31]. To that end, a random resetting mutation is used, where


Fig. 3 Uniform crossover example:

Parent 1:  0 4 6 0 9 1 5
Parent 2:  3 7 9 6 8 3 9
Coin:      0 0 1 1 0 0 1
Child 1:   0 4 9 6 9 1 9
Child 2:   3 7 6 0 8 3 5
a random value from the set of permissible values is assigned to randomly chosen genes. At this stage, all the techniques and steps of the proposed approach have been described; Algorithm 1 summarizes the approach.

Algorithm 1: Genetic-based SVM approach for quality data classification

Require: quality dataset for classification
  Choose a kernel function of the SVM to be optimized
  Define the genetic representation of the kernel's parameters
  Define the maximum number of generations: Max_Gen
  Generate an initial random population: Pop
  Calculate the fitness of each individual of Pop
  for i = 1 to Max_Gen do
      Select the individuals according to their fitness scores: Pop
      Perform uniform crossover on Pop's individuals: Pop
      Perform random resetting mutation on Pop's individuals: Pop
      Calculate the fitness of each individual of Pop
      Pick the individual with the highest fitness score: Best_Param[i]
  end for
  Pick the best individual out of all generations: BEST_KERNEL
  return BEST_KERNEL
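The loop of Algorithm 1 can be sketched in Python as a generic skeleton (our own naming; in the paper, the fitness of a chromosome is the dev-set accuracy of an SVM trained with the decoded kernel parameters, abstracted here as a caller-supplied `fitness` function):

```python
import random


def run_ga(fitness, gene_maxes, pop_size=100, max_gen=50, mut_rate=0.05):
    """Genetic search with 3-way tournament selection, uniform crossover,
    and random-resetting mutation over integer genes in [0, gene_maxes[i]]."""
    def select(pop, scores, k=3):
        # 3-way tournament: sample k indices, keep the fittest
        i = max(random.sample(range(len(pop)), k), key=lambda j: scores[j])
        return pop[i]

    pop = [[random.randint(0, m) for m in gene_maxes] for _ in range(pop_size)]
    best, best_fit = None, float("-inf")
    for _ in range(max_gen):
        scores = [fitness(ind) for ind in pop]
        i = max(range(pop_size), key=lambda j: scores[j])
        if scores[i] > best_fit:                 # track the best over ALL generations
            best, best_fit = pop[i][:], scores[i]
        nxt = []
        while len(nxt) < pop_size:
            c1 = select(pop, scores)[:]
            c2 = select(pop, scores)[:]
            for g in range(len(gene_maxes)):     # uniform crossover: coin per gene
                if random.random() < 0.5:
                    c1[g], c2[g] = c2[g], c1[g]
            for child in (c1, c2):               # random-resetting mutation
                for g in range(len(gene_maxes)):
                    if random.random() < mut_rate:
                        child[g] = random.randint(0, gene_maxes[g])
            nxt += [c1, c2]
        pop = nxt[:pop_size]
    return best, best_fit
```

Note that the fittest individual is tracked across all generations, matching the paper's observation that it need not appear in the last one.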

Applying the GA-based SVM approach to the two quality datasets yielded the results in Table 2. Since three kernel types are considered, the approach was run three times, once per kernel type, and the best of the three results was then picked as the optimal kernel function. The results show that the RBF kernel outperforms the polynomial and sigmoid kernels on both datasets (see Tables 2 and 4 in the Appendix). With only two parameters, the RBF kernel was also easier to tune. Moreover, the optimal kernel functions performed well on new, unseen data (test sets), with 83.24% accuracy on the chemical dataset and 87.67% accuracy on the rolling process data. A final test was performed to evaluate the robustness of the optimal kernel functions. Here, Monte-Carlo simulation is used to quantify "how perturbing the

Table 2 GA-SVM results (genetic algorithm parameters: population size 100; 50 generations)

Dataset                Sets dimensions (Train / Dev / Test)   Optimal kernel   Parameters           Accuracy: Train / Dev / Test (%)
Chemical data          437 × 95 / 219 × 95 / 219 × 95         RBF              C = 8, σ = 0.051     83.98 / 87.15 / 83.24
Rolling process data   1168 × 11 / 585 × 11 / 584 × 11        RBF              C = 274, σ = 0.444   100 / 99.08 / 87.67


Table 3 Robustness assessment regarding measurement uncertainties

Dataset                Accuracy of noiseless set         Average accuracy of         Accuracy drop
                       (optimal kernel functions)        1000 noisy sets
                       Dev (%)    Test (%)               Dev (%)    Test (%)         Dev (%)   Test (%)
Chemical data          87.15      83.24                  82.15      80.59            5.00      2.65
Rolling process data   99.08      87.67                  98.09      86.81            0.99      0.85

validation set and the test set by Gaussian measurement uncertainties will impact the SVM model prediction accuracy". The evaluation measures the average accuracy over 1000 perturbed datasets, using a 2.5% Gaussian noise for the chemical data example and a 15% Gaussian noise for the rolling process example. These noise rates were estimated from the precision and accuracy of the sensors required in the two manufacturing systems and from environmental effects. Table 3 reports the resulting accuracy drops. The Monte-Carlo simulations produced accuracy drops on both the validation and test sets of both datasets. This raises awareness of the ubiquitous nature of measurement uncertainties in quality data and of their impact on the predictive performance of classification methods.
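The paper does not detail how the "2.5%" and "15%" noise levels are parameterized; a plausible sketch of the robustness test, under our assumption of zero-mean Gaussian noise with standard deviation proportional to each measured value, is:

```python
import random
import statistics


def noisy_copy(X, noise_frac, rng):
    # perturb every feature with zero-mean Gaussian noise whose standard
    # deviation is noise_frac * |value| (e.g. noise_frac = 0.025 for 2.5%)
    return [[x + rng.gauss(0.0, noise_frac * abs(x)) for x in row] for row in X]


def average_noisy_accuracy(accuracy_fn, X, n_trials=1000, noise_frac=0.025, seed=0):
    """Average accuracy of a fixed, already-trained classifier over
    n_trials noisy copies of the (dev or test) feature matrix X."""
    rng = random.Random(seed)
    return statistics.mean(
        accuracy_fn(noisy_copy(X, noise_frac, rng)) for _ in range(n_trials)
    )
```

The accuracy drop reported in Table 3 is then the noiseless accuracy minus this average; `accuracy_fn` stands in for evaluating the trained SVM on a perturbed set.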

3 Discussion and Conclusion

The average fitness of each generation is calculated to check whether the results evolve and converge toward an optimal solution. A short comparison between the kernel types is also performed to select the best one to recommend for quality data classification. To show how GA-SVM progresses through the generations, Figs. 4 and 5 show the evolution of the average fitness, while Figs. 6 and 7 show the evolution of the fittest individual. Linear trend lines are added to the graphs to observe whether the fitness improves over the generations, which is necessary to demonstrate the efficiency of the proposed approach. The results show that the RBF kernel has a higher average fitness than the polynomial and sigmoid kernels, and that its convergence is also faster. For these reasons, and because it was the optimal kernel on both datasets, the RBF kernel is recommended for the classification of quality data. It was also observed that an improving average fitness does not imply that the fittest individual over all generations will appear in the last one. Finally, the robustness assessment performed using Monte-Carlo simulations showed that having


Fig. 4 Average fitness evolution (RBF kernel; Chemical Data) [line plot: average fitness of every generation; y-axis 0.79–0.85, x-axis generations 0–50]

Fig. 5 Average fitness evolution (Sigmoid kernel; Chemical Data) [line plot: average fitness of every generation; y-axis 0.60–0.66, x-axis generations 0–50]

Fig. 6 Fittest individual evolution (RBF kernel; Chemical Data) [line plot: fittest individual of every generation; y-axis 0.864–0.872, x-axis generations 0–50]

26

W. Zouhri et al.

Fig. 7 Fittest individual evolution (Sigmoid kernel; Chemical Data) [line plot: fittest individual of every generation; y-axis 0.70–0.90, x-axis generations 0–50]

high prediction accuracies does not mean that the results are robust. Accordingly, a robustness evaluation should also be performed to assess how robust the optimal kernel functions determined by the GA-SVM approach are to measurement uncertainties. To conclude, the aim of this study was to analyze the performance of the SVM technique on a quality data classification problem. As with any other machine learning tool, SVM performance depends on the values chosen for its hyperparameters. A GA-based SVM approach was therefore proposed to identify the hyperparameters yielding the highest accuracy on the validation set. The approach was applied to two quality datasets and identified the RBF kernel as the most suitable for the classification of quality data. Finally, a robustness evaluation was performed to emphasize the impact of measurement uncertainties on the classifiers' predictive performance. As future work, the proposed GA approach will be applied to a multi-layer perceptron (GA-MLP) in order to evaluate its efficiency and compare it to GA-SVM; the key parameters' uncertainties when using the SVM approach will also be identified. This is necessary to validate the efficiency and reliability of the technique when noisy data are considered in the context of quality management.

Appendix

See Tables 4, 5 and 6.

Table 4 Optimal sigmoid and polynomial kernel functions

Dataset                Optimal sigmoid kernel    Sigmoid accuracy (%)       Optimal polynomial kernel    Polynomial accuracy (%)
                       C     r     α             Train    Dev     Test      C     r    α    d             Train     Dev     Test
Chemical data          419   −27   1             83.56    86.13   83.93     12    −8   31   1             80.22     84.93   81.03
Rolling process data   903   −5    1             89.47    93.12   59.82     422   26   41   1             100.00    99.08   80.82


Table 5 Average fitness of every generation—Chemical Data

Generations   1     2     3     4     5     6     7     8     9     10    11    12    13
RBF (%)       72.5  87.9  90.4  90.7  91.8  94.0  93.2  91.4  95.7  94.8  95.0  93.1  94.2
Sigmoid (%)   50.4  51.1  51.1  50.8  50.2  50.0  50.0  50.0  50.0  50.2  50.4  50.7  50.3
Poly (%)      97.9  98.3  98.5  98.3  98.5  98.4  98.5  98.4  98.4  98.4  98.3  98.2  98.4

Generations   14    15    16    17    18    19    20    21    22    23    24    25    26
RBF (%)       92.9  93.8  92.6  90.5  90.0  92.4  93.0  94.9  95.5  93.8  94.0  93.2  92.2
Sigmoid (%)   50.0  50.2  50.1  50.1  50.1  50.3  50.0  50.2  50.4  50.1  50.5  50.8  50.8
Poly (%)      98.7  98.7  98.5  97.5  97.6  98.4  98.4  98.7  97.7  98.6  98.5  98.4  97.5

Generations   27    28    29    30    31    32    33    34    35    36    37    38    39
RBF (%)       91.2  95.2  95.8  91.5  90.5  94.7  94.0  94.8  92.4  93.1  93.8  93.1  95.4
Sigmoid (%)   51.0  51.4  50.8  50.8  51.3  50.1  51.0  50.6  50.7  50.2  50.5  50.1  50.1
Poly (%)      98.3  98.5  98.7  98.4  98.5  98.5  98.6  98.5  98.2  98.5  98.5  98.3  98.4

Generations   40    41    42    43    44    45    46    47    48    49    50
RBF (%)       93.1  93.6  92.6  93.4  94.2  95.0  91.9  92.4  93.3  95.4  92.6
Sigmoid (%)   50.1  50.3  50.4  50.2  50.5  51.4  50.3  50.4  50.6  50.2  50.9
Poly (%)      98.5  98.4  97.5  98.4  98.6  98.4  98.6  98.2  97.6  97.5  98.7

Table 6 Average fitness of every generation—Rolling Data

Generations   1     2     3     4     5     6     7     8     9     10    11    12    13
RBF (%)       79.6  80.8  82.5  83.6  83.9  84.0  84.6  84.3  84.1  84.1  84.2  84.1  84.3
Sigmoid (%)   61.2  61.4  61.5  60.4  62.4  60.9  62.4  61.3  62.5  62.1  62.3  61.5  63.0
Poly (%)      67.3  67.0  67.4  66.0  67.1  68.0  68.5  68.3  67.7  67.4  69.0  71.1  70.7

Generations   14    15    16    17    18    19    20    21    22    23    24    25    26
RBF (%)       84.3  84.1  84.5  84.1  84.2  83.9  84.2  84.0  84.0  84.5  83.8  84.3  84.0
Sigmoid (%)   64.2  63.9  63.3  63.4  63.9  63.5  63.5  63.3  63.5  64.1  64.5  62.7  63.4
Poly (%)      69.1  69.6  68.1  70.7  70.0  73.1  72.1  66.8  70.8  70.3  70.4  72.3  70.8

Generations   27    28    29    30    31    32    33    34    35    36    37    38    39
RBF (%)       84.0  84.2  84.1  84.0  83.3  83.8  84.0  83.7  84.0  84.2  83.6  84.5  84.6
Sigmoid (%)   63.1  63.1  61.7  62.5  62.8  62.4  63.1  62.6  63.2  63.6  63.1  62.9  62.3
Poly (%)      71.5  70.8  71.6  69.9  71.5  68.6  69.2  66.9  69.7  68.8  71.4  71.2  71.9

Generations   40    41    42    43    44    45    46    47    48    49    50
RBF (%)       84.2  83.7  83.9  84.0  84.6  84.7  84.2  84.3  84.2  84.3  84.5
Sigmoid (%)   62.3  63.0  63.8  64.5  64.4  64.2  63.8  63.7  64.3  64.5  65.4
Poly (%)      70.7  71.9  72.2  70.0  71.3  72.1  72.4  72.8  72.6  71.7  74.0

References

1. Alcácer, V., Cruz-Machado, V.: Scanning the Industry 4.0: a literature review on technologies for manufacturing systems. Eng. Sci. Technol. Int. J. 22(3), 899–919 (2019). https://doi.org/10.1016/j.jestch.2019.01.006
2. Rostami, H., Dantan, J.-Y., Homri, L.: Review of data mining applications for quality assessment in manufacturing industry: support vector machines. Int. J. Metrol. Qual. Eng. 6(4), 401 (2015). https://doi.org/10.1051/ijmqe/2015023
3. Köksal, G., Batmaz, İ., Testik, M.C.: A review of data mining applications for quality improvement in manufacturing industry. Expert Syst. Appl. 38(10), 13448–13467 (2011). https://doi.org/10.1016/j.eswa.2011.04.063
4. Attar, M.E., Hamery, X.: Industrial expert system acquired by machine learning. Appl. Artif. Intell. 8(4), 497–542 (1994). https://doi.org/10.1080/08839519408945457
5. Siltepavet, A., Sinthupinyo, S., Chongstitvatana, P.: Improving quality of products in hard drive manufacturing by decision tree technique. J. Comput. Sci. Issues 9(3), 7 (2012)
6. Adidela, D.R.: Construction of fuzzy decision tree using expectation maximization algorithm. J. Comput. Sci. Manag. Res. 1(3), 9 (2012)
7. Wray, B.A., Rakes, T.R., Rees, L.P.: Neural network identification of critical factors in a dynamic just-in-time kanban environment. J. Intell. Manuf. 8(2), 83–96 (1997). https://doi.org/10.1023/A:1018548519287
8. Markham, I.S., Mathieu, R.G., Wray, B.A.: A rule induction approach for determining the number of kanbans in a just-in-time production system. Comput. Ind. Eng. 34(4), 717–727 (1998). https://doi.org/10.1016/S0360-8352(98)00099-0
9. Hou, T.-H. (Tony), Liu, W.-L., Lin, L.: Intelligent remote monitoring and diagnosis of manufacturing processes using an integrated approach of neural networks and rough sets. J. Intell. Manuf. 14(2), 239–253 (2003). https://doi.org/10.1023/A:1022911715996
10. Cho, H.S., Leu, M.C.: Artificial neural networks in manufacturing processes: monitoring and control. IFAC Proc. Vol. 31(15), 529–537 (1998). https://doi.org/10.1016/S1474-6670(17)40607-0
11. Zahran, B.M.: Using neural networks to predict the hardness of aluminum alloys. Eng. Technol. Appl. Sci. Res. 5(1), 4 (2015)
12. Pouraliakbar, H., Khalaj, M., Nazerfakhari, M., Khalaj, G.: Artificial neural networks for hardness prediction of HAZ with chemical composition and tensile test of X70 pipeline steels. J. Iron Steel Res. Int. 22(5), 446–450 (2015). https://doi.org/10.1016/S1006-706X(15)30025-X
13. Noyel, M., Thomas, P., Charpentier, P., Thomas, A., Beauprêtre, B.: Improving production process performance thanks to neuronal analysis. IFAC Proc. Vol. 46(7), 432–437 (2013). https://doi.org/10.3182/20130522-3-BR-4036.00055
14. Bahlmann, C., Heidemann, G., Ritter, H.: Artificial neural networks for automated quality control of textile seams. Pattern Recognit. 32(6), 1049–1060 (1999). https://doi.org/10.1016/S0031-3203(98)00128-9
15. Gryllias, K.C., Antoniadis, I.A.: A support vector machine approach based on physical model training for rolling element bearing fault detection in industrial environments. Eng. Appl. Artif. Intell. 25(2), 326–344 (2012). https://doi.org/10.1016/j.engappai.2011.09.010
16. Fernández-Francos, D., Martínez-Rego, D., Fontenla-Romero, O., Alonso-Betanzos, A.: Automatic bearing fault diagnosis based on one-class ν-SVM. Comput. Ind. Eng. 64(1), 357–365 (2013). https://doi.org/10.1016/j.cie.2012.10.013
17. Xanthopoulos, P., Razzaghi, T.: A weighted support vector machine method for control chart pattern recognition. Comput. Ind. Eng. 70, 134–149 (2014). https://doi.org/10.1016/j.cie.2014.01.014
18. Diao, G., Zhao, L., Yao, Y.: A dynamic quality control approach by improving dominant factors based on improved principal component analysis. Int. J. Prod. Res. 53(14), 4287–4303 (2015). https://doi.org/10.1080/00207543.2014.997400
19. Baccarini, L.M.R., Rocha e Silva, V.V., de Menezes, B.R., Caminhas, W.M.: SVM practical industrial application for mechanical faults diagnostic. Expert Syst. Appl. 38(6), 6980–6984 (2011). https://doi.org/10.1016/j.eswa.2010.12.017
20. Jegadeeshwaran, R., Sugumaran, V.: Fault diagnosis of automobile hydraulic brake system using statistical features and support vector machines. Mech. Syst. Signal Process. 52–53, 436–446 (2015). https://doi.org/10.1016/j.ymssp.2014.08.007
21. Lessmann, S., Stahlbock, R., Crone, S.F.: Optimizing hyperparameters of support vector machines by genetic algorithms, p. 7
22. Bäck, T., Schwefel, H.-P.: An overview of evolutionary algorithms for parameter optimization. Evol. Comput. 1(1), 1–23 (1993). https://doi.org/10.1162/evco.1993.1.1.1
23. Chou, J.-S., Cheng, M.-Y., Wu, Y.-W., Pham, A.-D.: Optimizing parameters of support vector machine using fast messy genetic algorithm for dispute classification. Expert Syst. Appl. 41(8), 3955–3964 (2014). https://doi.org/10.1016/j.eswa.2013.12.035
24. Lin, S.-W., Ying, K.-C., Chen, S.-C., Lee, Z.-J.: Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Syst. Appl. 35(4), 1817–1824 (2008). https://doi.org/10.1016/j.eswa.2007.08.088
25. Yang, D., Liu, Y., Li, S., Li, X., Ma, L.: Gear fault diagnosis based on support vector machine optimized by artificial bee colony algorithm. Mech. Mach. Theory 90, 219–229 (2015). https://doi.org/10.1016/j.mechmachtheory.2015.03.013
26. Olatomiwa, L., Mekhilef, S., Shamshirband, S., Mohammadi, K., Petković, D., Sudheer, C.: A support vector machine–firefly algorithm-based model for global solar radiation prediction. Solar Energy 115, 632–644 (2015). https://doi.org/10.1016/j.solener.2015.03.015
27. Baker, B.M., Ayechew, M.A.: A genetic algorithm for the vehicle routing problem. Comput. Oper. Res. 30(5), 787–800 (2003). https://doi.org/10.1016/S0305-0548(02)00051-5
28. Jebari, K., Madiafi, M.: Selection methods for genetic algorithms. Int. J. Emerg. Sci. 3(4) (2013)
29. Fang, Y., Li, J.: A review of tournament selection in genetic programming. In: Cai, Z., Hu, C., Kang, Z., Liu, Y. (eds.) Advances in Computation and Intelligence, vol. 6382, pp. 181–192. Springer, Berlin, Heidelberg (2010)
30. Umbarkar, A.J., Sheth, P.D.: Crossover operators in genetic algorithms: a review. IJSC 06(01), 1083–1092 (2015). https://doi.org/10.21917/ijsc.2015.0150
31. Angelova, M., Pencheva, T.: Tuning genetic algorithm parameters to improve convergence time. Int. J. Chem. Eng. 2011, 1–7 (2011). https://doi.org/10.1155/2011/646917

Towards a Platform to Implement an Intelligent and Predictive Maintenance in the Context of Industry 4.0

El Mehdi Bourezza and Ahmed Mousrij

Abstract In the world of the fourth industrial revolution, a company must have a relevant maintenance management system in order to master its production tools. It is therefore necessary to research and develop new maintenance approaches in the context of Industry 4.0, in order to digitize the manufacturing process and generate the information needed to detect failures and act in real time. This paper presents the draft of an implementation approach for an intelligent industrial maintenance platform aligned with the principles of Industry 4.0. The platform acquires and conditions data, then analyzes it to detect failures and to estimate how long a device will keep functioning correctly. A decision-support module then chooses the appropriate maintenance procedure and passes it on to be scheduled and executed; finally, an evaluation module checks that execution went smoothly.

Keywords Industry 4.0 · Predictive maintenance · Maintenance architecture platform · Intelligent approach

1 Introduction

In the era of the fourth industrial revolution, facing increasingly demanding consumer behavior in a very competitive market, a company must control its production tools while maintaining a relevant maintenance management system. It is therefore necessary to research and develop new maintenance approaches as part of Industry 4.0, in order to digitize the manufacturing process and generate the information needed to detect failures and act in real time. In other words, the interconnection of the physical environment allows the acquisition and processing of data

E. M. Bourezza (B) · A. Mousrij
Laboratory of Engineering of Industrial Management and Innovation, Hassan First University, Settat, BP, Morocco
e-mail: [email protected]
A. Mousrij
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_3


to extract knowledge and optimize the production process in terms of downtime, intervention planning, maintenance cost, and so on. The use of intelligent maintenance approaches requires advanced analysis and data-processing techniques (IoT, Big Data, AI, Cloud Computing, …), which form the basis of the Industry 4.0 concepts. These approaches support decision making and handle large amounts of data; however, processing a mass of real-time data is a major challenge for decision-making systems [1]. In addition, despite the advantages of implementing a maintenance 4.0 system, the maintenance architectures proposed as part of the industry of the future are not able to work in an integrated way and are not yet mature enough [2]. This paper aims to develop an intelligent platform for the implementation of industrial maintenance, aligned with the principles of Industry 4.0, which allows the acquisition of data collected on production lines. After the collected information is analyzed, a monitoring and failure-prediction phase triggers the maintenance procedures, which are then scheduled and performed using augmented reality. Finally, the purpose of the evaluation phase is to monitor the efficient execution of the procedures. To delimit the scope of our project, we first go through a literature review of published works in order to build a comparative study of the proposed architectures. Secondly, we identify the conditions for success as well as the limitations that can hinder the implementation of intelligent maintenance. These studies then allow us to present, in Sect. 4, the draft of an intelligent industrial maintenance platform in an Industry 4.0 context, to be applied in Moroccan SMEs for global and efficient maintenance management. The last section is devoted to conclusions and future work.

2 Literature Review and Comparison

Since the arrival of Industry 4.0 concepts, many architectures have been proposed by academics to implement intelligent maintenance in production lines. [2] and [3] have proposed two architectures of an intelligent and predictive maintenance system that meet Industry 4.0 principles. These architectures focus on the in-depth analysis of the collected information for the early detection of possible failures, and they provide guided intelligent decision support during maintenance interventions. However, neither proposal deals with the scheduling module for the procedures to be executed, nor with the evaluation module that controls their execution. In addition, no method was used for the analysis and design of the architectures. The approaches proposed by Bousdekis and Mentzas [4] and Al-Najjar et al. [5] have the same characteristics and comprise the same stages as the two previous proposals. Nevertheless, they do not follow a frame of reference such as MIMOSA's OSA-CBM, used by other architectures, which is an application of the ISO 13374 standard for machine condition monitoring and diagnostics. Moreover, we note the absence of a visualization module for following instantaneous changes. [6] presented a smart predictive maintenance system consisting of the modules needed to achieve the goal of Industry 4.0. On


the other hand, it does not address the execution and evaluation modules, and no design methodology is given. Other architectural proposals for the implementation of intelligent and predictive maintenance, together with the approaches described previously, are summarized in the comparative Table 1. Analyzing this table, we find that none of the architectures deals with both the planning of maintenance procedures and the evaluation of interventions. We also note the absence of an analysis and design method for the proposed approaches. Some architectures do not follow the OSA-CBM standard, and others do not consider the operational aspect of maintenance.

3 Key Factors to Successfully Develop the Maintenance 4.0 System

The implementation of a maintenance 4.0 system looks like a typical change project, with all its challenges. To develop an intelligent and predictive maintenance system, it is important to consider all the key success factors. According to [1], all the interdependencies between the modules constituting the maintenance 4.0 system must be taken into account. [8] have added other important criteria for the development of a maintenance system aligned with the principles of Industry 4.0: the integration of physical resources; the interoperability of heterogeneous modules through data exchange; the flexibility of the proposed architecture to adapt to changes; and a generic, evolutive design able to follow the conditions of use. In addition, in the era of Industry 4.0, there is a variety of technology groups (Fig. 1). To build a given 4.0 process, one must therefore choose the appropriate technology group, along with the methods and techniques to adopt. On the other hand, it is important to follow a standard repository that provides the necessary steps and best practices to develop an intelligent and predictive maintenance system. The OSA-CBM standard from MIMOSA [13] details the ISO 13374 standard for equipment condition monitoring and diagnostics. It comprises six steps, shown in Fig. 2 and explained in the cited reference. We will build on this standard to develop our intelligent industrial maintenance platform. However, a maintenance 4.0 system faces a number of challenges that must be considered, including the large amount of data from different sources, which can affect the accessibility, quality, and fusion of information [14]. In addition, the maintenance 4.0 system must be able to process industrial big data and have the flexibility to acquire and store information [15]. Finally, inaccurate predictors can lead to useless maintenance work; the precision of the forecasts is a critical point, since errors generate waste in terms of time, production, cost, etc.

Table 1 Comparison between proposed architectures

[Table comparing the architectures proposed in [2, 4, 5, 6, 7, 8, 9, 10, 11, 12] against the following criteria: data collection, data analysis, visualization of failure detection, decision making, maintenance planning, maintenance performance, evaluation of the procedure, and compliance with the OSA-CBM standard.]


Fig. 1 Industry 4.0 technology groups: Big Data, AI, Cloud, CPS, Simulation, IoT, Robots, and AR, surrounding the 4.0 process

Fig. 2 Steps of the OSA-CBM standard: Sensors/Transducer/Manual Entry → Data Acquisition → Data Manipulation → State Detection → Health Assessment → Prognostics Assessment → Advisory Generation

4 Platform of Industrial Maintenance in an Industry 4.0 Context

The literature review discussed in the second section, the comparative study between the different proposals, the consideration of the key success factors, and compliance with the MIMOSA OSA-CBM guidelines allowed us to develop the architecture of our maintenance 4.0 platform, articulated in six complementary phases (Fig. 3). Figure 5 presents the synthesis of our platform, designed using the SADT method (Structured Analysis and Design Technique), illustrated in Fig. 4. SADT is a systemic approach to modeling a complex system or an operating process, in which each function of the modeled system (process) is represented as a subsystem, as follows.


Fig. 3 The different phases of the industrial maintenance 4.0 platform: data acquisition and conditioning, data analysis, diagnosis and prognosis, decision support, execution, and evaluation

Fig. 4 The SADT method: an actigram box realizes a global function, transforming an input flow into an output flow under control constraints (configuration, setting, exploitation, energy), with the system as its mechanism

The actigram is characterized by an input/output flow, its global function, and all the constraints that must be taken into account. The proposed architecture of our platform consists of several modules forming a functional system that implements an intelligent industrial maintenance process based on the principles of Industry 4.0. The following subsections detail the modules of our platform.

4.1 Data Acquisition and Conditioning Module

This module is divided into two components: data acquisition and data conditioning. The acquisition component collects, automatically or manually (HMI), the information feeding the maintenance process from several different sources, using the Internet of Things. The conditioning component, in turn, filters the data by eliminating disturbances and smoothing noisy signals. In addition, it merges the data to avoid unnecessary and


repetitive information. It also transforms the data into an appropriate format so that the other modules can understand and use it. Finally, the last operation reduces the size of the data by applying signal feature-extraction methods.
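As a simple illustration of these conditioning steps (smoothing a noisy signal, then reducing a window of samples to a few characteristic features), consider the following generic sketch; it is our own example, not the platform's actual implementation:

```python
def moving_average(signal, window=5):
    # smooth a noisy sensor signal with a causal sliding-window mean
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out


def extract_features(signal):
    # reduce a window of samples to a few descriptive statistics,
    # shrinking the data passed on to the analysis module
    n = len(signal)
    mean = sum(signal) / n
    return {
        "mean": mean,
        "rms": (sum(x * x for x in signal) / n) ** 0.5,
        "peak": max(abs(x) for x in signal),
        "variance": sum((x - mean) ** 2 for x in signal) / n,
    }
```

A raw vibration window of hundreds of samples is thus reduced to a handful of values (mean, RMS, peak, variance) before storage and analysis.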

4.2 Data Analysis Module

This module contains a database for temporary storage. It analyzes the data by applying emerging technologies such as advanced analytics and machine learning to detect and identify failures, generating knowledge that either triggers an automatic correction without human intervention or initiates a maintenance procedure to return the system to its normal state. When analyzing new data, the module takes historical data into account, which allows new insights to be drawn from the correlations. Recurrent neural networks (RNNs) such as LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) networks, applied to maintenance, are used at this stage for deep learning (Fig. 5).
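While the module relies on learned models (LSTM/GRU), its core idea of comparing new readings against what the historical data makes plausible can be illustrated with a minimal statistical stand-in (our sketch, not part of the platform):

```python
def deviation_alerts(history, new_values, threshold=3.0):
    # flag readings that deviate from the historical distribution by more
    # than `threshold` standard deviations (a crude anomaly detector)
    n = len(history)
    mean = sum(history) / n
    std = (sum((x - mean) ** 2 for x in history) / n) ** 0.5 or 1.0
    return [abs(v - mean) / std > threshold for v in new_values]
```

A flagged reading would, in the platform, either trigger an automatic correction or initiate a maintenance procedure.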

Fig. 5 Block diagram of the intelligent maintenance platform (functions: acquire and condition data; store and analyze data; visualize, detect failures and prognosis; choose maintenance procedure; schedule maintenance tasks; perform the maintenance; evaluate and standardize the procedure performed)

Fig. 6 Platform architecture

4.3 Diagnosis and Prognosis Module

This module provides real-time monitoring by generating a maintenance warning whenever a failure occurs. It consists of three components: visualization, diagnosis and prognosis. The visualization component is a dashboard for following the evolution of physical quantities in real time, such as pressure, temperature, vibration and flow. It also makes it possible to visualize and report the critical state of the production process when an indicator leaves its tolerance range. The diagnosis component uses deep learning to detect probable failures, applying the instructions and rules generated by the analysis module to evaluate the facts stored in the acquisition module database.


The third component of this module is the prognosis, which estimates the remaining operating time of a system component before failure using the RUL (Remaining Useful Life) concept.
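As a toy illustration of the RUL concept, one can fit a trend to a monitored degradation indicator and extrapolate it to a failure threshold; the linear model and the threshold value below are illustrative assumptions, not the prognosis algorithm of the platform:

```python
import numpy as np

def estimate_rul(times, indicator, failure_threshold):
    """Fit a linear degradation trend and return the estimated remaining
    time before the indicator crosses the failure threshold."""
    slope, intercept = np.polyfit(times, indicator, deg=1)
    if slope <= 0:
        return float("inf")  # no degradation trend detected
    t_failure = (failure_threshold - intercept) / slope
    return max(0.0, t_failure - times[-1])

# Simulated vibration-level history drifting toward a threshold of 10 mm/s
t = np.arange(0.0, 100.0, 10.0)          # observation times (hours)
vib = 2.0 + 0.05 * t                     # crosses 10 mm/s at t = 160
rul = estimate_rul(t, vib, 10.0)         # ≈ 70 hours remaining (160 − 90)
```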

4.4 Decision Support Module

After receiving the maintenance warning, the decision support module looks in its database for the correct procedure to remedy the reported problem. If the requested procedure does not exist, a creation or adaptation process that uses cloud computing is launched to enrich the database with new procedures and ensure effective interventions.

4.5 Planning Module

This module plans the maintenance intervention sent by the decision support module by applying heuristic algorithms, taking into account the necessary resources (human, material, …) and the execution time determined by the operator according to the maintenance procedure. It also considers the availability of the system as well as the state of production.
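A sketch of how such planning could look: a simple greedy heuristic that assigns interventions, in priority order, to the technician who becomes free first. The task tuple layout and the priority rule are assumptions made for illustration:

```python
import heapq

def schedule(tasks, n_technicians):
    """Greedy heuristic: process tasks in priority order and assign each
    to the technician who becomes free first.
    Each task is (name, priority, duration_hours); lower priority value = more urgent."""
    free_at = [(0.0, tech) for tech in range(n_technicians)]
    heapq.heapify(free_at)
    plan = []
    for name, priority, duration in sorted(tasks, key=lambda task: task[1]):
        start, tech = heapq.heappop(free_at)        # earliest-available technician
        plan.append((name, tech, start, start + duration))
        heapq.heappush(free_at, (start + duration, tech))
    return plan

tasks = [("bearing change", 1, 4.0), ("belt alignment", 2, 2.0), ("oil top-up", 3, 1.0)]
plan = schedule(tasks, n_technicians=2)
```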

5 Platform Architecture

The functional decomposition of our platform allows us to propose a hierarchical architecture (Fig. 6) to be established at the level of the industrial sector, one that supports the set of challenges discussed in Sect. 3 by ensuring intelligent and efficient management. This architecture makes it possible to manage the heterogeneous data collected from different sources in an intelligent way, in order to guarantee the implementation of intelligent maintenance within industrial entities. It consists of three levels: a hardware level, a software level and an application level. The hardware level is the acquisition chain for data collected from a distributed sensor network representing the physical environment, taking into account the constraints of synchronization, correlation and data quality. It also transmits the data to the higher levels of the architecture using different communication protocols. Its third function is to ensure the digital conversion of the data in order to obtain a data model that is exploitable for analysis and processing. The software level stores, analyzes and processes the data and makes decisions using artificial intelligence algorithms. It allows dynamic data management to be viewed in real time for the purpose of monitoring equipment status. The application level is the top level of this architecture; it deploys the results of the platform through interaction with stakeholders.

6 Predictive Maintenance Platform for Vibration Analysis of Rotating Machines

Our predictive maintenance platform constitutes a decision-making system whose objective is to ensure real-time monitoring of the equipment of a given production line (Fig. 7). Several monitoring techniques exist for this: vibration analysis, thermographic analysis, oil analysis, etc. In our case, we chose the vibration analysis of rotating machines as an application case because it is the most widely used technique for detecting the appearance and evolution of most mechanical defects. The advantage of our platform is that it guarantees real-time analysis and decision-making. As illustrated in Fig. 7, the platform acquires the signals delivered by the vibration sensors installed on precise measurement points (bearings, fixing points, etc.). These signals are conditioned in order to improve their quality and to meet application constraints. In the analysis block, this information is compared against the reference models of the predetermined vibration phenomena (unbalance, misalignment, bearings, …). Each reference model is developed empirically and carries judgment features that specify the admissible vibration levels. The tolerance intervals for each vibration phenomenon, as well as the instantaneous variation, are displayed on a dashboard which allows monitoring the state of the system in order to make decisions and act at the right time. The maintenance intervention can take place in real time or be planned in a systematic maintenance program, depending on the severity of the vibration phenomenon.

Fig. 7 Predictive maintenance platform for vibration analysis of rotating machines (signal conditioners SC1–SCn, multiplexer, A/D converter, database, and data analysis against the reference models)
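For instance, checking a measured signal against the admissible levels of each reference model could amount to comparing band-limited spectral amplitudes with tolerance thresholds; the frequency bands and limits below are illustrative stand-ins for the empirically developed reference models:

```python
import numpy as np

# Illustrative reference models: (frequency band in Hz, admissible amplitude)
REFERENCE_MODELS = {
    "unbalance": ((20.0, 30.0), 0.5),     # around the 25 Hz rotation frequency
    "misalignment": ((45.0, 55.0), 0.3),  # around twice the rotation frequency
}

def band_amplitude(signal, fs, band):
    """Peak spectral amplitude of the signal inside a frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[mask].max())

def diagnose(signal, fs):
    """Flag every vibration phenomenon whose band amplitude exceeds its limit."""
    return [name for name, (band, limit) in REFERENCE_MODELS.items()
            if band_amplitude(signal, fs, band) > limit]

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * 25 * t)  # strong 25 Hz component
flags = diagnose(signal, fs)               # -> ['unbalance']
```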

7 Conclusions

The architecture proposed in this paper presents a decision-making platform whose objective is to ensure real-time monitoring of the equipment of a given production line. Our platform acquires and conditions the data to be analyzed in order to detect breakdowns and to estimate the remaining useful life of equipment. The choice of the appropriate procedure is then ensured by the decision support module, which sends it on to be planned and executed. We chose the vibration analysis of rotating machines as an application case because it is the most widely used technique for detecting the appearance and evolution of most mechanical faults, but it should be noted that the architecture could also be implemented using thermographic analysis. The proposed version is an initial one; throughout our research, this architecture will undergo several revisions until it is fully satisfactory. Our future work will be devoted to the development of reference models of vibration phenomena from empirical models built through an experimental study. In addition, we will also address the design of the hardware part of our platform.

References

1. Thoben, K.-D., Ait-Alla, A., Franke, M., Hribernik, K., Lütjen, M., Freitag, M.: Real-time predictive maintenance based on complex event processing. In: Enterprise Interoperability, pp. 291–296 (2018). https://doi.org/10.1002/9781119564034.ch36
2. Cachada, A., Barbosa, J., Leitão, P., Geraldes, C.A.S., Deusdado, L., Costa, J., Romero, L.: Maintenance 4.0: intelligent and predictive maintenance system architecture. In: 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA) (2018). https://doi.org/10.1109/etfa.2018.8502489
3. Bousdekis, A., Lepenioti, K., Ntalaperas, D., Vergeti, D., Apostolou, D., Boursinos, V.: A RAMI 4.0 view of predictive maintenance: software architecture, platform and case study in steel industry. In: Proper, H., Stirna, J. (eds.) Advanced Information Systems Engineering Workshops. CAiSE 2019. Lecture Notes in Business Information Processing, vol. 349. Springer, Cham (2019)
4. Bousdekis, A., Mentzas, G.: Condition-based predictive maintenance in the frame of industry 4.0. In: Lödding, H., Riedel, R., Thoben, K.D., von Cieminski, G., Kiritsis, D. (eds.) Advances in Production Management Systems. The Path to Intelligent, Collaborative and Sustainable Manufacturing. APMS 2017. IFIP Advances in Information and Communication Technology, vol. 513. Springer, Cham (2017)
5. Al-Najjar, B., Algabroun, H., Jonsson, H.: Smart maintenance model using cyber physical system. In: International Conference on “Role of Industrial Engineering in Industry 4.0 Paradigm” (ICIEIND), Bhubaneswar, India, September 27–30, pp. 1–6 (2018)
6. Wang, K.: Intelligent Predictive Maintenance (IPdM) system Industry 4.0 scenario. WIT Trans. Eng. Sci. 113 (2016). https://doi.org/10.2495/IWAMA150301
7. Canito, A., et al.: An architecture for proactive maintenance in the machinery industry. In: De Paz, J., Julián, V., Villarrubia, G., Marreiros, G., Novais, P. (eds.) Ambient Intelligence—Software and Applications—8th International Symposium on Ambient Intelligence (ISAmI 2017). Advances in Intelligent Systems and Computing, vol. 615. Springer, Cham (2017)
8. Peres, R.S., Dionisio, A., Leitao, P., Barata, J.: IDARTS—Towards intelligent data analysis and real-time supervision for industry 4.0. Comput. Ind. 101, 138–146 (2018). https://doi.org/10.1016/j.compind.2018.07.004
9. Li, Z.: A Framework of Intelligent Fault Diagnosis and Prognosis in the Industry 4.0 Era. Doctoral thesis, Norwegian University of Science and Technology (2018)
10. Algabroun, H., Iftikhar, M.U., Al-Najjar, B., Weyns, D.: Maintenance 4.0 framework using self-adaptive software architecture. In: Proceedings of the 2nd International Conference on Maintenance Engineering, IncoME-II 2017. The University of Manchester, UK (2017)
11. Ferreira, L.L., Albano, M., Silva, J., Martinho, D., Marreiros, G., di Orio, G., Ferreira, H., et al.: A pilot for proactive maintenance in industry 4.0. In: 2017 IEEE 13th International Workshop on Factory Communication Systems (WFCS) (2017). https://doi.org/10.1109/wfcs.2017.7991952
12. Galar, D., Thaduri, A., Catelani, M., Ciani, L.: Context awareness for maintenance decision making: a diagnosis and prognosis approach. Measurement 67, 137–150 (2015). https://doi.org/10.1016/j.measurement.2015.01.015
13. MIMOSA—An Operations and Maintenance Information Open System Alliance: MIMOSA OSA-CBM (2010). [Online]. https://www.mimosa.org/mimosa-osa-cbm
14. Aljumaili, M., Wandt, K., Karim, R., Tretten, P.: eMaintenance ontologies for data quality support. J. Qual. Maint. Eng. 21(3), 358–374 (2015). https://doi.org/10.1108/JQME-09-2014-0048
15. Liu, J., Dietz, T., Carpenter, S.R., Alberti, M., Folke, C., Moran, E., Taylor, W.W., et al.: Complexity of coupled human and natural systems. Science 317(5844), 1513–1516 (2007). https://doi.org/10.1126/science.1144004

Towards a Prediction Analysis in an Industrial Context

Ilham Battas, Ridouane Oulhiq, Hicham Behja, and Laurent Deshayes

Abstract The efficiency of a mine's fixed facilities depends on several parameters, which makes its analysis and improvement very complex. The purpose of this article is to propose an approach to predict the efficiency of the fixed facilities based on data mining algorithms. The input of this approach is the data concerning the parameters influencing the efficiency; the output is the predicted efficiency of the facilities. The approach consists of four steps: domain comprehension, data preprocessing, development of prediction models, and finally validation and verification of the proposed models. It has been applied to a Moroccan mining company, and the results of this case study are also presented.

Keywords Data mining · Prediction · Industrial efficiency

I. Battas (B): Research Foundation for Development and Innovation in Science and Engineering, 16469 Casablanca, Morocco, e-mail: [email protected]
I. Battas, H. Behja: Engineering Research Laboratory (LRI), System Architecture and Modeling Team (EASM), National and High School of Electricity and Mechanics (ENSEM), Hassan II University, 8118 Casablanca, Morocco, e-mail: [email protected]; [email protected]
I. Battas, R. Oulhiq, L. Deshayes: Innovation Lab for Operations, Mohammed VI Polytechnic University, 43150 Benguerir, Morocco, e-mail: [email protected]; [email protected]
© Springer Nature Switzerland AG 2021. T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_4

1 Introduction

In recent years, data mining has been used by a large number of companies in different industrial sectors around the world. Data mining combines statistical analysis, artificial intelligence and advanced technologies to extract relationships and patterns from huge databases [1]. To carry out a data mining project, various tools can be used, such as segmentation, where the goal is to discover groups and structures within the studied databases, association tools to identify rules, and finally prediction tools [2]. Data mining is part of the general process of knowledge discovery in databases (KDD), defined as “the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data”. It essentially involves preparing data, searching for patterns, and evaluating and refining the extracted knowledge, all repeated over several iterations [3]. Manufacturing has become a promising area of application for data mining to extract knowledge for performance improvement purposes. Indeed, there exist several indicators to evaluate the performance of a company; a crucial one is the efficiency of facilities, which combines several factors such as the efficiency of materials and human resources, the effectiveness of different resource management strategies, and other external and internal factors. The analysis of efficiency in the industrial context is often limited to standard methods which are strongly related to the company's core business and mainly inspired by lean manufacturing and lean management. Indeed, current manufacturing industry methods define the predicted efficiency as the ratio between the quantity predicted to be produced by the production lines and the production cycle time. Data mining tools, by contrast, use both the history of the predicted efficiency and the history of all parameters influencing the efficiency, such as field data and meteorological data, to determine the predicted efficiency. This work is carried out within the framework of Industry 4.0. It proposes a new approach to the development of a system based on intelligent algorithms to predict the efficiency of industrial facilities.
As powerful tools, these algorithms will then be adapted to support managers' decision-making. In this paper, we present a data mining approach for predicting facility efficiency based on extensive data acquisition over all facilities. Its main objective is to predict efficiency from the influencing factors within the process chain, so that these critical factors can serve as a basis for efficiency improvement by identifying the modifications to be made in order to achieve a desired efficiency. The approach consists of four main steps. The first consists of acquiring in-depth information and knowledge about the application domain, the mining sector's fixed installations, in order to determine the set of parameters influencing the screening unit's efficiency. The second concerns the preprocessing of the data for these parameters, in order to improve their quality before they are used by the third step, which aims to develop predictive models; these models are validated and verified with different evaluation criteria during the last step. The results are detailed in the case study section.

2 Domain Comprehension

Domain comprehension is a very important step: it allows one to understand the application domain and to obtain a detailed view of its functioning, which leads to a better understanding of the problem and to the determination of all the parameters that can have an impact on it.

3 Data Preprocessing

Data pre-processing is a technique used to improve the quality of the database and of real-world data warehouses. It includes five steps, detailed below [4].
• Data comprehension: analyze the existing data in order to use them correctly. The target variable is the one to be predicted, and the explanatory variables are the independent variables that help predict the target [2].
• Data integration: fuse data from multiple sources into a consistent database [2, 4, 5].
• Data cleaning: reduce and weaken the effect of missing and outlier values in the database [2, 5, 6].
• Data standardization: apply feature scaling. Many models are based on Euclidean distance; if the values of one column (x) are much higher than those of another column (y), the squared differences are dominated by x, and the lower-valued column is almost treated as if it did not exist. For this reason, it is necessary to transform all variables to a single scale [2, 4].
• Data reduction: reduce the number of variables by identifying groups of variables that are highly correlated with each other and keeping only one from each group [2, 4].
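The standardization and reduction steps above can be sketched in NumPy as follows; the 0.9 correlation threshold and the function names are illustrative assumptions:

```python
import numpy as np

def standardize(X):
    """Scale every column to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def drop_correlated(X, threshold=0.9):
    """Keep only one column from each group of highly correlated columns."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

rng = np.random.default_rng(0)
a = rng.standard_normal(100)
# Column 1 is almost a copy of column 0; column 2 is independent
X = np.column_stack([a, 2 * a + 0.01 * rng.standard_normal(100), rng.standard_normal(100)])
Xs = standardize(X)
Xr, kept = drop_correlated(Xs)  # drops the redundant column
```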

3.1 Model Development

Data mining offers algorithms and tools for the discovery of comprehensible models from large amounts of data. These algorithms fall into three categories: prediction, segmentation and frequent pattern search. Predictive models are designed to determine a function (or model) that links inputs to outputs. Depending on the nature of the output variable, two subcategories of tools exist:
• If the output variable is of a discrete type, classification is used to build a model that classifies the records correctly.
• If the output variable is of a continuous type, the usual statistical regression tools are most commonly used.
Segmentation is unsupervised learning (it defines neither “inputs” nor “outputs”). It aims to identify sets of elements that share certain similarities: segmentation algorithms maximize homogeneity within each set and maximize heterogeneity between sets. Different methods are used to define these groups: k-means, hierarchical algorithms, neural networks, etc. Description (or identification of frequent patterns) consists of explaining the relationships existing in the data. Link analysis and visualization techniques are commonly used for this purpose; visualization techniques simplify the comprehension of data through adapted graphical representations. In this study we focus on predictive regression models. The literature offers several of these; we chose to adopt the most commonly used ones, namely multiple linear regression, support vector regression, regression trees and k-nearest neighbors [7].

3.2 Model Evaluation

Among the evaluation criteria most commonly used for regression models are the R2 score and the root mean square error (RMSE) [8, 9].
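The following scikit-learn sketch shows how the four chosen models and the two criteria fit together; the data are a synthetic stand-in for the screening-unit database, and the seed is arbitrary:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for the efficiency data
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = X @ np.array([3.0, -2.0, 1.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=10)

models = {
    "Multiple Linear Regression": LinearRegression(),
    "SVR": SVR(),
    "Regression Tree": DecisionTreeRegressor(random_state=10),
    "KNN": KNeighborsRegressor(),
}
results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    results[name] = (r2_score(y_test, pred),
                     float(np.sqrt(mean_squared_error(y_test, pred))))
```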

4 Case Study

The different parts presented in the previous sections have been applied to the database of the mine's fixed facilities. These facilities are critical, so developing a system to predict their efficiency can bring several benefits for performance enhancement.

4.1 Domain Comprehension

The fixed facilities process is divided into three main steps, described below (Fig. 1).

Destoning
The destoning facility is located 2 km from the screening facility. The connection between these two facilities is provided by the T1 conveyor. The destoning facility consists of:
• Feeding hoppers: the two hoppers are designed to receive the phosphate transported by the trucks.
• Destoning: destoning is done by screens. The reject of the screens (class > 90 mm) feeds a channel that discharges into the crusher.
• Crushing: the jaw crusher is designed to crush the screen reject (class > 90 mm) and reduce the phosphate blocks to a size of 0 to 300 mm, to be transported by the conveyors.

Fig. 1 Fixed facilities process (unscreened phosphate → destoning → intermediate storage → screening → storage of screened phosphate; sterile to sterile storage)

• Conveyors: phosphate conveyor belts, which connect the fixed facilities and the storage parks.

Intermediate Storage
• The destoned phosphate is transported by the conveyors to be stored in rafters using the storage machines (stackers).
• Resumption of the destoned phosphate: this resumption is ensured by the shovel wheel, which feeds the B7 conveyor, which in turn discharges into the T1 conveyor. This conveyor transports the phosphate to the screening hopper with a maximum throughput of 1200 t/h.

Screening
• The destoned phosphate is conveyed by the T1 conveyor and discharged into a distribution chute to feed the five hoppers. The destoned phosphate is then screened by five screens at the exit of the hoppers.
• The screened phosphate is conveyed by the T13 conveyor to the storage area. The sterile material is conveyed by the T6 conveyor for a final screening with the CR7 screen before being sent to the sterile storage area.

Train Loading
• Homogenization: mineral layers of different contents are extracted, so homogenization is necessary to achieve a mixture of the desired quality. This is achieved by storing the destoned and screened phosphate in alternate passes.
• Shovel wheel: it allows the phosphate to be resumed and the T14a and T14b conveyors to be loaded (depending on availability) to convey the phosphate to the loading hopper, via the T15 and T17 conveyors.


The product is removed from the storage area by a take-back shovel wheel and then transported by conveyors to a hopper that supplies the wagons. A train generally consists of 60 wagons with an average capacity of 61 tonnes.

4.2 Data Preprocessing

Pre-processing is necessary to generate quality data and to improve the efficiency of the data mining models. We started with data comprehension: the data are stored in two databases, one for the year 2017 and the other for the year 2018. Each of these databases contains two CSV files. The first file contains the variables related to the T1 conveyor stops, and the second contains the rest of the data, such as the shift, the shift head, the control room operator, the screening efficiency and other additional information (see Table 1). Table 2 shows the number of observations included in each of the 2017 and 2018 databases. After the data comprehension step, we moved on to the integration of the different files. When setting up the database, we encountered some problems, such as naming inconsistencies: the same data may have different names, or the same name may have different formats (spelling mistakes, lower case instead of upper case, etc.). We therefore decided to unify the names of the different variables and then encode them as integers, since the models are based on mathematical equations and it is easier for machines to work with numbers than with text. Afterwards we started the data cleaning step, where we deleted all rows containing more than 80% missing values, deleted rows with incorrect or insignificant values, and handled the remaining missing values and outliers. We then normalized the data and finally tried to obtain a reduced representation of the dataset, smaller in volume but producing (almost) the same analytical results. After this pre-processing, the initial number of observations is no longer valid: there were initially around 4400 observations, and around 2700 remain (Table 3). Approximately 1700 observations were deleted for the various reasons mentioned above.
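A pandas sketch of the cleaning and encoding steps described above; the column names and values only mimic the real database and are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Illustrative raw extract with naming inconsistencies and missing values
raw = pd.DataFrame({
    "shift_head": ["Alami", "ALAMI ", "Berrada", None],
    "THC": [900.0, 950.0, np.nan, 1000.0],
    "efficiency": [610.0, 640.0, 590.0, np.nan],
})

# Drop rows with more than 80% missing values
df = raw[raw.isna().mean(axis=1) <= 0.8].copy()

# Unify names (case, stray spaces), then encode them as integers
df["shift_head"] = df["shift_head"].str.strip().str.lower()
df["shift_head"] = df["shift_head"].astype("category").cat.codes  # missing -> -1

# Replace remaining missing numerical values with the column mean
for col in ["THC", "efficiency"]:
    df[col] = df[col].fillna(df[col].mean())
```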

4.3 Model Development

Databases used: during our study of the different prediction algorithms, we found that they are highly dependent on the types of variables existing in the database. For this reason we derived the following databases from our raw database:
• Complete database: in this database we treated all variables containing missing, outlier, erroneous and insignificant values.
• Numerical variable database: here we kept only the numerical variables.

Table 1 Available variables

Explanatory variables:
No. | Variable | Signification | Type
1 | Date | Date of a day of the year | Date
2 | Week | Week number in the year | Integer
3 | Shift | Number of the phosphate extraction shift | Integer
4 | Shift head | Name of the head of the phosphate extraction shift | String
5 | Control room operator | Name of the control room operator | String
6 | Driver of the shovel | Name of the driver of the shovel | String
7 | Quantity started in tonnes | Quantity of intermediate stock between destoning and screening | Float
8 | Layer | Nature of the phosphate layer | String
9 | ORIGIN | Park of the destoning stock | String
10 | START | Start of the layer | Time
11 | END | End of the layer | Time
12 | THC (tonne) | Screened wet tonnage | Float
13 | THE (tonne) | THE = THC + sterile wet destoned | Float
14 | HMT1 (hour) | Conveyor T1 running hours | Float
15 | HMT13 (hour) | Conveyor T13 running hours | Float
16 | Rate of the sterile (%) | Quantity of sterile/quantity of phosphate screened | Float
17 | Efficiency T1 (t/h) | THE/HMT1 | Float
18 | Efficiency T13 (t/h) | THC/HMT13 | Float
19 | Number of stops | Number of conveyor T1 stops | Float
20 | Duration of stops (min) | Duration of conveyor T1 stops | Float
24 | TEMPERATURE | Daily temperature of Ben Guerir | Float
25 | PRECIPITATION | Daily precipitation of Ben Guerir | Float
26 | PREVIOUS LAYER | Nature of the previous phosphate layer | String
27 | PREVIOUS ORIGIN | Previous destoning stock park | String

Target variable:
21 | Screening unit efficiency (t/h) | Screened wet tonnage/number of hours | Float

Table 2 Number of observations included in databases

Database | Number of observations
2017 | 2225
2018 | 2236

Table 3 Number of observations before and after preprocessing

| Number of observations
Before preprocessing | 4400
After preprocessing | 2700

• Correlated variables database: here we limited ourselves to the variables that strongly influence the performance of the fixed facilities.

General parameters for the experiment: all results were obtained using the Python language with the sklearn library, which contains all the models. The models we used are multiple linear regression, support vector machine regression, regression tree and k-nearest neighbors. Concerning the values of the experimental parameters, we kept the software defaults except for the random state parameter, which is used to select the same sample each time a model is tested. Indeed, if we set this parameter to an integer, 10 for example, the model always takes the same sample, so the models can be compared with each other because the same sample has been selected. If no random state is set, the model randomly takes a different sample for each test.
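The effect of the random state parameter can be seen in a short example: fixing it makes train_test_split return the same sample on every call, which is what makes the model comparison fair (toy data, illustrative seed):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Fixed seed: both calls select exactly the same test sample
_, X_test_a, _, y_test_a = train_test_split(X, y, test_size=0.3, random_state=10)
_, X_test_b, _, y_test_b = train_test_split(X, y, test_size=0.3, random_state=10)
```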

4.4 Model Evaluation

Results. We present below the R2 score and the root mean square error (RMSE) obtained by applying the different models to each of the databases: the complete database, the numerical variables database, and the correlated variables database.

Table 4 Results of the evaluation criteria of the models applied on the complete database

Models | Score R2 | RMSE
Multiple Linear Regression | 0.97 | 40.81
Support Vector Machine-Regression (SVR) | 0.95 | 49.66
Regression Tree | 0.83 | 91.83
K-Nearest Neighbors (KNN) | 0.73 | 115.21

Table 5 Results of the evaluation criteria of the models applied on the numerical variables database

Models | Score R2 | RMSE
Multiple Linear Regression | 0.97 | 40.58
Support Vector Machine-Regression (SVR) | 0.95 | 49.52
Regression Tree | 0.83 | 90.98
K-Nearest Neighbors (KNN) | 0.89 | 71.57

Complete database. Table 4 presents the results of applying multiple linear regression, support vector machine regression, the regression tree and k-nearest neighbors to the complete database. For the complete database, the best model for predicting screening efficiency using regression algorithms is multiple linear regression: its R2 score is 0.97 and its root mean square error is the lowest, at 40.81.

Numerical variable database. Table 5 below summarizes the results of applying the prediction models to the numerical variable database. The best model for this database too is multiple linear regression, with an R2 score of 0.97 and a root mean square error of 40.58.

Correlated variables database. Table 6 shows the results of applying the prediction models to the correlated variables database. Here the best result is again obtained by multiple linear regression, with an R2 score of 0.87 and a root mean square error of 80.81.

Table 6 Results of the evaluation criteria of the models applied on the correlated variables database

Models | Score R2 | RMSE
Multiple Linear Regression | 0.87 | 80.81
Support Vector Machine-Regression (SVR) | 0.82 | 92.49
Regression Tree | 0.83 | 97.07
K-Nearest Neighbors (KNN) | 0.85 | 86.66


Verification of the presence of overfitting: It is important to check that there is no overfitting. Several techniques are available, such as the use of two separate files and cross-validation. In this study, we opted for the "use of two separate files" method. Its principle is to split all observations into two files: the first serves as learning data and the second as test data. The learning file is used to build the model and the test file to validate it. The split percentage differs from one author to another; usually 60-70% of the observations go to the learning file and 30-40% to the test file. It is then necessary to set a maximum acceptable deviation between the two files, a percentage that depends on several factors, one of the most important being the sample size. In our case we set 30% for the test data and 70% for the learning data, and we tested all our models with this technique, tolerating a maximum difference of 5% between the R2 of the learning file and the R2 of the test file. With a difference of less than 5%, we assumed that there was no overfitting; a larger difference would mean that the model is overfitted [10].

Complete database: Table 7 shows the margin-of-error percentages of the different models applied on the complete database.

Numerical variable database: Table 8 gives the margin-of-error percentages of the different models applied on the numerical variable database.

Table 7 The maximum error percentage of overfitting for the complete database

| Models | Score R2 of learning | R2 of test | Margin of error (%) |
|---|---|---|---|
| Multiple Linear Regression | 0.96 | 0.97 | 1 |
| Support Vector Machine-Regression (SVR) | 0.96 | 0.95 | 1 |
| Regression Tree | 0.87 | 0.83 | 4 |
| K Nearest Neighbors (KNN) | 0.77 | 0.73 | 4 |

Table 8 The maximum percentage of error of overfitting for the numerical variable database

| Models | Score R2 of learning | R2 of test | Margin of error (%) |
|---|---|---|---|
| Multiple Linear Regression | 0.96 | 0.97 | 1 |
| Support Vector Machine-Regression (SVR) | 0.96 | 0.95 | 1 |
| Regression Tree | 0.87 | 0.83 | 4 |
| K Nearest Neighbors (KNN) | 0.84 | 0.89 | 5 |


Table 9 The maximum percentage of error of overfitting for the correlated variables database

| Models | Score R2 of learning | R2 of test | Margin of error (%) |
|---|---|---|---|
| Multiple Linear Regression | 0.88 | 0.87 | 1 |
| Support Vector Regression (SVR) | 0.84 | 0.82 | 2 |
| Regression Tree | 0.87 | 0.83 | 4 |
| K Nearest Neighbors (KNN) | 0.82 | 0.85 | 3 |

Table 10 Best results obtained for each database

| Type of database | Best model | Score R2 | RMSE |
|---|---|---|---|
| Complete database | Multiple linear regression | 0.97 | 40.81 |
| Numerical variable database | Multiple linear regression | 0.97 | 40.58 |
| Correlated variables database | Multiple linear regression | 0.87 | 80.81 |

Correlated variables database: Table 9 illustrates the margin-of-error percentages of the different models applied on the correlated variables database. This study shows that the maximum overfitting error was not exceeded for any model applied to any database.

Summary of the results achieved: Table 10 summarizes the best results obtained for each database. According to this table, the best model for predicting the performance of the screening unit is the multiple linear regression applied on the numerical variable database.
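The overfitting check used above (a 70/30 learning/test split with a 5% maximum gap between the two R2 scores) can be sketched as follows. This is an illustrative scikit-learn implementation under those stated assumptions, not the authors' code.

```python
# Illustrative sketch of the "two separate files" overfitting check:
# split 70/30, fit on the learning file, and accept the model only if
# |R2(learning) - R2(test)| stays within the 5% margin.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def overfitting_margin(model, X, y, test_size=0.30, max_margin=0.05):
    """Return (train R2, test R2, margin, accepted?)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=0)
    model.fit(X_train, y_train)
    r2_train = r2_score(y_train, model.predict(X_train))
    r2_test = r2_score(y_test, model.predict(X_test))
    margin = abs(r2_train - r2_test)
    return r2_train, r2_test, margin, margin <= max_margin
```

A model whose margin exceeds `max_margin` would be rejected as overfitted, mirroring the acceptance rule applied in Tables 7-9.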

5 Conclusion and Perspectives

Through this work, we presented in detail the data mining methodology we followed, in four steps: domain comprehension, data preprocessing, model development and model evaluation. We then applied this methodology to the screening unit database in order to predict the efficiency of the fixed-facilities screening unit from many explanatory variables, for three types of databases (complete database, numerical variables database, correlated variables database). To obtain the best combination of model and database, we tested and compared these models against the evaluation criteria while ensuring that there was no overfitting. These tests showed that the best model for predicting the efficiency of the screening unit is the linear regression applied on the numerical variables database.


This study illustrates the importance of regression models in predicting the efficiency of the fixed facilities. However, some limitations must be taken into account. First, other databases and case studies should be considered. Second, other data mining methods, such as classification algorithms and other techniques for handling missing data and outliers, need to be tested on a larger database in order to extract new knowledge and build models that can help managers make more accurate decisions. Professionals must also be further involved in this study to provide additional knowledge and information in this domain. Finally, methodological and technical assistance must be available to users (novices or experts) of the data mining process: a blind application of data mining methods to the data at hand can lead to the discovery of incomprehensible or even useless knowledge for the end user. The effective implementation of a data mining project therefore requires advanced knowledge and appropriate decisions on a number of specialized techniques (data preparation, attribute transformation, choice of algorithms and parameters, methods for evaluating results, etc.) [11, 12].

Acknowledgments We would like to thank our colleagues from Mohammed VI Polytechnic University and OCP Group for giving us access to the required resources and for their unconditional support.

References

1. Benbelkacem, S., Kadri, F., Chaabane, S., Atmani, B.: A data mining-based approach to predict strain situations in hospital emergency department systems. In: International Conference on Modeling, Optimization and Simulation, Nancy, France, p. 8 (2014)
2. Agarwal, V.: Research on data preprocessing and categorization technique for smartphone review analysis. Int. J. Comput. Appl. 131, 30–36 (2015). https://doi.org/10.5120/ijca2015907309
3. Fayyad, U.M., Piatetsky-Shapiro, G., Smyth, P.: From data mining to knowledge discovery: an overview. In: Advances in Knowledge Discovery and Data Mining, pp. 1–34. American Association for Artificial Intelligence, Menlo Park, CA, USA (1996)
4. Scott, H.: Data Mining: Data Preprocessing (2016). https://slideplayer.com/slide/6466501/. Accessed 10 June 2019
5. Wei, J.: Research on data preprocessing in supermarket customers data mining. In: 2nd International Conference on Information Engineering and Computer Science, pp. 1–4. IEEE, Wuhan, China (2010)
6. García, S., Luengo, J., Herrera, F.: Tutorial on practical tips of the most influential data preprocessing algorithms in data mining. Knowl.-Based Syst. 98, 1–29 (2016). https://doi.org/10.1016/j.knosys.2015.12.006
7. Agard, B., Kusiak, A.: Exploration des Bases de Données Industrielles à l'Aide du Data Mining – Perspectives. In: 9th National AIP PRIMECA Colloquium (2005)
8. Nagelkerke, N.J.D.: A note on a general definition of the coefficient of determination. Biometrika 78, 691–692 (1991)
9. Berry, M.J.A., Linoff, G.S.: Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management, 2nd edn. Wiley, Indianapolis, IN (2004)
10. Bellavance, F.: Préparation de données pour le data mining. HEC Montréal University Course (2017)


11. Zemmouri, E.M., Behja, H., Marzak, A., Trousse, B.: Ontology-based knowledge model for multi-view KDD process. Int. J. Mobile Comput. Multimed. Commun. 4, 21–33 (2012)
12. Zemmouri, E.M., Behja, H., Marzak, A.: Towards a knowledge model for multi-view KDD process. In: 3rd International Conference on Next Generation Networks and Services (NGNS), pp. 18–22. IEEE, Hammamet, Tunisia (2011)

Methodology for Implementation of Industry 4.0 Technologies in Supply Chain for SMEs

Hafsa El-kaime and Saad Lissane Elhaq

Abstract Over the past years, several initiatives have emerged: Industry 4.0 in Germany, smart manufacturing in the United States of America, Internet+ in China, industry of the future in France. These are different names with a single objective: to make the company smarter, by transforming the organization and management of production as well as the entire logistics chain. Despite the importance of this industrial revolution, there is a lack of research on implementing Industry 4.0 concepts in small and medium-sized enterprises (SMEs). In order to facilitate the transformation, this work presents a methodology for implementing Industry 4.0 technologies in supply chains. The objective of this article is to give an overview of Industry 4.0 and its impact on the optimization of the supply chain for SMEs, based on a literature review, and a pilot guide for a 4.0 transition strategy based on the DMAIC method. A framework for implementing Industry 4.0 is proposed, and a model is elaborated to select the technological means of Industry 4.0 according to the performance objectives desired by the company.

Keywords Industry 4.0 · Implementation methodology · Supply chain · SMEs · DMAIC

1 Introduction

The supply chain has existed since the start of industrialization; its overall objective is to transform and transport materials or products, to add value, and to satisfy a request at each stage of the process [1].

H. El-kaime (B) · S. L. Elhaq
Laboratory of Engineering Research, National Higher School of Electricity and Mechanics (ENSEM), Hassan II University of Casablanca, Casablanca, Morocco
e-mail: [email protected]
S. L. Elhaq
e-mail: [email protected]
© Springer Nature Switzerland AG 2021
T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_5


The appearance of the 4th industrial revolution turned the supply chain upside down; it became smarter, more transparent and more flexible at all levels. Today we talk about Supply Chain 4.0: the establishment of Industry 4.0 technologies in supply chains. The 4th industrial revolution introduced the internet and intelligence into global supply chains through connected machines and tools, so that the production environment is directly linked to the distribution area. This constant connection between the digital world and the physical system is one of the main challenges of the industry of the future [2]. Increased digital interconnectivity between humans, machines and human-machine systems, in which hierarchies are increasingly dismantled, translates into vertical and horizontal integration along the value chain [3]. This link has great potential for optimizing manufacturing processes, reducing costs and lead times, and improving productivity and quality. Companies must therefore embrace this digital transformation and adapt their systems in order to improve their supply chains, gain the flexibility to react to the individual needs of each client, and remain competitive in the market [4]. Thanks to these advantages, the majority of companies will wish to adhere to this paradigm in order to have a smart and flexible factory, but they do not know how to carry out this transformation towards the digital 4.0 world. It also turns out to be very difficult for SMEs, which often lack the financial and human resources necessary to invest in recent technologies [5]. The weight of SMEs in a country's economy is of particular importance. In Germany, for example, 99.6% of companies are SMEs [6]. In Morocco as well, all studies agree today that almost 95% of the national economic fabric is made up of SMEs [7].
This group of companies is therefore an essential economic engine, and it is necessary to support these small and medium-sized enterprises in this 4.0 transition, identify an approach to follow, and define a methodology for implementing Industry 4.0 technologies in the supply chain. The existing literature contains hundreds of varied research studies on the application of Industry 4.0 technologies in different fields of industry [8]. However, there is a lack of research into a strategy for implementing Industry 4.0 concepts in small and medium-sized businesses [6]. The "how" is not well detailed: we have the means, the pillars of implementation and the data needed to join this digital transformation, but an organizational transformation methodology that helps companies position themselves and define a strategy for prioritizing the transformation of their supply chain to 4.0 does not exist [9]. Industry 4.0 is a vision, a concept according to which the industrial enterprise is now smart. This is also the aim of the supply chain: reactivity, flexibility and security. So what is Industry 4.0? What is Supply Chain 4.0? How can SMEs join this 4.0 transition in their logistics chains to benefit from all the advantages of this 4th industrial revolution? The answer to these research questions is based on an in-depth literature review, whose final objective is to provide a guide for piloting an Industry 4.0 project in the supply chain for SMEs. The rest of the article is organized as follows: Sect. 2 is a synthesis of the general concepts of Industry 4.0; Sect. 3 is a literature review on supply chain


4.0 in SMEs, the operational performance objectives of Industry 4.0 and the 4.0 transition methodologies; Sect. 4 is a detailed description of our methodology for implementing Industry 4.0 in SMEs, based on the results of case studies in the literature; the steps are formalized in the form of an Industry 4.0 guide for SMEs in Sect. 5. We end with a conclusion and suggestions for further research.

2 Industry 4.0: Theoretical Background

The term Industry 4.0 was proposed for the first time in 2010 by the German government as part of the "action plan for a high-tech strategy 2020" [10]. It was coined to designate the complete transformation of industrial production through the fusion of internet technologies and industrial manufacturing processes. It is a global transformation using digital integration and intelligent engineering, cited as the next level of manufacturing, where machines will redefine the way they communicate and perform individual functions [8]. The goal of establishing Industry 4.0 is to develop a new generation of intelligent factories based on the digitalization of manufacturing processes; these factories are characterized by production flexibility through the use of real-time configurable machines that allow customized production [11]. The field of Industry 4.0 is very young, since the first article was only published in 2012; moreover, the number of scientific productions has grown exponentially, as shown by the number of publications in the two indexing platforms Web of Science and Scopus [8].

2.1 Definition of SME

Small and medium-sized enterprises (SMEs) are independent companies that employ fewer than a certain number of employees and have a specific turnover or balance sheet. Most of the variables defining SMEs therefore vary from country to country. For example, SMEs in Morocco are defined on the following bases: a turnover between 10 MDHS (€1,000,000) and 175 MDHS (€17,500,000), and a permanent workforce between 10 and 250 employees.

2.2 Evolution of the Industrial Environment

The industrial environment has evolved in a discontinuous manner over the centuries. These discontinuities, presented in Fig. 1, led to four industrial revolutions.


Fig. 1 Industrial revolution

The first industrial revolution used steam to mechanize production; the second used electrical energy to produce a large volume of identical products at low unit cost, a period marked by mass production; and the third used electronics and information technology to automate production. Today, a fourth industrial revolution is underway, characterized by the ubiquitous use of sensors and actuators communicating via a network, which allows a real-time connection between systems, machines, tools, employees, products and other objects, defining what is called the IoT [12].

2.3 Main Industry 4.0 Technologies

The essential components of Industry 4.0 include the following technologies: CPS, additive manufacturing, virtual reality, cloud computing, big data and data science [8]. According to the Boston Consulting Group, the nine technology groups of Industry 4.0, represented in Table 1, transform production from isolated and optimized cells into a fully integrated, automated and optimized workflow [13]. Table 1 presents a concise summary of the technological means of Industry 4.0 that we have synthesized from the literature review. These pillars, which we call Industry 4.0 implementation means, are available and accessible, sometimes at different costs and levels of development [10]. The objective is to exploit these technology groups in order to establish Industry 4.0 in the company and provide the supply chain with new operational performance objectives.


Table 1 Industry 4.0 technologies (Def: definition, Obj: objective, Fct: function)

Cloud computing
– Def: Storage and analysis of huge amounts of data [13]
– Obj: Computing and storage capacity is shared by partitioning physical resources using virtualization technologies
– Fct: Share information across multiple systems and networks in real time; promote planning and utilization of shared resources, process control and performance evaluation [10]

IOT
– Def: Global network of interconnected objects based on standard communication protocols [14]
– Obj: Real-time communication of physical objects makes it possible to monitor the state of products or systems and to facilitate decentralized decision-making in the face of a hazard [10]
– Fct: Advanced connectivity of products, systems and services by providing ubiquitous information

Big data
– Def: Heterogeneous interconnected objects generate a large amount of data of different types
– Obj: Generating decisions of great importance in real time
– Fct: Improved decision making and process optimization

Simulation
– Def: Modeling to reflect the physical world in a virtual world
– Obj: Optimization of all industrial processes [15]
– Fct: Analysis of product behavior, performance of production lines and coordination of multi-site networks, thereby reducing machine configuration time and increasing quality [11]

CPS
– Def: Systems in which man-made physical systems (physical space) are tightly integrated with computing, communication and control systems (cyberspace)
– Obj: Management of the interconnection between active physical systems and their computing capacity [13]
– Fct: Real-time data acquisition requires an appropriate architecture, called CPS

Cyber Security
– Def: The emergence of data analysis and sharing technologies has forced users and manufacturers to secure their information [16]
– Obj: Guarantee that the data stored and passing over the network are not vulnerable
– Fct: Communication of objects with their environment and active participation in the reconfiguration of the system in real time

Virtual Reality
– Def: Improvement of the real environment of a human being thanks to a virtual environment
– Obj: Superposition, in the user's vision, of virtual objects added by computer onto the real objects of his classic vision [17]
– Fct: Interaction between the user, real objects and virtual objects [18]

3D Printing
– Def: Manufacture of objects from a 3D model through a process in which layers of material are laid down under precise computer control
– Obj: Meeting the personalized demand of customers who request products in small quantities
– Fct: Manufacture of complex-shaped products in record time

Cobotif
– Def: Active cooperation with human operators during all industrial activities
– Obj: Increase productivity at lower cost
– Fct: Assistance for operators in their most difficult tasks, by supplementing their effort in their movements [19]

3 Literature Review

The literature search was conducted using the Springer, ScienceDirect, IEEE, Google Scholar and Wikipedia databases. The literature review covered generalities on Industry 4.0, supply chain 4.0, SMEs in the age of Industry 4.0, operational performance objectives and transition 4.0 methodology.

3.1 Supply Chain 4.0 Supply chain 4.0 has come to solve various problems: on-time delivery with missed deadlines, shortage of stocks, administrative burdens, inability to predict order book, operating burdens, quality problems, inability to organize to produce... By using data, digital, artificial intelligence and digital to optimize supplies. Integrated information flows are essential in the supply chain 4.0 (Fig. 2). The interconnection of the entire supply chain is necessary to succeed in achieving the potentials foreseen for Industry 4.0. Having a look at different research streams that concern the role of Industry 4.0 for supply chain management, we identify several publications that regard the role of Industry 4.0 [20]. Further investigations consider the role of Industry 4.0 respectively the Internet of Things in supply chain quality management, business process management, logistics trajectory and cloud logistics. In general, the potentials and Information flow

Procurement

Production Physical flow

Fig. 2 Main functions of SCM

Distribution

Sales


challenges of Industry 4.0 discussed in these publications show that Industry 4.0 seeks to achieve benefits in all areas of application. The areas of the supply chain most positively affected by the introduction of Industry 4.0 are order fulfillment (53.84%), transport logistics (61.54%) and warehousing (66.6%); within the purchasing function, Industry 4.0 shows 71.43% of opportunities [21]. The most relevant benefits from the implementation of Industry 4.0 are increased flexibility, quality standards, efficiency and productivity. This enables mass customization, allowing companies to meet customers' demands and create value by constantly introducing new products and services to the market. Finally, Industry 4.0's technological capabilities can be opportunities or threats, depending on the context of implementation.

3.2 SMEs in the Age of Industry 4.0

The European Commission defines an SME as a company with fewer than 250 employees and a turnover of less than €50 million. The weight of SMEs in a country's economy is of particular importance: in Morocco, all studies now agree that almost 95% of the national economic fabric is made up of SMEs [7]. This group of companies is therefore an essential economic driver, so it is important for SMEs to follow industrial development and use recent technologies to improve their industrial performance and achieve significant productivity and efficiency gains. However, this technological transition remains an evolving process that manufacturing SMEs have so far integrated very unevenly [22]. Table 2 summarizes the literature review on the presence of Industry 4.0 means of implementation in SMEs; only a few technology groups are present in SMEs, and Cloud Computing and the IOT are the most established and used [10]. The following paragraphs present a detailed analysis of the use of the various Industry 4.0 technology groups in SMEs.

Table 2 Presence of Industry 4.0 technologies in SMEs

| Technology 4.0 | Presence in SMEs | Obstacle |
|---|---|---|
| IOT/RFID | ✓ | Absence of reliable data [23] |
| Cloud computing | ✓ | Digital maturity [10] |
| Big Data | ✓ | Human competence [24] |
| CPS, Cyber Security | – | Human competence [10] |
| Simulation, 3D Printing | – | Financial resource [10] |
| Cobotif, Virtual Reality | – | Financial resource [10] |


IOT: Several researchers use the IOT in SMEs with RFID technology to obtain information on production flows in real time [23]; they show that RFID and the IOT improve collaboration between SMEs. The majority of SMEs do not have reliable data, so using RFID to map production flows during a Lean Manufacturing implementation process is an appropriate solution. This system allows reliable data on production flows to be obtained quickly and thus helps target the continuous improvement initiatives to launch first.

Cloud Computing: This is the most commonly used means of implementation in empirical cases; more than 65% of articles report the use of cloud computing [10], especially with the aim of building virtual collaborative ventures between SMEs. Since SMEs do not all have the skills and capabilities to meet the complex needs of customers, cloud computing promotes the development of industrial collaboration between several partners by pooling their skills and different businesses to meet customer demand.

Big data, CPS, Cyber Security, Cobotif…: Some authors argue that SMEs do not consider the potential value within their data [24], so data analysis tools such as Big data and data analytics are little used in their decision-making. The lack of research in the field of Big data in SMEs confirms the observation of [24], who showed the weakness of SMEs in development research and stressed that cloud computing is a viable solution for SMEs for a potential use of massive data analysis. Likewise, few articles deal with cyber security and CPS in SMEs: they are not present there [10]. The same holds for cobots and 3D printing, which are not exploited by SMEs; these means of implementation are very expensive, with a long-term return on investment.

3.3 Operational Performance Objectives of Industry 4.0

Through the introduction of Industry 4.0 techniques in the supply chain, the operational performance objectives most achieved in SMEs are flexibility and productivity improvement [10]. From Table 3, which we summarized from the literature, we conclude that the adoption of fundamental Industry 4.0 technologies such as cloud computing, the IOT and Big Data in supply chains has a positive impact on several operational performance factors, especially flexibility, improved productivity and reduced lead times. Supply chain 4.0 is therefore a necessity to be adopted in companies, especially SMEs; it remains to be seen how to proceed. The following section presents some proposals for a methodology for the implementation of Industry 4.0 technologies in the supply chain.


Table 3 Operational performance objectives of Industry 4.0 (objective, description, associated technologies)

Flexibility (Cloud Computing)
– Synchronization of the flow and decentralized decision making
– Quick response to any market development
– Collaboration between partner companies
– Ability to react quickly to market fluctuations
– Business expertise within a corporate network with tools for collaboration [23]

Cost reduction (Cloud Computing, Big Data, IOT)
– Order products over the internet from anywhere on earth
– Synchronization of flows [25] allows a reduction in stocks
– Strong increase in business activity and optimization of resources, thanks to better sharing between customer and supplier

Improved productivity (IOT, Cloud computing, Big data)
– Optimization of production based on data generated by connected objects [23]
– Optimization of the productivity of all partners (customer–supplier) using a cloud computing platform
– Optimization of production planning through analysis of customer data

Quality improvement (RFID, Big data)
– Improvement of quality control of the production process through the use of RFID technology [26]
– Improvement of product quality using historical data from sensors

Delay reduction (Cloud computing, IOT)
– Synchronization of the flows between all partners allows a reduction of deadlines [23]
– Mapping of production flows using the IOT in complementarity with Lean Manufacturing
– Accurate measurement of the waiting times of each station
– Identification of bottlenecks to optimize and target workstations as priorities

3.4 Transition 4.0 in the Supply Chain

The existing literature contains varied research on the applications of Industry 4.0 technologies in different types of industry, but few works address a methodology for implementing Industry 4.0 concepts in small and medium-sized enterprises. Among these works, we quote:

• A method that allows companies participating in a given supply chain to successfully transition to Industry 4.0, through a business performance assessment model that supports them in the 4.0 transformation. This model


makes it possible both to assess the current level of maturity of each company and to have a global vision of the entire supply chain of which it is part. A gap analysis then defines the 4.0 transformation strategy to be implemented [9], structured according to the steps of the Deming wheel (Plan-Do-Check-Act). The proposed methodology, which is qualitative, focuses on the Do and Check steps, but the Act step, which concerns the implementation of actions based on the application of Industry 4.0 resources, is not detailed.
• A method based on a qualitative study that combines three areas of intervention: Industry 4.0, customer experience and supply chain management [6]. The authors identified eight factors (degree of customization, stakeholder integration, manufacturing flexibility, process transparency, automation, inventory level, customer data analysis and customer independence) to be used to optimize supply chain processes in accordance with Industry 4.0 standards and to increase customer satisfaction. Based on the results of the application of these factors in a German SME, the authors presented a five-step framework for the implementation of Industry 4.0 in the SME supply chain: (1) establish a team of experts; (2) prepare the data collection process; (3) define the optimal realization of Industry 4.0; (4) analyze the real status of the Industry 4.0 realization; (5) develop an action plan. The definition of the methodology rests on a case study of the realization of Industry 4.0 factors in one SME, and the presented factors do not define an optimal implementation applicable to Industry 4.0 in general. This study must be validated with other companies and industries to ensure general practical applicability. The framework only presents a set of areas that help to discover the potential for supply chain improvement.
This leads to a very individual definition of the optimal use case for Industry 4.0 and also requires specific know-how to develop a real implementation plan.

4 Methodology of Industry 4.0 Implementation in the Supply Chain for SMEs

4.1 Principle

The methodology we propose is based on a synthesis of the literature review on Industry 4.0 implementation in SMEs, structured according to the DMAIC approach, in order to suggest a model for implementing Industry 4.0 technologies in SMEs. The literature review shows that Industry 4.0 has an impact on supply chain management in the context of SMEs, via the pillars of Industry 4.0: cloud computing, Big data, IOT… On the other hand, there is a lack of research on how to proceed: how can SMEs join this 4.0 transition in their supply chains to benefit from all the


advantages of this 4th revolution? How can SMEs with limited means and financial resources establish the technological tools needed for a smart supply chain?

4.2 Research Method: DMAIC

Our methodology is structured according to the DMAIC approach and consists of five steps: Define, Measure, Analyze, Innovate and Control.

• Define: Determination of the general framework of the project.
• Measure: Development of the current VSM.
• Analyze: Analysis of the results obtained.
• Innovate: Implementation of improvement actions.
• Control: Measurement of the relevance of the improvement actions.

4.3 Digital Maturity

To propose an improvement in any field, it is first necessary to have a very good understanding of the place envisaged; the same holds for the 4.0 transition: the degree of digital maturity of the company must be studied in order to offer support adapted to its situation. The digital maturity of an industrial organization is made up of several elements that make it possible to identify areas for improvement. We have synthesized the definition of the maturity level of an industrial company in the form of a diagram containing five components (Fig. 3). This study of the level of maturity makes it possible to choose the Industry 4.0 technological solutions and the necessary equipment to acquire, according to the intellectual level of human capital, in order to guarantee that all the company's actors adhere to the changes. We identified in the literature two models, shown in Table 4, for assessing a company's performance in relation to Industry 4.0. Based on the maturity models developed in the literature, we propose three levels of digital maturity, each level representing a category of SMEs:

• level = 1, for SMEs that are experts: they have established the 4.0 transformation;
• level = 2, for SMEs that are intermediate: they have started the 4.0 transformation;
• level = 3, for SMEs that are beginners: they are still questioning the general concepts of Industry 4.0.
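As an illustration only (not from the paper), the assignment of an SME to one of the three proposed maturity levels, starting from scores on the five components of Fig. 3, could be sketched as follows; the 0-10 rating scale and the thresholds are assumptions chosen for the example.

```python
# Hypothetical sketch: map self-assessment scores on the five maturity
# components (Fig. 3) to the three digital maturity levels proposed above.
# The 0-10 scale and the 7/4 thresholds are illustrative assumptions.

def maturity_level(component_scores):
    """component_scores: dict with one 0-10 rating per component.
    Returns 1 (expert), 2 (intermediate) or 3 (beginner)."""
    avg = sum(component_scores.values()) / len(component_scores)
    if avg >= 7:
        return 1  # expert: transformation 4.0 already established
    if avg >= 4:
        return 2  # intermediate: transformation 4.0 started
    return 3      # beginner: still exploring general Industry 4.0 concepts
```

Such a scoring step would feed the Measure phase of the DMAIC methodology, where the digital maturity level is assessed before choosing Industry 4.0 solutions.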


H. El-kaime and S. L. Elhaq

Fig. 3 Five components to measure digital maturity:
• Company identity card: size, number of locations, seniority, type of management, equipment…
• Human capital identity sheet: average age, versatility of employees, skills assessment…
• Financial analysis of the company: cost price, sales price, percentage of average turnover…
• Industrial organization: planning tool, flow type, flow mapping by VSM, identification of bottlenecks, deployment of Lean Manufacturing tools, finite or infinite capacity planning…
• Infrastructure used: systems used (ERP, MES), automated machines, robots…

Table 4 Maturity models 4.0

| Model                   | Maturity level                                                     | Type of company | References |
|-------------------------|--------------------------------------------------------------------|-----------------|------------|
| 4.0 Readiness Model     | outsider, beginner                                                 | Newcomers       | [27]       |
|                         | intermediate, experienced                                          | Learners        |            |
|                         | expert, top performer                                              | Leaders         |            |
| Maturity Index for SMEs | standard, big data, smart data, dark factory, industrial ecosystem | SME             | [28]       |

4.4 Supply Chain 4.0 Implementation Framework

The proposed Supply Chain 4.0 framework is based on the deployment of Industry 4.0 tools in the supply chain departments, structured by the DMAIC approach, to achieve operational performance objectives (Fig. 4).

4.5 Methodology for Deploying a Transformation 4.0 Strategy

We propose a structured methodology according to the DMAIC steps (Table 5).

Fig. 4 Supply chain 4.0 implementation framework: the core functions of the supply chain (procurement, production, distribution, sales) are connected, through the DMAIC approach, to Industry 4.0 technologies (IoT, big data, cloud computing, CPS, CS, simulation, VR, 3D printing, cobotics) in order to reach the operational performance objectives (flexibility, cost reduction, time reduction, productivity improvement, quality improvement)

Table 5 Transformation 4.0 DMAIC

| Step     | Objective                                                                                                                                                                                           | Tools                                |
|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------|
| Define   | Working group; performance objective; supply chain area; process of the area; specifications; project planning                                                                                        | QQOQCP; SIPOC diagram; GANTT         |
| Measure  | List all ONVA (non-value-added operations) in the area; list internal and external requirements; measure the digital maturity level                                                                   | Brainstorming; VSM; Pareto           |
| Analyze  | Analyze the degree of maturity; analyze the VSM                                                                                                                                                       | Ishikawa; brainstorming              |
| Innovate | Define the action plan; introduce Industry 4.0 technologies to solve each problem; trace the future VSM                                                                                               | Big data; cloud computing; IoT; CPS… |
| Check    | Monitor improvements in real time; compare the state before and after the introduction of Industry 4.0; capitalize the solutions in the form of a guide for managing an industrial site 4.0           | Standards of control                 |
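The step/objective/tool mapping of Table 5 can also be captured as plain data, for example to drive a deployment checklist. The structure below is only a sketch; the English gloss of ONVA (non-value-added operations) is our reading of the abbreviation.

```python
# Plain-data sketch of Table 5: each DMAIC step with its objectives and tools.
transformation_40 = {
    "Define":   {"objectives": ["working group", "performance objective",
                                "supply chain area", "process of the area",
                                "specifications", "project planning"],
                 "tools": ["QQOQCP", "SIPOC diagram", "GANTT"]},
    "Measure":  {"objectives": ["list all ONVA (non-value-added operations)",
                                "list internal and external requirements",
                                "measure the digital maturity level"],
                 "tools": ["brainstorming", "VSM", "Pareto"]},
    "Analyze":  {"objectives": ["analyze the degree of maturity", "analyze the VSM"],
                 "tools": ["Ishikawa", "brainstorming"]},
    "Innovate": {"objectives": ["define the action plan",
                                "introduce Industry 4.0 technologies",
                                "trace the future VSM"],
                 "tools": ["big data", "cloud computing", "IoT", "CPS"]},
    "Check":    {"objectives": ["monitor improvements in real time",
                                "compare before/after states",
                                "capitalize solutions as a guide"],
                 "tools": ["standards of control"]},
}

# the five steps, in DMAIC order
print(list(transformation_40))
```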

Define: The first step of the DMAIC approach focuses on defining the objectives and limits of the project. During this phase, it is strongly recommended to start by establishing precise specifications of needs and describing the company's situation. This includes first defining with precision:

• The project group: it is important to work with a multidisciplinary group composed of a project manager, methods support, production leader, scheduling and operational agents… The project group must be supervised by a steering committee composed of the different managers: production, logistics, sales and the site manager.
• The provisional planning of the project, in order to organize the steps of the 4.0 transformation; the GANTT chart can be used as a tool to plan the tasks.
• Training of the project group on the general concepts of Industry 4.0, including its basic tools, to facilitate the involvement of all people in the project.
• The SIPOC diagram (Suppliers, Input, Process, Output, Customer) of the supply chain. SIPOC is a technique to model a process; it consists of a mapping of the process to be improved and presents the entire flow from suppliers to customers.

Measure: In a second step, it is necessary to map the process to establish a VSM of the flow, and then to study the company's degree of digital maturity in order to propose support adapted to the situation. This step consists of gathering information, collecting data, identifying malfunctions and identifying sources of waste. The work consists of:

• Listing the waste of each area of the supply chain
• Listing the problems experienced in each area through brainstorming
• Performing a VSM mapping to target the 4.0 transition
• Measuring the maturity level of the area in order to evaluate its digital maturity position.

Analyze: After the global study carried out in the Define and Measure phases, the Analyze phase makes it possible to identify the extent of the problems and the maturity level of the area in relation to Industry 4.0, to analyze the causes, and then to formulate an action plan. It also analyzes the operational performance objectives of Industry 4.0 most pursued in the supply chain, such as flexibility and productivity improvement.

Innovate: Following the Analyze phase, and through brainstorming and a detailed understanding of the maturity level of the study area, an action plan can be developed to take full advantage of the benefits of Industry 4.0: the pillars of Industry 4.0 are introduced into the supply chain to make it smart. The integration of Industry 4.0 techniques into supply chain management is very effective. In the Innovate phase, two categories of means are considered: hardware and digital. For the first type, the fundamental point is that companies must invest: there is no Industry 4.0 without the adoption of new intelligent machines. Implementing all the pillars of Industry 4.0 requires significant investment, for example in 3D printing, cobotics and virtual reality, and according to the literature review SMEs do not have the financial resources and human skills to deploy all nine pillars. We therefore focus only on the means that are most common in the literature and that mark the 4.0 transition in SMEs, such as 3D printing, cobotics, big data, cloud computing and IoT.


Control: This step consists of observing the improvement actions implemented and following them up, in order to show the gain achieved by the 4.0 transition. To capture the feedback, we propose a guide for managing an industrial 4.0 site in the form of a flowchart, with the objective of making it a 4.0 transition model for SMEs.

5 Guide to Managing an Industrial 4.0 Site in an SME

Based on the bibliographic research, a five-step approach structured according to the DMAIC method is advised. As a first step, it is strongly recommended to start by establishing precise specifications of requirements and describing the situation of the company or factory; this includes properly framing the 4.0 transition project. In a second step, it is necessary to draw up a VSM mapping of the organization and then to study the company's degree of digital maturity in order to propose support adapted to the situation; this step consists of gathering information and collecting data. In the third step, it is relevant to analyze the most pursued operational performance objectives (flexibility, productivity improvement, cost and delay reduction in the supply chain) in order to assess the real status of Industry 4.0 achievement and internal weaknesses. Finally, in the fourth step, it is recommended to develop an action plan to take full advantage of the benefits of Industry 4.0 in the company (Fig. 5).

6 Conclusion and Notes for Future Work

This work aimed to present a methodology for supply chain transformation 4.0 in SMEs, with an emphasis on Industry 4.0 technologies and the most cited operational performance objectives, based on a detailed review of the literature. The main contributions of this study are: first, the definition of digital maturity levels for SMEs; second, the development of a methodology for deploying Industry 4.0 tools in SMEs; finally, the capitalization of information and feedback in the form of a flowchart for the use of Industry 4.0 resources in the supply chain, according to the performance objectives targeted by SMEs. This article proposes a five-step approach structured according to the DMAIC method. First, establish a precise specification of needs and a description of the current situation of the company. Second, measure the digital maturity level of the business and map the organization with VSM. Third, analyze the most pursued operational performance objectives (flexibility, improved productivity, reduction of costs and delays in the supply chain) in order to assess the real status of Industry 4.0 achievement. Finally, develop an action plan for the use of the nine pillars of Industry 4.0, in order to take full advantage of its benefits in the enterprise.


Fig. 5 Industry 4.0 deployment in the supply chain for SMEs (flowchart linking decision points to the Industry 4.0 technologies: cloud computing, IoT, RFID, big data, 3D printing, VR, cobotics)


Future research is recommended in several directions. The methodology for achieving Industry 4.0 and its objectives must be validated by an SME, through a case study, to guarantee the practical applicability of the proposed solution. In addition, costs are not taken into account here; a detailed cost-benefit analysis is strongly recommended before implementing Industry 4.0 digital technologies. Finally, this work does not treat the risks associated with adapting to the new concepts related to Industry 4.0, which could be addressed in future work.

References

1. Janvier-James, A.M.: A new introduction to supply chains and supply chain management: definitions and theories perspective. IBR 5, p. 194 (2011). https://doi.org/10.5539/ibr.v5n1p194
2. Lee, J., Bagheri, B., Kao, H.A.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015). https://doi.org/10.1016/j.mfglet.2014.12.001
3. Kagermann, H.: Change through digitization—value creation in the age of Industry 4.0. In: Albach, H., Meffert, H., Pinkwart, A., Reichwald, R. (eds.) Management of Permanent Change, pp. 23–45. Springer Fachmedien Wiesbaden, Wiesbaden (2015)
4. Colotla, I., Fæste, A., Heidmann, A., Winther, A., Høngaard Andersen, P., Duvold, T.: Winning the Industry 4.0 race (2016)
5. Tangible Industry 4.0: a scenario-based approach to learning for the future of production - ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2212827116301500. Accessed 9 Dec 2019
6. Bär, K., Herbert-Hansen, Z.N.L., Khalid, W.: Considering Industry 4.0 aspects in the supply chain for an SME. Prod. Eng. Res. Devel. 12, 747–758 (2018). https://doi.org/10.1007/s11740-018-0851-y
7. Landaburu, E.: La PME, moteur de l'économie marocaine [The SME, engine of the Moroccan economy]. 10
8. Muhuri, P.K., Shukla, A.K., Abraham, A.: Industry 4.0: a bibliometric analysis and detailed overview. Eng. Appl. Artif. Intell. 78, 218–235 (2019). https://doi.org/10.1016/j.engappai.2018.11.007
9. Ioana, D., François, M., Jean-Louis, M.: Méthodologie et outil de définition de la stratégie de transition 4.0 pour la chaîne logistique [Methodology and tool for defining the 4.0 transition strategy for the supply chain] (2019). https://ideas.repec.org/p/ulp/sbbeta/2019-14.html. Accessed 16 Dec 2019
10. Moeuf, A.: Identification des risques, opportunités et facteurs critiques de succès de l'industrie 4.0 pour la performance industrielle des PME [Identification of risks, opportunities and critical success factors of Industry 4.0 for the industrial performance of SMEs] (2018)
11. Vaidya, S., Ambad, P., Bhosle, S.: Industry 4.0—a glimpse. Procedia Manuf. 20, 233–238 (2018). https://doi.org/10.1016/j.promfg.2018.02.034
12. Stankovic, J.A.: Research directions for the Internet of Things. IEEE Internet Things J. 1, 3–9 (2014). https://doi.org/10.1109/JIOT.2014.2312291
13. Bortolini, M., Ferrari, E., Gamberi, M., Pilati, F., Faccio, M.: Assembly system design in the Industry 4.0 era: a general framework. IFAC-PapersOnLine 50, 5700–5705 (2017). https://doi.org/10.1016/j.ifacol.2017.08.1121
14. Atzori, L., Iera, A., Morabito, G.: The Internet of Things: a survey. Comput. Netw. 54, 2787–2805 (2010). https://doi.org/10.1016/j.comnet.2010.05.010
15. Azevedo, A., Almeida, A.: Factory templates for digital factories framework. Robot. Comput.-Integr. Manuf. 27, 755–771 (2011). https://doi.org/10.1016/j.rcim.2011.02.004
16. Holtewert, P., Wutzke, R., Seidelmann, J., Bauernhansl, T.: Virtual Fort Knox: federative, secure and cloud-based platform for manufacturing. Procedia CIRP 7, 527–532 (2013). https://doi.org/10.1016/j.procir.2013.06.027
17. Lee, J., Han, S., Yang, J.: Construction of a computer-simulated mixed reality environment for virtual factory layout planning. Comput. Ind. 62, 86–98 (2011). https://doi.org/10.1016/j.compind.2010.07.001
18. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., MacIntyre, B.: Recent advances in augmented reality. IEEE Comput. Graph. Appl. 14 (2001)
19. Khalid, A., Kirisci, P., Ghrairi, Z., Thoben, K.D., Pannek, J.: A methodology to develop collaborative robotic cyber physical systems for production environments. Logist. Res. 9, 23 (2016). https://doi.org/10.1007/s12159-016-0151-x
20. Müller, J.M., Voigt, K.I.: The impact of Industry 4.0 on supply chains in engineer-to-order industries—an exploratory case study. IFAC-PapersOnLine 51, 122–127 (2018). https://doi.org/10.1016/j.ifacol.2018.08.245
21. Tjahjono, B., Esplugues, C., Ares, E., Pelaez, G.: What does Industry 4.0 mean to supply chain? Procedia Manuf. 13, 1175–1182 (2017). https://doi.org/10.1016/j.promfg.2017.09.191
22. Hennebert, M.A., Cayrat, C., Morissette, L.: Transformation des PME manufacturières : entre promesses et réalités de l'industrie 4.0 [Transformation of manufacturing SMEs: between the promises and realities of Industry 4.0]. Gestion 44, 86–89 (2019)
23. Ren, L., Zhang, L., Tao, F., Zhao, C., Chai, X., Zhao, X.: Cloud manufacturing: from concept to practice. Enterp. Inf. Syst. 9, 186–209 (2015). https://doi.org/10.1080/17517575.2013.839055
24. Bi, Z., Cochran, D.: Big data analytics with applications. J. Manag. Anal. 1, 249–265 (2014). https://doi.org/10.1080/23270012.2014.992985
25. Chalal, M., Boucher, X., Marques, G.: Decision support system for servitization of industrial SMEs: a modelling and simulation approach. J. Decis. Syst. 24, 355–382 (2015). https://doi.org/10.1080/12460125.2015.1074836
26. Segura Velandia, D.M., Kaur, N., Whittow, W.G., Conway, P.P., West, A.A.: Towards industrial internet of things: crankshaft monitoring, traceability and tracking using RFID. Robot. Comput.-Integr. Manuf. 41, 66–77 (2016). https://doi.org/10.1016/j.rcim.2016.02.004
27. Akdil, K.Y., Ustundag, A., Cevikcan, E.: Maturity and readiness model for Industry 4.0 strategy. In: Industry 4.0: Managing the Digital Transformation, pp. 61–94. Springer International Publishing, Cham (2018)
28. Häberer, S., Fabian, B., Katrin Lau, L.: Development of an Industrie 4.0 maturity index for small and medium-sized enterprises (2017). https://www.researchgate.net/publication/320415942

A Deep Reinforcement Learning (DRL) Decision Model for Heating Process Parameters Identification in Automotive Glass Manufacturing

Choumicha El Mazgualdi, Tawfik Masrour, Ibtissam El Hassani, and Abdelmoula Khdoudi

Abstract This research investigates the applicability of Deep Reinforcement Learning (DRL) to control the heating process parameters of tempered glass in an industrial electric furnace. In most cases, these heating process parameters, also called the recipe, are obtained by a trial-and-error procedure according to the experience of the process expert. In order to optimize the time and cost associated with this recipe choice, we developed an offline decision system consisting of a deep reinforcement learning framework, using the Deep Q-Network (DQN) algorithm, and a self-prediction artificial neural network model. This decision system is used to define the main heating parameters (the glass transfer speed and the zone temperatures) based on the desired outlet temperature of the glass, and it has the capacity to improve its performance without further human assistance. The results show that our DQN algorithm converges to the optimal policy, and that our decision system provides good recipes for the heating process, with deviations not exceeding process limits. To our knowledge, this is the first demonstrated use of deep reinforcement learning for the heating process of tempered glass specifically, and for the tempering process in general. This work also provides a basis for dealing with the problem of energy consumption during the tempering process in electric furnaces.

Keywords Deep Reinforcement Learning · Q-Learning · Deep Q-Network · Decision system · Process parameters identification · Heating process · Tempered glass

C. E. Mazgualdi (B) · T. Masrour · I. E. Hassani · A. Khdoudi
Artificial Intelligence for Engineering Science Team, Laboratory of Mathematical Modeling, Simulation and Smart Systems (L2M3S), ENSAM, Moulay Ismail University, Meknes, Morocco
e-mail: [email protected]
T. Masrour, e-mail: [email protected]
I. E. Hassani, e-mail: [email protected]
A. Khdoudi, e-mail: [email protected]

© Springer Nature Switzerland AG 2021
T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_6


1 Introduction

Tempered glass is a type of safety glass manufactured by a chemical or thermal treatment consisting of extreme heating followed by rapid cooling. In the thermal tempering process, the glass is heated to a certain temperature in a tempering furnace and is then cooled rapidly. This is one of the most complex manufacturing processes, toughening the glass both physically and thermally [1, 2]. In the thermal tempering glass manufacturing process, and especially in automotive glass production, the temperature control of the tempering furnace is the main control parameter influencing the quality of the products. Another parameter that can be taken into account is the glass speed through the different zones of the furnace. A good choice of these two parameters at the launch of a new project allows the process engineers to avoid many trials and errors in fixing the adequate recipe. In general, this parameter choice, also called the recipe, is made according to the experience of the process expert but, in most cases, it must then be adjusted by the process engineers, which is time and resource consuming.

The aim of the present work is to develop a decision system that solves the problem of process parameters identification using the Deep Q-Learning (DQL) algorithm. Using ANSYS software, a series of simulation experiments is conducted and background data are extracted. Then, a self-prediction model and a decision agent are trained successively on these data. The developed DQL algorithm is then used to train the decision model to decide, based on the glass characteristics and the desired temperature of the glass at the output of the furnace, which heating process parameters to choose in order to meet production requirements.

The paper is structured as follows: Section 2 introduces the background on RL needed to understand the proposed decision system, including basic notions from the RL and DRL fields. Section 3 provides a detailed description of the formulation of the heating process control as an MDP and the adoption of DQN to solve this decision-making problem. In Sect. 4, we present and discuss the experimental results. Finally, Sect. 5 offers some final remarks and suggestions for further work.

2 Background on Reinforcement Learning

2.1 Reinforcement Learning

Reinforcement learning (RL) is a subclass of machine learning which differs from classical supervised and unsupervised learning by using a trial-and-error paradigm to learn a strategy. It employs an agent that iteratively interacts with the environment and modifies its actions according to a policy π. The environment responds to every action by changing its state and provides a reward in response. The agent tries to maximize the reward received from the environment via a learned value function.

In order to formalize a reinforcement learning problem, the Markov Decision Process (MDP) can be considered as an idealized form that provides precise theoretical statements. Hence, a reinforcement learning problem can be completely described by a tuple (S, A, P, R) of the Markov Decision Process framework, where S denotes the state space, which consists of all the possible states the agent can be in; A denotes the action space, which contains all the possible actions to take in the different states; P is the transition function or probability, which gives the probability that, given a specific action in a certain state, the process moves to a particular state [3]; finally, the reward function R represents the value of transitioning from one state to another.

MDP problems can be solved in different ways. One of the most traditional methods is Dynamic Programming (DP), which is a collection of algorithms such as policy iteration, value iteration, and linear programming [4]. These methods are known for their assumption of a perfect model of the environment and their huge computational time. Hence, if the process is not finite or the transition probabilities are not available, DP becomes more difficult to apply. One famous alternative to DP is model-free algorithms, which approximate the value function from experience and then make decisions by accessing information that has been stored in a policy or a value function [5].

Q-Learning is one famous model-free algorithm that systematically updates the action-value function q(s, a) without needing a model of the environment. The basic idea of Q-learning is to learn the state-action pair value, which represents the long-term expected utility Q(s, a) for each pair of state and action. The Q-value function learned with this method directly approximates the optimal action-value function q∗. Q-learning is mostly used in cases where action-state spaces are discrete and finite, because the q-values are represented in a lookup table. However, for most applications, especially industrial ones, the state space is very large or even continuous, and thus the q-function must be represented with a function approximation. Artificial Neural Networks (ANN) are powerful function approximators, widely used especially for nonlinear functions.
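As a concrete illustration of tabular Q-learning, the sketch below runs the update rule Q(s, a) ← Q(s, a) + α[r + γ max_a′ Q(s′, a′) − Q(s, a)] with an ε-greedy policy on a toy two-state, two-action MDP. The environment, hyper-parameters and seed are illustrative assumptions, not taken from the paper.

```python
import random

# Toy MDP: action 1 moves to state 1, which pays a reward of 1; action 0
# moves to state 0, which pays nothing. Q-learning should learn to prefer
# action 1 in both states.
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def step(s, a):
    s_next = a                      # illustrative deterministic dynamics
    r = 1.0 if s_next == 1 else 0.0
    return s_next, r

random.seed(0)
s = 0
for _ in range(500):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.choice((0, 1))
    else:
        a = max((0, 1), key=lambda x: Q[(s, x)])
    s_next, r = step(s, a)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = r + gamma * max(Q[(s_next, 0)], Q[(s_next, 1)])
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    s = s_next

# after training, action 1 dominates action 0 in both states
```

Such a lookup table is exactly what becomes infeasible for the large or continuous state spaces discussed above, motivating the function-approximation view of the next subsection.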

2.2 Deep Reinforcement Learning

Deep Reinforcement Learning (DRL) is an approach that efficiently combines the decision-making ability of conventional reinforcement learning with the perception of deep learning [6]. It uses a deep neural network as a function approximator of the action-value function q(s, a); the resulting architecture is called a Deep Q-Network (DQN) [7]. The input layer of the deep neural network (DNN) is the state of the RL algorithm, and the output layer constitutes the predicted q-values of the state-action pairs.


The parameters of the DNN are updated iteratively, by minimizing the loss function (Eq. 1), until the difference between the target value and the predicted value converges:

L_i(θ_i) = E[(r + γ max_{a′} Q(s′, a′; θ_{i−1}) − Q(s, a; θ_i))²]   (1)
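A minimal numerical sketch of the loss in Eq. (1): small random tables stand in for the online network θ_i and the frozen target network θ_{i−1} (both stand-ins are our assumption; the paper uses an MLP).

```python
import numpy as np

# Tiny tabular stand-ins for Q(s, a; theta_i) and Q(s, a; theta_{i-1}).
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 3, 0.99
theta_online = rng.normal(size=(n_states, n_actions))  # online parameters theta_i
theta_target = rng.normal(size=(n_states, n_actions))  # frozen parameters theta_{i-1}

def dqn_loss(s, a, r, s_next):
    """Squared TD error of Eq. (1) for one transition (s, a, r, s')."""
    td_target = r + gamma * theta_target[s_next].max()  # r + gamma * max_a' Q(s',a')
    return (td_target - theta_online[s, a]) ** 2

loss = dqn_loss(s=0, a=1, r=1.0, s_next=2)
print(float(loss))
```

In a full DQN, this per-transition error is averaged over a replay minibatch and minimized by gradient descent on θ_i, while θ_{i−1} is refreshed only periodically.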

3 DQN Problem Formulation for the Heating Process

The proposed system is a DQN agent that optimizes the process parameters for the heating process of tempered glass. In this work, we developed an offline decision model in combination with a quality self-prediction model [8]. Simulation software was used to collect background data, which consist of the quality index and its corresponding process parameters. A self-prediction model is then trained on these data to predict the quality index from the process conditions. This model is used as our simulated industrial furnace and provides a training environment for the decision model trained with a DQN algorithm.

3.1 Background Data Acquisition

ANSYS is a simulation platform that provides high-fidelity virtual prototypes to simulate the behavior of complete products in their actual working environments. ANSYS is used in our work to simulate the behavior of our industrial furnace under its working conditions. We then used this software to acquire a certain amount of background data, consisting of the outlet temperature of the glass under the selected process parameters.

3.2 Self-prediction Model Development

Using the background data collected from the simulation software, we established different prediction models mapping the process conditions, given as inputs, to the outlet temperature of the glass. In our study, we chose the Random Forest algorithm because it showed good performance compared to the other algorithms and provides a better mapping of the complex relationship between the process parameters and the quality indexes. Random Forest is a powerful ensemble learning method based on the decision tree algorithm, recommended for its ability to deal with both small and large sample sizes [9].
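A hedged sketch of such a self-prediction model with scikit-learn: a Random Forest regressor mapping the nine zone temperatures plus the glass speed to the outlet temperature. The synthetic training data below merely stand in for the ANSYS background data, and the linear target relation is purely illustrative; only the choice of 120 trees follows the setting reported later in Sect. 4.1.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the ANSYS background data: 100 samples of
# 9 zone temperatures (500-700 deg C) and one glass speed (150-200).
rng = np.random.default_rng(42)
X = np.column_stack([rng.uniform(500, 700, size=(100, 9)),   # zone temperatures
                     rng.uniform(150, 200, size=(100, 1))])  # glass speed
# Illustrative (assumed) relation between process parameters and outlet temperature.
y = 0.6 * X[:, :9].mean(axis=1) + 1.5 * X[:, 9] + rng.normal(0, 2, 100)

# Random Forest self-prediction model (120 trees, as in the experiments).
model = RandomForestRegressor(n_estimators=120, random_state=0).fit(X, y)
pred = model.predict(X[:1])   # predicted outlet temperature for one recipe
```

Once fitted on the real background data, `model.predict` plays the role of the simulated furnace queried by the DQN agent in the next subsection.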


Fig. 1 Industrial electric furnace illustration

Our study case for the experiments consists of heating similar pieces of glass inside an industrial electric furnace composed of 9 zones (Fig. 1). Each zone has a specific temperature, and one speed is defined for all 9 zones. For every type of glass, a recipe is defined according to the experience of the process expert; some trials are then mandatory to achieve the desired temperature of the glass at the output of the furnace. This recipe consists of defining the temperature of each zone plus the speed with which the glass passes through the 9 zones. The outlet temperature is the key dimension for production requirements, and it is specific to the type and dimensions of the glass in production.

3.3 DQN Decision Model

We used our trained self-prediction model as a simulated environment standing in for the real industrial furnace. This helped us avoid the extra cost caused by the huge number of trials that would be needed to acquire background data. Our DQN agent thus interacts directly with the simulated environment, instead of the real one, to learn the optimal strategy for process parameters identification. Figure 2 illustrates the framework of the proposed decision model.

Fig. 2 Proposed structure of DQN decision model


State space S. The state representation is a combination of the current temperature values of the 9 zones, the speed of the glass and the outlet temperature of the glass. It is described as follows:

s_t = (T_1, T_2, …, T_9, v, T_exit) ∈ S   (2)

where T_i is the current temperature value of the ith zone of the furnace, v is the current speed of the glass, and T_exit is the temperature of the glass at the output of the furnace. When the agent takes an action under the current state s_t, it receives a reward from the environment, and a new state s_{t+1} can be observed under the new process parameter conditions. By repeating the procedure and interacting with the simulated furnace environment, an optimal recipe that meets the quality requirements for the outlet temperature of the glass can be acquired.

Action space A. In our study, we consider an environment in which the temperature of each zone T_i and the glass speed across the furnace can be adjusted via a discrete set of 20 actions that either increase or decrease the glass speed or a zone temperature, denoted as follows:

possible actions = (a_1, a_2, …, a_20)   (3)

where a_i ∈ {a_1, …, a_9} refers to increasing the temperature of the ith zone by 10 °C, and a_j ∈ {a_11, …, a_19} refers to decreasing the temperature of the (j − 10)th zone by 10 °C. Finally, actions a_10 and a_20 refer, respectively, to increasing and decreasing the glass speed by 10 mm. In order to avoid unnecessary trials, we defined low and high boundaries for both the zone temperatures and the glass speed: the zone temperatures are between 500 and 700 °C, and the glass speed is between 150 and 200 mm. For example, if at time step t the DQN agent takes the action a_2, meaning that the temperature of zone 2 is increased by 10 °C, then a_t can be denoted as follows:

a_t = (0, a_2, 0, …, 0)   (4)

Thus, the agent finds itself in a new state denoted:

s_{t+1} = (T_1, T_2 + 10, …, T_9, v, NewT_exit)   (5)

where NewT_exit is the predicted value of the outlet temperature of the glass, based on the new set of process parameters generated by the agent's action, using our self-prediction model.
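The action semantics above can be sketched as a state-update function. The ±10 steps and the bounds (500-700 °C, 150-200) are those stated in the text; the clipping behaviour at the bounds is our assumption about how the agent is kept inside the limits.

```python
# Sketch of how an action index updates the state, following the action
# space above: a1..a9 raise zone i by 10 deg C, a11..a19 lower zone (j-10)
# by 10 deg C, and a10/a20 raise/lower the glass speed by 10.
def apply_action(state, action):
    """state = [T1..T9, v]; action in 1..20 (the paper's numbering)."""
    state = list(state)
    if 1 <= action <= 9:            # increase temperature of zone `action`
        state[action - 1] = min(state[action - 1] + 10, 700)
    elif action == 10:              # increase glass speed
        state[9] = min(state[9] + 10, 200)
    elif 11 <= action <= 19:        # decrease temperature of zone `action - 10`
        state[action - 11] = max(state[action - 11] - 10, 500)
    elif action == 20:              # decrease glass speed
        state[9] = max(state[9] - 10, 150)
    return state

s = [600] * 9 + [180]
print(apply_action(s, 2)[1], apply_action(s, 12)[1], apply_action(s, 20)[9])
```

The self-prediction model is then queried on the updated nine temperatures and speed to obtain the NewT_exit component of the next state.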


Reward function R. When the DQN agent takes an action, a new set of process parameters is generated. The temperature of the glass at the output of the furnace is then predicted using our self-prediction model and, as a consequence, the agent receives a numerical reward that quantifies the goodness or badness of its action. For our purposes, the agent should obtain an optimal process parameter setting that allows it to reach the target temperature at the output of the furnace. Thus, the reward function is given by:

r_t = (|PreviousT_exit − T_target| − |CurrentT_exit − T_target|) / T_target   (6)

In our case study, we fixed a tolerance interval for the outlet temperature of the glass of [T_target − 5 °C, T_target + 5 °C]. At the end of every episode, an additional reward of +1 is added to the reward function if the outlet temperature is within this tolerance interval.
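Equation (6) together with the terminal bonus can be sketched as follows; the episode-termination flag in the signature is our own illustrative choice.

```python
# Sketch of the reward of Eq. (6) plus the terminal bonus: the improvement
# in distance to the target outlet temperature, normalised by the target,
# with +1 added when the episode ends inside the +/-5 deg C tolerance band.
def reward(prev_t_exit, curr_t_exit, t_target, episode_done=False):
    r = (abs(prev_t_exit - t_target) - abs(curr_t_exit - t_target)) / t_target
    if episode_done and abs(curr_t_exit - t_target) <= 5.0:
        r += 1.0
    return r

# moving from 660 deg C to 670 deg C toward a 685 deg C target: positive reward
print(reward(660, 670, 685) > 0)
```

The normalisation by T_target keeps the per-step reward small relative to the +1 terminal bonus, so reaching the tolerance band dominates the return.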

4 Experiments Validation and Results

In this section, the experimental settings are explained, and simulations are performed with ANSYS to acquire background data for training both the self-prediction model and the DQN agent. The effectiveness and robustness of the proposed decision system are evaluated by validation experiments on an industrial case study.

4.1 Offline Simulation and Prediction Model Training The experimental case utilized in this study was presented in Sect. 3 (3.2), as a remind it consist of heating similar pieces of glass (sidelites of cars) inside an industrial electric furnace composed of 9 zones. Each zone has a specific temperature, and for all the 9 zones, one speed is defined. Thus, the process parameters are the 9 temperatures plus the speed of the glass and the quality index is the temperature of the glass at the output of the furnace. This temperature is associated to the glass model to produce and it has a tolerance interval of ±5 °C. Our purpose is to automate the choice of the process parameters to meet the production requirement for a specific type of glass, which is the temperature of the glass at the output of the furnace. The first step consists of collecting background data to train our self-prediction model. This self-prediction model will constitute our simulated environment. We, first, create a simulated furnace adopting the necessary characteristics of the real furnace and importing the needed functionalities such as physical and thermal laws that describe the real behavior of the electric furnace. After validating our

84

C. E. Mazgualdi et al.

Fig. 3 Plot of the predicted values versus real values (Target values)

simulated furnace, hundred simulations were performed, and then the background data were extracted to feed our self-prediction model. The developed self-prediction model which is a Random Forest algorithm using 120 decision trees, is then trained using the background data. It is expected to approximate the relationship between the process parameters, which are the 9 temperatures plus the glass speed, and the outlet temperature of the glass. Figure 3 shows the plot of the predicted values versus the target values over 60 observations. We can conclude that our model perform a good fit of the real data with a mean absolute error MAE = 0.807.

4.2 Decision System

The DQN agent is trained through interactions with the self-prediction model, which serves as a simulated environment. We evaluate our DQN algorithm using the implementation in the OpenAI Baselines project [10]. The benefit of interfacing with OpenAI Gym [11] is that it is an actively developed interface that provides various environments and features useful for training, and it offers reference implementations of RL algorithms. The DQN algorithm is run with default hyper-parameters except for the number of simulation steps and the learning rate, which we set to 1,000 steps and 0.00001, respectively. The controller policy is the default two-hidden-layer, 64-node multi-layer perceptron (MLP) policy. Training and evaluation were run on Windows 10 with an eight-core i7-7700 CPU and an NVIDIA GeForce GTX 1060 graphics card.
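To make the interaction loop concrete, a Gym-style furnace environment might look as follows. Everything here is an assumption rather than the authors' implementation: `predict_outlet` is a toy surrogate for the trained Random Forest, actions nudge a single parameter up or down by a fixed step, and the reward shaping (negative distance of the outlet temperature from the 685 °C target) is not specified in the paper.

```python
import numpy as np

TARGET = 685.0      # target outlet glass temperature (degC) for this model
TOL = 5.0           # client tolerance of +/-5 degC

def predict_outlet(state):
    """Toy surrogate for the trained Random Forest self-prediction model."""
    zone_w = np.linspace(0.05, 0.15, 9)
    return float(state[:9] @ zone_w - 5.0 * state[9])

class FurnaceEnv:
    """Gym-style environment: state = 9 zone temperatures + glass speed."""
    n_actions = 20  # raise or lower each of the 10 parameters by one step

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Randomized starting configuration, as in the paper's test (Fig. 6).
        self.state = np.concatenate([
            self.rng.uniform(600, 750, 9), self.rng.uniform(2, 6, 1)])
        return self.state.copy()

    def step(self, action):
        param, direction = divmod(action, 2)
        delta = 1.0 if param < 9 else 0.1          # degC step vs speed step
        self.state[param] += delta if direction else -delta
        outlet = predict_outlet(self.state)
        reward = -abs(outlet - TARGET)             # assumed reward shaping
        done = abs(outlet - TARGET) <= TOL         # within client tolerance
        return self.state.copy(), reward, done, {"outlet": outlet}
```

With such an environment, any DQN implementation can be trained by calling `reset()` and `step(action)` in the usual loop, terminating an episode once the outlet temperature falls inside the tolerance band.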

A Deep Reinforcement Learning (DRL) Decision Model …

85

Fig. 4 Cumulative reward performance evaluation of DQN algorithm

The convergence of the reward function during training is shown in Fig. 4. The cumulative reward curve represents the total reward obtained by the agent in an episode; the curve is smoothed with a moving average. It can be observed that the reward value oscillates considerably most of the time. This is because of the ε-greedy policy adopted for the Q-learning algorithm: the agent is initially unfamiliar with the environment and thus needs to explore the entire state space in order to learn proper strategies. However, with more iterations the reward keeps increasing and, after 450,000 time steps, it converges to a policy good enough to identify suitable process parameters for a given product. The proposed decision system was tested on different configurations of process parameters proposed by the process engineers to evaluate its robustness. Figure 6 shows the results produced by our decision model for a product whose target outlet glass temperature was 685 °C. The proposed set of process parameters yields, after 44 iterations, an outlet temperature of 682 °C, which respects the ±5 °C tolerance imposed by the client.
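The ε-greedy behaviour discussed above can be sketched as a simple annealing schedule. The linear decay and its constants below are illustrative assumptions (Baselines' DQN anneals ε over a configurable fraction of training), not values reported by the authors.

```python
import random

def epsilon(step, eps_start=1.0, eps_final=0.02, anneal_steps=100_000):
    """Linearly anneal the exploration rate from eps_start to eps_final."""
    frac = min(step / anneal_steps, 1.0)
    return eps_start + frac * (eps_final - eps_start)

def select_action(q_values, step, rng=random):
    """Epsilon-greedy: explore early, exploit the learned Q-values later."""
    if rng.random() < epsilon(step):
        return rng.randrange(len(q_values))                         # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit
```

Early in training almost every action is random, which explains the large oscillations in the reward curve; as ε decays, the agent increasingly exploits what it has learned and the cumulative reward stabilizes.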

Fig. 6 DQN Testing results for a given glass model starting from a randomized state


5 Conclusion and Future Work

In this paper, RL with a DQN agent was successfully applied to process parameters identification for the heating process of tempered glass. The proposed decision system consists of a reinforcement learning based model integrated with a Random Forest self-prediction algorithm acting as a simulated environment. The system automatically identifies the process parameters to use for a given product based on production and quality requirements. Validation experiments demonstrated that, based on the background data provided by ANSYS simulations, the trained self-prediction model accurately predicts the temperature of the glass at the output of the furnace, with an MAE of 0.807. The DQN agent is then trained using the prediction model as the simulated furnace environment, learning a good strategy for process parameters identification without human intervention or any prior expert knowledge. The system converges rapidly and stably, and it is able to replace the traditional method of process parameters identification based on physical trials, which is costly and time-consuming. In future work, we would like to investigate other DRL approaches, in particular policy-gradient methods such as Proximal Policy Optimization (PPO) [12] and Trust Region Policy Optimization (TRPO) [13]. The reward function could also be improved; one option is to introduce time, so that the agent is forced to reach the optimal process parameter settings within the fewest time steps. Finally, further research will focus on integrating the optimization of energy consumption without compromising product quality.

References

1. Ford, M.: How is tempered glass made? In: Scientific American (2001)
2. Barr, J.: Understanding the Glass Tempering Process, Germany (2003)
3. Otterlo, M., Wiering, M.: Reinforcement learning and Markov decision processes. In: Wiering, M., Otterlo, M. (eds.) Reinforcement Learning: State-of-the-Art, pp. 3–42. Springer, Berlin (2012)
4. Bellman, R.E.: Dynamic Programming. A Rand Corporation research study. Princeton University Press, Princeton, NJ (1957)
5. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press (2018)
6. Nielsen, M.A.: Neural Networks and Deep Learning (2015)
7. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)
8. Guo, F., Zhou, X., Liu, J., Zhang, Y., Li, D., Zhou, H.: A reinforcement learning decision model for online process parameters optimization from offline data in injection molding. Appl. Soft Comput. 85 (2019)
9. Ali, J.: Random forests and decision trees. Int. J. Comput. Sci. Issues (IJCSI) 9, 272 (2012)
10. Dhariwal, P., Hesse, C., Klimov, O., Nichol, A., Plappert, M., Radford, A., et al.: OpenAI Baselines. GitHub (2017). https://github.com/openai/baselines
11. Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., Zaremba, W.: OpenAI Gym (2016). arXiv preprint arXiv:1606.01540


12. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms (2017). arXiv preprint arXiv:1707.06347
13. Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust region policy optimization. In: International Conference on Machine Learning, pp. 1889–1897 (2015)

Analytic Hierarchy Process (AHP) for Supply Chain 4.0 Risks Management Kamar Zekhnini , Anass Cherrafi, Imane Bouhaddou, and Youssef Benghabrit

Abstract Currently, supply chain management incorporates technologies from Industry 4.0, the Internet of Things (IoT) and smart manufacturing to improve performance and customer satisfaction. Supply chain digitalization helps organizations meet new customer needs, supply issues and existing efficiency-improvement standards. In other words, digitalization leads to a faster, more flexible and more efficient supply chain. However, this digitalization has created new supply chain risks, such as information and cybersecurity risks. The purpose of this paper is to discuss key supply chain 4.0 risks and examine the relationships between them. The study also presents a literature review on the subject to provide a thorough understanding of supply chain 4.0 risk management. The authors use the analytic hierarchy process (AHP) to analyze supply chain 4.0 risks. The study is useful for both academics and practitioners, as it illustrates how supply chain 4.0 risks relate to one another. This would help organizations confront the new risks facing interconnected supply chains. The findings would help secure supply chains and strengthen organizations' risk management.

Keywords Supply chain management 4.0 · Supply chain 4.0 risks · Risk management · AHP

K. Zekhnini (B) · A. Cherrafi · I. Bouhaddou · Y. Benghabrit
LM2I Laboratory, ENSAM, Moulay Ismail University, 50500 Meknes, Morocco
e-mail: [email protected]
A. Cherrafi e-mail: [email protected]
I. Bouhaddou e-mail: [email protected]
Y. Benghabrit e-mail: [email protected]
© Springer Nature Switzerland AG 2021
T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_7



1 Introduction

Digital technologies have stimulated huge advances in manufacturing. The role of new technologies in increasing industrial competitiveness has become more crucial in the digitalization era: the way information is shared has changed dramatically with digital technologies. All industries are impacted by these new technologies [1]. Consequently, they have altered the exchange of information between supply chains (SCs). There are numerous sources of risk in SCs, encompassing unpredictable exchange rates, supply disruptions, work accidents, competitive markets, political instability, unstable demand, and natural disasters [2]. Furthermore, many other sources have been identified through the digitization of SCs. Thus, by adopting digital supply chain systems, companies face significant risks [3]. In this sense, this study is of great importance to organizations, as it provides an outline of risk management in the digital age; risk management is essential for maintaining supply chain efficiency in the presence of a multitude of uncertainties [4]. Over the years, many researchers have focused on supply chain risk management (SCRM) by contributing to the fields of risk definition, operationalization and mitigation [4]. To date, little attention has been given to risk management in the supply chain 4.0 [5, 6]. The aim of this study is therefore to fill this literature gap with a prioritization of supply chain risks in the digital era. It addresses the new category of risks born with supply chains 4.0. In addition, this paper aims to provide academics with additional perspectives on supply chain 4.0 risk management. This article is organized as follows: the following section presents the literature review needed to understand supply chain 4.0 risk management. Section 3 describes the research methodology used in the study. Section 4 illustrates the application of AHP to supply chain 4.0 risks. Finally, Sect. 5 discusses the findings and presents some guidelines for overcoming the most crucial risks facing supply chains 4.0.

2 Literature Review

2.1 Industry 4.0

The advanced digitization of industrial factories is called Industry 4.0 [7]. Industry 4.0 refers to the fourth industrial revolution [8]; it is also known as 'digital manufacturing', 'industrial internet', or 'integrated industry'. It is expected to affect entire industries by changing the way products are produced, manufactured, distributed and charged [9]. In other words, Industry 4.0 is a vision of smart factories built on digital cyber-physical systems [8]. It also refers to recent advances in technology where the

Analytic Hierarchy Process (AHP) for Supply Chain 4.0 Risks Management

91

Fig. 1 Industrial revolution

internet serves as an infrastructure for connecting physical objects, human participants, smart machines and processes across institutional borders, creating a new kind of networked, smart and agile value chain [10]. Industry 4.0 relies on many advanced technologies, such as cloud computing, cyber-physical systems, 3D printing, the internet of things, artificial intelligence, big data analytics, advanced sensor-driven manufacturing technologies, augmented reality and advanced robotics [11, 12]. Four phases (Fig. 1) may be identified in the ongoing process called the Industrial Revolution, reaching back to the first mechanical loom of 1784, precisely 235 years ago [13]. The first industrial revolution was symbolized by the implementation of mechanical manufacturing systems using water and steam power; it started in the 1800s [14]. By the beginning of the twentieth century, the second industrial revolution was characterized by the use of electricity in mass production. In the mid-twentieth century, the third industrial revolution saw the introduction of automation and microelectronic engineering into manufacturing [14]. The fourth industrial revolution, named 'Industry 4.0' [15], ties the Internet of Things (IoT) to production [16]. Industry 4.0 can revolutionize the management of the global supply chain by enabling unprecedented levels of operational efficiency [17]. Moreover, by using new technologies in the course of Industry 4.0, time and effort can be reduced across entire supply chains [18], and the supply chain becomes more flexible and more transparent [19]. Furthermore, factories and businesses that have adopted Industry 4.0 will see improvements in different areas [20]. In other words, Industry 4.0 offers the supply chain direct cost savings, enhanced speed, increased employee productivity, competitiveness in the global market and increased profitability [21].


2.2 Supply Chain Management 4.0

Industry 4.0, the Internet of Things (IoT) and a host of new technologies allow supply chain partners to track, predict and respond to consumer demand more efficiently [22]. Hence, supply chain 4.0, or the digital supply chain (DSC), is a supply chain based on web-enabled capabilities. A true DSC goes far beyond the hybrid use of paper-based and IT-enabled processes to rely entirely on synchronization, system integration, and 'smart' information-producing capabilities [23]. A digital supply chain is made up of the hardware, software and communication network systems that enable interaction among globally distributed organizations. It orchestrates supply chain activities such as buying, making, storing, moving and selling a product [24]. In other words, the DSC is a process that uses new technologies to identify processes from market to market (from distributor to supplier networks) in order to understand, react and orchestrate bi-directionally [25]. The notion of 'supply chain management 4.0' brings a variety of advantages to the development of organizations. In fact, digitalization enables the evolution of the next generation of supply chains, offering both flexibility and efficiency.

2.3 Supply Chain 4.0 Risks

When companies adopt digital supply chain systems to operate and coordinate with their partners, they face significant risks. In other words, the implementation and use of IT systems in inter-organizational contexts, such as digital supply chain networks, is considered very vulnerable [3]. The risky aspect of digital supply chain systems stems from the fact that their successful implementation requires focal firms to adapt to the external environment and to multiple external parties, frequently beyond their control [26]. Therefore, organizations need a risk management system to deal with these risks. Risk management is a structured approach that allows companies to know what the risk is, who is at risk, and what safeguards are appropriate for such risks [27]. Many risks, such as IT failures, piracy, theft, and cyber-attacks, can threaten companies. Such risks are among the leading causes of reputation crises and loss of reputation value, and they can damage supply chain performance [28]. The risks identified in the literature are presented in Table 1.

3 Methodology

Researchers in various sectors, such as education, industry, manufacturing and engineering, have widely used the AHP methodology to solve multi-criteria decision problems [33–36].

Table 1 Supply chain 4.0 risks

| Risks                                                                 | Type of risk        | References  |
|-----------------------------------------------------------------------|---------------------|-------------|
| Macroeconomic fluctuation; Supply chain overall coordination           | Financial risks     | [29]        |
| Product arrival variability (delays); Loss of suppliers                | Supply risks        | [29, 30]    |
| Fluctuating demand                                                     | Demand risks        | [29]        |
| Lack of skilled workers; Breakdown of fractioning                      | Operational risks   | [29]        |
| Natural/man-made disasters                                             | Environmental risks | [29]        |
| Cyber-attack; Information security risks; Risks to computer-security   | Industry 4.0 risks  | [3, 31, 32] |

The analytic hierarchy process (AHP) is a multi-criteria decision-making approach introduced by Saaty (1977, 1994). In other words, AHP is a decision-support tool that can be used to solve complex decision problems. It uses a multi-level hierarchy of objective, criteria, sub-criteria, and alternatives in order to rank the importance of each variable [37]. Basically, AHP relies on a matrix-algebra-based mathematical approach. It is used to determine the relevance of factors for decision-making or problem-solving in order to achieve a given objective. AHP combines qualitative and quantitative approaches and integrates them into a single analytical problem: the qualitative approach converts the issue into a hierarchy, while the quantitative approach uses pair-wise comparisons to obtain more reliable responses [38]. AHP has been a favorite decision method in many fields, such as engineering, food, industry, ecology, health, and government, thanks to its mathematical simplicity and flexibility. In addition, AHP not only helps analysts make the best decision but also gives them a sound justification for the choices they make. The AHP methodology is therefore used in this study to rank and prioritize supply chain risks in the digital era; it provides a framework for dealing with the logical, quantitative and qualitative dimensions of multi-criteria situations.


Fig. 2 Hierarchy of risk factors

4 AHP to Supply Chain 4.0 Risk Management

4.1 Developing the AHP Model

We first established a risk hierarchy, as shown in Fig. 2, and then used pair-wise comparisons [39] to determine the impact of risks on the supply chain 4.0. This phase requires the formulation of an AHP model hierarchy containing the objective, factors, sub-factors, and results. The first level of the hierarchy is our goal, which is prioritizing supply chain 4.0 risks. The second level is constituted by the criteria we will use to decide; to accomplish our goal, five major risk types are defined, namely financial risks, supply risks, demand risks, operational risks and industry 4.0 risks. The third level consists of the 11 risks within these five types. The fourth level consists of the result, which is an effective risk prioritization in the supply chain 4.0.

4.2 Weights Evaluation

Not all criteria have the same importance. Therefore, the second step in the AHP process is to derive the relative priorities (weights) of the criteria. The pairwise comparison decision matrices were developed with expert guidance to assess the relative importance of the supply chain 4.0 risks.

4.2.1 Establishment of Pair-Wise Comparison Matrices

The precise assessment of the relevant data is one of the most important steps in many decision-making processes. Comparisons are therefore made in pairs to determine the relative importance of each alternative with respect to each criterion. In this method, the decision-makers express their preference on one pairwise comparison at a time. Comparisons are made on a nine-point scale that transforms individual preferences into available alternatives. The values of the AHP pairwise comparisons are determined by Saaty's (1980) scale; the available values are {9, 8, 7, 6, 5, 4, 3, 2, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9} (Table 2). Overall, the comparison matrix is defined as A = (a_ij), a reciprocal matrix with 1s on the diagonal, whose lower triangular part is filled using a_ij = 1/a_ji. Given a judgment matrix of pairwise comparisons, the corresponding maximum left eigenvector is approximated by the geometric mean of each row: the elements of each row are multiplied together and the n-th root is taken (where n is the number of elements in the row). Tables 3, 4, 5, 6, 7 and 8 present the pair-wise comparison matrices for the different levels of the hierarchy.

Table 2 The pair-wise comparison scale of Saaty [40]

| Intensity of importance | Definition             | Explanation                                                                                    |
|-------------------------|------------------------|-------------------------------------------------------------------------------------------------|
| 1                       | Equal importance       | Two elements contribute equally to the objective                                                  |
| 3                       | Moderate importance    | Experience and judgment slightly favor one element over another                                   |
| 5                       | Strong importance      | Experience and judgment strongly favor one element over another                                   |
| 7                       | Very strong importance | One element is favored very strongly over another; its dominance is demonstrated in practice      |
| 9                       | Extreme importance     | The evidence favoring one element over another is of the highest possible order of affirmation    |

Table 3 Pair-wise comparisons of risk types

| Criteria | I     | S     | D      | O      | F  | Weights   |
|----------|-------|-------|--------|--------|----|-----------|
| I        | 1     | 3     | 5      | 4      | 4  | 0.449     |
| S        | 1/3   | 1     | 3      | 5      | 4  | 0.276     |
| D        | 1/5   | 1/3   | 1      | 2      | 2  | 0.116     |
| O        | 1/4   | 1/5   | 1/2    | 1      | 2  | 0.090     |
| F        | 1/4   | 1/4   | 1/2    | 1/2    | 1  | 0.068     |
| Sum      | 2.033 | 4.783 | 10.000 | 12.500 | 13 | CR = 0.09 |
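As a check, the row geometric-mean approximation described earlier can be applied to the Table 3 matrix. Small deviations from the published weights are expected, since the authors presumably used the exact principal eigenvector.

```python
from math import prod

# Pairwise comparison matrix of risk types from Table 3 (order: I, S, D, O, F).
A = [
    [1,   3,   5,   4,   4],    # Industry 4.0 risks (I)
    [1/3, 1,   3,   5,   4],    # Supply risks (S)
    [1/5, 1/3, 1,   2,   2],    # Demand risks (D)
    [1/4, 1/5, 1/2, 1,   2],    # Operational risks (O)
    [1/4, 1/4, 1/2, 1/2, 1],    # Financial risks (F)
]

# Geometric mean of each row, then normalize to obtain the priority weights.
geo = [prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(geo) for g in geo]
print([round(w, 3) for w in weights])
# close to the published weights 0.449, 0.276, 0.116, 0.090, 0.068
```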


Table 4 Pair-wise comparison judgment of financial risks (F)

| Criteria | F1    | F2    | Weights |
|----------|-------|-------|---------|
| F1       | 1     | 5     | 0.833   |
| F2       | 1/5   | 1     | 0.167   |
| Sum      | 1.200 | 6.000 | CR = 0  |

Table 5 Pair-wise comparison judgment of supply risks (S)

| Criteria | S1    | S2    | Weights |
|----------|-------|-------|---------|
| S1       | 1     | 2     | 0.667   |
| S2       | 1/2   | 1     | 0.333   |
| Sum      | 1.500 | 3.000 | CR = 0  |

Table 6 Pair-wise comparison judgment of demand risks (D)

| Criteria | D1    | D2    | Weights |
|----------|-------|-------|---------|
| D1       | 1     | 2     | 0.667   |
| D2       | 1/2   | 1     | 0.333   |
| Sum      | 1.500 | 3.000 | CR = 0  |

Table 7 Pair-wise comparison judgment of operational risks (O)

| Criteria | O1    | O2    | Weights |
|----------|-------|-------|---------|
| O1       | 1     | 7     | 0.875   |
| O2       | 1/7   | 1     | 0.125   |
| Sum      | 1.143 | 8.000 | CR = 0  |

4.2.2 Calculation of the Degree of Consistency

One of AHP's important tasks is to measure the consistency of the estimated priority vector. Thus, consistency must be checked once the judgments have been entered. AHP computes a consistency ratio (CR) by comparing the consistency index (CI) of the matrix in question against the consistency index of a random-like matrix (RI). Saaty (2012) provides the calculated RI values for matrices of different sizes, as shown in Table 9.

Table 8 Pair-wise comparison judgment of industry 4.0 risks (I)

| Criteria | I1    | I2    | I3    | Weights   |
|----------|-------|-------|-------|-----------|
| I1       | 1     | 6     | 2     | 0.705     |
| I2       | 1/6   | 1     | 3     | 0.295     |
| I3       | 1/2   | 1/3   | 1     | 0.225     |
| Sum      | 1.167 | 7.000 | 5.000 | CR = 0.87 |


Table 9 Average random index values

| N   | 1 | 2 | 3    | 4    | 5    | 6    | 7    | 8    | 9    |
|-----|---|---|------|------|------|------|------|------|------|
| RCI | 0 | 0 | 0.58 | 0.90 | 1.12 | 1.24 | 1.32 | 1.41 | 1.45 |

If all the comparisons were perfectly consistent, the following relation would hold for any combination of comparisons taken from the judgment matrix: a_ij = a_ik · a_kj. The following steps are taken to compute the CR coefficient. First, the consistency index (CI) is estimated: the columns of the judgment matrix are summed and the resulting vector is multiplied by the vector of priorities (i.e., the approximated eigenvector) obtained earlier. This yields an approximation of the maximum eigenvalue, denoted λmax. We then have:

• CI = (λmax − n)/(n − 1)
• CR = CI/RCI

Saaty (2012) has shown that it is acceptable to continue the AHP analysis with a consistency ratio (CR) of 0.10 or less. If the consistency ratio exceeds 0.10, the judgments need to be revised to correct the source of the inconsistency.
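Putting these formulas together for the Table 3 matrix gives a consistency check like the following. This is a sketch: λmax is approximated from the geometric-mean weights rather than the exact eigenvector, so the CR comes out slightly below the published 0.09.

```python
from math import prod

# Pairwise comparison matrix of risk types from Table 3 (order: I, S, D, O, F).
A = [
    [1,   3,   5,   4,   4],
    [1/3, 1,   3,   5,   4],
    [1/5, 1/3, 1,   2,   2],
    [1/4, 1/5, 1/2, 1,   2],
    [1/4, 1/4, 1/2, 1/2, 1],
]
RCI = {1: 0, 2: 0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

n = len(A)
geo = [prod(row) ** (1 / n) for row in A]
w = [g / sum(geo) for g in geo]            # approximated priority vector

# lambda_max: multiply each column sum by the corresponding priority weight.
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
lambda_max = sum(c * wj for c, wj in zip(col_sums, w))

CI = (lambda_max - n) / (n - 1)            # consistency index
CR = CI / RCI[n]                           # consistency ratio
print(f"lambda_max={lambda_max:.3f}  CI={CI:.3f}  CR={CR:.3f}")
```

Since CR stays below 0.10, the Table 3 judgments are acceptably consistent by Saaty's rule.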

5 Discussion and Managerial Implications

Table 10 Supply chain 4.0 risk ranking

| Criteria | Local weights | Global weights | Overall weights | Risk rank |
|----------|---------------|----------------|-----------------|-----------|
| D1       | 0.667         | 0.116          | 0.0773          | 7         |
| D2       | 0.333         | 0.116          | 0.0387          | 9         |
| F1       | 0.833         | 0.068          | 0.0567          | 8         |
| F2       | 0.167         | 0.068          | 0.0113          | 10        |
| I1       | 0.705         | 0.449          | 0.3164          | 1         |
| I2       | 0.295         | 0.449          | 0.1326          | 3         |
| I3       | 0.225         | 0.449          | 0.1012          | 4         |
| O1       | 0.875         | 0.09           | 0.07875         | 6         |
| O2       | 0.125         | 0.09           | 0.01125         | 11        |
| S1       | 0.667         | 0.276          | 0.184092        | 2         |
| S2       | 0.333         | 0.276          | 0.091908        | 5         |

The final rankings of the various supply chain 4.0 risks are presented in Table 10. From these rankings, we can determine the crucial risks that need managers' attention. I1 (Cyber-attack) is the most important risk factor because it has the highest priority weight (0.32). The second most critical risk factor is S1 (Product arrival variability (delays)), with a priority weight of 0.18. The third most important risk factor


is I2 (Information security risks), with a risk exposure of 0.13. The fourth is I3 (Risks to computer-security), with a risk exposure of 0.10, and the fifth is S2 (Loss of suppliers), with a risk exposure of 0.09. Then come O1 (Lack of skilled workers), D1 (Fluctuating demand), F1 (Macroeconomic fluctuation), D2, F2 (Supply chain overall coordination), and O2 (Breakdown of fractioning). The risk exposure of the first two risks, I1 and S1, is considerably higher than that of the others; together, these two risks account for about 50% of the global weight. The graph in Fig. 3 shows the drop in risk weight across the various risks: I1 has the highest weight, and the S1 weight is roughly 13 percentage points lower. Consequently, these two risks demand managers' close attention so that they can be mitigated in real time. Based on the weight ranges in the figure, the risks can be divided into clusters: the cluster requiring the same amount of managerial attention comprises I2 and I3, while the next cluster, which managers should not neglect, comprises S2, O1, D1 and F1. Interestingly, we observe that the industry 4.0 and supply risks are the most crucial. Supply chain cyber-attacks are more frequent now than ever, and industries must understand how to secure their supply chains against these expanding attacks. Having a secure supply chain means that the organization's software is up to date, and training employees on cyber security reduces the risk of cyber-attacks. In addition, managers have to rethink supplier risk in other ways. They need to establish management strategies with both proactive and reactive mechanisms. This should include identifying risk-fraught materials, educating risk management staff, creating methods for handling breaches in their complex supply chains, and ensuring clear visibility of the whole supply chain in real time. To sum up, increasing visibility in the supply chain, establishing trusting relationships with suppliers and having strategic plans to face threats will help organizations reduce supply chain risks. Table 11 presents some actions to mitigate the most crucial supply chain 4.0 risks.

Fig. 3 Final weights of each supply chain 4.0 risk

0.35 0.30 0.25 0.20 0.15 0.10 0.05 0.00 I1

S1

I2

I3

S2 O1 D1 F1 D2 F2 O2
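The overall weights and ranks in Table 10 follow directly from multiplying each risk's local weight by the global weight of its risk type; the ranking can be reproduced in a few lines.

```python
# (local weight within its risk type, global weight of the risk type),
# taken from Tables 3-8.
risks = {
    "D1": (0.667, 0.116), "D2": (0.333, 0.116),
    "F1": (0.833, 0.068), "F2": (0.167, 0.068),
    "I1": (0.705, 0.449), "I2": (0.295, 0.449), "I3": (0.225, 0.449),
    "O1": (0.875, 0.090), "O2": (0.125, 0.090),
    "S1": (0.667, 0.276), "S2": (0.333, 0.276),
}

# Overall weight = local weight * global weight; rank by descending weight.
overall = {k: local * glob for k, (local, glob) in risks.items()}
ranking = sorted(overall, key=overall.get, reverse=True)
print(ranking)  # I1 (cyber-attack) first, S1 (delays) second, O2 last
```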


Table 11 Supply chain 4.0 mitigation actions

Cyber-attack (I1):
– Identify and document asset vulnerabilities
– Continuous employee education
– Monitor network traffic for suspicious activity
– Upgrade software immediately
– Harden external-facing web applications

Product arrival variability (delays) (S1):
– Evaluate your supplier's processes with an ISO 9001 quality audit
– Regular analysis of all available supplier information
– Having a well-defined response strategy and crisis management plan
– Using supply chain predictive analytics to reduce risks

Information security risks (I2):
– Upgrade authentication inside and out
– Know where sensitive data resides
– Develop a data protection strategy
– Include encryption monitoring

Risks to computer-security (I3):
– Implement rigorous application development testing
– Perform periodic penetration assessments and vulnerability assessments
– Use reputable anti-virus software

Loss of suppliers (S2):
– Diversify suppliers
– Maintain a close relationship with suppliers
– Invoke certain policies, procedures, processes or systems
– Switch suppliers

6 Conclusion

Supply chain risk management has attracted great interest in manufacturing studies, and this study is an effort to outline current risk management for digital supply chains. Key risks and future research paths can be identified from these analyses. Owing to the supply chain's interconnectivity and complexity, managers are becoming more conscious of supply chain risk. Notwithstanding this, supply chain 4.0 risks have not been widely studied so far; in other words, managers often handle supply chain issues without discerning the risks. Because of this literature gap, the present paper examined the supply chain 4.0 risk factors. We listed 11 risks under 5 major risk factors, and the AHP method was used to prioritize them. Our paper presents an overview of supply chain 4.0 risks and shows how risk prioritization can be implemented with AHP. The research has implications for both theory and practice. It can be a roadmap for research beginners in supply chain 4.0 risk management who wish to explore and work on future issues of the subject. It also presents


some actions to mitigate the most crucial supply chain 4.0 risks. Moreover, the findings of this study can enable practitioners across the entire supply chain to achieve successful risk management. On the other hand, this study also has some limitations, owing to the small number of papers used for the analysis; our results remain to be discussed and reinforced in future studies. In addition, given the various new risks that could arise in the supply chain 4.0, it would be desirable to revisit the AHP calculations and consider introducing new risks, or a new level, into the hierarchy model.

References

1. Ivanov, D., Dolgui, A., Sokolov, B.: The impact of digital technology and Industry 4.0 on the ripple effect and supply chain risk analytics. Int. J. Prod. Res. 57, 829–846 (2019). https://doi.org/10.1080/00207543.2018.1488086
2. Er Kara, M., Oktay Fırat, S., Ghadge, A.: A data mining-based framework for supply chain risk management. Comput. Ind. Eng. 105570 (2018). https://doi.org/10.1016/j.cie.2018.12.017
3. Xue, L., Zhang, C., Ling, H., Zhao, X.: Risk mitigation in supply chain digitization: system modularity and information technology governance. J. Manag. Inf. Syst. 30, 325–352 (2013). https://doi.org/10.2753/mis0742-1222300110
4. Ho, W., Zheng, T., Yildiz, H., Talluri, S.: Supply chain risk management: a literature review. Int. J. Prod. Res. 53, 5031–5069 (2015). https://doi.org/10.1080/00207543.2015.1030467
5. Gottlieb, S., Ivanov, D., Das, A.: Case studies of the digital technology impacts on supply chain disruption risk management. In: Schröder, M., Wegner, K. (eds.) Logistik im Wandel der Zeit – Von der Produktionssteuerung zu vernetzten Supply Chains, pp. 23–52. Springer Fachmedien Wiesbaden, Wiesbaden (2019)
6. Radanliev, P., Treall, L., Santos, O., Montalvo, R.M.: Cyber risk from IoT technologies in the supply chain-discussion on supply chains decision support system for the digital economy (2019). https://doi.org/10.13140/rg.2.2.17286.22080
7. Glas, A.H., Kleemann, F.C.: The impact of Industry 4.0 on procurement and supply management: a conceptual and qualitative analysis. Int. J. Bus. Manag. Inven. 5, 55–66 (2016)
8. Thames, L., Schaefer, D.: Software-defined cloud manufacturing for Industry 4.0. Procedia CIRP 52, 12–17 (2016). https://doi.org/10.1016/j.procir.2016.07.041
9. Hofmann, E., Rüsch, M.: Industry 4.0 and the current status as well as future prospects on logistics. Comput. Ind. 89, 23–34 (2017). https://doi.org/10.1016/j.compind.2017.04.002
10. Schumacher, A., Erol, S., Sihn, W.: A maturity model for assessing Industry 4.0 readiness and maturity of manufacturing enterprises. Procedia CIRP 52, 161–166 (2016). https://doi.org/10.1016/j.procir.2016.07.040
11. Rajnai, Z., Kocsis, I.: Labor market risks of Industry 4.0, digitization, robots and AI. In: SISY 2017 – IEEE 15th International Symposium on Intelligent Systems and Informatics, Proceedings, pp. 343–346 (2017). https://doi.org/10.1109/SISY.2017.8080580
12. Ivanov, D., Dolgui, A., Sokolov, B.: The impact of digital technology and Industry 4.0 on the ripple effect and supply chain risk analytics. Int. J. Prod. Res. 1–18 (2018). https://doi.org/10.1080/00207543.2018.1488086
13. Bloem, J., van Doorn, M., Duivestein, S., Excoffier, D., Maas, R., van Ommeren, E.: The Fourth Industrial Revolution, p. 40
14. Xu, L.D., Xu, E.L., Li, L.: Industry 4.0: state of the art and future trends. Int. J. Prod. Res. 56, 2941–2962 (2018). https://doi.org/10.1080/00207543.2018.1444806
15. Hahn, G.J.: Industry 4.0: a supply chain innovation perspective. Int. J. Prod. Res. 1–17 (2019). https://doi.org/10.1080/00207543.2019.1641642

Analytic Hierarchy Process (AHP) for Supply Chain 4.0 Risks Management

101

16. da Silva, V.L., Kovaleski, J.L., Pagani, R.N.: Technology transfer in the supply chain oriented to industry 4.0: a literature review. Technol. Anal. Strat. Manag. 0, 1–17 (2018). https://doi. org/10.1080/09537325.2018.1524135 17. Rodriguez Molano, J.I., Contreras Bravo, L.E., Trujillo, E.R.: Supply chain architecture model based in the industry 4.0, validated through a mobile application. CES 10, 1581–1594 (2017). https://doi.org/10.12988/ces.2017.711186 18. Müller, J.M., Voigt, K.I.: The impact of Industry 4.0 on supply chains in engineer-to-order industries—an exploratory case study. IFAC-PapersOnLine 51, 122–127 (2018). https://doi. org/10.1016/j.ifacol.2018.08.245 19. Schröder, M., Indorf, M., Kersten, W.: Industry 4.0 and its impact on supply chain risk management. Transp. Syst. 11, (2014) 20. Glas, A.H., Kleemann, F.C.: The impact of industry 4.0 on procurement and supply management: a conceptual and qualitative analysis. Int. J. Bus. Manag. Inven. 5, 55–66 (2016) 21. Recknagel, R.: The Supply Chain Manager’s Guide to Industry 4.0. Supply Chain Excell. 7 (2017) 22. Brinch, M., Stentoft, J.: Digital supply chains: Still more “wannabe” than practise. Dilf Orient. 54(2), 22–28 (2017) 23. Rouse, M., Cobb, M.: Transport Layer Security (TLS) (2016). http://searchsecurity.techtarget. com/definition.Transp ort-Layer-Security-TLS. Accessed 19 Nov 2016 24. Bhargava, B., Ranchal, R., Othmane, L.B.: Secure information sharing in digital supply chains. In: 2013 3rd IEEE international advance computing conference (IACC). IEEE, pp 1636–1640 (2013) 25. Cecere, L.: Embracing the digital supply chain. In: Supply Chain Shaman (2016). https:// www.supplychainshaman.com/demand/demanddriven/embracing-the-digital-supply-chain/. Accessed 2 Jan 2020 26. Riggins, F.J., Mukhopadhyay, T.: Interdependent benefits from interorganizational systems: opportunities for business partner reengineering. J. Manag. Inf. Syst. 11, 37–57 (1994) 27. 
Tupa, J., Simota, J., Steiner, F.: Aspects of risk management implementation for Industry 4.0. Procedia Manuf. 11, 1223–1230 (2017). https://doi.org/10.1016/j.promfg.2017.07.248 28. Gaudenzi, B., Siciliano, G.: Managing IT and cyber risks in supply chains. In: Khojasteh, Y. (ed.) Supply Chain Risk Management, pp. 85–96. Springer Singapore, Singapore (2018) 29. Faizal, K., Palaniappan, P.L.K.: Risk assessment and management in supply chain. Glob. J. Res. Eng. 14, 19–30 (2014) 30. Yang, Q., Wang, Y., Ren, Y.: Research on financial risk management model of internet supply chain based on data science. Cogn. Syst. Res. 56, 50–55 (2019). https://doi.org/10.1016/j.cog sys.2019.02.001 31. Colicchia, C., Creazza, A., Menachof, D.A.: Managing cyber and information risks in supply chains: insights from an exploratory analysis. Supply Chain. Manag. 24, 215–240 (2019). https://doi.org/10.1108/SCM-09-2017-0289 32. Schröder M, Indorf M, Kersten W (2014) Industry 4.0 and Its Impact on Supply Chain Risk Management, pp. 15–18 33. Al-Barqawi, H., Zayed, T.: Infrastructure management: integrated AHP/ANN model to evaluate municipal water mains’ performance. J. Infrastruct. Syst. 14, 305–318 (2008) 34. Barbarosoglu, G., Yazgac, T.: An application of the analytic hierarchy process to the supplier selection problem. Prod. Inven. Manag. J. 38, 14 (1997) 35. Mathiyazhagan, K., Diabat, A., Al-Refaie, A., Xu, L.: Application of analytical hierarchy process to evaluate pressures to implement green supply chain management. J. Clean. Prod. 107, 229–236 (2015) 36. Vaidya, O.S., Kumar, S.: Analytic hierarchy process: an overview of applications. Eur. J. Oper. Res. 169, 1–29 (2006) 37. Triantaphyllou, E., Mann, S.H.: Using the analytic hierarchy process for decision making in engineering applications: some challenges, p. 11 (1995) 38. Safian, E.E.M., Nawawi, A.H.: The evolution of Analytical Hierarchy Process (AHP) as a decision making tool in property sectors. 4 (2011)

102

K. Zekhnini et al.

39. Wind, Y., Saaty, T.L.: Marketing applications of the analytic hierarchy process. Manage. Sci. 26, 641–658 (1980). https://doi.org/10.1287/mnsc.26.7.641 40. Saaty, T.L.: Fundamentals of Decision Making and PRIORITY THeory with the Analytic Hierarchy Process. RWS Publications (2000)

SmartDFRelevance: A Holonic Agent Based System for Engineering Industrial Projects in Concurrent Engineering Context Abla Chaouni Benabdellah, Imane Bouhaddou, and Asmaa Benghabrit

Abstract In the era of Industry 4.0, the design of complex engineered systems is a challenging mission; most researchers would argue that it is linked to intentional action and cannot simply emerge out of complexity. In fact, to achieve the common ‘cost/quality/delay’ targets, engineers must work together to create the best overall solution from their individual contributions, and to exploit and regenerate the required intellectual capital (knowledge). To be effective, a knowledge-based system is needed that supports all the processes users require to create, share, reuse and evaluate knowledge. Drawing on complex adaptive systems theory, and in particular on Distributed Artificial Intelligence (the holonic agent paradigm), such a knowledge-based system can be designed to address this research issue. The purpose of this paper is therefore twofold: first, to provide a comparative analysis of the methods suited to the global challenges of designing a product in a concurrent engineering context; second, to propose a holonic multi-agent system, called SmartDFRelevance, based on the Agent-oriented Software Process for Complex Engineering Systems (ASPECS) methodology.

Keywords Complex adaptive system theory · Knowledge based system · Distributed artificial intelligence · Concurrent engineering · Holonic multi-agents system

A. C. Benabdellah (B) · I. Bouhaddou LM2I Laboratory ENSAM, Moulay Ismaïl University, Meknès, Morocco e-mail: [email protected] I. Bouhaddou e-mail: [email protected] A. Benghabrit LMAID Laboratory, ENSMR, Mohammed V University, Rabat, Morocco e-mail: [email protected] © Springer Nature Switzerland AG 2021 T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_8


1 Introduction

Industries are engaging in constant performance enhancement activities in the current economic and industrial environment to remain competitive in their core business. In general, performance enhancement levers can concern (1) maximizing product efficiency and organizational design processes, (2) improving internal process efficiency, or (3) improving human performance by valorizing knowledge and competences. The first axis is well recognized and investigated by enterprises; it embeds methods and tools such as functional analysis, dependability, statistical process control, or modelling and simulation. The second axis integrates, for instance, all the methodologies and tools of project management, agile methodologies, systems engineering, or quality management systems. The third one remains complex due to the intrinsic nature of knowledge and its volatile dimension.

In fact, the design of a new product in terms of cost, flexibility, assembly, quality, safety, serviceability and environmental issues has specific characteristics that directly affect how knowledge is handled. In addition, giving an outline of a project or recalling the collaborative work of past projects is still a challenging mission for practitioners. As a result, designers must take into account the experience of older systems, work with draft documents, rely on professional understanding and coordination, and validate the veracity of information without knowing it beforehand. Thus, in order to design a product in a concurrent engineering context, experts from different disciplines must work together to create the best overall structure from their individual components. This scenario produces a heterogeneous and hierarchical ecosystem similar to a multi-agent system (MAS), where agents (actors, experts, designers, etc.) are organized to solve individual parts of a complex issue in a coordinated way.
Notwithstanding, the development of a knowledge management tool, piloted by agents and based on role monitoring and an organizational perspective, is of crucial importance [1]. Indeed, the interest of using a MAS is first to benefit from agents’ capacities of autonomy, reactivity, proactivity, and social skills. Second, a MAS works in a decentralized manner and is capable of using distributed and incomplete information and knowledge sources [2]. In addition, the advantages of an organizational approach derive from its potential to illustrate the processes of knowledge sharing and development, rather than relying solely on knowledge formalization and modeling [3]. It also offers many advantages such as heterogeneity of languages, modularity, multiple possible architectures, and system security. However, for the successful application and deployment of MAS, methodologies are essential. Several methods have been proposed, and most of them treat the building process as that of a traditional software system. In particular, they all adopt the idea that a MAS can be considered as an organization of individuals in which each individual plays a role. Yet current methodologies are not knowledge-driven: they do not take into account all the processes of user-made knowledge creation, sharing, and evaluation [4]. For this purpose, we propose in this paper a new holonic knowledge-based system, called SmartDFRelevance, following the Agent-oriented Software


Process for Complex Engineering Systems (ASPECS) methodology, dedicated to the analysis, design, and deployment of complex engineered systems. More precisely, the proposed holonic multi-agent knowledge-based system aims at creating project memories for indexing and organizing the knowledge to be capitalized during industrial engineering projects in a concurrent engineering context (MemoDFRelevance). It also provides a domain ontology (OntoDFRelevance) to capture the vocabulary and semantics of the knowledge used during the engineering project while considering different virtues such as assembly, manufacture, safety, quality, supply chain, and environment (DesignForRelevance) [5].

This paper is structured as follows: Sect. 2 presents the basic terminologies. Section 3 presents an extensive review of the existing knowledge management systems found in the literature. Section 4 presents a comparative analysis of the various organizational models for the deployment of multi-agent systems; the interest and the particularity of the ASPECS methodology in bringing together the advantages of organizational approaches and of the holonic vision for modeling complex engineered systems are also presented in this section. Section 5 describes the overall architecture of the proposed SmartDFRelevance (Smart Design for Relevance) knowledge management system; this new knowledge-based system covers the whole knowledge lifecycle, from identification, creation and acquisition, through validation and capitalization, up to sharing and reuse. The conclusions and future works are drawn in the final section.

2 Basic Terminologies

2.1 Multi-Agents Systems (MAS)

Due to its ability to solve complex computing issues [6, 7], Distributed Artificial Intelligence (DAI) has gained considerable attention from academia in recent years. Its algorithms are generally divided into three groups, depending on the specific approaches used to solve the tasks: parallel AI, Distributed Problem Solving (DPS), and Multi-Agent Systems (MAS). Parallel AI includes the development of concurrent algorithms, languages, and architectures to increase the efficiency of traditional AI algorithms by parallelizing otherwise sequential tasks [7]. DPS involves dividing a task into subtasks, each assigned to a node within a set of cooperative nodes known as computing entities, while a MAS consists of independent entities known as agents. However, there are multitudes of definitions for agents, deriving from different application-specific agent features. Etymologically, the word agent comes from the Latin “agere”, meaning “to act”: an agent is an entity that acts. Drogoul defined an “agent as an entity that can exercise control over local processes of perception, communication, learning, reasoning, decision-making or execution” [8]. The authors in [6] defined “an agent as a computer system, situated in an environment, and which


Fig. 1 Multi-Agent Simulation principle according to Drogoul [8]

acts autonomously and flexibly to achieve the objectives for which it was designed”. Ferber described “an agent as an autonomous physical or abstract entity which is capable of acting on itself and on its environment, which can communicate with other agents in a common environment, and whose behavior is the result of its observations, its knowledge and its interactions with other agents, tending to satisfy its objectives” [9]. The authors in [10] defined an “agent as an entity which is placed in an environment and senses different parameters that are used to make a decision based on the goal of the entity. The entity performs the necessary action on the environment based on this decision”. Therefore, a literature analysis shows that there is no exact definition of an agent; generally, however, there is a consensus on common properties (autonomy, rationality, responsiveness, adaptability, flexibility, proactivity, sociability, interactivity, etc.) and keywords (entity, environment, parameters, action, etc.) that characterize the concept [11]. However, while an agent working on its own is capable of taking action (based on autonomy), the agents’ real benefit can only be gained by collaborating with other agents. In this sense, multiple agents working together to simulate and understand complex systems [12] are referred to as a Multi-Agents System (MAS). According to Wooldridge [6], a “MAS is a set of heterogeneous agents with own and active motivations to achieve their goals. Each agent can share and organize their roles and activities with other agents. Besides, agents can deal with complex situations within these systems by sharing their knowledge of the environment”. More clearly, the phenomena are decomposed into a series of acting or communicating elements. Each of these elements is modeled by an agent, and the overall model is the result of agent interactions [9, 13]. The multi-agent simulation principle is illustrated in Fig. 1.
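The perception–decision–action cycle common to these definitions can be sketched in a few lines of code. The following Python sketch is purely illustrative (the thermostat scenario, its names and its goal are our own assumptions, not taken from the cited works): an agent senses a parameter of its environment, decides with respect to its goal, and acts back on the environment, echoing the definition given in [10].

```python
class Agent:
    """Minimal perceive-decide-act loop: the agent senses a parameter of its
    environment, decides with respect to its goal, and acts on the environment."""

    def __init__(self, name, goal_temp):
        self.name = name
        self.goal = goal_temp  # hypothetical target temperature

    def perceive(self, env):
        # Sense the relevant environment parameter.
        return env["temperature"]

    def decide(self, temp):
        # Choose an action with respect to the agent's goal.
        if temp < self.goal:
            return "heat"
        if temp > self.goal:
            return "cool"
        return "idle"

    def act(self, env, action):
        # Perform the chosen action on the environment.
        delta = {"heat": 1, "cool": -1, "idle": 0}[action]
        env["temperature"] += delta
        return action

env = {"temperature": 18}
agent = Agent("thermostat", goal_temp=21)
while agent.act(env, agent.decide(agent.perceive(env))) != "idle":
    pass
print(env["temperature"])  # the loop settles at the agent's goal, 21
```

The loop makes explicit that behaviour emerges from the repeated interaction between the agent and its environment, rather than from a one-shot computation.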

2.2 Holonic Multi-Agents Systems (HMAS)

Despite the benefits of MAS, this approach presents different challenges, and some open issues need to be answered. One such concern is how an organization of agents can itself be viewed as an agent [1]. In other words, how to interpret the fact that a group of agents


displays a specific behavior in interaction, behaving as if it were a single entity at a certain level of abstraction [14]. Many researchers have studied this issue and have suggested several models influenced by their experience in various fields [15]. Thus, owing to the usefulness of holons, whose goals can be recursively decomposed into subtasks assigned to individual holons [16], the holonic approach is chosen in this paper. Two questions arise: what is a holonic agent, and what are the main differences between holons and agents? In 1967, the Hungarian writer and philosopher Arthur Koestler [18], who had studied social organisms and organizations, pointed out that in real life no entity is an absolute: it can always be both a whole and a part. He then introduced the concept of the holon, from the Greek “holos”, meaning whole, with the suffix “-on” indicating a component, as in “proton”. A holon is thus defined by Koestler as “part of a whole or a larger organization that fulfills three conditions: stability, autonomy, and cooperation” [18]. Besides, a holon can sometimes be seen as an atomic entity, sometimes as a collection of interacting holons, depending on the level of observation. Likewise, a group of separate holons can be viewed as a collection of interacting entities, or as parts of a holon of a higher level [16]. At this higher level, the composed holon is called a super-holon, and the holons that make up a super-holon are called sub-holons or member holons (Fig. 2). Thus, holons are by definition composed of other holons, while agents are not necessarily composed of other agents. This does not mean that agents cannot be composed of agents, but the general assumption is that “agents are atomic entities.” Nonetheless, as stated by Hilaire et al. [14], almost none of the proposed agent architectures have discussed the general issue of how to handle sets of agents as higher-order entities, such as how to treat organizations as agents.
In this respect, Holonic Multi-Agent Systems (HMAS) attempt to address this problem.
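The recursive whole/part structure of a holon can be illustrated with a short sketch (a hypothetical example of ours, not code from the cited works): an atomic holon handles a task itself, while a super-holon delegates sub-tasks to its member holons and merges the results, mirroring the recursive goal decomposition of [16].

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A holon is both a whole and a part: atomic at one level of
    observation, a super-holon composed of member holons at another."""
    name: str
    members: list = field(default_factory=list)

    def is_atomic(self):
        # A holon with no members behaves as an atomic entity.
        return not self.members

    def solve(self, task):
        # Atomic holon: handle the task itself.
        # Super-holon: recursively delegate sub-tasks and merge results.
        if self.is_atomic():
            return f"{self.name}:{task}"
        parts = [m.solve(f"{task}/{i}") for i, m in enumerate(self.members)]
        return f"{self.name}({', '.join(parts)})"

# A design super-holon with one atomic member and one nested super-holon.
design = Holon("DesignTeam", [Holon("CAD"), Holon("Safety", [Holon("FMEA")])])
print(design.solve("review"))  # DesignTeam(CAD:review/0, Safety(FMEA:review/1/0))
```

The same object can thus be observed as a single entity (`design`) or as a hierarchy of interacting sub-holons, which is exactly the dual view Koestler describes.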

Fig. 2 Vertical and horizontal decomposition of a Holon [17]


3 The Agent Paradigm in Knowledge Engineering

In its early days, Artificial Intelligence (AI) research centered on establishing formalisms, inference mechanisms, and strategies for operationalizing knowledge-based systems (KBS). Generally, development efforts were limited to small KBSs to assess the feasibility of different approaches. Despite very promising results, the transition of this technology to industrial use for building large KBSs failed in many cases. Therefore, alternative methodological approaches were needed to deal with these unsatisfactory situations. For this purpose, the field of Knowledge Engineering (KE) was born, and a variety of concepts, approaches, and techniques that have greatly improved the processes of creating, using, and deploying knowledge were developed [19]. As an engineering discipline that requires a high level of human expertise and involves integrating knowledge into computer systems, KE aims to gather, study, organize and represent knowledge. In other words, it is defined as a modeling process that represents knowledge in order to store it, communicate it, or manipulate it externally [20]. Nonetheless, since knowledge cannot be reduced to a static resource-based perspective due to its inherently dynamic nature, KE solutions need to focus on knowledge models with richer semantics and improved traceability [21]. In line with this statement, and compared to other KE strategies, a multi-agent organizational structure allows the processes of information exchange and development to be made explicit, instead of focusing on information formalization and simulation [3]. In fact, according to Kusch [22], the services that multi-agent systems can offer are:

• Finding, collecting, evaluating and classifying knowledge from different data sources.
• Addressing the system’s integration or omission of knowledge.
• Explaining the quality and reliability of the standardized knowledge of the system.
• Learning gradually throughout the knowledge management process.

Multi-agent systems are therefore well suited to managing heterogeneous and distributed knowledge, thanks to their ability to adapt to dynamic environments. In addition, coordination between agents gives them the collective ability to manage large amounts of data. To this end, several published works have implemented such services upon agent cooperation to address complicated problems related to knowledge types and assistant management agents, depending on the actor’s needs. In the domain of complex design [23], Monceyron [24] created a blackboard architecture, EXPORT, applied to harbor design. More recently, after the work of Scalabrin et al. [25] on cognitive agent systems, Shen and Barthès [26] developed a generalized multi-agent framework, DIDE, for mechanical engineering design. Barthès and de Azevedo [27] suggested a framework for the study and implementation of multi-agent schemes. With the same perspective, Barthès and Tacla [28] created a multi-agent system to support the “ba” externalization approach used by an R&D department. The “ba” theory provides a context for how knowledge is produced and shared [29].


Several other published works focus on the efficiency and information-sharing effects of organizational structures within a group [30, 31]. In [32], Dignum and Meyer illustrate the social aspect of the organization by describing agent organizations as collections of individuals and their interactions, governed by social order processes and created for a common goal by autonomous actors. Moreover, Boissier and Demazeau [33] argue that an agent organization can be interpreted as a collection of restrictions followed by a collective of agents to facilitate the accomplishment of their objectives. From these two claims, it is apparent that agents must take into account the group’s priorities within an organization so that they can accomplish their own goals. In addition, Guizzardi [34] emphasizes the effectiveness of knowledge sharing among agent organizations, suggesting that the idea of information be included within organizations: an agent association is a knowledge-sharing environment in which agents work together and share expertise to carry out their work. Carley [35] states that the study and review of human organizations helps to develop computational models that can improve three organizational perspectives: structure, information sharing, and social cooperation. Together with teamwork, stigmergy, adaptivity, and evolution, the agent model allows the creation of software engineering applications that improve human organizations’ performance. Further research has discussed the potential of agents to handle distributed data and solve collaboration problems in a system of action. Van Elst [32] proposes taking into account a domain’s relational aspect along with the needs and goals of the participants in the same domain; this approach is referred to as Agent-Mediated Knowledge Management (AMKM). In dynamic environments, such organizations make information acquisition simpler.
With the same perspective, frameworks such as FRODO [3], CoMMA [36], Edamok [37] and KRAFT [38] were developed as complementary methods for information management (workflows, ontologies, information analysis structures, and so on). All these studies have led to Multi-Agent Information Systems (MAIS) that handle and use centralized information to view, update, and compile heterogeneous information [36, 39]. Nevertheless, even if agents’ organizations allow the social and collaborative aspects of project teams to be considered, these agents also need the ability to handle the information itself. To do so, several authors combine ontologies with the benefits of a multi-agent system to represent the domain of interest. Buccafurri et al. [40] and Wooldridge [6] use ontology to provide agents with an abstract description of the desires and behaviors of their related individual users. Other researchers use ontology to assist agents in selecting the most appropriate agents to contact for knowledge-sharing purposes [41, 42]; such structures are typically designed to prevent agents from accessing the ontologies of other agents, and thus maintain an individualistic conception of agent communities. Other researchers design agents that build their ontologies automatically by tracking the behavior of users [43]; in this approach, the agents can automatically extract, from the user interests defined in the ontology, logical rules that characterize the actions of the user. Still other researchers have developed multi-agent structures following a shared view of agent societies, where the ontology of each agent is assumed to be available, at least partially, to the other agents [44].


Based on this exhaustive overview, we conclude that there is a need to leverage and combine the benefits of the organizational perspective, ontologies, and the multi-agent approach. For this purpose, the proposed MAS, called SmartDFRelevance, includes a common ontology used by the agents to perceive the whole knowledge of the user’s domain. In addition, we adopt a social and collaborative approach based on role modeling and coordination throughout the project. However, in order to propose the new holonic knowledge-based system, we first need to choose the appropriate design approach for the deployment of a HMAS. To do so, the next section compares and analyzes the main existing organizational MAS design approaches in order to identify their key elements and their possible shortcomings when implemented as a KE approach.
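To make the idea of a common ontology concrete, the sketch below (entirely hypothetical; the concepts and the flat taxonomy are our own illustration, and OntoDFRelevance itself is far richer) represents the shared vocabulary as a small concept hierarchy that every agent checks before accepting a knowledge item, so that exchanged knowledge always carries agreed-upon semantics.

```python
# A shared ontology reduced to a concept -> parent-concept taxonomy.
ONTOLOGY = {
    "Knowledge": None,
    "DesignRule": "Knowledge",
    "AssemblyConstraint": "DesignRule",
}

def is_a(concept, ancestor):
    """Walk up the taxonomy to test whether `concept` is subsumed by `ancestor`."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ONTOLOGY.get(concept)
    return False

class KnowledgeAgent:
    def __init__(self, name):
        self.name = name
        self.store = []  # locally capitalized knowledge items

    def share(self, other, concept, payload):
        # Agents only accept items expressed in the common vocabulary.
        if concept not in ONTOLOGY:
            raise ValueError(f"{concept!r} is not in the shared ontology")
        other.store.append((concept, payload))

designer = KnowledgeAgent("designer")
expert = KnowledgeAgent("safety_expert")
designer.share(expert, "AssemblyConstraint", "min clearance 2 mm")
print(is_a("AssemblyConstraint", "Knowledge"))  # True
```

The point of the sketch is the guard in `share`: because both agents validate concepts against the same taxonomy, a received item can always be interpreted (e.g. generalized to its parent concepts) without a private, per-agent vocabulary.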

4 Design of a Knowledge-Based System: ASPECS Methodology

Many agent-oriented methodologies with a specific organizational vision have been developed in recent years to support the modeling or design of complex engineered systems. Several authors have evaluated the characteristics of these methodologies, taking into consideration various technical criteria for evaluating and comparing them [1, 15, 17]. Some criteria relate to the methodology process, such as the development life-cycle (waterfall or iterative), life-cycle coverage (analysis, design, implementation, etc.) or the application domain (independent or dependent). Others relate to the methodology itself, such as the system structure (one or several abstraction levels) and whether it is agent-centered or organization-centered. Still others relate to the supportive features of the methodology, such as dynamic structure, open systems, standard integration, and ontology consideration. In general, however, methodologies can be categorized into two main categories: rules-based organizational methodologies and non-rules-based organizational methodologies. Table 1 gives an overview and a classification of the existing methodologies, together with a description of each category.

Notwithstanding, based on Table 1 and the different comparative studies analyzed in the literature [1, 5, 15, 51], we can state that all of these methodologies were suggested to address issues resulting from the semantic gap between the operating environment and the software system. Each of these organizational methods enables a multi-agent structure to be defined by the positions, relationships, and organizations of the agents. They consider the multi-agent system as a virtual structure independent of human organizations [4]. However, knowledge is a semantic description of a collection of human-generated facts in a given context.
Therefore, although the information sent to and between agents is well defined, the whole knowledge lifecycle is not taken into consideration most of the time. In addition, there is no methodology capable of designing a system with different granularities, which is the particularity


Table 1 Classification of the existing methodologies

Category: Rules-based organizational methodologies
Description: The design of the system is based on the social structures of agents and on eliciting behavioral rules between agents [35]. Organizations of agents are known to arise from the definition of these requirements.
Methodologies: Engineering for Software Agents (INGENIAS) [45]; Model of Organization for Multi-Agent Systems (MOISE+) [46]; ANEMONA [47]

Category: Non-rules-based organizational methodologies
Description: For each group of agents, this approach first defines an organizational structure and then suggests common goals. Such methods go further into classifying the relationships of agents after their initial stage.
Methodologies: OPERA [48]; Agent, Group and Role (AGR) [9]; Multiagent Systems Engineering (MaSE) [49]; TROPOS [50]

of the HMAS. Thus, to deal with such issues, Cossentino et al. proposed the Agent-oriented Software Process for Complex Engineering Systems (ASPECS) methodology, with the intrinsic ability to combine holonic and classical agent-oriented approaches with knowledge-based approaches (ontologies) [17]. More specifically, ASPECS is a systematic framework that specifies the key principles for the study, design and development of MAS and HMAS [52]. The ASPECS lifecycle includes three phases: (1) system requirements analysis, (2) agent society design, and (3) implementation and deployment [17]. In the next section, the architecture of the proposed knowledge-based system following the ASPECS methodology is presented.

5 Architecture of the SmartDFRelevance Knowledge-Based System

In this section, we detail the design process of the proposed knowledge-based system SmartDFRelevance. The description begins with a subsection detailing the activities of the System Requirements Analysis phase, and then continues with another subsection explaining the final design of the proposed SmartDFRelevance holonic multi-agent system (Agent Society Design phase). Note that the purpose of the System Requirements Analysis is to provide a complete description of the problem based on the principles outlined in the problem domain meta-model, while the Agent Society Design phase aims to design a society of agents whose global behavior can provide an effective solution to the issue addressed in this paper.


5.1 System Requirements Analysis Phase of SmartDFRelevance

The System Requirements Analysis phase begins with a Domain Requirements Description (DRD) activity, where requirements are described using traditional methods such as use cases. In our context, SmartDFRelevance is a knowledge management system whose main purpose is to handle multi-source information during the product development process while considering the different virtues represented by the Design for X paradigm, such as Design for Assembly and Manufacture, Design for Service, Design for Safety, etc. [51]. Five primary processes are illustrated in this sense (Fig. 3):

• Knowledge identification and extraction
• Knowledge validation
• Knowledge capitalization
• Knowledge diffusion
• Knowledge reuse.
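These five processes form a pipeline over the knowledge lifecycle. As a rough illustration only (the stage functions and the toy validation rule below are our own assumptions, not the actual SmartDFRelevance behavior), they can be chained as follows:

```python
def identify(raw):
    # Knowledge identification and extraction: keep non-empty, cleaned items.
    return [item.strip() for item in raw if item.strip()]

def validate(items):
    # Knowledge validation: stand-in for expert review (toy length check).
    return [i for i in items if len(i) > 3]

def capitalize(items, memory):
    # Knowledge capitalization: store validated items into a project memory.
    memory.extend(items)
    return memory

def diffuse(memory):
    # Knowledge diffusion: publish an index so actors can find entries.
    return {f"K{n}": item for n, item in enumerate(memory)}

def reuse(index, key):
    # Knowledge reuse: retrieve a capitalized item by its published key.
    return index[key]

memory = []
index = diffuse(capitalize(validate(identify(["  weld spec  ", "", "ok"])), memory))
print(reuse(index, "K0"))  # -> weld spec
```

In the real system, each stage would be carried by dedicated roles and agents (detector, expert, project memory), but the ordering of the five processes is the same.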

Thus, in order to analyze these goals, we chose the i* framework [53] to represent the goal-oriented analysis of our knowledge management system. In this sense, and according to Fig. 3, our knowledge management system (SmartDFRelevance) is realized by the contribution of five soft goals. Besides, these objectives are linked

Fig. 3 Domain requirements description of the SmartDFRelevance knowledge management system


by an “And” decomposition, because it is through their combination that the knowledge management system can meet the overall objective. To provide a conceptual overview of the SmartDFRelevance knowledge management system, we next describe the problem ontology (Problem Ontology Description (POD)), which is the second main activity of the first ASPECS phase, requirements analysis. Indeed, this activity deepens the understanding of the problem with a description of the concepts that make up the problem domain. To do this, the domain ontology is represented using a class diagram in which concepts, attributes, and actions are identified by specific stereotypes [54]. The UML diagram in Fig. 4 shows our proposed ontology for the area of interest. More precisely, the system systematically describes the activities to follow for the development of an industrial product, from its beginning of life (feasibility) to its end of life (recycling or disposal). The product development process is made of different phases. A phase consists of a set of different activities. Each activity can itself be composed of sub-activities and carried out by professional actors. At the end of an activity, a deliverable is created. As for the phase, it delivers a state of the product, such as a concept or a prototype. Each phase is characterized by its requirements, control parameters, resources, current state, and progress. The phases and activities are then modeled by organizations, which are composed of different roles. Each organization is then broken down into several sub-organizations, which follow a process carried out by one or several activities. Each role represents the behavior of

Fig. 4 Problem Ontology Description (POD) for SmartDFRelevance system
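The main concepts of this domain ontology can be sketched as plain data structures. The following is an illustrative Python sketch only; the class and attribute names are ours, not those of the actual POD diagram:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Knowledge:
    content: str

@dataclass
class Skill:
    name: str
    # a skill requires a set of knowledge to characterize it
    knowledge: List[Knowledge] = field(default_factory=list)

@dataclass
class Role:
    name: str
    skills: List[Skill] = field(default_factory=list)

@dataclass
class Activity:
    name: str
    sub_activities: List["Activity"] = field(default_factory=list)
    deliverable: str = ""  # a deliverable is created at the end of an activity

@dataclass
class Phase:
    name: str
    activities: List[Activity] = field(default_factory=list)
    requirements: List[str] = field(default_factory=list)
    product_state: str = ""  # e.g. "concept" or "prototype"

# Example: a feasibility phase delivering a product concept
feasibility = Phase(
    name="Feasibility",
    activities=[Activity("Needs analysis", deliverable="Requirements report")],
    product_state="concept",
)
```

Such a structure makes explicit the containment relations stated above: phases contain activities, activities yield deliverables, and roles carry skills backed by knowledge.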


A. C. Benabdellah et al.

actors and makes it possible to determine their behavior during an activity through their interactions, skills, and knowledge. A role thus has one or more skills, and these require a set of knowledge to characterize them. In addition, an actor can play several roles depending on his competences and capacities. The state of the product is therefore contained in the knowledge associated with the results shared during the interactions between roles. This knowledge is structured according to the MemoDFRelevance project memory, and stored and conceptualized according to the OntoDFRelevance ontology.

The identification of organizations corresponds to the third activity of the System Requirements analysis phase in ASPECS, “Organization Identification (OID)”. Each goal/plan is assigned to an organization that is responsible for accomplishing it. Each organization is then defined by a set of roles, their interactions, and a common context. According to Fig. 5, the overall organization, called SmartDFRelevance, is broken down into five sub-organizations. In this figure, the “Boundary Roles” represent either humans interacting with the system, such as professional actors or experts, or the OntoDFRelevance and MemoDFRelevance components used by the HMAS for knowledge management. The professional actor is monitored by the detector role, which tracks the user’s activities by detecting the name of the project he is working on, his role in this project, the current activity, and possibly the term typed in the search field. The skills and knowledge used during the product development process are captured in the MemoDFRelevance project memory, which constitutes a “Boundary role”. The expert “Boundary role” represents the human experts interacting with the system, who are responsible for validating the knowledge capitalized in the MemoDFRelevance project memory and conceptualized from the OntoDFRelevance ontology. All

Fig. 5 Organization Identification (OID) of the SmartDFRelevance system


these components interact with the five organizations “Knowledge identification”, “Knowledge validation”, “Knowledge capitalization”, “Knowledge dissemination” and “Knowledge reuse”.

The first organization, Identification, aims to meet the goal of defining the knowledge that will be extracted for reuse. It includes four sub-organizations: identification, extraction and annotation, classification, and sharing. The second organization, Validation, aims to validate the knowledge resulting from the Identification organization or added by the project agents. It includes three sub-organizations: sharing, validation, and evaluation. The third organization, Capitalization, aims to annotate, exploit, and formalize the knowledge validated by the Validation organization. It includes three sub-organizations: storage, operation, and annotation. The fourth organization, Diffusion, aims to share and disseminate knowledge among the system’s agents. It also includes three sub-organizations: organization, updating, and dissemination. Finally, the fifth organization, Reutilization, aims to reuse the knowledge stored in the project memory or the Metier repository.

After this identification and organizational decomposition of the SmartDFRelevance knowledge management system, we decompose each global behavior into several sub-roles to better understand its operation. The identification of interactions and roles corresponds to the fourth ASPECS activity, “Interactions and Role Identification (IRI)”. The finest behavior is represented by a Role. Role interactions must be defined within the organization that provides their context. The goal of each Role must then contribute to the accomplishment of (a part of) the goal of the organization to which it belongs. To support this decomposition, we use the strategic reasoning diagrams of the i* framework to decompose each non-elementary goal into sub-goals or plans [55].
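The organizational decomposition above can be summarized in a simple lookup table. This is an illustrative sketch only; the Reutilization organization is left empty here because the text decomposes it into plans rather than sub-organizations:

```python
# Hypothetical summary of the OID decomposition (names taken from the text).
ORGANIZATIONS = {
    "Identification": ["identification", "extraction and annotation",
                       "classification", "sharing"],
    "Validation": ["sharing", "validation", "evaluation"],
    "Capitalization": ["storage", "operation", "annotation"],
    "Diffusion": ["organization", "updating", "dissemination"],
    "Reutilization": [],  # decomposed into plans in its IRI diagram, not sub-organizations
}

def sub_organizations(org: str) -> list:
    """Return the sub-organizations that decompose a given organization."""
    return ORGANIZATIONS[org]
```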
For example, Fig. 6 shows the strategic reasoning diagram of the Reutilization organization. For this soft goal to be achieved, at least one of four plans must be executed: carrying out the PUSH/Pull mechanism, the automatic assistance, the alert system, or the transfer. These four plans differ in their usefulness for agents according to the different stages of their integration process. Note that, in our context, the PUSH activity refers to the system tracking user activities, detecting the user’s role, the name of the product, and the current activity, and automatically proposing the stored knowledge corresponding to the user’s role [56]. The Pull activity, on the other hand, offers a personalized search for knowledge: the actor types a term in the search field and obtains the knowledge related to this term, filtered according to his role and activity in the OntoDFRelevance model [56]. For a beginner, for example, it is useful to automatically offer the knowledge that lets him learn the basics of his work and to extract knowledge from the project memory (MemoDFRelevance). For a more experienced actor, an alert system that helps avoid errors allows him to carry out projects with less effort and in a shorter time. Thus, four main roles are involved in achieving the objective of the Reutilization organization (represented by circles in Fig. 6):


Fig. 6 Interactions and Role Identification (IRI) for the Reutilization organization of the SmartDFRelevance system

• Context Agent: detects the organizational context linked to the actor, namely his role, his activity, the name of the project and, possibly, the term sought.
• Reasoning Project and Metier Agents: responsible for extracting knowledge from the project memory and the Metier repository.
• Reuse Agents: responsible for formulating requests and proposing results and suggestions.
• Assistant Agent: in charge of identifying and transferring reused knowledge to agents and actors.

In summary, at the end of the system requirements phase, the scope of the knowledge-based system to be developed, as well as the organizational hierarchy that composes it, are identified and partially specified. At each level of this hierarchy, the organizations and the roles and interactions that compose them are described, and a first specification of the behavior of the roles has been made. The goal now is to design a society of agents whose overall behavior is capable of providing an effective solution to the complex problem described in the system requirements analysis phase.
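The PUSH and Pull reuse mechanisms described above can be sketched as two filters over an annotated knowledge store. This is an illustration only; the store, its fields, and the sample entries are hypothetical:

```python
# Each knowledge item is assumed to be annotated with the role and activity
# it concerns, as MemoDFRelevance items are annotated via OntoDFRelevance.
MEMORY = [
    {"term": "welding torque", "role": "Process Engineer",
     "activity": "Process design"},
    {"term": "design rule R12", "role": "Designer Engineer",
     "activity": "Detailed design"},
]

def push(role: str, activity: str) -> list:
    """PUSH: the system tracks the user's context and proposes matching knowledge."""
    return [k for k in MEMORY if k["role"] == role and k["activity"] == activity]

def pull(term: str, role: str) -> list:
    """Pull: the actor types a term; results are filtered by his role."""
    return [k for k in MEMORY if term in k["term"] and k["role"] == role]
```

The design point is that both mechanisms apply the same role/activity filtering, but PUSH is triggered by context detection while Pull is triggered by an explicit query.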

5.2 Agent Society Design Phase of SmartDFRelevance

At this stage, the problem has been formulated in terms of organizations, roles, capabilities and interactions. The goal of the second phase of the ASPECS methodology is to develop a system of agents whose global activity is capable of providing an


appropriate solution to the problem identified in the previous phase, while satisfying the associated specifications. In other words, this agent society design phase results in a description of the agent community involved in the solution, in terms of social relations and interdependencies between individuals (holons and/or agents).

The first activity of this phase is the Solution Ontology Description (SOD), in which the ontology of the problem described during the first phase is refined by adding new concepts relevant to the agent-based solution and refining existing ones. The ontology’s definitions, predicates and actions are now also meant to define the knowledge shared between roles in communication: they specify the predicates used to exchange information as well as the actions that holons/agents will take to influence the state of the environment in which they exist (as a logical reflection of the ontology). Accordingly, for the proposed SmartDFRelevance knowledge management system, the ontology of Fig. 4 has been enriched to fully support the knowledge management process by adding concepts such as actor, holon, sub-holon, super-holon, and group. At the end of this activity, the ASPECS development process splits into two sub-branches: the first is dedicated to the system’s organizational design, the second to the identification and design of the system’s component agents.

The second step is to identify the agents (Agent Identification (AI)) that will make up the lowest level of the system hierarchy, together with their responsibilities in terms of the roles they implement. Their goals must therefore correspond to the goals of these roles. We take the roles described in the previous models and define the types of agents (cognitive or reactive) that play these roles.
Moreover, since the knowledge management process distinguishes two types of knowledge (project knowledge and Metier knowledge), the HMAS must take into account actors who deal with both. To do this, the agents of our SmartDFRelevance system form three communities (Fig. 7) [44]:

• Metier Agents (AM) act on behalf of the Metier actors. By representing the role of each actor and his place in the product development process, they reproduce the company. A Metier agent therefore manages the information conveyed by the position and expertise of his Metier actor. This category includes cognitive agents (Process Engineer Agents, Designer Engineer Agents, Project Manager Agent, etc.) as well as the context agent and the assistant agent.
• The Project Knowledge Management Agents (AMCP) aim to store knowledge in the MemoDFRelevance project memory. To ensure its reliability, they must also validate this knowledge. Their third objective is to manipulate, disseminate and reuse this knowledge in order to propose solutions that will be communicated to the Metier agents. This category is composed of the project Identification agent, project Capitalization agent, project Validation agent, project Sharing agent, project Reutilization agent and project Reasoning agent.
• The Metier Knowledge Management Agents (AMCM) store the knowledge from all projects in a knowledge base called the Metier repository. Unlike the AMCPs,


Fig. 7 Agent identification for the SmartDFRelevance system

they also need to ensure the durability of this knowledge by encouraging the Metier actors to validate it. This category is composed of the Metier Identification agent, Metier Capitalization agent, Metier Validation agent, Metier Sharing agent, Metier Reutilization agent and Metier Reasoning agent.

After describing the distribution of roles among the agents, we conclude the analysis by describing the interactions between the agents and their actions. The function of the Metier Agents is to detect the role, activity and project linked to the user, as well as the term sought, via the context agent role. The AM sends a message containing this information to the identification agent so that it can deduce the associated knowledge. After identifying candidate knowledge among the six types presented in OntoDFRelevance, the identification agent transmits a message to the capitalization agent, which annotates the knowledge and then communicates it to the AMCP agents. The AMCPs build the project memory through the capitalization agent role, after obtaining the validation agreement from the validation agent. During the validation activity, the Metier agents offer assistance by providing the validation criteria as well as the knowledge already capitalized in MemoDFRelevance. In this context, the agents responsible for project reasoning consult the OntoDFRelevance ontology to formulate the requests that they communicate to the sharing agents, which answer these requests to disseminate the knowledge capitalized during the project.

Concerning the communication between the AMCPs and the AMCMs, the AMCPs archive knowledge after its validation with the Metier agents. When the project is finished, they transmit this knowledge to the AMCMs, which build the Metier repository containing the knowledge capitalized during all the projects. The Metier repository thus evolves as knowledge is capitalized across projects.
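The message flow just described (context detection, identification, annotation, then validated capitalization) can be sketched as a chain of minimal agent objects. This is a hypothetical illustration of the flow, not the actual Janus implementation:

```python
class ValidationAgent:
    def approve(self, knowledge):
        # In the real system an expert validates against criteria;
        # here every item is accepted for illustration.
        return True

class CapitalizationAgent:
    def __init__(self, validator):
        self.validator = validator
        self.project_memory = []  # stands in for MemoDFRelevance

    def capitalize(self, knowledge):
        # Knowledge enters the project memory only after validation agreement.
        if self.validator.approve(knowledge):
            self.project_memory.append(knowledge)

class IdentificationAgent:
    def __init__(self, capitalizer):
        self.capitalizer = capitalizer

    def on_context(self, context):
        # Deduce candidate knowledge from the detected context, annotate it,
        # and forward it for capitalization.
        knowledge = {"context": context, "annotated": True}
        self.capitalizer.capitalize(knowledge)

# Wire the chain and simulate one context-detection message from an AM.
validator = ValidationAgent()
capitalizer = CapitalizationAgent(validator)
identifier = IdentificationAgent(capitalizer)
identifier.on_context({"role": "Designer", "activity": "Detailed design"})
```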


Finally, concerning the communication between the AMs and the AMCMs, the AMs formulate requests based on the OntoDFRelevance ontology. Knowing the specialty of each AMCM, they can direct the right request to the right agent, which in this case represents a professional actor. The AMCM answers requests using the knowledge capitalized by the Metier capitalization agent and validated by the Metier validation agent in the Metier repository. In most cases, the Metier validation agent requests assistance from the AMCPs.

Having explained the interactions between the system’s agents, we now carry out the “Holarchy Design (HD)” activity to complete the design phase of the agent society. This activity provides an overarching view that integrates and summarizes all the previous activities and concepts in a single graph explaining the overall structure of the system and the rules governing its behavior. The holonic architecture of the SmartDFRelevance knowledge-based system is illustrated in Fig. 8.

Finally, the Implementation and Deployment phase aims to implement and deploy the holonic agent-oriented solution designed in the previous phase. Using the facilities of the Janus platform, it covers the definition of the system architecture, the development of the related source code, and its evaluation. The SmartDFRelevance agents support the knowledge management process, which is a complex process, by using an ontology that allows them to annotate and infer knowledge in order to find the knowledge that human actors need. They also share knowledge and proactively propose its reuse, not only to one actor but to all the actors likely to be concerned by it. This feature provides the intelligence of these agents and makes the proposed knowledge-based system (SmartDFRelevance) a complete, flexible and autonomous smart holonic system.

6 Conclusion

In this paper, we have proposed a new holonic knowledge-based system, called SmartDFRelevance, following the ASPECS methodology dedicated to the analysis, design, and deployment of complex engineered systems. The obtained results are:

• A definition of the needs of the SmartDFRelevance knowledge management system in the form of a goal tree. The goal-oriented approach was chosen in order to model the objectives of the system as well as the actors involved and their dependencies for the achievement of each goal;
• A domain ontology that conceptualizes the knowledge capitalized in our MemoDFRelevance project memory;
• A set of organizations composed of interacting roles aimed at satisfying the goals of the system;


Fig. 8 Holarchy design for SmartDFRelevance knowledge-based system

• A set of plan roles for the identification of roles and interactions, which aim to break down a global behavior embodied by an organization into sub-organizations;
• A set of abilities defining the generic behavior of roles by identifying the know-how that a role requires of the individual who will perform it;
• A list of agents and their responsibilities in terms of the roles they implement;
• A holarchy of the SmartDFRelevance system, in which the results of all the previous work are combined and summarized in a single graph.


In summary, the proposed SmartDFRelevance knowledge-based system assists the business actor throughout the product development process. It offers assistance by identifying, validating, capitalizing, sharing and reusing knowledge while following an organizational approach. Our design builds mainly on the work of Monticolo [57] and BenMiled [56]. Monticolo proposed a knowledge capitalization approach following an organizational approach [57], and BenMiled extended this work by adding a knowledge reuse system [56]. Our contribution is twofold. On the one hand, the analysis process highlights the different components and their links with a view to achieving the objectives of the system. On the other hand, on the basis of this analysis, we have proposed mechanisms to assist the identification and acquisition of knowledge, its validation, its capitalization, its sharing, and its reuse.

As perspectives, we first recommend enriching our multi-agent system’s operational framework and adding new ideas to the paradigm used; the application of logical processes, rules or learning methods may be valuable for handling product development circumstances. Second, we plan to focus on the application of formal notations and procedures that have already been used for the analysis of functional actions. These issues will be addressed in future work to provide a complete knowledge-based system that is appropriate for, and usable by, the largest number of designers in different industries.

References

1. Chaouni Benabdellah, A., Bouhaddou, I., Benghabrit, A.: Holonic multi-agent system for modeling complexity structures of product development process. In: 2019 4th World Conference on Complex Systems (WCCS), pp. 1–6. IEEE (2019)
2. Monostori, L., Váncza, J., Kumara, S.R.T.: Agent-based systems for manufacturing. CIRP Annals (2006)
3. Abecker, A., Bernardi, A., van Elst, L.: Agent technology for distributed organizational memories (2003)
4. Girodon, J., Monticolo, D., Bonjour, E., Perfetto-Demarchi, A.: An organizational approach to designing an intelligent knowledge-based system: application to the decision-making process in design projects. Elsevier (2015)
5. Benabdellah, A.C., Bouhaddou, I., Benghabrit, A., Benghabrit, O.: A systematic review of design for X techniques from 1980 to 2018: concepts, applications, and perspectives. Int. J. Adv. Manuf. Technol., 1–30 (2019). https://doi.org/10.1007/s00170-019-03418-6
6. Wooldridge, M., Jennings, N.R., Kinny, D.: The Gaia methodology for agent-oriented analysis and design. Springer (2000)
7. Smith, P.B., Bond, M.H.: Social Psychology: Across Cultures. Allyn & Bacon (1999)
8. Drogoul, A., Ferber, J.: Multi-agent simulation as a tool for modeling societies: application to social differentiation in ant colonies (1994)
9. Ferber, J., Weiss, G.: Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence (1999)
10. Dorri, A., Kanhere, S.S., Jurdak, R.: Multi-agent systems: a survey. IEEE Access (2018)


11. García-Magariño, I., Gutiérrez, C.: Agent-oriented modeling and development of a system for crisis management. Elsevier (2013)
12. Chaouni Benabdellah, A., Bouhaddou, I., Benghabrit, A.: Supply chain challenges with complex adaptive system perspective (2018)
13. Durand, D.: La systémique (10e éd.). Presses Universitaires de France, Paris (2006)
14. Hilaire, V., Koukam, A., Gruer, P., Müller, J.P.: Formal specification and prototyping of multiagent systems. Springer (2000)
15. Isern, D., Sánchez, D., Moreno, A.: Organizational structures supported by agent-oriented methodologies. J. Syst. Softw. 84, 169–184 (2011)
16. Giret, A., Botti, V.: Holons and agents. J. Intell. Manuf. 15, 645–659 (2004). https://doi.org/10.1023/B:JIMS.0000037714.56201.a3
17. Cossentino, M., Gaud, N., Hilaire, V., Galland, S., Koukam, A.: ASPECS: an agent-oriented software process for engineering complex systems. Auton. Agent. Multi. Agent. Syst. 20, 260–304 (2010). https://doi.org/10.1007/s10458-009-9099-4
18. Koestler, A.: The Ghost in the Machine. Hutchinson, London (1967)
19. Studer, R., Benjamins, V., Fensel, D.: Knowledge engineering: principles and methods. Elsevier (1998)
20. Gandon, F.: Distributed artificial intelligence and knowledge management: ontologies and multi-agent systems for a corporate semantic web (2002)
21. Verhagen, W.J.C., Bermell-Garcia, P., van Dijk, R.E.C., Curran, R.: A critical review of knowledge-based engineering: an identification of research challenges. Adv. Eng. Informat. 26, 5–15 (2012). https://doi.org/10.1016/j.aei.2011.06.004
22. Argente, E., Julian, V., Botti, V.: Multi-agent system development based on organizations. Elsevier (2006)
23. Benabdellah, A., Benghabrit, A., Bouhaddou, I.: Complexity drivers in engineering design: toward a decision support system based on an organizational perspective. J. Eng. Design Technol. (2020). https://doi.org/10.1108/JEDT-11-2019-0299
24. Monceyron, E., Barthès, J.-P.A.: Rev. Sci. Tech. la Concept., 49–68 (1992)
25. Scalabrin, E.E., Vandenberghe, L., Azevedo, H., Barthès, J.-P.A.: A generic model of cognitive agent to develop open systems (1996)
26. Shen, W., Barthès, J.P.A.: An experimental multi-agent environment for engineering design. Int. J. Coop. Inf. Syst. 05, 131–151 (1996). https://doi.org/10.1142/S0218843096000063
27. Sato, G.Y., de Azevedo, H.J.S., Barthès, J.P.A.: Agent and multi-agent applications to support distributed communities of practice: a short review. Auton. Agent. Multi. Agent. Syst. 25, 87–129 (2012). https://doi.org/10.1007/s10458-011-9170-9
28. Barthès, J.P.A., Tacla, C.A.: Agent-supported portals and knowledge management in complex R&D projects. Elsevier (2002)
29. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation (1995)
30. Liao, C., Chuang, S.H., To, P.L.: How knowledge management mediates the relationship between environment and organizational structure. Elsevier (2011)
31. Huang, T.C., Lin, B.H., Yang, T.H.: Herd behavior and idiosyncratic volatility. Elsevier (2015)
32. van Elst, L., Dignum, V., Abecker, A.: Towards agent-mediated knowledge management (2004)
33. Boissier, O., Demazeau, Y.: ASIC: an architecture for social and individual control and its application to computer vision (1996)
34. Guizzardi, R.S.S., Aroyo, L., Wagner, G.: Agent-oriented knowledge management in learning environments: a peer-to-peer helpdesk case study (2004)
35. Carley, K.M.: Computational organization science: a new frontier. Natl. Acad. Sci. (2002)
36. Gandon, F., Berthelot, L., Dieng-Kuntz, R.: A multi-agent platform for a corporate semantic web (2002)
37. Bonifacio, M., Bouquet, P., Traverso, P.: Enabling distributed knowledge management: managerial and technological implications (2002)


38. Preece, A., Hui, K., Gray, A., Marti, P., Bench-Capon, T., Jones, D., Cui, Z.: The KRAFT architecture for knowledge fusion and transformation. In: Research and Development in Intelligent Systems XVI, pp. 23–38. Springer, London (2000)
39. Champin, P., Prié, Y., Mille, A.: MUSETTE: modelling uses and tasks for tracing experience. ICCBR (2003)
40. Buccafurri, F., Rosaci, D., Sarnè, G.M.L., Ursino, D.: An agent-based hierarchical clustering approach for e-commerce environments (2002)
41. Braga, R.M., Werner, C.M., Mattoso, M.: Odyssey-Search: a multi-agent system for component information search and retrieval. Elsevier (2006)
42. Bravo, M., Perez, J., Sosa, V., et al.: Ontology support for communicating agents in negotiation processes. IEEE (2005)
43. Amailef, K., Lu, J.: Ontology-supported case-based reasoning approach for intelligent m-Government emergency response services. Elsevier (2013)
44. Monticolo, D., Mihaita, S., Darwich, H., Hilaire, V.: An agent-based system to build project memories during engineering projects. Elsevier (2014)
45. Bernon, C., Cossentino, M., Pavón, J.: Agent-oriented software engineering (2005)
46. Bordini, R., Hübner, J., Wooldridge, M.: Programming Multi-Agent Systems in AgentSpeak Using Jason (2007)
47. Giret, A., Garcia, E., Botti, V.: An engineering framework for service-oriented intelligent manufacturing systems. Comput. Ind. 81, 116–127 (2016)
48. Aldewereld, H., Dignum, V.: OperettA: organization-oriented development environment (2011)
49. DeLoach, T.L.: Soft-sided beverage cooler (2001)
50. Bresciani, P., Perini, A., Giorgini, P., Giunchiglia, F., Mylopoulos, J.: Tropos: an agent-oriented software development methodology. Auton. Agent. Multi. Agent. Syst. 8, 203–236 (2004). https://doi.org/10.1023/B:AGNT.0000018806.20944.ef
51. Benabdellah, A.C., Bouhaddou, I., Benghabrit, A., Benghabrit, O.: Design for relevance concurrent engineering approach: integration of IATF 16949 requirements and design for X techniques. Res. Eng. Design (2020). https://doi.org/10.1007/s00163-020-00339-4
52. Mazigh, B., Garoui, M., Koukam, A.: Use of formal languages to consolidate a holonic MAS methodology: a specification approach for analysing problem and agency domains. J. Simul. 7, 159–169 (2013). https://doi.org/10.1057/jos.2012.24
53. Conklin, J., Begeman, M.L.: gIBIS: a hypertext tool for exploratory policy discussion (1988)
54. Bellifemine, F., Poggi, A., Rimassa, G.: Developing multi-agent systems with a FIPA-compliant agent framework. Softw. Pract. Exp. 31, 103–128 (2001). https://doi.org/10.1002/1097-024X(200102)31:2%3c103::AID-SPE358%3e3.0.CO;2-O
55. Yu, E.S.: Towards modelling and reasoning support for early-phase requirements engineering. IEEE (1997)
56. Miled, A.B.: Vers un système de réutilisation des connaissances en ingénierie de conception (2011)
57. Monticolo, D., Gabriel, A., Barrios, P.C.: Une approche de conception de systèmes multi-agents dédiés à la gestion des connaissances (2018)

A Cyber-Physical Warehouse Management System Architecture in an Industry 4.0 Context Mariam Moufaddal , Asmaa Benghabrit , and Imane Bouhaddou

Abstract Cyber-Physical Systems are used whenever complex physical systems need to communicate with the digital world to optimize performance and increase efficiency. They play an increasingly important role in industrial processes and in the control of production (smart factory), particularly in the context of the Internet of Things. Cyber-physical systems consist of objects equipped with electronics and software, connected to each other via the Internet to form a networked system. This encompasses the sensors that allow systems to acquire and process data, and the actuators that move or control a mechanism and link Cyber-Physical Systems to the outside world. The acquired data is made available to the various services connected to the network, which use the actuators to act directly on the real world. This leads to the fusion of the physical world and virtual space in the Internet of Things. In this paper, we propose a Cyber-Physical Warehouse Management System environment and architecture in which warehouse operations are linked in a single platform, ensuring consumer satisfaction.

Keywords Cyber-physical system · Warehouse management system · Internet of things

M. Moufaddal (B) · A. Benghabrit · I. Bouhaddou LM2I Laboratory, ENSAM, Moulay Ismaïl University Meknes, Meknes, Morocco e-mail: [email protected] A. Benghabrit e-mail: [email protected] I. Bouhaddou e-mail: [email protected] A. Benghabrit LMAID Laboratory, ENSMR, Mohamed V University Rabat, Rabat, Morocco © Springer Nature Switzerland AG 2021 T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_9



M. Moufaddal et al.

1 Introduction

The current industrial revolution, called Industry 4.0, is based on a strong interconnection of objects and/or actors, and of the real world with the virtual world, via innovations in ICT (Information and Communication Technologies) [1]. This affects the industrial ecosystem broadly and also impacts the world of logistics, already described as “Logistics 4.0”. Indeed, according to Kohler and Weisz [2], the digital revolution is gradually erasing the boundaries between B2B (Business to Business) and B2C (Business to Customer) by directly interconnecting supply and demand. The interactive and transparent collaboration offered by the evolution of the Internet dissolves the traditional organization into sectors, branches and trades, and draws the customer into the creation of added value, in a hybrid customer-supplier co-design relationship. In this context, the advantage will go to those who know how to create unexpected opportunities from complementary businesses by associating their customers and offering them new possibilities. The technological leaps observed in recent years, notably linked to the development of digital communication tools, have led to paradigm shifts that organizations must integrate.

Industry 4.0 is based on various technological advances, among which we can mainly cite CPS (Cyber-Physical Systems), the IoT (Internet of Things), and the IoS (Internet of Services) [3, 4]. The IoT represents an intelligent ICT infrastructure that allows real-time communication and cooperation between objects (machines and devices) as well as the connection between the physical and virtual worlds [5] in a dynamic connected environment, giving access to remote services (IoS). A CPS is an autonomous embedded system, equipped with sensors to perceive its environment and capable of acting on physical processes by means of actuators.
CPSs are connected to each other via digital networks and are able to call on remote services to assist them. These various concepts can globally be considered as extensions of robotics and automation, in the direction of the digitalization of the world [6]. CPSs therefore allow instant and continuous interaction between physical and virtual elements and with external actors. Objects become more and more autonomous, and systems more and more reconfigurable, as access to information becomes possible anytime and from anywhere. Added to this is the intelligence given to objects and systems, or their ability to solicit, through their communication capacity, remote services able to provide this intelligence. In this context, Industry 4.0 sees the factory, and even the entire production chain, as a gigantic cyber-physical system largely self-regulated by machine-object interactions. Benefits are expected in terms of flexibility and agility, as well as savings in time, quality and costs [2]. This results in the concept of cyber-physical production systems (CPPS) [7]. As a corollary to the ongoing transformation of the manufacturing world, the organization of the logistics sector must also evolve.

In the scope of this paper, we consider a supply chain as a network of nodes responsible for procurement, manufacturing, warehousing and distribution activities. For each activity, Industry 4.0, through its technologies, has


brought profound changes in the way it is handled and the way it interacts with other processes and actors. Warehousing is no exception. A warehouse is where goods are stored and handled; receiving goods and preparing them for delivery are two of the activities that take place there. But before goods can be received, the warehouse capacity must be checked, and this information must be communicated to the transporter beforehand so that he knows when to deliver the goods to the warehouse. Similarly, information about the availability of a transporter should be communicated to the warehouse manager before any orders are prepared.

To address this matter, the paper is structured as follows: after these introductory statements, cyber-physical systems are defined, their overall architecture is presented, and their contribution to the remarkable changes in supply chain processes is illustrated. Next, warehouse functions and operations are highlighted, as well as the opportunities CPSs can bring. Based on the result of our analysis, a CPS environment and architecture is proposed, highlighting how the digital and physical worlds can be combined for better handling of goods. Concluding remarks and future research directions are given at the end of the paper.
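The warehouse-transporter coordination constraint described above, namely checking warehouse capacity before a delivery is scheduled, can be sketched as a small interaction model. This is an illustrative sketch only; the classes and method names are ours:

```python
class Warehouse:
    def __init__(self, capacity):
        self.capacity = capacity
        self.stored = 0

    def can_receive(self, quantity):
        # Capacity check that must happen *before* a delivery is scheduled.
        return self.stored + quantity <= self.capacity

    def receive(self, quantity):
        if not self.can_receive(quantity):
            raise RuntimeError("warehouse full: delivery must be rescheduled")
        self.stored += quantity

class Transporter:
    def deliver(self, warehouse, quantity):
        # The transporter queries the warehouse first, instead of arriving blind.
        if warehouse.can_receive(quantity):
            warehouse.receive(quantity)
            return "delivered"
        return "postponed"

w = Warehouse(capacity=100)
t = Transporter()
```

In a cyber-physical setting, the `can_receive` query would be answered automatically from sensor data (e.g. occupied storage locations) rather than from a manually maintained count.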

2 The Role of CPS and Its Usefulness

Many technologies related to sensors, communication, embedded systems, connected objects or decision support have achieved a high degree of innovation and maturity of use within their respective sectors, culminating in the flagship Industry 4.0 technology, the Cyber-Physical System (CPS) (Fig. 1). The strong connection of the physical, service and digital worlds can improve the quality of information required for the planning, optimization and operation of industrial systems [8]. The Internet has transformed how humans interact and communicate with one another, revolutionized how and where information is accessed, and even changed how people buy and sell products. Similarly, CPSs will transform how humans interact with and control the physical world around us [10]. Cyber refers to computation, communication, and control that are discrete, switched, and logical; the term

Fig. 1 CPS basics [9]


M. Moufaddal et al.

Fig. 2 Human interaction with systems [12]

physical refers to natural and human-made systems governed by the laws of physics and operated in continuous time. CPSs are embedded systems that use sensing devices to retrieve data and act on physical processes using actuators. They are connected to each other via digital networks, use all the data and services available, and benefit from multimodal human–machine interfaces [2]. Within the factory of the future, also considered a smart factory, CPSs will enable communication between humans, machines and products alike [11]. As they are able to acquire and process data, they can self-control certain tasks and interact with humans via interfaces [12], as shown in Fig. 2. CPSs are also autonomous on-board systems, equipped with sensors to perceive their environment and capable of acting on physical processes by means of actuators [13]. The CPSs are connected to each other via digital networks and are able to use remote services to assist them. The explosion of interest in CPS is first of all correlated with the ability to embed real-time processing capacity, which allows either a strong interaction with the user of the CPS or full real-time awareness of the environment, enabling intelligent automation of the CPS. Kohler and Weisz [2] pointed out that the compatibility and interoperability of systems are prerequisites for the implementation of CPS, which raises the question of communication standards between machines. Once connected, the systems communicate with each other and are able to self-regulate without central control. The factory is configurable as needed, with modules that can be added or removed using plug-and-work functions. Finally, a virtual factory model is used to test the different module configurations, but also to simulate and steer the entire production process.
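As a minimal illustration of this sense-decide-act coupling, the following Python sketch shows one iteration of a CPS control loop: a sensed value drives a cyber-side decision, which is pushed back to the physical process through an actuator. All names and the thermostat-style rule are illustrative assumptions, not drawn from the cited works.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    reading: float          # latest physical measurement (e.g. temperature)

    def sense(self) -> float:
        return self.reading

@dataclass
class Actuator:
    state: str = "off"      # current physical command being applied

    def act(self, command: str) -> None:
        self.state = command

def control_step(sensor: Sensor, actuator: Actuator, setpoint: float = 22.0) -> str:
    """One sense-decide-act iteration of a CPS control loop."""
    value = sensor.sense()                           # acquire data from the physical world
    command = "heat" if value < setpoint else "off"  # cyber-side decision
    actuator.act(command)                            # act back on the physical process
    return command
```

In a real CPS the decision step would typically run on a networked controller and could also consult remote services, as described above.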
Similarly, using these CPS functionalities, this approach can be applied to any industrial system, where it will allow gains as follows:
• Interoperability will promote Machine-to-Machine and Human-to-Machine interfaces;
• Modularity will enhance atomicity and flexibility;
• Real-time processing capacity will provide instant inventory information, important deadlines and expected delays;


• Virtualization will enable an overview of the to-be model/framework, allowing any possible optimization requirements.

A CPS generally consists of two main functional components: (a) the advanced connectivity that ensures real-time data acquisition from the physical world and information feedback from the cyber space; (b) the intelligent data management, computational and analytics capability that constructs the cyber space [14]. Some technologies closely connected to the CPS are IoT, wireless sensor networks, and cloud computing. Wireless sensor networks are regarded as a vital component of CPS [15], and Internet technology provides essential approaches to enhancing the performance of cyber-physical systems [16]. A 5C-level CPS structure has been proposed [17]. It defines how a CPS is constructed, from initial data acquisition, through analytics, to final value creation:
• C1—At the Connection level, the CPS operates on a Plug & Play network and uses data sent by a network of sensors;
• C2—At the Conversion level, the CPS knows how to process information and transcribe it into higher-level information;
• C3—At the Cyber level, the CPS has knowledge of other CPSs in its environment and can interact with them to enrich its own information processing;
• C4—At the Cognition level, the CPS is capable of establishing a diagnosis based on simulations of its own behavior and a differential analysis of sensor data;
• C5—At the Configuration level, the CPS can adapt itself in the event of a failure, reconfiguring or adjusting its parameters autonomously in order to return to nominal behavior.
However, the full integration of the five levels within a CPS is currently only rarely achieved, and is not always justified depending on the type of application. Indeed, among the 5C levels, the cognition and configuration levels are the most difficult to achieve [18].
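As a rough illustration, the 5C levels can be read as an ordered maturity scale, where reaching a level implies the capabilities of all lower levels. The sketch below is a hypothetical Python encoding of that reading, not part of the cited 5C proposal.

```python
from enum import IntEnum

class CPSLevel(IntEnum):
    """The 5C levels read as an ordered maturity scale."""
    CONNECTION = 1     # C1: plug-and-play sensing network
    CONVERSION = 2     # C2: data converted into higher-level information
    CYBER = 3          # C3: interaction with other CPSs in the environment
    COGNITION = 4      # C4: diagnosis via simulation and differential analysis
    CONFIGURATION = 5  # C5: autonomous reconfiguration after failures

def supports(system_level: CPSLevel, required: CPSLevel) -> bool:
    """A CPS at a given level is assumed to also provide all lower levels."""
    return system_level >= required
```

Under this reading, a shop-floor CPS certified at C3 would answer queries requiring C1 or C2 capabilities, but not C4 diagnostics, which matches the observation that the cognition and configuration levels are rarely reached.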

3 CPS is Important to Supply Chains

Industrial process control systems are widely used to provide autonomous control over business processes through control loops. As CPS is usually defined as the integration of computation with physical processes, the most direct application of CPS in Industry 4.0 scenarios is enhanced process control [19]. As supply chains are a set of business processes, they are ideally placed to integrate CPS as a means of monitoring and instant decision-making. A CPS deals with the physical as well as the informational aspects of processes. Indeed, it can provide broad control over complex and large industrial processes through a heterogeneous network architecture of sensors, actuators, and processors, because CPS can integrate all the mechanisms needed to reach and maintain a synchronized state [20].


Supply chains are embedded in dynamic environments. Their management is challenged by the occurrence of perturbations, as well as by complex production and transport networks [21]. Many research topics related to CPS in supply chains have been reviewed [19, 21–29]. Soon after, researchers started to point out specific processes that could get the best out of these embedded systems. In this context, Oks et al. [30] proposed an application map with the aim of distinguishing various opportunity areas for applying industrial cyber-physical systems of Industry 4.0, within which integrated supply chain, e-procurement and logistics were identified among the improvement categories. Lee et al. [17] also proposed a Multi-Agent System (MAS) approach for addressing the sustainable supplier evaluation and selection process, to provide a proper communication channel, structured information exchange and visibility among suppliers and manufacturers. After procurement-related issues, researchers focused on manufacturing operations, proposing approaches and designs of CPSs to tackle the diverse problems they face. Lee et al. [14] proposed a unified 5-level architecture as a guideline for the implementation of CPS. Liu and Jiang [31] continued in the same perspective and provided a CPS architecture adapted to the shop floor for achieving the goals of intelligent manufacturing. The proposed architecture provides a guideline to construct a CPS system from the hardware interconnection, to the data acquisition, processing and visualization, and the final knowledge acquisition and learning. However, in smart factories, maintenance remains an important aspect to safeguard production performance; especially in the case of machine component failures, diagnosis is a time-consuming task. For these reasons, Schneider et al.
[32] presented an approach for a cyber-physical failure management system, which uses information from machines, such as programmable logic controller or sensor data, and Information Technology (IT) systems to support the diagnosis and repair process. Following this, several challenges have been raised on how to handle flexibility, optimization and interoperability in the production lifecycle. In this context, Jeon et al. [33] proposed a model-driven approach based on the ETRI CPS Modeling Language (ECML) to address these challenges. Many other research subjects have been studied [7, 16, 33–42], to name just a few. However, warehousing and distribution processes have received less interest from researchers. According to PwC's Global Industry 4.0 Survey [43], the biggest worldwide survey of its kind with over 2,000 participants from nine major industrial sectors and 26 countries, 21% of the survey respondents belonged to industrial manufacturing whereas only 9% belonged to the transportation and logistics fields, as illustrated in Fig. 3. Indeed, for warehousing and distribution processes, Industry 4.0 is no longer a 'future trend'. It is now at the heart of the industry and research agenda. Companies should start combining advanced connectivity and advanced automation, Cyber-Physical systems, cloud computing, sensors, connected capability, computer-powered processes, intelligent algorithms and Internet of Things services to transform their businesses. In the rest of the paper, we focus the study on the technologies for building smart warehouses [44].


Fig. 3 Industry split of survey respondents [43]

The insight for making traditional warehouses smart lies in using the Cyber-Physical System (CPS) [14]. A CPS can monitor and create a virtual copy of real-world industry processes; thus we can know the status of each process, control it and make proper decisions in real time [45]. In other words, CPS can bring the virtual and physical worlds together to construct an entirely networked world, where smart objects communicate and interact with each other.
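The notion of a virtual copy that mirrors a physical process can be sketched as a simple observer pattern: the physical side pushes every state change to its virtual mirror, and decisions are taken on the mirror. The Python sketch below uses invented names and a deliberately trivial decision rule; it is an illustration, not a prescribed design.

```python
class PhysicalProcess:
    """Toy physical process that pushes every state change to attached twins."""
    def __init__(self):
        self.state = {"conveyor_speed": 0.5, "items_on_belt": 0}
        self._twins = []

    def attach(self, twin):
        self._twins.append(twin)

    def update(self, **changes):
        self.state.update(changes)
        for twin in self._twins:          # keep the virtual copy in sync
            twin.sync(dict(self.state))

class DigitalTwin:
    """Virtual copy of the process, on which cyber-side decisions are made."""
    def __init__(self):
        self.mirror = {}

    def sync(self, snapshot):
        self.mirror = snapshot

    def decide(self):
        # trivial illustrative rule: slow down when the belt is overloaded
        return "slow_down" if self.mirror.get("items_on_belt", 0) > 10 else "ok"
```

Because decisions are computed on the mirrored state, the cyber side can run analyses (simulation, optimization) without touching the physical equipment, which is the essence of the virtual-copy idea above.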

4 Warehouse Operations

Warehouses have always served as a vital hub in the flow of goods within a supply chain. But in today's economic climate, they also serve as a key source of competitive advantage for logistics providers who can deliver fast, cost-efficient, and increasingly flexible warehousing operations for their customers [46]. A warehouse is a facility in the supply chain used to consolidate products to reduce transportation cost, achieve economies of scale in manufacturing or in purchasing [47], or provide value-added processes and shorten response time [48]. Indeed, warehousing has been recognized as one of the main operations where companies can provide tailored services for their customers and gain competitive advantage [49]. There are various types of warehouses: they can be classified into production warehouses and distribution centers [50], and by their roles in the supply chain they can be classified as raw materials warehouses, work-in-process warehouses, finished goods warehouses, distribution warehouses, fulfillment warehouses, local warehouses serving customer demand directly, and value-added service warehouses [51].


Fig. 4 Typical warehouse processes [47]

Warehouses have been going through various challenges: supply chains are becoming more integrated and shorter, operations are globalized, customers are more demanding and technology changes are occurring rapidly. In order to cope with these challenges, organizations are adopting innovative approaches such as the Warehouse Management System (WMS) [49]. A WMS primarily aims to control the movement and storage of materials within a warehouse and provide assistance throughout the execution of warehousing operations. A warehouse management system is a database-driven computer application that improves the efficiency of the warehouse by directing put-aways and maintains accurate inventory by recording warehouse transactions. Such systems also direct and optimize stock based on real-time information about the status of utilization. They often utilize Auto ID Data Capture (AIDC) technology, such as barcode scanners, mobile computers, wireless LANs (Local Area Networks) and Radio-Frequency Identification (RFID), to efficiently monitor the flow of products [49]. Even though warehouses can serve quite different ends, most share the same general pattern of material flow. Essentially, typical warehouse processes include inbound processes and outbound processes [47] (Fig. 4).
• Inbound processes concern the receiving and put-away operations [49]. Receiving may begin with advance notification of the arrival of goods. This allows the warehouse to schedule receipt and unloading to coordinate efficiently with other activities within the warehouse. Once the product has arrived, it is unloaded and possibly staged for put-away. Its arrival is registered so that ownership is assumed, payments are dispatched, and it is known to be available to fulfill customer demand. The product will be inspected and any exceptions noted, such as damage, incorrect counts, wrong descriptions, and so on.
In put-away operations, an appropriate storage location must be determined before moving the product. This is very important because information about where the product is stored defines how quickly and at what cost it will later be retrieved for a customer. This requires knowing at all times what storage locations are available, how large they are, how much weight they can bear, and so on. This information will subsequently be used to construct efficient pick lists to guide the order-picking operators in retrieving the product for customers.
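A put-away decision of this kind can be sketched as a simple best-fit rule over candidate storage locations. The function below is an illustrative Python example; the field names and the best-fit policy are assumptions for the sketch, not a prescribed WMS algorithm.

```python
def choose_location(locations, item):
    """Best-fit put-away: the smallest free location that fits the item.

    locations: list of dicts with 'id', 'free', 'volume', 'max_weight'
    item:      dict with 'volume' and 'weight'
    Returns the chosen location id, or None if nothing fits.
    """
    candidates = [
        loc for loc in locations
        if loc["free"]
        and loc["volume"] >= item["volume"]
        and loc["max_weight"] >= item["weight"]
    ]
    if not candidates:
        return None
    # best fit: leave larger locations free for larger products
    return min(candidates, key=lambda loc: loc["volume"])["id"]
```

Best fit keeps large locations available for bulky items; a production WMS would additionally weigh travel distance, picking frequency and zone constraints when ranking candidates.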


Receiving operations and put-away operations account for about 10% and 15%, respectively, of operating costs in a typical distribution center [51], and IoT technologies are expected to reduce this further.
• Outbound processes include order-picking, packing and shipping [48, 52–56]. The outbound processes of the warehouse are initiated by receipt of a customer order, which may be thought of as a shopping list. Each entry on the list is referred to as an order-line and typically consists of the item and quantity requested. The warehouse management system (WMS) then checks the order against available inventory and identifies any shortages. In addition, the WMS may reorganize the list to match the layout and operations of the warehouse for greater efficiency. Pick-lines are instructions to the order-pickers, telling them where and what to pick, and in what quantity and units of measure. Each pick-line represents a location to be visited; pick-lines are organized into pick lists to achieve still more efficiencies. Packing can be labor-intensive because each piece of a customer order must be handled, but there is little walking. And because each piece will be handled, this is a convenient time to check that the customer order is complete and accurate. Order accuracy is a key measure of service to the customer and is, in turn, what most businesses compete on. Shipping generally handles larger units than picking, because packing has consolidated the items into fewer containers (cases, pallets). Consequently, there is still less labor here. There may be some walking if the product is staged before being loaded into freight carriers. Product is likely to be staged if it must be loaded in reverse order of delivery or if it is shipped long distances, when one must work hard to completely fill each trailer. Staging freight creates more work because staged freight must be double-handled.
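The WMS steps just described (checking order-lines against inventory, flagging shortages, and ordering pick-lines to match the warehouse walking sequence) can be sketched in Python as follows; the data layout and function name are illustrative assumptions for the sketch.

```python
def build_pick_list(order_lines, inventory, location_sequence):
    """Check order-lines against inventory and build an ordered pick list.

    order_lines:       list of (sku, requested_qty) pairs
    inventory:         dict sku -> {'location': str, 'qty': int}
    location_sequence: locations in warehouse walking order
    Returns (pick_lines, shortages).
    """
    pick_lines, shortages = [], []
    for sku, qty in order_lines:
        stock = inventory.get(sku, {"location": None, "qty": 0})
        if stock["qty"] < qty:                       # identify any shortage
            shortages.append((sku, qty - stock["qty"]))
        picked = min(qty, stock["qty"])
        if picked > 0:
            pick_lines.append({"sku": sku, "qty": picked,
                               "location": stock["location"]})
    # reorganize the list to match the warehouse layout (walking order);
    # assumes every stocked location appears in location_sequence
    pick_lines.sort(key=lambda line: location_sequence.index(line["location"]))
    return pick_lines, shortages
```

Sorting by the walking sequence is the simplest form of the layout-aware reorganization mentioned above; real systems use routing heuristics over the actual aisle graph.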

5 The Shift to CPS for Autonomous and Smart Warehouses

In a traditional warehouse, the operations of pickup, delivery, and accounting are accomplished by storekeepers. Conventional warehouses have several drawbacks. First, it is time-consuming to store inventories in them and fetch inventories from them. Second, the use of storekeepers is a waste of human resources. Finally, it is not environmentally friendly to record stocks using account books. The adoption of the Industry 4.0 paradigm will introduce remarkable changes in the way warehouses work today, especially through the introduction of 'smart' management via the proper adoption and implementation of Warehouse Management Systems (WMS), which will align warehouse activities with the future requirements of inbound logistics according to the Industry 4.0 paradigm [57]. Therefore, it is profoundly essential to make traditional warehouses smart. One of the key paradigms of Industry 4.0 is the use of modern information technology (IT). The insight for making traditional warehouses smart lies


Fig. 5 The shift from hierarchy to interconnection [59]

in using the flagship Industry 4.0 technology, the Cyber-Physical System (CPS) [14]. A smart warehouse is automated, unmanned and paperless in conducting the operations of pickup, delivery and bookkeeping [45]. Automated warehouse systems play a key role in manufacturing systems and are usually controlled using hierarchical and centralized control architectures and conventional automation programming techniques [58]. However, in an entirely interconnected world, each field device in a warehouse is able to initiate an exchange of data at different levels. This interconnectedness will lead to the dissolution of the classical, rigid hierarchies of warehouse processes [59], as shown in Fig. 5. In this context, Basile et al. [58] presented preliminary results in developing a flexible, modular and distributed control architecture for automated warehouse systems using Function Blocks and a CPS perspective. Although CPS has the potential to bring a revolution to traditional warehouses, it is not easy to integrate CPS techniques into smart warehouses for Industry 4.0. In this context, Liu et al. [45] discussed how state-of-the-art techniques in cyber-physical systems have facilitated building smart warehouses to achieve the promising vision of Industry 4.0. They stated that a CPS-based smart warehouse contains four main components: CPS devices, inventories, robots, and human beings. There can be thousands of CPS devices working together; thus, the communication among them needs to be scheduled efficiently. For inventories, it is imperative to know their status and location context in a timely manner, according to the data reported by the attached devices. Robots are required to collaboratively accomplish tasks that are repetitive and harmful to human beings. For human beings, their gestures or other activities performed during warehouse operations need to be recognized.
The gestures and other activities may be extracted using the CPS devices deployed in the environment. Therefore, they focused on four significant issues when applying CPS techniques in smart warehouses. First, efficient CPS data collection: when limited communication bandwidth meets numerous CPS devices, more effort needs to be devoted to studying efficient wireless communication scheduling strategies. Second, accurate


and robust localization: localization is the basis for many fundamental operations in smart warehouses, but still needs to be improved in various respects, such as accuracy and robustness. Third, multi-robot collaboration: smart robots will take the place of humans to accomplish most tasks, particularly in harsh environments, and smart, fully distributed robot collaboration algorithms should be investigated. Fourth, human activity recognition: it can be applied in human–computer interaction for remote machine operations. To be more specific, Grzeszick et al. [53] stated that methods like human activity recognition (HAR) have become of increasing interest in industrial settings. In this context, they introduced a novel deep neural network architecture for HAR. A convolutional neural network (CNN), which employs temporal convolutions, was applied to the sequential data of multiple inertial measurement units (IMUs). The network was designed to separately handle different sensor values and IMUs, joining the information step by step within the architecture. For illustration purposes, an evaluation was performed using data from the order-picking process recorded in two different warehouses. The influence of different design choices in the network architecture, as well as pre- and post-processing, was evaluated. Finally, crucial steps for learning a good classification network for the task of HAR in a complex industrial setting were shown. Following the same CPS perspective, Basile et al. [58] adopted a service-oriented multi-agent approach to obtain, from a cyber-physical system perspective, a formal model of a complex, real automated warehouse system. The proposed model is based on a Colored Modified Hybrid Petri Net and aims at representing both cyber and physical aspects with high fidelity [58]. The objective here is to model a general real warehouse, from a cyber-physical perspective, as a system of cooperating agents.
With this purpose, a Colored Modified Hybrid Petri Net (CMHPN) model, made up of two interacting sub-nets, one for the cyber and one for the physical part of each agent, was proposed. The need for more flexible, adaptable and customer-oriented warehouse operations has increasingly been identified as an important issue by today's warehouse companies. This is due to the rapidly changing preferences of the customers that use their services. Furthermore, logistics within manufacturing sites such as warehouses and shop floors are rationalized by RFID, so that material movements can be visualized and tracked in real time [60]. This gives rise to the product-intelligence concept. Motivated by manufacturing and other logistics operations, Giannikas et al. [61] argued for the potential application of product intelligence in warehouse operations. It was illustrated as an approach that can help warehouse companies that manage a high variety of products and a large number of individual customers address the related issues. This is particularly true in third-party logistics warehouses, where operations are required to become more customer-oriented and more responsive to requests with different characteristics and needs, in an efficient manner. Finally, they discussed the opportunities of such an approach using a real example of a third-party logistics warehouse company and presented the benefits it can bring to their warehouse management systems.
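The basic operation behind the temporal-convolution HAR networks discussed above can be illustrated with a dependency-free Python sketch: a 1-D filter slides over each IMU channel separately, and the per-channel outputs are then joined. This is a deliberate simplification of the cited architecture; the function names and the trivial late-fusion step are assumptions for the sketch.

```python
def temporal_conv(signal, kernel):
    """Valid-mode 1-D temporal convolution over one sensor channel
    (no kernel flip, i.e. cross-correlation, as is usual in CNNs)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

def per_imu_features(imu_channels, kernel):
    """Apply the same temporal filter to each channel separately, then join
    the per-channel features, mimicking a branch-then-merge design."""
    branches = [temporal_conv(ch, kernel) for ch in imu_channels]
    return [sum(vals) for vals in zip(*branches)]  # trivial late fusion
```

In the actual networks the filters are learned, several convolution layers are stacked per branch, and the merge feeds a classifier, but the sliding temporal filter shown here is the core building block.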


6 A Proposed CPS Framework for Warehouse Operations Handling

In a supply chain, the warehousing function is critical, as it acts as a node linking the material flows between supplier and customer. In today's competitive market environment, companies are continuously forced to improve their warehousing operations. Many companies have also customized their value proposition to increase their customer service levels, which has led to changes in the role of warehouses. The use of information systems for warehouse management has been studied extensively in the literature. The complexity of warehouse management is indicated, among other factors, by the amount and heterogeneity of handled products, the extent of overlap between them, the amount and type of technology, and the characteristics of the associated processes. As complexity increases, it becomes necessary to use Warehouse Management Systems (WMS) to handle warehouse resources and to monitor warehouse operations. Warehouses with a high number of processed order lines and stock-keeping units are best supported by customized software. It is laborious to update the day-to-day stock status, forklift areas and stock-keeping units (SKUs) in real time using bar-code-based or manual warehouse management systems [62].

6.1 The Cyber-Physical Warehouse Management System (CPWMS) Environment

Innovations that occur within Industry 4.0 create new technologies that facilitate system management. A Cyber-Physical System (CPS) can track and create a virtual copy of the actual process, which can be used to monitor process performance [63]. In addition, the CPS allows all components within the system to communicate with one another, i.e. the physical components are virtually connected, allowing for cost savings in parallel with increasing efficiency [39]. Warehouses based on this technology include RFID sensors, Bluetooth technology, Wi-Fi access points, cameras and robots that are coordinated in the system to perform a defined task. The role of humans in such a warehouse is primarily related to monitoring and reprogramming the system if necessary [45]. Moreover, CPS technology enables inter-machine cooperation between robotic systems, thus reducing the need for human work. Such robotic systems can also recognize human movements, enabling employees to use robotic help in carrying out activities [64]. By replacing human labor with automated and robotized systems, or by implementing such systems as aids to humans, the efficiency of the warehouse system increases. Activities that are dangerous for humans can be robotized, thus reducing the risk of injury and accidents [63].


The combination of human and artificial resources has not been part of mainstream automation practice, where (1) robots and humans are generally kept away from each other, and (2) humans must adhere to work procedures as rigid as the rest of the automated warehousing environment [65]. Indeed, on the one hand, robots exhibit high precision and repeatability, can handle heavy loads and operate without performance deterioration even in difficult or dangerous environments. However, robot control systems quickly reach their limits in recognizing and handling unexpected situations, as reflected by the relatively rigid plans and robot programs widespread in today's automated systems [65]. Humans, on the other hand, tackle unexpected situations better, are aware of a much larger part of the environment than formally declared and show more dexterity in complex or sensitive tasks. Humans, however, are more prone to error, stress or fatigue [66], and their employment is subject to strict health and safety regulations. For this reason, human activity recognition is an important issue, among others, when building a CPS for warehouse operation [45, 53, 67, 68]. To cope with all these challenges and issues, we propose a Cyber-Physical Warehouse Management System (CPWMS) through which a smart autonomous warehouse comes into being. It is indeed a WMS, but with an added layer of intelligence. The system will control the movement and storage of materials within a warehouse. It will be able to collect, provide and analyze, in real time, information about each product: who the supplier is, where it is located in the warehouse, what its destination is and when it should be moved, thanks to cloud computing and big data technologies. The CPWMS will be able to plan resources and activities to synchronize the flow of goods in the warehouse. It will be able to track each resource (operators, forklifts, etc.) in the warehouse thanks to connected devices (wearables, etc.)
and the Internet of Things. The system will also offer additional functionality, such as transportation, dock door and value-added logistics planning, which helps to optimize warehouse operations as a whole. A typical smart warehouse [46] is shown in Fig. 6. In addition, imagine a future where IT systems are not created by computer analysts speaking the languages of Java and C, but instead by business managers speaking the languages of supply chain, customer service or product development. It is a future made possible by Service-Oriented Architecture (SOA), an evolution in the way enterprise IT systems can be built [69]. With an SOA, business applications are constructed of independent, reusable, interoperable services that can be reconfigured without vast amounts of technical labor. The fundamental building blocks of an SOA are web services. An SOA is a collection of web services brought together to accomplish business tasks (checking product availability or generating an invoice), and it is a way of designing and building a set of Information Technology applications where application components and web services make their functions available on the same access channel for mutual use [70]. Because the services can interact with systems outside a single organization, they give companies the ability to collaborate with customers and suppliers [69]. In the light of these concepts, our CPWMS will be supported by the following:


Fig. 6 A typical Smart warehouse [46]

• The Internet of Things (IoT) refers to the connection of any device to an intelligent network through the use of the Internet. IoT has affected the way in which cyber-physical systems (CPS) interact and are controlled, managed and monitored. It comprises communicating smart systems using IP addresses, enabling each physical object to be equipped with a unique IP address [16].
• The Internet of Services (IoS) comprises new service paradigms such as those provided by the service-oriented architecture (SOA) or REST technology. It emphasizes reusing and composing existing resources and services [70].
• The Internet of Data (IoD) makes it possible to store and transfer mass data appropriately, and to provide new and innovative analysis methods for interpreting mass data.

In this new industrial model, work environments will be fully automated thanks to the appropriate interaction between IoT and CPS, while data will be processed online thanks to the use of cloud computing. This concept, with its innovative level of organization and control of the entire value-added system, which considers the complete cycle of products with the ultimate goal of guaranteeing the supply of consumer needs, is made up of many areas labelled 'smart' and several Industry 4.0 technologies, as illustrated in Fig. 7.


Fig. 7 The CPWMS environment: a smart warehouse CPS surrounded by cloud computing, big data technologies/analytics, smart vehicles/robots, smart products and augmented reality

6.2 The Cyber-Physical Warehouse Management System Architecture

Before discussing any CPS warehouse architecture, it is necessary to evaluate and measure the degree of digitalization of the warehouse concerned. In this context, Tao and Zhang [5] defined four stages to illustrate the process, as shown in Fig. 8. In the first stage, due to the lack of effective information means, warehouses depend on physical space completely, leading to low efficiency, accuracy and transparency. Then, with the development of information technologies, computer-aided systems begin to be applied in warehousing activities, but as the interaction methods are weak, virtual space is out of step with the physical one. At the third stage, benefiting

Fig. 8 Stages of physical and virtual worlds convergence [5]

Fig. 9 A CPWMS architecture. The physical layer comprises a sensor and capture tier (sensor units performing periodic/continuous data retrieval, preliminary data pre-processing, and data exchange) and an action and execute tier (actuator units receiving and performing commands). The cyber layer comprises a data processing tier (heterogeneous data processing, data storage, generation of control commands), a service and decision-making tier (APIs, task analysis, task scheduling, decision-making, services interaction) and a client tier (human-machine interface for sending orders and service requests). A security assessment tier (device security, data security, access security; locating IT vulnerabilities and risks) spans the other tiers

from communication technologies, sensors, IoT, etc., interaction between the two spaces exists. In the future, with the continuous development of new information technologies, the virtual space will gradually play an equally important role as the physical one, and the two-way connection will be enhanced, supporting further convergence. In an era where virtual and physical spaces begin to further interact and converge, and after investigating the cyber-physical systems available for warehouses, we propose a six-tier CPS architecture, shown in Fig. 9. Sensor and capture tier: The sensor tier is the data source for the tiers above it. At this level, no advanced processing or analysis capabilities are involved. Instead, its main tasks are to control the sensors connected to it, to query the values of one or more attributes periodically and/or continuously, to exchange messages with the data processing tier, and to perform preliminary pre-processing. The main function of this tier is environment awareness, achieved mainly by sensors; a wireless sensor network (WSN) is one of its basic enabling techniques. Data processing tier: A huge amount of data is not enough by itself; what really makes it useful is the way it can improve decision-making. This is where smart connections become relevant: the information must be meaningful and displayed at the right time in the right place. Internet of Things objects generate data; the cloud provides simplified access to these data from anywhere, while big data analytics extracts value from them. In this perspective, this tier provides resources for the execution of control algorithms. It consists of the computational devices and storage


devices, providing heterogeneous data processing such as normalization, noise reduction, data storage and similar functions. It takes requests from the sensors in the sensor and capture tier and generates new control commands from these inputs, which it sends back to the action and execute tier. Service and decision-making tier: This tier provides the typical functions of the whole system, including the APIs exposed to clients, task scheduling, decision-making, task analysis and so on. In this tier, a number of services are deployed and interact with each other. Action and execute tier: At this level, a physical action is executed by means of actuators. The actuator may be any kind of physical device. Orders and commands are received and executed. Client tier: The client tier serves as a human-machine interface. It can be a tablet, a computer, a smartphone or a web browser. Security assessment tier: The security assessment tier is an explicit study to locate IT security vulnerabilities and risks along the other tiers. It covers device security, data security and access security. The overall architecture can be divided into a number of small independent services that respond to specific functionalities. Each tier operates in its own process and communicates with the other tiers via lightweight mechanisms (the service tier or any other middleware system). Since warehouses are high-load systems, this architecture is well suited to them. The architecture has the advantage of fully preserving the nominal performance of a warehouse management system, because the nominal handling and control loops are kept within the system. This architecture can be considered as the information system (IS) of a warehouse, which is most often a multi-level architecture in which each level has a well-defined role. The highest level in the hierarchy is the Warehouse Management System (WMS).
This system manages the more commercial aspects, such as the processing of customer orders and the allocation of stock. It generates documents such as load slips, delivery notes and invoices, based on order preparation and shipping information. It is not necessarily connected in real time to the handling equipment, or even not connected at all. This is where cyber-physical interaction comes in. It is able to coordinate the different material handling equipment systems. It directs real-time data exchanges with the handling equipment and also provides a unified human-machine interface (HMI) for monitoring, control and diagnostics. As a keystone in the operational management of handling equipment, the Cyber-Physical Warehouse Management System is the essential link between a simple WMS and the equipment control systems. It receives information from the sensor tier and coordinates the various control systems (conveyors, label printers, etc.) in real time, by means of the data processing tier, to carry out the daily workload. At each decision point, the overall system determines the most efficient routing and transmits the instructions to the action and execute tier to perform the expected action. At the lowest level, we find the sensor tier. It consists of input/output (I/O) devices, such as RFID tags or QR codes embedded into products (smart products) and pallets, which allow the physical action of handling equipment and the total traceability of


packages, according to the given directives. Concretely, a sensor unit takes care of only one machine (robot palletizer, sorter, …) or one part of the system (conveyor section, sorting stations, …). Cyber-physical systems consist of objects equipped with electronics and software. They are connected to each other or via the Internet to form a simple network system. This includes the sensors, and the components that move or control a mechanism or system, also called actuators, which link the CPS to the outside world. Sensors allow systems to acquire data. They are embedded in products, pallets or any equipment that needs to be tracked. By choosing cloud computing, the necessary IT infrastructure becomes available quickly and efficiently, without large investments. It also makes the necessary IT resources available and allows capacity to be scaled up or down easily according to activity peaks. The cloud helps to integrate data from numerous sources and uses big data analytics to process them. These data are then made available to the various services connected to the network, which use the actuators to directly affect actions taken in the real world. This leads to the fusion of the physical world (the warehouse with all its operations) and the virtual space (with all the data capture and data processing). Ultimately, our proposed warehouse architecture mirrors the organizational structure of human activities. Managers determine the workload to be accomplished daily, then supervisors manage operators' activities in real time. Each operator performs a specific task based on their expertise (preparation, pick-to-light, conveying, …). As soon as an operator has completed a task, the supervisor assigns a new one based on the current workload. Once the orders are ready, the supervisor reports to managers on the progress of the preparations as well as all other relevant information.
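The tier responsibilities described above (Fig. 9) can be illustrated with a minimal message-passing sketch. This is an illustration only, under our own naming assumptions: every class, method, value and threshold below is hypothetical and is not part of the proposed CPWMS specification.

```python
# Minimal sketch of the six-tier CPWMS data flow (hypothetical names,
# not a reference implementation of the architecture in Fig. 9).

class SensorTier:
    """Queries sensors periodically and performs preliminary pre-processing."""
    def __init__(self, sensors):
        self.sensors = sensors  # e.g. {"pallet_42_weight": lambda: 812.7}

    def capture(self):
        raw = {name: read() for name, read in self.sensors.items()}
        # preliminary pre-processing: drop obviously invalid readings
        return {k: v for k, v in raw.items() if v is not None}

class DataProcessingTier:
    """Normalizes readings, stores them, and forwards them for decisions."""
    def __init__(self):
        self.store = []  # stands in for the tier's data-storage function

    def process(self, readings):
        normalized = {k: round(v, 1) for k, v in readings.items()}
        self.store.append(normalized)
        return normalized

class ServiceTier:
    """Task analysis, scheduling and decision-making."""
    def decide(self, data):
        # toy decision rule: overweight pallets are routed to inspection
        return [f"route {k} to inspection" for k, v in data.items() if v > 1000]

class ActionTier:
    """Receives commands and drives the actuators."""
    def execute(self, commands):
        return [f"executed: {c}" for c in commands]

class SecurityTier:
    """Cross-cutting check applied along the other tiers."""
    def vet(self, commands):
        return [c for c in commands if "drop table" not in c]  # naive filter

# One pass through the loop: sense -> process -> decide -> vet -> act
sensor = SensorTier({"pallet_42_weight": lambda: 1203.46,
                     "pallet_43_weight": lambda: 812.71})
data = DataProcessingTier().process(sensor.capture())
commands = SecurityTier().vet(ServiceTier().decide(data))
print(ActionTier().execute(commands))
# -> ['executed: route pallet_42_weight to inspection']
```

The sketch only shows the direction of the flows; in the proposed architecture each tier would run in its own process and communicate through the service tier or another middleware system.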
IoT would be of great use in a warehouse managed with our proposed architecture. Indeed, wireless readers capture the information transmitted by each pallet as it arrives through the inbound doors. This information might include data on the item, such as volume and dimensions, which could be gathered and sent to the WMS for handling. This capability eliminates the time-consuming task of manually counting and volume-checking pallets. Cameras attached to the portals can moreover be used for damage detection, by checking pallets for defects. Beyond the products stored in a warehouse, IoT can drive optimal resource utilization. By connecting equipment and vehicles, IoT enables warehouse supervisors to monitor all resources in real time. Supervisors can be alerted when a resource is being overused or when an idle resource should be sent to other assignments. For example, sensors could be deployed to track how frequently resources such as conveyor belts are used or idle. Applying big data analytics (BDA) techniques to these captured data could identify optimal capacity rates and assignments for the resources. In order to respond to the growing complexity of computing systems, due in particular to the rapid and permanent progress of information technologies, new paradigms and architectural solutions based on self-adaptive, self-organized structures must be developed. These must allow, on the one hand, the provision of sufficient computing


power to meet severe time constraints (real-time processing) and, on the other hand, the flexibility and adaptability needed to respond to processing changes or unforeseen failures in a changing operating environment.
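As a toy illustration of the inbound capture and resource analytics described above (all field names, tag values and event formats are our own assumptions, not part of any real WMS or RFID API), RFID reads at the inbound doors can be mapped to WMS records, and a simple count over equipment events yields a crude utilization rate that a BDA pipeline could refine:

```python
# Illustrative sketch with hypothetical field names: RFID reads become WMS
# records, and "busy"/"idle" samples give a crude per-resource utilization.
from collections import Counter

def pallet_record(rfid_read):
    """Map a raw RFID read into a record the WMS could ingest."""
    return {
        "pallet_id": rfid_read["tag"],
        "volume_m3": rfid_read["l"] * rfid_read["w"] * rfid_read["h"],
        "door": rfid_read["door"],
    }

reads = [
    {"tag": "P-001", "l": 1.2, "w": 0.8, "h": 1.5, "door": "IN-1"},
    {"tag": "P-002", "l": 1.2, "w": 0.8, "h": 1.0, "door": "IN-2"},
]
records = [pallet_record(r) for r in reads]  # replaces manual volume checks

# Resource-usage samples from connected equipment; the busy/total ratio per
# resource is the kind of signal BDA could turn into capacity decisions.
events = ["conveyor-A:busy", "conveyor-A:busy", "conveyor-A:idle",
          "conveyor-B:idle", "conveyor-B:idle"]
busy = Counter(e.split(":")[0] for e in events if e.endswith("busy"))
total = Counter(e.split(":")[0] for e in events)
utilization = {r: busy[r] / total[r] for r in total}

print(records[0]["volume_m3"], utilization)
```

Here a supervisor could be alerted that conveyor-B is idle and available for reassignment, while conveyor-A carries most of the load.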

7 Conclusion and Future Research Directions

As a complex, real-time, embedded and integrated system, a CPS integrates various devices equipped with sensing, identification, processing, communication and networking capabilities. It has provided an outstanding foundation for building advanced industrial systems and applications by integrating innovative functionalities through the Internet of Things (IoT) to connect physical operations with computing and communication infrastructures [71]. The ongoing transformation of the supply chain area requires an evolution in the organization of the logistics sector that allows better pooling of logistics resources by interconnecting several logistics actors. This enables better performance in terms of effectiveness, efficiency and quality of service. In this paper, we introduced the concept of the cyber-physical system as the revolutionary Industry 4.0 technology, along with its contribution to the remarkable changes in supply chain processes. Next, we focused the study on warehouse functions and operations and on the opportunities CPS can bring. After investigating the CPS architectures available in the literature, we proposed a Cyber-Physical Warehouse Management System environment and architecture. Building a smart warehouse requires the integration of different technologies, each responsible for part of the work. This is why we provided a system environment in which this CPWMS could evolve and accomplish its tasks. The warehouse should be equipped with smart products, smart vehicles and robots, cloud computing to store the data generated by connected devices, and big data analytics to process them and provide decision-making tools. Augmented reality could also be of great use, as it assists operators in accomplishing their daily tasks efficiently. Our CPWMS is a six-tier architecture.
It comprises the sensing tier, where data are perceived from sensors; the data tier, where data are transferred for processing purposes; the service tier; the execute tier, where actuators are asked to perform commands; the client tier, serving as the human-machine interface; and finally the security tier, where IT vulnerabilities and risks are identified. Through this architecture, we presented a smart warehouse design in which the data flows are made explicit. As a future research direction, we propose to develop this architecture further and to build a framework in which the activities of a warehouse are illustrated and both physical and data flows are highlighted, from the receipt of the client's order to its fulfilment. Furthermore, despite the progress in CPS research, companies are still unable to fully adopt it as an underlying technology. We propose to study the main obstacles


faced by companies when adopting CPS, and to formulate a paradigm shift from centralized control systems to decentralized, event-triggered control systems that operate at multiple scales and are able to reorganize and reconfigure themselves. Moreover, different CPS applications might need to collaborate to achieve a specific mission. For instance, a mobile-health application might need to collaborate with the transportation system to get an ambulance as fast as possible when a patient's biomedical sensors reveal an emergency. In such scenarios, data analytics becomes a challenging task, as it requires combining the analysis fragments from different CPS applications to reach broader conclusions and decisions. Industrial process control systems are widely used to provide autonomous control over business processes through control loops. As a CPS is usually defined as the integration of computation with physical processes, the most direct application of CPS in Industry 4.0 scenarios is enhanced process control [19]. As supply chains are a set of business processes, they are ideally placed to integrate CPS for monitoring and instant decision-making. CPS can provide broad control over complex and large industrial processes through a heterogeneous network architecture of sensors, actuators and processors, because it can integrate all the mechanisms needed to reach and maintain a synchronized state [20]. A typical supply chain contains four business processes: procurement; manufacturing, quality control and packaging; warehousing; and distribution/return. As a future research direction, we therefore propose to study how, in the era of Industry 4.0 and using IoT technologies, big data can transform CPS data into useful knowledge to fulfil warehouse processes in an optimal and flexible way.

References 1. El Kadiri, S., Grabot, B., Thoben, K.D., Hribernik, K., Emmanouilidis, C., von Cieminski, G., Kiritsis, D.: Current trends on ICT technologies for enterprise information systems. Comput. Ind. 79, 14–33 (2016). https://doi.org/10.1016/j.compind.2015.06.008 2. Kohler, D., Weisz, J.-D.: Industrie 4.0: quelles stratégies numériques? La numérisation de l’industrie dans les entreprises du Mittelstand allemand. Kohler Consulting & Coaching (2015) 3. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios. In: 2016 49th Hawaii International Conference on System Sciences (HICSS), pp. 3928–3937 (2016) 4. Kagermann, H.: Recommendations for Implementing the Strategic Initiative INDUSTRIE 4.0: Securing the Future of German Manufacturing Industry; Final Report of the Industrie 4.0 Working Group. Forschungsunion (2013) 5. Tao, F., Zhang, M.: Digital twin shop-floor: a new shop-floor paradigm towards smart manufacturing. IEEE Access 5, 20418–20427 (2017). https://doi.org/10.1109/ACCESS.2017.275 6069 6. Noyer, J.-M.: L’Internet des Objets, l’Internet of “Everything”: quelques remarques sur l’intensification du plissement numérique du monde. IdO 17 (2017). https://doi.org/10.21494/ ISTE.OP.2017.0134 7. Monostori, L.: Cyber-physical production systems: roots, expectations and R&D challenges. Procedia CIRP 17, 9–13 (2014). https://doi.org/10.1016/j.procir.2014.03.115


8. Landherr, M., Schneider, U., Bauernhansl, T.: The application center Industrie 4.0—industry-driven manufacturing, research and development. Procedia CIRP 57, 26–31 (2016). https://doi.org/10.1016/j.procir.2016.11.006 9. Cardin, O.: Contribution à la conception, l'évaluation et l'implémentation de systèmes de production cyber-physiques. Université de Nantes (2016) 10. Rajkumar, R.: A cyber-physical future. Proc. IEEE 100, 1309–1312 (2012). https://doi.org/10.1109/JPROC.2012.2189915 11. Damm, W., Achatz, R., Beetz, K., Broy, M., Daembkes, H., Grimm, K., Liggesmeyer, P.: Nationale roadmap embedded systems. In: Broy, M. (ed.) Cyber-Physical Systems, pp. 67–136. Springer, Berlin, Heidelberg (2010) 12. Broy, M.: Cyber-physical systems—Wissenschaftliche Herausforderungen bei der Entwicklung. In: Broy, M. (ed.) Cyber-Physical Systems, pp. 17–31. Springer, Berlin, Heidelberg (2010) 13. Wittenberg, C.: Human-CPS interaction—requirements and human-machine interaction methods for the Industry 4.0. IFAC-PapersOnLine 49, 420–425 (2016). https://doi.org/10.1016/j.ifacol.2016.10.602 14. Lee, J., Bagheri, B., Kao, H.A.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015). https://doi.org/10.1016/j.mfglet.2014.12.001 15. Kos, A., Tomažič, S., Salom, J., Trifunovic, N., Valero, M., Milutinovic, V.: New benchmarking methodology and programming model for big data processing. Int. J. Distrib. Sens. Netw. 11, 271752 (2015). https://doi.org/10.1155/2015/271752 16. Wang, L., Wang, G.: Big data in cyber-physical systems, digital manufacturing and Industry 4.0. Int. J. Eng. Manuf. 6, 1–8 (2016). https://doi.org/10.5815/ijem.2016.04.01 17. Lee, J., Ardakani, H.D., Yang, S., Bagheri, B.: Industrial big data analytics and cyber-physical systems for future maintenance & service innovation. Procedia CIRP 38, 3–7 (2015). https://doi.org/10.1016/j.procir.2015.08.026 18.
Kao, H.A., Jin, W., Siegel, D., Lee, J.: A cyber physical interface for automation systems— methodology and examples. Machines 3, 93–106 (2015). https://doi.org/10.3390/machines3 020093 19. Sanchez, B.B., Alcarria, R., Sanchez-de-Rivera, D.: Enhancing process control in Industry 4.0 scenarios using cyber-physical systems. J. Wirel. Mob. Netw. Ubiquitous Comput. Depend. Appl. 7, 41–64 (2016) 20. Chen, H.: Applications of cyber-physical system: a literature review. J. Ind. Integr. Manag. 02, 1750012 (2017). https://doi.org/10.1142/S2424862217500129 21. Frazzon, E.M., Silva, L.S., Hurtado, P.A.: Synchronizing and improving supply chains through the application of cyber-physical systems. IFAC-PapersOnLine 48, 2059–2064 (2015). https:// doi.org/10.1016/j.ifacol.2015.06.392 22. Klötzer, C., Pflaum, A.: Cyber-Physical Systems (CPS) in Supply Chain Management : A definitional approach (2016) 23. Frazzon, E.M., Hartmann, J., Makuschewitz, T., Scholz-Reiter, B.: Towards socio-cyberphysical systems in production networks. Procedia CIRP 7, 49–54 (2013). https://doi.org/ 10.1016/j.procir.2013.05.009 24. Klötzer, C., Pflaum, A.: Cyber-physical systems as the technical foundation for problem solutions in manufacturing, logistics and supply chain management. In: 2015 5th International Conference on the Internet of Things (IOT), pp. 12–19 (2015) 25. Tu, M., Lim, M.K., Yang, M.F.: IoT-based production logistics and supply chain system—part 2. Ind. Manag. Data Syst. (2018). https://doi.org/10.1108/IMDS-11-2016-0504 26. Cardin, O., Leitão, P., Thomas, A.: Cyber-physical systems for future industrial systems 3 (2016) 27. Rajkumar, R., Lee, I., Sha, L., Stankovic, J.: Cyber-physical systems: the next computing revolution. In: Design Automation Conference, pp. 731–736 (2010) 28. Mosterman, P.J., Zander, J.: Industry 4.0 as a cyber-physical system study. Softw. Syst. Model. 15, 17–29 (2016). https://doi.org/10.1007/s10270-015-0493-x


29. Nagy, J., Oláh, J., Erdei, E., Máté, D., Popp, J.: The role and impact of Industry 4.0 and the internet of things on the business strategy of the value chain—the case of Hungary. Sustainability 10, 1–25 (2018) 30. Oks, S.J., Fritzsche, A., Möslein, K.M.: An application map for industrial cyber-physical systems. In: Jeschke, S., Brecher, C., Song, H., Rawat, D.B. (eds.) Industrial internet of things: cybermanufacturing systems, pp. 21–46. Springer International Publishing, Cham (2017) 31. Liu, C., Jiang, P.: A cyber-physical system architecture in shop floor for intelligent manufacturing. Procedia CIRP 56, 372–377 (2016). https://doi.org/10.1016/j.procir.2016.10.059 32. Schneider, M., Lucke, D., Adolf, T.: A cyber-physical failure management system for smart factories. Procedia CIRP 81, 300–305 (2019). https://doi.org/10.1016/j.procir.2019.03.052 33. Jeon, J., Kang, S., Chun, I.: CPS-based Model-Driven Approach to Smart Manufacturing Systems. IARIA, pp. 133–135 (2016) 34. Zhang, Y., Guo, Z., Lv, J., Liu, Y.: A framework for smart production-logistics systems based on CPS and industrial IoT. IEEE Trans. Ind. Inf. 14, 4019–4032 (2018). https://doi.org/10. 1109/TII.2018.2845683 35. Yan, J., Zhang, M., Fu, Z.: An intralogistics-oriented cyber-physical system for workshop in the context of Industry 4.0. Procedia Manuf. 35, 1178–1183 (2019). https://doi.org/10.1016/j. promfg.2019.06.074 36. Dworschak, B., Zaiser, H.: Competences for cyber-physical systems in manufacturing—first findings and scenarios. Procedia CIRP 25, 345–350 (2014). https://doi.org/10.1016/j.procir. 2014.10.048 37. Wang, L., Ji, W.: Cloud enabled CPS and big data in manufacturing. In: Ni, J., Majstorovic, V.D., Djurdjanovic, D. (eds.) Proceedings of 3rd International Conference on the Industry 4.0 Model for Advanced Manufacturing, pp. 265–292. Springer International Publishing (2018) 38. Wright, P.: Cyber-physical product manufacturing. Manuf. Lett. 2, 49–53 (2014). https://doi. 
org/10.1016/j.mfglet.2013.10.001 39. Kim, S., Park, S.: CPS (Cyber Physical System) based manufacturing system optimization. Procedia Comput. Sci. 122, 518–524 (2017). https://doi.org/10.1016/j.procs.2017.11.401 40. Monostori, L., Kádár, B., Bauernhansl, T., Kondoh, S., Kumara, S., Reinhart, G., Sauer, O., Schuh, G., Sihn, W., Ueda, K.: Cyber-physical systems in manufacturing. CIRP Ann. 65, 621–641 (2016). https://doi.org/10.1016/j.cirp.2016.06.005 41. Taisch, M.: Role of CPS in Manufacturing 18 (2015) 42. Lugaresi, G.: Simulation Modeling and Production Control in CPS-Based Manufacturing Systems. ACM, New York, NY (2017) 43. Geissbauer, R., Vedso, J., Schrauf, S.: Industry 4.0: building the digital enterprise. PwC (2016) 44. Jabbar, S., Khan, M., Silva, B.N., Han, K.: A REST-based industrial web of things’ framework for smart warehousing. J. Supercomput. 74, 4419–4433 (2018). https://doi.org/10.1007/s11 227-016-1937-y 45. Liu, X., Cao, J., Yang, Y., Jiang, S.: CPS-based smart warehouse for Industry 4.0: a survey of the underlying technologies. Computers 7(13) (2018). https://doi.org/10.3390/computers701 0013 46. Macaulay, J., Chung, G., Buckalew, L.: Internet of Things in Logistics. DHL Customer Solutions & Innovation (2015) 47. Bartholdi, J.J., Hackman, S.T.: Warehouse & Distribution Science. The Supply Chain and Logistics Institute, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0205 USA (2014) 48. Gong, Y., Koster, R.D.: A polling-based dynamic order picking system for online retailers. IIE Trans. 40, 1070–1082 (2008). https://doi.org/10.1080/07408170802167670 49. Ramaa, A., Subramanya, K.N., Rangaswamy, T.M.: Impact of warehouse management system in a supply chain. Int. J. Comput. Appl. 54 (2012) 50. Ghiani, G., Laporte, G., Musmanno, R.: Introduction to Logistics Systems Management. Wiley (2013) 51. Frazelle, E.H.: World-Class Warehousing and Material Handling. McGraw Hill Professional (2001)


52. Poon, T.C., Choy, K.L., Chow, H.K.H., Lau, H.C.W., Chan, F.T.S., Ho, K.C.: A RFID case-based logistics resource management system for managing order-picking operations in warehouses. Expert Syst. Appl. 36, 8277–8301 (2009). https://doi.org/10.1016/j.eswa.2008.10.011 53. Grzeszick, R., Lenk, J.M., Rueda, F.M., Fink, G.A., Feldhorst, S., ten Hompel, M.: Deep neural network based human activity recognition for the order picking process. In: Proceedings of the 4th international Workshop on Sensor-based Activity Recognition and Interaction—iWOAR ’17, pp. 1–6. ACM Press, Rostock, Germany (2017) 54. Xie, L., Thieme, N., Krenzler, R.K., Li, H.: Efficient order picking methods in robotic mobile fulfillment systems (2019). arXiv:1902.03092. https://doi.org/10.13140/RG.2.2.137748.30082 55. Merkuryev, Y., Burinskiene, A., Merkuryeva, G.: Warehouse order picking process. In: Merkuryev, Y., Merkuryeva, G., Piera, M.À., Guasch, A. (eds.) Simulation-Based Case Studies in Logistics, pp. 147–165. Springer London, London (2009) 56. Ottjes, J.A., Hoogenes, E.: Order picking and traffic simulation in distribution centres. Int. J. Phys. Distrib. Mater. Manag. 18, 14–21 (1988). https://doi.org/10.1108/eb014696 57. Berttram, P., Schrauf, S.: Industry 4.0: how digitization makes the supply chain more efficient, agile, and customer-focused. https://www.strategyand.pwc.com/reports/digitization-more-eff icient (2018). Accessed 10 May 2018 58. Basile, F., Chiacchio, P., Coppola, J.: A cyber-physical view of automated warehouse systems. In: 2016 IEEE International Conference on Automation Science and Engineering (CASE) (2016), pp. 407–412 59. Bartodziej, C.J.: The concept Industry 4.0. In: Bartodziej, C.J. (ed.) The Concept Industry 4.0 : An Empirical Analysis of Technologies and Applications in Production Logistics, pp. 27–50. Springer Fachmedien Wiesbaden, Wiesbaden (2017) 60. 
Dai, Q., Zhong, R., Huang, G.Q., Qu, T., Zhang, T., Luo, T.Y.: Radio frequency identification-enabled real-time manufacturing execution system: a case study in an automotive part manufacturer. Int. J. Comput. Integr. Manuf. 25, 51–65 (2012). https://doi.org/10.1080/0951192X.2011.562546 61. Giannikas, V., Lu, W., McFarlane, D., Hyde, J.: Product intelligence in warehouse management: a case study. In: Mařík, V., Lastra, J.L.M., Skobelev, P. (eds.) Industrial Applications of Holonic and Multi-Agent Systems, pp. 224–235. Springer, Berlin, Heidelberg (2013) 62. Faber, N., de Koster, R. (Marinus) B.M., van de Velde, S.L.: Linking warehouse complexity to warehouse planning and control structure: an exploratory study of the use of warehouse management information systems. Int. J. Phys. Distrib. Logist. Manag. 32, 381–395 (2002). https://doi.org/10.1108/09600030210434161 63. Buntak, K., Kovačić, M., Mutavdžija, M.: Internet of things and smart warehouses as the future of logistics. Tehnički glasnik 13, 248–253 (2019). https://doi.org/10.31803/tg-20190215200430 64. Ding, W.: Study of smart warehouse management system based on the IOT. In: Du, Z. (ed.) Intelligence Computation and Evolutionary Computation, pp. 203–207. Springer, Berlin, Heidelberg (2013) 65. Wang, X.V., Kemény, Z., Váncza, J., Wang, L.: Human–robot collaborative assembly in cyber-physical production: classification framework and implementation. CIRP Ann. 66, 5–8 (2017). https://doi.org/10.1016/j.cirp.2017.04.101 66. Fong, T., Thorpe, C., Baur, C.: Collaboration, dialogue, human-robot interaction. In: Jarvis, R.A., Zelinsky, A. (eds.) Robotics Research, pp. 255–266. Springer, Berlin, Heidelberg (2003) 67. Zheng, X., Wang, M., Ordieres-Meré, J.: Comparison of data preprocessing approaches for applying deep learning to human activity recognition in the context of Industry 4.0. Sensors (Basel) 18 (2018). https://doi.org/10.3390/s18072146 68.
Valle, A., Jacobsen, D.P.: Optimización de una red logística de distribución global e impacto de la Industria 4.0 en centros de almacenamiento 16 (2019) 69. Rippert, D.J.: The building blocks of a simpler future are in place. Financial Times (2006) 70. Reis, J.Z., Gonçalves, R.F.: The role of Internet of Services (IoS) on Industry 4.0 through the Service Oriented Architecture (SOA). In: Moon, I., Lee, G.M., Park, J., Kiritsis, D., von Cieminski, G. (eds.) Advances in Production Management Systems. Smart Manufacturing for Industry 4.0, pp. 20–26. Springer International Publishing, Cham (2018)


71. Lanting, C.J.M., Lionetto, A.: Smart systems and cyber-physical systems paradigms in an IoT and Industry/Industrie 4.0 context (2015)

PLM and Smart Technologies for Product and Supply Chain Design Oulfa Labbi and Abdeslam Ahmadi

Abstract The emergence of advanced smart technologies (e.g. Internet of Things, big data analytics, and artificial intelligence techniques) has created a number of innovations in handling supply chain activities and has thus accelerated the transformation of the traditional supply chain towards a smart supply chain. Many existing studies focus only on applying smart technologies to facilitate smart manufacturing; few address smart technologies across the whole product lifecycle, namely the design and recovery stages. This paper focuses on how smart technologies combined with product lifecycle management (PLM) contribute to the sharing and integration of data across the different lifecycle stages of the product, and on how this can support smart product and supply chain design. Moreover, the paper investigates the use of smart technologies especially during the product design phase, where the supply chain configuration is conducted. For this purpose, a framework is addressed and a UML model for smart product and supply chain design is proposed. Keywords Smart technologies · PLM · UML · Product design phase · Smart supply chain design

1 Introduction

Nowadays, companies are seeking to explore new sources of competitiveness through the optimization of their supply chains and through strong coordination between their partners.

O. Labbi (B) Department of Industrial Engineering, Ecole Nationale des Sciences Appliquées, Sidi Mohamed Ben Abdellah University, Avenue My Abdallah Km 5, Route d'Imouzzer, BP 72, 30000 Fes, Morocco e-mail: [email protected]

A. Ahmadi Department of Mathematics and Informatics, Ecole Nationale Supérieure d'Arts et Métiers, Moulay Ismail University, B.P. 15290 El Mansour, 50500 Meknes, Morocco e-mail: [email protected]

© Springer Nature Switzerland AG 2021 T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_10

Such cooperation requires a massive product information exchange between


all supply chain stakeholders. For this purpose, many advanced manufacturing technologies have been put forward, such as computer-aided design (CAD), manufacturing execution systems (MES), computer-aided manufacturing (CAM), enterprise resource planning (ERP) and, more recently, product lifecycle management (PLM). PLM enables the comparison, evaluation and optimization of the different product requirements, linking production information (specifications, models, performance results, best practices and reviews) to design thanks to the knowledge management it provides [1]. It integrates and makes available all the data collected during all phases of the product lifecycle for all stakeholders across the extended enterprise [2]. In fact, with traditional PLM systems, massive data can be gathered throughout the whole lifecycle and are available with various degrees of complexity, but they remain under-exploited. With the promotion and implementation of advanced smart technologies (e.g. the Internet of Things (IoT), big data analytics (BDA) and artificial intelligence (AI)), however, lifecycle data and knowledge can be more effectively gathered and better exploited, enabling all supply chain components to be perceptible, predictable and optimizable [3–5]. In this context, the aim of this paper is to show how smart technologies combined with PLM contribute to good performance and optimization of the supply chain and its products. In the literature, many authors have addressed the problem of joint product and supply chain design [6, 7]; others have investigated PLM as an approach and as a technology to support simultaneous product and supply chain configuration [8–10]; and, more recently, others have addressed the smart enabling technologies for supply chain management [11–13]. However, product and supply chain design using smart technologies is seldom addressed.
Indeed, most research has concerned the middle of the product lifecycle (MOL), including production, inventory and maintenance, to address issues regarding intelligent manufacturing [14, 15], intelligent inventory [16] and intelligent maintenance [17]; few researchers have performed investigations covering smart enabling technologies across the whole lifecycle, especially the design and recovery phases. Thus, this paper focuses on the product design phase and proposes to combine PLM and smart technologies to conduct a joint, smart product and supply chain design. Indeed, most benefits of collaboration among supply chain partners lie in the design phase of the product lifecycle, since the cost of design changes increases as the design phase ends and the manufacturing phase starts [18]. Therefore, it is important to integrate product and supply chain design decisions at the beginning of the product lifecycle (BOL) [19]. To sum up, this paper proposes to conduct product and supply chain design during the product design phase, drawing on the advantages of implementing smart technologies throughout the whole product lifecycle. For this purpose, a conceptual framework and a UML model are proposed. The rest of the paper is organized as follows: Sect. 2 gives an overview of smart technologies for PLM and a literature review of smart technologies for supply chain management. Section 3 presents the research approach by proposing a conceptual framework for smart product and supply chain design. Section 4 presents

PLM and Smart Technologies for Product and Supply Chain Design


the UML model proposed. Finally, a conclusion with perspectives is presented at the end of the paper.

2 Literature Review

2.1 Smart Technologies for PLM

Product Lifecycle Management is a concept with multiple interpretations. The most common use of PLM concerns managing the information gathered over the product lifecycle for future design and manufacturing needs, as well as for spare parts and maintenance management. However, collecting maintenance, service, and product usage information has not traditionally been a fundamental part of PLM. In this section, we review several research works on the advantages of implementing smart technologies to improve product lifecycle management.

For the BOL phase, designers can exploit the expertise and know-how of all actors involved in the product's lifecycle, and thus improve product design and quality, using smart technologies such as big data analytics (BDA) and cloud-based technologies. For instance, Bohlouli et al. [20] proposed a cloud-based product design knowledge integration framework to integrate knowledge into the collaborative product design procedure for sustainable and innovative product design. BDA for product design was investigated by Tao et al. in their digital twin-driven product design, manufacturing, and service framework [21].

The use of smart technologies in the MOL phase, including manufacturing, maintenance, and distribution, has been addressed in many works. Karkkainen et al. [22] conducted a solution design experiment for implementing a tracking-based inventory management system in temporary storage locations of a project delivery chain. Other works covering smart maintenance, manufacturing, and distribution proposed a complete and always up-to-date report on the status of the product and real-time assistance through the use of artificial intelligence (AI), Internet of Things (IoT) technologies, and radio frequency identification (RFID) systems [23–25].

The end-of-life (EOL) phase is seldom addressed. Jun et al.
[26] used an RFID device as a mobile memory to record and update the real-time degradation status and lifetime data of components in the EOL stage. Jensen and Remmen [27] analyzed how IoT technology contributes to stimulating and implementing high-quality EOL product management strategies.

Finally, works on smart technologies for supporting PLM systems themselves are rare. In this context, Dekhtiar et al. [28] conducted an engineering case study with the objective of designing a software product based on deep learning techniques that could support Product Lifecycle Management (PLM) activities. In their work, deep learning was used for big data applications in PLM to tackle data heterogeneity in terms of type and format, to be as robust to noisy data as possible, and to limit system-redesign costs in


O. Labbi and A. Ahmadi

case of situation changes. For this purpose, they constructed a dataset for the digital mock-up (DMU) which acts as a knowledge base for the deep learning model.

2.2 Smart Technologies for Supply Chain Management

Advanced smart technologies have created a number of innovations in supply chain management and activities. Many researchers have deployed smart enabling technologies to tackle supply chain issues such as the supplier selection problem and supply chain planning, and to conduct smart supply chain functions (e.g., smart manufacturing, smart maintenance, smart inventory). Oh et al. [11] developed a profit-effective and response-efficient tactical supply planning model to find an optimal trade-off between profit and lead time. In their work, tactical supply planning was conducted in a smart manufacturing supply chain context, recognizing the ability of pseudo real-time decision-making constrained by the planning horizon. In other studies, smart technologies were applied to facilitate the implementation of sustainable production and consumption strategies [29, 30]. Zheng and Wu [31] proposed a smart spare parts inventory management system to establish transparency between manufacturers and suppliers and to reduce inventory costs. As far as intelligent maintenance is concerned, the use of IoT provides real-time data over the whole lifecycle to improve maintenance and service decisions [32].

In this paper, smart technologies are used to improve traditional PLM functionalities in order to better design the product and its supply chain. The next section presents the approach adopted to do so.

3 Research Approach

This section explains the approach adopted for conducting a smart joint product and supply chain design. The product and its supply chain design are handled by combining the following:

• Smart technologies integrated into Product Lifecycle Management (PLM) solutions to manage all lifecycle data incorporated into the product design.
• Integration of supply chain constraints into the product design. For instance, constraints of suppliers, transportation, storage conditions, and manufacturing processes are introduced into the digital mock-up (DMU) during the design phase.
• Integration of product constraints into the supply chain design, allowing supply chain partners to consider the specificities of the designed product in their functions and duties.
• Considering the product lifecycle phases as the decision horizon for supply chain optimization and configuration. In fact, based on the application of advanced smart

[Figure 1 shows the framework: PLM data (product development data, production data, and operational support data) feed the smart technologies (BDA, AI, IoT, RFID), which enable the smart supply chain functions across the product lifecycle phases: smart design, smart production, smart inventory, smart distribution and tracking, smart maintenance, smart use, and smart recovery and recycling.]

Fig. 1 PLM and smart technologies for product and supply chain design framework

technologies, the smart supply chain is designed to be responsive and customized, i.e., flexible enough to face product lifecycle changes such as growing or decreasing product demand, the opening and closing of facilities, and so on.
• Focusing on the feedback and sharing of lifecycle data among all stakeholders in the supply chain (clients, suppliers, manufacturers, distributors, recyclers, etc.) to take optimal decisions, since each decision taken in one phase of the lifecycle affects the other phases.

Figure 1 illustrates the PLM and smart technologies framework for smart product and supply chain design. PLM data, smart technologies, and smart supply chain functions are the three key components of our approach. Between these three elements, knowledge flows, data flows, data sources, and the main lifecycle stages are interconnected. The next sections describe each component in detail.

3.1 Smart Technologies for PLM Support

PLM data represents the data to be managed through PLM systems. It refers to the large amounts of multi-source, heterogeneous data generated throughout the product lifecycle. Generally speaking, PLM data can be classified into several categories:


• Manufacturing data collected from manufacturing information systems (e.g., MES, ERP, SCM).
• Equipment data collected by industrial IoT technologies, which includes data related to the real-time performance, operating conditions, and maintenance history of production equipment [28].
• User data collected from internet platforms, involving user profiles, user preferences towards products/services, and user behavior.

In our work, we consider that the data to be managed fall into three main categories, namely product development data, production data, and operational support data [9]. Product development data describe how the product is designed, manufactured, operated or used, serviced, and then retired. Production data concern all activities associated with the production and distribution of the product. Operational support data deal with the enterprise's core resources, such as people, finances, and the other resources required to support the enterprise.

These three categories of data are collected using advanced smart technologies. In this work, we focus especially on BDA, AI, IoT, and RFID systems. For instance, IoT technologies with RFID systems can be used to provide real-time data collected from equipment processes, inventory warehouses, internet platforms for product users, etc. Customer data gathered through internet platforms and IoT serve as input for product development data and guide designers in drawing up new product requirements that respect users' preferences. BDA is used to extract useful and valuable information from the big data generated over the whole product lifecycle; it helps to manage the heterogeneity that makes any data usage difficult. AI techniques are used mainly for optimal decision-making, thanks to their capability to intelligently recognize and learn business models such as supplier selection decisions, production scheduling, etc.
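To make the three-category classification concrete, the routing of a raw lifecycle record to a category can be sketched as below. The source labels and the mapping are illustrative assumptions, not part of the original framework:

```python
# Hypothetical mapping from lifecycle data sources to the three PLM data
# categories described in the text; the exact source labels are assumptions.
CATEGORY_BY_SOURCE = {
    "MES": "production",
    "ERP": "production",
    "SCM": "production",
    "iot_equipment": "production",
    "CAD": "product_development",
    "user_platform": "product_development",  # customer data guides new designs
    "finance": "operational_support",
    "HR": "operational_support",
}

def classify_record(record: dict) -> str:
    """Route a raw lifecycle record to one of the three PLM data categories."""
    return CATEGORY_BY_SOURCE.get(record.get("source"), "unclassified")

print(classify_record({"source": "user_platform", "payload": "preferences"}))
```

In practice the payload would then be handed to the BDA layer for the extraction step described above; the dictionary lookup only stands in for that routing decision.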

3.2 PLM and Smart Technologies for Smart Supply Chain Functions

As shown in Fig. 1, from the product lifecycle perspective, the implementation of advanced smart technologies leads to smart design, smart production, smart inventory, smart distribution and tracking, smart maintenance, smart use, and smart recovery. Smart product design is achieved by applying BDA, for instance, to handle data related to customer demands such as customer preferences and behaviors. This helps to predict market demands and to develop new product design strategies. In addition, taking the whole product lifecycle data into account helps designers to integrate the constraints of the other phases (e.g., manufacturing and recycling) and to conduct an integrated product design, such as design for manufacturing, design for maintenance, and sustainable eco-design.


Smart production, smart inventory, and smart distribution and tracking are conducted thanks to IoT combined with RFID tagging systems. For instance, to obtain real-time production and inventory visibility, all manufacturing items (e.g., products, trolleys, machines, racks) are connected to smart sensors and RFID tags and then to the product management system, which provides real-time data visibility. In addition, the IoT platform can indicate the condition of each product, such as its storage conditions or expiration date, to ensure the quality of the products in the supply chain.

Smart maintenance concerns mainly prediction. By attaching sensors to equipment or parts, the supply chain professional can be alerted immediately when a part is operating outside acceptable levels. Using machine learning, for instance, the IoT platform can determine whether the part needs to be replaced or repaired and then automatically trigger the correct process. Smart recovery involves the data gathered at the recycling phase and how to use it for optimal recovery decisions such as remanufacturing, recycling, or disposal of products.

To sum up, the deployment of smart technologies across the whole product lifecycle substantially improves supply chain performance. In addition, considering the lifecycle feedback data builds expertise and provides better knowledge management. The idea of this paper is to exploit the data gathered during the whole product lifecycle from the different supply chain stakeholders to conduct an optimal product and supply chain design. The next section describes the model proposed for conducting this joint design.
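The smart-maintenance logic described above (a sensor band check followed by a model-driven repair/replace decision) can be sketched as a simple rule. The threshold values and the wear score standing in for a machine learning prediction are illustrative assumptions:

```python
def maintenance_action(readings, low, high, wear_score, replace_threshold=0.8):
    """Return the maintenance process to trigger for a monitored part.

    Alerts when any sensor reading leaves the acceptable band [low, high];
    `wear_score` is a stand-in for a machine learning estimate of degradation
    (e.g., predicted remaining useful life mapped to [0, 1])."""
    if all(low <= r <= high for r in readings):
        return "ok"  # part operating within acceptable levels
    # Out-of-band reading: the platform decides which process to trigger.
    return "replace" if wear_score >= replace_threshold else "repair"

print(maintenance_action([51.2, 49.8], low=40, high=60, wear_score=0.9))  # ok
print(maintenance_action([73.5], low=40, high=60, wear_score=0.9))        # replace
print(maintenance_action([73.5], low=40, high=60, wear_score=0.3))        # repair
```

In a real deployment the wear score would come from a trained model fed by the IoT platform, and the returned action would trigger the corresponding workflow automatically.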

3.3 PLM and Smart Technologies for Product and Supply Chain Design

The product and its supply chain are designed together during the design phase. In other words, the configuration of the supply chain is studied during the product design phase, while the product configuration is not yet finalized. In the product's DMU, the specifications of the product to be designed and the constraints of the supply chain actors are both integrated. This design of the product is supported by data gathered from the smart supply chain functions:

• Market research and marketing are carried out thanks to BDA and IoT techniques. The demand for the new product to be designed can be estimated based on knowledge of user or consumer preferences and behaviors, and also based on feedback from the other lifecycle phases regarding demand evolution for previous products.
• Designing a product means determining BOMs, components, and manufacturing processes, and also determining the way the product will be maintained, used, and recovered.


Considering the whole lifecycle data gathered from the different supply chain functions assists designers in taking production and other logistical constraints into account, reducing design errors and iterations.

From the same product lifecycle perspective, the design of the supply chain is also improved by considering feedback and sharing data among all supply chain partners, and by involving them during the design stage. This sharing assists designers on one side and also helps each supply chain partner to operate in good conditions. For instance, involving suppliers while designing the product gives them an idea of their capacity to supply the components of this product. Manufacturers can also anticipate the equipment and facilities needed to produce it.

Supporting PLM with smart technologies also contributes to a sustainable product and supply chain design. For instance, to meet product demands, production planning is optimized, including the expected quantities recovered or recycled. Through analyses of lifecycle data and IoT, the remaining lifetime of products or parts can be predicted in real time, helping to make the right recovery decisions (reuse or remanufacturing, for example).

4 The Proposed Model for Smart Product and Supply Chain Design

As mentioned above, we consider the product design phase for conducting the product and its supply chain design, given the high impact of the decisions made in this phase on the other lifecycle phases. We use the Unified Modeling Language (UML) to model the static aspect of the product and its supply chain design (Fig. 2). Recall that the proposed model is supported by PLM functionalities empowered by the implementation of smart technologies. In other words, since PLM is based on the principle of experience capitalization and data archiving, the product and logistics data of the proposed UML model are considered as inputs for the product design. For instance, the demand for the designed product can be estimated from the demand behavior of existing products in each product lifecycle phase, from introduction to decline. As explained before, this information comes from the use of smart technologies (IoT, BDA, etc.). Furthermore, all data regarding supply chain partners are gathered the same way and can thus be integrated into the DMU of the product.

The DMU is the process of building a numerical (digital 3D) representation of a product to conduct tests that will predict product function and performance in the real world. Developing the DMU reduces the need for physical product prototyping, which is the most expensive aspect of product development. Before starting the product's DMU development, the design team should gather the constraints of previous supply chain partners, the technical constraints of the manufacturers, and the customer specifications to list the new product requirements [9].


Fig. 2 UML class diagram modelling the product and supply chain design

In the proposed model, the class "Product_DMU" is the numerical representation of the product. It provides technical data of the product, such as its bill of materials (BOM) and components. For each product lifecycle phase, represented in the model by the class "Cyclephase", and for each product, there is a specific supply chain configuration (customers, suppliers, manufacturers, production process, distribution and recycling centers). This makes it possible to control the evolution of the supply chain structure in accordance with the different lifecycle stages. The "ESC Node" class represents each node or partner belonging to the supply chain; it can be a supplier, a manufacturer, or any other partner. Each ESC node can have several production processes (equipment) to realize its product. We use the cardinality 0…* between the ESC node class and the equipment class to take into account the case where the ESC node is a distribution center (which has no production machine).


This model shows that the product's DMU comprises not only data regarding the product architecture and the features of its components, but also incorporates the logistical constraints of all supply chain partners. This is achieved through association classes resulting from the relationships between the different classes, such as the supplying, order, production, inventory, and recycling classes. These association classes carry logistical data such as costs and quantities.
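As a reading aid, the classes described above can be transcribed into code. This is a minimal sketch of the model in Fig. 2; the attribute names are assumptions, since the paper only names the classes, the association classes, and the 0…* cardinality:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Equipment:
    """A production process/machine owned by a supply chain node."""
    name: str

@dataclass
class ESCNode:
    """A supply chain partner: supplier, manufacturer, distributor, etc.
    The 0..* cardinality on equipment covers nodes with no production
    machine, such as distribution centers."""
    name: str
    role: str
    equipment: List[Equipment] = field(default_factory=list)

@dataclass
class Supplying:
    """Association class carrying logistical data (costs, quantities)."""
    node: ESCNode
    unit_cost: float
    quantity: int

@dataclass
class CyclePhase:
    """A lifecycle phase with its own supply chain configuration."""
    name: str  # e.g. "introduction", "growth", "decline"
    configuration: List[Supplying] = field(default_factory=list)

@dataclass
class ProductDMU:
    """Numerical representation of the product: BOM plus partner constraints."""
    reference: str
    bom: List[str]
    phases: List[CyclePhase] = field(default_factory=list)

# A distribution center has no equipment, as the 0..* cardinality allows.
dc = ESCNode(name="DC-1", role="distribution center")
supplier = ESCNode(name="S-1", role="supplier", equipment=[Equipment("press")])
growth = CyclePhase("growth", [Supplying(supplier, unit_cost=2.5, quantity=1000)])
dmu = ProductDMU("P-42", bom=["frame", "motor"], phases=[growth])
```

One design point the sketch makes visible: attaching the supply chain configuration to each `CyclePhase` (rather than to the product) is what lets the structure evolve from introduction to decline.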

5 Conclusion

This work contributes to the body of research on the use of smart technologies to handle supply chain activities. Many researchers have focused on a single stage of the lifecycle (the production stage of BOL, or the operation and maintenance stages of MOL) to tackle intelligent manufacturing and intelligent maintenance issues. There is a lack of work dealing with the implementation of smart technologies across the whole lifecycle to exploit data sharing and knowledge interaction among the various lifecycle stages for better supply chain performance.

This paper adopts the product lifecycle perspective to propose a framework that combines PLM and smart technologies (e.g., BDA, IoT, AI) for product and supply chain design. The aim of the paper is to exploit the benefits of implementing smart technologies to empower traditional PLM functionalities and conduct an optimal product design. When considering product design, it is essential to think about the product's supply chain (which suppliers for this product, which manufacturing process, how the product will be stored, distributed, and recovered). In this context, the paper tackles the design of the product and of its supply chain at the same time, during the product design phase, so as to report the constraints of the downstream supply chain functions at the upstream level. To design the product and its supply chain, smart technologies are integrated into Product Lifecycle Management (PLM) solutions to manage all lifecycle data incorporated into the product design. In addition, supply chain constraints are incorporated into the digital mock-up during the design phase. For this purpose, a UML model was proposed to describe our approach for the design of the product and its supply chain.
In this work, we presented how the design of the product and its supply chain can be handled by exploiting the advantages afforded by implementing smart technologies in PLM, but we did not focus on how the optimization of the supply chain is conducted. Future work will cover smart technologies and applications for supply chain optimization issues.


References

1. Trotta, M.G., di Torino, P.: Product lifecycle management: sustainability and knowledge management as keys in a complex system of product development. J. Ind. Eng. Manag. 3(2), 309–322 (2010)
2. Sudarsan, R., et al.: A product information modeling framework for product lifecycle management. Comput. Aided Des. 37, 1399–1411 (2005)
3. Büyüközkan, G., Göçer, F.: Digital supply chain: literature review and a proposed framework for future research. Comput. Ind. 97, 157–177 (2018)
4. Oh, J., Jeong, B.: Tactical supply planning in smart manufacturing supply chain. Robot. Comput. Integr. Manuf. 55, 217–233 (2019)
5. Ren, S., Zhang, Y., Liu, Y., Sakao, T., Huisingh, D., Almeida, C.M.V.B.: A comprehensive review of big data analytics throughout product lifecycle to support sustainable smart manufacturing: a framework, challenges and future research directions. J. Clean. Prod. (2018). https://doi.org/10.1016/j.jclepro.2018.11.025
6. Yang, D., Jiao, R.J., Ji, Y., Du, G., Helo, P., Valente, A.: Joint optimization for coordinated configuration of product families and supply chains by a leader-follower Stackelberg game. Eur. J. Oper. Res. (2015). https://doi.org/10.1016/j.ejor.2015.04.022
7. Labbi, O., Ouzizi, L., Douimi, M., Ahmadi, A.: Genetic algorithm combined with Taguchi method for optimisation of supply chain configuration considering new product design. Int. J. Logist. Syst. Manag. 31(4), 531–561 (2018)
8. Labbi, O., Ouzizi, L., Douimi, M.: A dynamic model for the redesign of a product and its upstream supply chain integrating PLM (Product Lifecycle Management) approach. In: Proceedings of the International Conference on Industrial Engineering and Systems Management (IEEE-IESM'2015), pp. 1187–1196 (2015)
9. Labbi, O., Ouzizi, L., Douimi, M., Aoura, Y.: A model to design a product and its extended supply chain integrating PLM (product lifecycle management) solution. Int. J. Sci. Eng. Res. 7(10), 1190–1220 (2016)
10. Bouhaddou, I., Benabdelhafid, A., Ouzizi, L., Benghabrit, Y.: PLM (product lifecycle management) approach to design a product and its optimized supply chain. Int. J. Bus. Perform. Supply Chain Model. 6(3/4), 255–275 (2014)
11. Jisoo, O., Bongju, J.: Tactical supply planning in smart manufacturing supply chain. Robot. Comput. Integr. Manuf. 55, 217–233 (2019)
12. Frazzon, E.M., Silva, L.S., Hurtado, P.A.: Synchronizing and improving supply chains through the application of cyber physical systems. IFAC PapersOnLine 48(3), 2059–2064 (2015). https://doi.org/10.1016/j.ifacol.2015.06.392
13. Ivanov, D., Dolgui, A., Sokolov, B., Werner, F., Ivanova, M.: A dynamic model and an algorithm for short-term supply chain scheduling in the smart factory industry 4.0. Int. J. Prod. Res. 54(2), 386–402 (2016). https://doi.org/10.1080/00207543.2014.999958
14. Mittal, S., Khan, M.A., Wuest, T.: Smart manufacturing: characteristics and technologies. In: IFIP International Conference on Product Lifecycle Management, pp. 539–548 (2016)
15. Renzi, C., Leali, F., Cavazzuti, M., Andrisano, A.O.: A review on artificial intelligence applications to the optimal design of dedicated and reconfigurable manufacturing systems. Int. J. Adv. Manuf. Technol. 72, 403–418 (2014). https://doi.org/10.1007/s00170-014-5674-1
16. Boone, C.A., Skipper, J.B., Hazen, B.T.: A framework for investigating the role of big data in service parts management. J. Clean. Prod. 153, 687–691 (2017). https://doi.org/10.1016/j.jclepro.2016.09.201
17. Bahga, A., Madisetti, V.K.: Analyzing massive machine maintenance data in a computing cloud. IEEE Trans. Parallel Distrib. Syst. 23(10), 1831–1843 (2012)
18. Gokhan, N.M.: Development of a Simultaneous Design for Supply Chain Process for the Optimization of the Product Design and Supply Chain Configuration Problem. Ph.D. thesis, University of Pittsburgh (2007)


19. Chiu, M.C., Okudan, G.E.: An investigation on the impact of product modularity level on supply chain performance metrics: an industrial case study. J. Intell. Manuf. 30(4), 129–145 (2014)
20. Bohlouli, M., Holland, A., Fathi, M.: Knowledge integration of collaborative product design using cloud computing infrastructure. In: IEEE International Conference on Electro Information Technology, pp. 1–8 (2011). https://doi.org/10.1109/EIT.2011.5978611
21. Tao, F., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Sui, F.: Digital twin-driven product design, manufacturing and service with big data. Int. J. Adv. Manuf. Technol. 94, 3563–3576 (2018). https://doi.org/10.1007/s00170-017-0233-1
22. Karkkainen, M., Ala-Risku, T., Framling, K., Collin, J., Holmstrom, J.: Implementing inventory transparency to temporary storage locations: a solution design experiment in project business. Int. J. Manag. Proj. Bus. 3(2), 292–306 (2010)
23. Laalaoui, Y., Bouguila, N.: Pre-run-time scheduling in real-time systems: current researches and artificial intelligence perspectives. Expert Syst. Appl. 41, 2196–2210 (2014). https://doi.org/10.1016/j.eswa.2013.09.018
24. Kumar, A., Shankar, R., Thakur, L.S.: A big data driven sustainable manufacturing framework for condition-based maintenance prediction. J. Comput. Sci. 27, 428–439 (2018). https://doi.org/10.1016/j.jocs.2017.06.006
25. Abdel-Basset, M., Manogaran, G., Mohamed, M.: Internet of Things (IoT) and its impact on supply chain: a framework for building smart, secure and efficient systems. Future Gener. Comput. Syst. 86, 614–628 (2018)
26. Jun, H.B., Shin, J.H., Kim, Y.S., Kiritsis, D., Xirouchakis, P.: A framework for RFID applications in product lifecycle management. Int. J. Comput. Integr. Manuf. 22, 595–615 (2009). https://doi.org/10.1080/09511920701501753
27. Jensen, J.P., Remmen, A.: Enabling circular economy through product stewardship. Procedia Manuf. 8, 377–384 (2017). https://doi.org/10.1016/j.promfg.2017.02.048
28. Dekhtiar, J., Durupt, A., Bricogne, M., Eynard, B., Rowson, H., Kiritsis, D.: Deep learning for big data applications in CAD and PLM: research review, opportunities and case study. Comput. Ind. 100, 227–243 (2018)
29. Zhang, Y., Ren, S., Liu, Y., Si, S.: A big data analytics architecture for cleaner manufacturing and maintenance processes of complex products. J. Clean. Prod. 142, 626–641 (2017)
30. Roy, V., Singh, S.: Mapping the business focus in sustainable production and consumption literature: review and research framework. J. Clean. Prod. 150, 224–236 (2017). https://doi.org/10.1016/j.jclepro.2017.03.040
31. Zheng, M., Wu, K.: Smart spare parts management systems in semiconductor manufacturing. Ind. Manag. Data Syst. 117, 754–763 (2017). https://doi.org/10.1108/IMDS-06-2016-0242
32. Zhang, Y., Zhang, G., Wang, J., Sun, S., Si, S., Yang, T.: Real-time information capturing and integration framework of the internet of manufacturing things. Int. J. Comput. Integr. Manuf. 28(8), 811–822 (2015)

Production Systems Simulation Considering Non-productive Times and Human Factors

Ismail Taleb, Alain Etienne, and Ali Siadat

Abstract For decades and all around the world, the main goal of industry has been financial and economic profit. Lately, new approaches additionally aim to improve operators' working conditions. With the help of simulation, these approaches can be tested in a much cheaper, faster, and easier manner before real-life implementation. In this paper, new strategies are tested to improve working conditions for the operator while maintaining high performance and productivity. The introduction of human factors and margins of maneuver into a production system is explored to improve working conditions through non-productive times. One main non-productive time is considered: unnecessary movement. A model describing a real-life scenario is suggested, as well as strategies to reduce and leverage non-productive times.

Keywords Simulation · Non-productive times · Human factors · Margins of maneuver · Production systems

1 Introduction

Non-productive times, mostly known as times related to non-value-added activities, are times that most industries seek to remove, reduce, or exploit. By doing so, they reduce costs, improve lead time, and thus increase profit [1]. On the other hand, these times have been found to be important to the operator, adding variability, which leads to better working conditions [2]. Meanwhile, competitiveness keeps increasing, demanding lower costs, better quality, and higher variety, which translates into smaller lot sizes and shorter lead times; flexibility and responsiveness are therefore a primary concern of companies [3]. Human-centered production systems are among the favorite candidates: even though automatic technologies are evolving rapidly, most industries rely on these systems thanks to the advantages they offer [4].

I. Taleb (B) · A. Etienne · A. Siadat
Arts et Métiers Institute of Technology, Université de Lorraine, LCFC, HESAM Université, 57070 Metz, France
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
T. Masrour et al. (eds.), Artificial Intelligence and Industrial Applications, Advances in Intelligent Systems and Computing 1193, https://doi.org/10.1007/978-3-030-51186-9_11



Most studies have focused on one of the concepts discussed earlier without looking at the bigger picture and at how these concepts interact when combined. Abubakar and Wang [3] studied the effects of human factors such as age on a human-centered production system. A novel approach studied the interaction between some human factors (such as fatigue and experience) and time allowances, to grant operators better working conditions [5]. Digiesi et al. [6] show that their models can describe the dynamic behavior of workers regarding tiredness, which can have huge impacts on flow time, buffer level, and throughput. They found that severe tiredness, usually towards the end of the shift, creates accumulating effects on the buffers, slowing down production, and they suggest using buffers to cope with the variability caused by fatigue.

Margins of maneuver [7] is a concept taking into consideration all of an operator's internal and external resources to enable an increase in control over his working conditions. Increasing the operator's control over his working conditions has direct benefits for his physical and mental health, and thus indirect benefits for production and productivity.

Productivity is linked to reducing non-productive times, which are inspired by the non-value-added activities mainly studied in Lean Manufacturing, also known as wastes or Muda (無駄). The fundamental focus of these studies is to "lean" the production system, that is, to systematically remove any non-value-added activity, thus reducing costs and increasing benefits. In practice, the aim is to reduce or exploit these activities, as they cannot be entirely removed. These activities are rarely tied to times, hence the need to define non-productive times in order to quantify them mathematically.
This paper aims to study the effect of each of these concepts on a production system, in particular on non-productive times such as unnecessary movement, as well as their interaction with each other. The paper is organized as follows. First, the key concepts are defined. Then, their behaviors are studied separately through simulation. Finally, their interactions are studied in a production system, as well as their influence on non-productive times.

2 State of the Art

This problem is related to production systems and involves multiple key concepts acting at the same time: non-productive times, human factors, and margins of maneuver. This first section defines each of these key concepts individually. The next section introduces the model simulating their collective behavior.


2.1 Production Systems Management

The production system is the main component on which all these parameters vary, and it has been the central focus of changes throughout history. El Mouayni et al. [8] regroup the major steps the production system has undergone since the introduction of standards and interchangeable parts. The steppingstone of these changes was Taylorism: driven by the constant increase in demand, it brought innovative measures such as the fragmentation of tasks and abilities rather than multitasking. This new approach, based on small repetitive tasks, increased worker performance. Then came Fordism, which combined the previous approach with other technological advances, defining the first model of mass production. The Toyota Production System (TPS) appeared later, with the main goal of eliminating waste in order to reduce cost and, consequently, increase profit. Since then, many new approaches have been developed by different industries with different tools, but one main goal remains: increasing profit [9]. Newer approaches now focus, in addition to increasing profit, on improving working conditions and human physical and mental health. In fact, Industry 4.0 is believed to provide better working conditions via social sustainability or fully automated machinery, leading to less demanding physical labor [10].

2.2 Human Factors

In this new era of high competitiveness, increasing profitability is the core goal of many businesses, but it is not the only one. For instance, other goals that they aim to achieve are better working conditions [7] and better physical and mental (psychosocial) health [4] during and after work. There are two main schools of study: the first concentrates on overall human factors and their resulting effects [11, 12], while the second studies the underlying mechanisms behind these human factors, such as physical, cognitive, cardiovascular and psychological effects [13, 14]. The first is more relevant to our study since it is related to engineering, whereas the second is related to biology, studying, for example, the effect of fatigue on each part of the body separately. Many studies have tried to identify the most important human factors contributing to performance and working conditions. Givi et al. [11] identified three main contributors: (1) the Fatigue-Recovery factor, (2) the Learning-Forgetting factor, and (3) the human error factor. According to them, these three factors are the main ones for simulating both the performance and the health of the worker. This model has some drawbacks, such as its deterministic behavior, its focus on physical fatigue only, and the interpretation of its results in a real situation. The authors assumed that fatigue and recovery follow an exponential form, with maximum effect in the earlier phases, as follows:


I. Taleb et al.

F(t) = 1 − e^(−λt)    (1)

R(τᵢ) = F(t) · e^(−μτᵢ)    (2)
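A minimal Python sketch of the fatigue-recovery model in Eqs. (1) and (2). The rate values lam and mu (and the time units) are illustrative assumptions, not parameters from the paper.

```python
import math

def fatigue(t, lam=0.05):
    """Fatigue accumulated after working continuously for t minutes (Eq. 1).

    Builds up exponentially toward full fatigue (1.0); lam is an
    illustrative fatigue rate, not a value from the paper.
    """
    return 1.0 - math.exp(-lam * t)

def recovery(fatigue_level, tau, mu=0.1):
    """Residual fatigue after a break of length tau (Eq. 2).

    mu is an illustrative recovery rate.
    """
    return fatigue_level * math.exp(-mu * tau)

# Example: 60 min of work followed by a 15 min break.
f = fatigue(60)             # ≈ 0.95, close to full fatigue
residual = recovery(f, 15)  # ≈ 0.21, most fatigue recovered
```

Both curves give the greatest change early on, matching the authors' assumption of maximum benefit in the earlier phases.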

The learning-forgetting model is considerably more complicated, as many parameters are involved, but its main formulas are shown in (3) and (4):

Tₓ = T₁ · x^(−b)    (3)

Tₙᵢ = T₁ · (uᵢ + nᵢ)^(−b)    (4)
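The learning curve of Eq. (3) and its interrupted-session variant of Eq. (4) can be sketched as follows. The values of t1 and b are illustrative assumptions, as are the function and parameter names.

```python
def task_time(x, t1=10.0, b=0.3):
    """Time to perform the x-th repetition of a task (Eq. 3).

    t1 is the time for the first unit and b the learning exponent;
    both values here are illustrative.
    """
    return t1 * x ** (-b)

def task_time_after_break(n_i, u_i, t1=10.0, b=0.3):
    """Time for the n_i-th repetition after an interruption (Eq. 4),
    where u_i is the number of equivalent units of experience
    retained when work resumes (forgetting reduces u_i)."""
    return t1 * (u_i + n_i) ** (-b)
```

With u_i = 0 (everything forgotten), Eq. (4) reduces to Eq. (3): the operator starts the learning curve over.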

These human factors are directly tied to working conditions, and one of the ways to improve them is through the use of margins of maneuver, which are discussed in the next section.

2.3 Margins of Maneuver

Margins of maneuver are a recent concept; very few articles address the subject, and the oldest article found dates to 1997 according to [7]. They are mainly used as temporal, spatial or organizational buffers that give the worker some control over his workload and working conditions [7]. El Mouayni et al. [15] used temporal margins of maneuver to give the operator breaks adapted to his pace, allowing better break allocation without a loss of performance. Margins of maneuver are beneficial ergonomically for the worker and economically for the company [7]: they make the operator more autonomous, which results in greater economic profit. El Mouayni et al. [16] proposed a model to simulate margins of maneuver (called Time Margins), defined as:

TMᵢ = (Cmax − pᵢ) × TMEEᵢ    (5)

where TMEEᵢ is the Mean Time between Events (in this case, an event being the arrival of a new task), described as follows:

TMEEᵢ = [Eᵢ − Eᵢ₋₁ + (i + 1) × TMEEᵢ₋₁] / i,  ∀i ≥ 1    (6)
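A sketch of Eqs. (5) and (6). We read Eq. (6) as a recursive (running) mean of the inter-arrival times between tasks; that reading, along with all names and values below, is our assumption.

```python
def time_margin(c_max, p_i, tmee_i):
    """Time margin for task i (Eq. 5): the slack between the maximum
    cycle time c_max and the processing time p_i, scaled by the mean
    time between events."""
    return (c_max - p_i) * tmee_i

def mean_time_between_events(event_times):
    """Running mean of inter-event times (our reading of Eq. 6).

    event_times: arrival instants E_0, E_1, ... of new tasks.
    Returns the successive estimates TMEE_1, TMEE_2, ...
    """
    tmee = []
    for i in range(1, len(event_times)):
        gap = event_times[i] - event_times[i - 1]
        if not tmee:
            tmee.append(gap)  # first interval is the initial mean
        else:
            # fold the new interval into the previous running mean
            tmee.append(((i - 1) * tmee[-1] + gap) / i)
    return tmee
```

A large TMEE (tasks arriving slowly) widens the time margin, giving the operator more room for breaks without hurting throughput.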


2.4 Non-productive Times

Non-productive times, on the other hand, are wasted time. The term is based on Lean Manufacturing, which uses the terms waste and non-value-added activities. In this field, these two concepts are well defined, but the concept of time is not always tied to them. Examples of waste are tied to movement, inventory, cost, etc. Activities are split between value-added and non-value-added activities. Low et al. [17] therefore proposed adding a third group, non-value-added but necessary activities, meaning that it is impossible to remove waste entirely: there is a mandatory minimum. The next section, "Methodology", explains the logic chosen for this study, starting by introducing a new, more exhaustive taxonomy for non-productive times, then defining the model used for the simulation and explaining the results found. Finally, the last section discusses these results and draws conclusions from them.

3 Methodology

3.1 New Non-productive Times Approach

Before presenting the model, the new non-productive times taxonomy is introduced. It is developed from multiple Lean concepts (Muda) and other non-productive activities [18–23] (Fig. 1), using an Eisenhower matrix to identify different types of non-productive times depending on cadence and quality (Fig. 2). In this article, "non-productive time" covers all cases that are not productive, such as Poor or Bad Productivity (see Fig. 2).

Fig. 1 New taxonomy for non-productive times


Fig. 2 Eisenhower matrix with two entries (quality and cadence) and five outputs
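The matrix of Fig. 2 maps two entries to five outputs, which can be sketched as a small classifier. The label names below are illustrative placeholders only; the actual five categories come from the paper's figure and may differ.

```python
def classify_activity(active, quality_ok, cadence_ok):
    """Toy classifier inspired by the two-entry Eisenhower matrix of
    Fig. 2: two boolean entries plus an activity flag yield five
    outputs. All label names here are assumptions."""
    if not active:
        return "idle"                 # no activity at all (assumed fifth output)
    if quality_ok and cadence_ok:
        return "productive"
    if quality_ok:
        return "poor productivity"    # quality OK, cadence too low (assumed)
    if cadence_ok:
        return "bad productivity"     # cadence OK, quality too low (assumed)
    return "non-productive"
```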

3.2 Model

In this study, the main focus is human-centered production systems, which means that the center of the model should be the human. The model describing the operator is based on the human factors described earlier. Around the operator are the main components of the production system: machines, products and other governing concepts such as tasks and scheduling. Each of these components and concepts is described with the goal of simulation, to better understand the interactions between them. The model is built for a discrete-event agent-based simulation, where every component is modeled separately, and their combined behavior defines the different interactions and results. The inputs are generally rigid and determined by the situation; for example, the takt time is governed by demand and other factors. Implementing the margins of maneuver at the organizational level means using control parameters such as the number of operators or the maximum distance between tasks for them to be accepted (Fig. 3). The robustness of the model has been tested with different inputs, then validated through comparison with a real-life experiment. The model, as is, is complex enough to return realistic results and can create NP-hard problems due to the complexity of scheduling problems [24]. Thus, some hypotheses have been made to simplify it:
• The tasks done by operators are manufacturing tasks (assembly): this is crucial to our study since the models used (fatigue/recovery and learning/forgetting) are based on manufacturing tasks.
• One operator per machine, and one machine is operated by one operator.


Fig. 3 Proposed model with inputs, control parameters and outputs

• Machine cycle times follow deterministic laws, except for maintenance which follows stochastic laws. • A station is modeled by an operator, a machine, an input buffer and related distances.

3.3 Simulation

• Why discrete-event agent-based simulation? Before simulating the model, this section identifies the reasons behind the choice of simulation, then the choice of Discrete-Event and Agent-Based Simulation (DES/ABS). These simulation approaches are often used when simulating human behavior in Operational Research and similar fields [25]. They give the ability to model the individual behaviors of different parts of the simulation heterogeneously and enable the creation of populations of these entities to simulate. Due to the nature of the model, the criteria necessary to simulate it can be written as follows:
• Agent-based simulation: to model the individual behaviors of populations.
• Discrete-event simulation (could also be continuous simulation): to simulate the flow of time and dependencies.
• Taking into consideration multiple control parameters in the form of variables such as distances, resources and desired output.
• Dynamically changing these control parameters for eventual optimization.


• Which software? Multiple tools, software packages and languages were identified, but only three were final candidates: AnyLogic and Simio, which are object-oriented, multi-paradigm simulation software, or coding a simulation tool from scratch in Python. After studying different aspects of these three solutions, AnyLogic was chosen due to the benefits it offered for this particular example (ABS and DES).
• Simulation: In this section, the model is implemented in AnyLogic in the form of different agents: Operators, Stations and the Production Site (Fig. 4). Each agent is modeled separately, and a population of agents is then generated from it. In this example, the production site is an agent with a population of 1, which contains 6 operators and 15 stations.

Fig. 4 Class diagram of the three types of agent used in the simulation (Factory, Operator and Station)

The current goal is to find the optimal strategy to achieve better working conditions without losing performance. For this example, it translates to lowering the distance traveled during a workday while keeping the same throughput. Considering that greater distances mean more time wasted not producing, this is the main goal of the study: lowering the non-productive time that is unnecessary movement, through a better allocation of tasks to operators. The models of the operator and the station are implemented as separate agents in AnyLogic, using statecharts to describe their logic, variables and parameters to store and change values, and statistics to store final results (Fig. 5). The software allows the creation of functions, which makes it possible to develop more intricate and complex logic. An example of the functions developed is sendRequest(): it activates once a station is ready to receive an operator, sending a request for an available operator nearby to take control of the station. (Note that "nearby" is an optimization condition in our model, and it is quantifiable.) The flowchart in Fig. 5 illustrates fragments of the logic followed by operators and stations in the simulation. The accuracy of the model was tested by comparing simulation results with real-life results, such as the total distance traveled and operator productivity.
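The sendRequest() behavior described above can be sketched as follows. The class layout, names and the max_distance default are our assumptions for illustration, not the paper's actual AnyLogic implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    x: float
    y: float
    busy: bool = False

def send_request(station_xy, operators, max_distance=10.0):
    """Sketch of a sendRequest()-style allocation: when a station
    becomes ready, pick the nearest available operator within
    max_distance (the quantifiable "nearby" condition), or return
    None if nobody qualifies."""
    sx, sy = station_xy
    candidates = [
        (math.hypot(op.x - sx, op.y - sy), op)
        for op in operators
        if not op.busy
    ]
    candidates = [(d, op) for d, op in candidates if d <= max_distance]
    if not candidates:
        return None
    _, best = min(candidates, key=lambda c: c[0])
    best.busy = True  # the chosen operator takes control of the station
    return best
```

Tightening max_distance trades waiting time (requests left unserved) against unnecessary movement, which is exactly the control parameter varied in the strategies of Sect. 3.4.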

Fig. 5 Flowchart: logic of a station agent (left) and an operator agent (right)

Fig. 6 Evolution of peak total distance traveled (km) by strategy


3.4 Results

To test the robustness of the model, we made several trials with different input parameters while analyzing the output for any variance (Fig. 6). The parameters we decided to vary are the working hours and the maximum distance between the operator and the station; five different strategies were agreed upon:

• Strategy #1: Regular simulation (default parameters)
• Strategy #2: 10-h workday and max distance of 20 m
• Strategy #3: 8-h workday and max distance of 10 m
• Strategy #4: 8-h workday and max distance of 5 m
• Strategy #5: 8-h workday and no constraint on distance

We notice that the third strategy returns the best results, with a peak of 12 km total distance traveled per workday.
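The comparison above amounts to a parameter sweep over the five strategies. The dictionary layout and the best_strategy helper below are illustrative; in the actual study, each strategy is run through the AnyLogic model to obtain its peak distance.

```python
# Hypothetical encoding of the five strategies from Sect. 3.4;
# None means "default" or "no constraint".
STRATEGIES = {
    "#1": {"workday_h": None, "max_dist_m": None},  # default parameters
    "#2": {"workday_h": 10, "max_dist_m": 20},
    "#3": {"workday_h": 8, "max_dist_m": 10},
    "#4": {"workday_h": 8, "max_dist_m": 5},
    "#5": {"workday_h": 8, "max_dist_m": None},     # no distance constraint
}

def best_strategy(results):
    """Pick the strategy with the lowest peak distance traveled.

    results: {strategy_name: peak_km}, e.g. as collected from
    simulation runs (values here would come from the model).
    """
    return min(results, key=results.get)
```

For example, with the reported peak of 12 km for strategy #3 against higher peaks for the others, best_strategy would return "#3".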

4 Conclusion

Using simulation, we were able to better understand the behavior and interactions of these different key concepts operating in the same environment. In this case study, we managed to identify a new non-productive time related to unnecessary movement and to reduce it with a better allocation strategy. We were able to show that waiting in the current position can be a form of micro-break (a margin of maneuver) that improves productivity and working conditions at the same time (see strategy #3 vs. strategy #5). However, if the constraint over distance is too demanding (say