Towards new e-Infrastructure and e-Services for Developing Countries: 14th EAI International Conference, AFRICOMM 2022, Zanzibar, Tanzania, December ... and Telecommunications Engineering) 3031348958, 9783031348952

This book constitutes the refereed proceedings of the 14th EAI International Conference on Towards new e-Infrastructure


English Pages 516 [506] Year 2023


Table of contents:
Preface
Organization
Contents
E-infrastructure
Centrality Based Algorithms for Controller Placements in Software Defined Wide Area Networks
1 Introduction
2 Related Works
2.1 Local Centralities for Controller Placements
2.2 Global Centralities for Controller Placements
3 Joint Evidential Centrality
3.1 Input Indicators of Centrality
3.2 Determine the Reference Value
3.3 Ascertain the Domain of Influence
3.4 Estimate Propagation Capability of Nodes
4 Evaluation of Results
4.1 Performance of JEC Based on Number of Controllers Selected
4.2 Performance of JEC Algorithms in SD-WANs
5 Conclusion
References
Modelling and Multi-agent Simulation of Urban Road Network
1 Introduction
2 Related Work
3 Methodology
3.1 Data Gathering
3.2 Design
4 Implementation
5 Results and Discussion
6 Conclusion and Future Work
References
A Fuzzy System Based Routing Protocol to Improve WSN Performances
1 Introduction
2 Previous Work
3 SDN-WISE Approach
4 Fuzzy Routing Protocol Based on SDN-WISE
4.1 Principle of Our FRP Approach
4.2 FRP Flowchart
4.3 Calculating the Cost of a Node
4.4 Analytical Study of Our Approach Performance
5 Conclusion
References
A Lightweight and Robust Dynamic Authentication System for Smarthome
1 Introduction
2 Previous Work
3 Contributions
4 Performance Evaluation of Our Solution
4.1 Experimental Setup
4.2 Responses to Functional Requirements
4.3 Resistance to Attacks
5 Conclusion and Perspectives
References
Anchor-Free Localization Algorithm Using Controllers in Wireless Sensors Networks
1 Introduction
2 Literature Review
2.1 The Steps Involved in Executing a Localization Process
2.2 Anchor-Free Localization Algorithms
3 Problem Specification
3.1 Assumption
3.2 Problem Formulation
3.3 Objectives
4 Contribution
4.1 Proposed Localization Algorithm
5 Performance Evaluation
5.1 Positioning Accuracy
5.2 Rate of Localized Node
5.3 Energy Consumption Model
6 Results and Discussions
7 Conclusion and Perspective
References
Assessing the Impact of DNS Configuration on Low Bandwidth Networks
1 Introduction
2 Related Work
3 Network Environment
3.1 Local Loop
3.2 Local Area Network
4 Network Usage
4.1 DNS Hosted by LTSP Server
4.2 DNS Hosted by Gateway Router
4.3 Comparison
5 Discussion
6 Future Work
7 Conclusion
References
Mathematical Analysis of DDoS Attacks in SDN-Based 5G
1 Introduction
2 Conceptualization of Related Works
3 Mathematical Modeling of VIS System
3.1 Presentation of VIS Model
3.2 Mathematical Model of VIS
4 Solutions and Stabilities Analysis of the Model
4.1 Equilibrium Points
4.2 DDoS-Free Equilibrium
4.3 Local Stability of DDoS-Free Equilibrium
4.4 Basic Reproduction Number (BRN)
4.5 Global Stability of DDoS-Free Equilibria
5 VIS Model Simulation
6 Discussion
7 Conclusion
References
A 5G-Enabled E-Infrastructure for Multipoint Videoconferencing in Higher Education Institutions of Burkina Faso
1 Introduction
2 Background Technologies and Protocols
2.1 3GPP 5th Generation Mobile Network
2.2 Physical and MAC Layers Technologies
2.3 Upper Layer Protocols
2.4 High Availability and Computing Environments
3 Proposed E-Infrastructure
3.1 Architecture and Operation
3.2 Discussion
4 Conclusion
References
E-Services (Farming)
Virtual Fences: A Systematic Literature Review
1 Introduction
1.1 Study Process
1.2 Method
2 Related Works
2.1 Contributions
2.2 Acronyms
2.3 Advantages of Virtual Fencing Comparing to Conventional Fencing
2.4 Types of Virtual Fences
3 Location of Livestock
3.1 Conceptual Approaches to Locating Livestock
3.2 Localization Algorithms
4 Discussion
4.1 Strengths and Limitations
4.2 Future Challenges and Opportunities
5 Conclusion
References
Digital Technologies for Tailored Agronomic Practices for Small-Scale Farmers
1 Introduction
1.1 The Context
1.2 An Underperforming Agricultural Sector
1.3 Improving Agricultural Extension Services to Boost the Local Production
2 Proposed Method
2.1 General Architecture of the Proposed Method
2.2 Proposed Model Stages
3 The Case Study
4 The Application on the Case Study
5 Conclusion and Future
References
Disjoint Routing Algorithms: A Systematic Literature Review
1 Introduction
1.1 Research Sources
1.2 Search Strategy
1.3 Contributions of this Study
2 Related Work
3 Kinds of Disjoint Paths
4 Advantages and Disadvantages of Disjoint Routing
5 Disjoint Routing Applied on Network Architectures
5.1 Disjoint Routing Applied on Mobile Ad Hoc Networks
5.2 Disjoint Routing Applied on Wireless Sensor Networks
5.3 Disjoint Routing Applied on Software Defined Networks
5.4 Disjoint Routing Applied on Video Streaming
5.5 Disjoint Routing Applied on Wireless Mesh Networks
6 Routing Approaches
6.1 Conceptual Frame
6.2 Disjoint Routing Applications
6.3 Deterministic Routing Through Disjoint Paths
6.4 Non-deterministic Routing Through Disjoint Paths
6.5 Technical Comparison of Multi-path Routing Approaches
6.6 Analysis of Routing Algorithms
7 Conclusion
References
Construction of a Core Ontology of Endogenous Knowledge on Agricultural Techniques: OntoEndo
1 Introduction
2 Methodological Approach
3 Ontology Construction by the Scenario 1
3.1 Requirements Specification
3.2 Conceptualization
3.3 Formalization and Implementation
3.4 Implementation
3.5 Evaluation
4 Conclusion and Perspective
References
An Architecture of a Data Lake for the Sharing of Agricultural Knowledge in Burkina Faso
1 Introduction
2 Context
3 Data Lake Architecture Review and Requirements
4 Proposal for a General Agricultural Data Lake Architecture
4.1 Data Ingestion Layer
4.2 Data Storage Layer
4.3 Metadata Management
4.4 Data Exploitation Layer
5 Conclusion and Perspectives
References
E-Services (Health)
Deciphering Barriers and Facilitators of eHealth Adoption in Uganda Using the Systems Thinking Approach - A Systematic Review
1 Introduction
2 Methods and Materials
2.1 Study Setting
3 Discussion
4 Conclusion
References
Barriers and Facilitators of eHealth Adoption Among Patients in Uganda – A Quantitative Study
1 Introduction
2 Methods and Materials
2.1 Study Setting
2.2 Study Design
2.3 Sampling and Data Collection
2.4 Analysis
2.5 Ethical Approval
3 Results
4 Discussion
5 Conclusion and Recommendations
References
Survey of Detection and Identification of Black Skin Diseases Based on Machine Learning
1 Introduction
2 Skin Diseases
2.1 Anatomy of Skin
2.2 Common Skin Diseases in Senegal
3 State of the Art of Detection and Classification Algorithms for Uncolored Skin Diseases
3.1 Machine Learning Algorithms
3.2 Deep Learning Algorithms
4 State of the Art of Black Skin Disease Detection Algorithms
5 Discussion and Potential Challenges
6 Conclusion
References
Rebuilding Kenya’s Rural Internet Access from the COVID-19 Pandemic
1 Introduction
1.1 Overview of the State of Connectivity in Kenya and Our Research Work
2 Research Methodology
3 Research Findings
3.1 Desk Assessment of the State of Connectivity in Kakamega and Turkana Counties
3.2 Field Surveys in Kakamega and Machakos Counties
3.3 Spectrum Measurements in Kakamega County
4 Analysis of the Findings
5 Recommendations
6 Conclusions and Future Works
References
E-Services (Social)
Community Networks in Kenya: Characteristics and Challenges
1 Introduction
2 Literature Review
2.1 Internet Penetration and Affordability in Kenya
2.2 Community Networks as a Solution to the Digital Divide
2.3 Studies on Community Networks in Africa
3 Methodology
4 Results and Discussion
4.1 Case Studies of Existing Community Networks in Kenya
4.2 Characteristics of Community Networks in Kenya
4.3 Covid-19, Connectivity and Community Networks in Kenya
4.4 Challenges Hampering Community Networks in Kenya and Solutions
4.5 Critical Success Factors for Community Networks
5 Recommendations
6 Conclusion
References
Determinants of Cybercrime Victimization: Experiences and Multi-stage Recommendations from a Survey in Cameroon
1 Introduction
2 Methodology
2.1 Approach
2.2 Study Variables
3 Results
3.1 Distribution of the Studied Population
3.2 Some Significant Results
3.3 Observations and Understandings
3.4 Critical Considerations
3.5 Framework of Recommendations
4 Conclusion and Perspectives
Appendix
References
E-Services (Education)
A Review of Federated Learning: Algorithms, Frameworks and Applications
1 Introduction
2 Federated Learning
2.1 Federated Learning
2.2 Categorization of Federated Learning
3 Federated Aggregation Models
3.1 Aggregation Models
3.2 Evaluation Dataset
4 Federated Learning Frameworks
5 Applications of Federated Learning
5.1 Federated Learning in Internet of Things (IoT)
5.2 Healthcare
5.3 Natural Language Processing (NLP)
5.4 Transportation
6 Open Challenges in Federated Learning
7 Conclusion
References
Intelligent Tutoring System to Learn the Transcription of Polysemous Words in Mooré
1 Introduction
2 Background
3 Architecture of the ITS to Learn the Transcription of Polysemous Words in Mooré
4 Verification of the System by Petri Net
5 Experimentation of the System
6 Conclusion and Perspectives
References
Advanced ICT
Assessing the Quality of Acquired Images to Improve Ear Recognition for Children
1 Introduction
2 Related Works
3 System Design and Architecture
3.1 Partial Ear Region
3.2 Image Blurriness and Sharpness
3.3 Image Illumination
4 Results Analysis and Discussions
4.1 Partial-Ear Images Testing
4.2 Blur and Sharpen Ear Images Testing
4.3 Images with Illumination Testing
4.4 Results and Discussion
5 Conclusion
References
Autonomous Electromagnetic Signal Analysis and Measurement System
1 Introduction
2 Proposed System
3 Graphical User Interface
4 System Validation
4.1 Equipment Used to Do the Measurement Validation
4.2 Detection of AM Signal Emission
4.3 Detection of VHF/UHF Emission
4.4 Detection of EMI Emission
5 Conclusion
References
Digital Transformation of the Textile and Fashion Design Industry in the Global South: A Scoping Review
1 Introduction
2 Methods
2.1 Protocols and Registration
2.2 Inclusion and Exclusion Criteria
2.3 Information Sources
2.4 Search Strategy
2.5 Selection of Sources of Evidence
2.6 Data Charting Process
2.7 Data Items
2.8 Synthesis of Results
2.9 Demographic Data
3 Results
4 Discussion
4.1 Topical Issues and Trends in the Textile and Fashion Industry in the Global South
4.2 Challenges Confronting the Textile and Fashion Industry in the Global South
4.3 Research Methods Used in the Current Scholarly Works
4.4 Digital Transformation Opportunities for the Textile and Fashion Industry in the Global South
4.5 Limitations, Implications, and Future Research
5 Conclusions
Appendix 1
Appendix 2
References
Electrical Big Data's Stream Management for Efficient Energy Control
1 Introduction
2 State of the Art
3 Summary Model and Architecture
3.1 A Cascading Cube Model
3.2 Architecture
3.3 Data Stream Ingestion Layer
3.4 Data Stream Processing Layer
3.5 Storage Layer
3.6 Data Visualization Layer
4 Implementation
4.1 Electrical Data Stream
4.2 Use Case
4.3 Modeling
4.4 Data Stream Retrieval
4.5 The Summary Construction and Update
5 Results and Discussion
5.1 Results Presentation
5.2 Discussion
6 Conclusion
References
Mobile Money Phishing Cybercrimes: Vulnerabilities, Taxonomies, Characterization from an Investigation in Cameroon
1 Introduction
2 Related Works
3 Background
4 Collection of Cybercrimes
5 Threats Found in MNO
6 Mobile Money Cybercrime Process Flow
7 Taxonomizing Cybercrimes
8 Classification
8.1 Findings
8.2 What Aspects Would be Interesting for Solutions Against MM Phishing
9 Conclusion and Perspectives
References
Software Vulnerabilities Detection Using a Trace-Based Analysis Model
1 Introduction
2 Related Works
2.1 Malware Evolution
2.2 Tracing
2.3 Debugging, Profiling and Logging Techniques
2.4 Logging
3 Tracing and Visualisation Tools
3.1 Applications Tracing Tools
3.2 Viewing Traces
4 Approach
4.1 Choice of Applications
4.2 Using Machine Learning Techniques
4.3 Model Proposed
5 Conclusion
References
Subscription Fraud Prevention in Telecommunication Using Multimodal Biometric System
1 Introduction
2 Overview of Subscription Fraud
3 Biometrics
4 Research Methodology
5 Testing Results and Discussions
6 Conclusion and Future Work
References
Use of Artificial Intelligence in Cardiology: Where Are We in Africa?
1 Introduction
2 Multi-dimensional Data Profiling Framework
2.1 Targeted Data Collection
2.2 Filtering Out the Most Relevant Research Papers
2.3 Multi Dimensional Analysis of the Literature
3 Discussion
4 Conclusion
References
Towards ICT-Driven Tanzania Blue Economy: The Role of Higher Learning Institutions in Supporting the Agenda
1 Introduction
2 Role of ICT in Blue Economy
2.1 ICT in Fishing
2.2 ICT in Aquaculture
2.3 Digital Fish Market
2.4 ICT in Tourism
3 State of the Blue Economy in Tanzania
4 Recommendations
4.1 Curricula Adjustment
4.2 Blue Research in ICT
4.3 The Blue Data Center
5 Conclusion
References
Author Index


Rashid A. Saeed, Abubakar D. Bakari, Yahya Hamad Sheikh (Eds.)

LNICST 499

Towards new e-Infrastructure and e-Services for Developing Countries
14th EAI International Conference, AFRICOMM 2022
Zanzibar, Tanzania, December 5–7, 2022
Proceedings

Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

Volume 499

Editorial Board Members
Ozgur Akan, Middle East Technical University, Ankara, Türkiye
Paolo Bellavista, University of Bologna, Bologna, Italy
Jiannong Cao, Hong Kong Polytechnic University, Hong Kong, China
Geoffrey Coulson, Lancaster University, Lancaster, UK
Falko Dressler, University of Erlangen, Erlangen, Germany
Domenico Ferrari, Università Cattolica Piacenza, Piacenza, Italy
Mario Gerla, UCLA, Los Angeles, USA
Hisashi Kobayashi, Princeton University, Princeton, USA
Sergio Palazzo, University of Catania, Catania, Italy
Sartaj Sahni, University of Florida, Gainesville, USA
Xuemin Shen, University of Waterloo, Waterloo, Canada
Mircea Stan, University of Virginia, Charlottesville, USA
Xiaohua Jia, City University of Hong Kong, Kowloon, Hong Kong
Albert Y. Zomaya, University of Sydney, Sydney, Australia

The LNICST series publishes ICST's conferences, symposia and workshops. LNICST reports state-of-the-art results in areas related to the scope of the Institute. The type of material published includes:
• Proceedings (published in time for the respective event)
• Other edited monographs (such as project reports or invited volumes)

LNICST topics span the following areas:
• General Computer Science
• E-Economy
• E-Medicine
• Knowledge Management
• Multimedia
• Operations, Management and Policy
• Social Informatics
• Systems

Rashid A. Saeed · Abubakar D. Bakari · Yahya Hamad Sheikh (Editors)

Towards new e-Infrastructure and e-Services for Developing Countries
14th EAI International Conference, AFRICOMM 2022
Zanzibar, Tanzania, December 5–7, 2022
Proceedings

Editors
Rashid A. Saeed, Taif University, Ta’if, Saudi Arabia
Abubakar D. Bakari, State University of Zanzibar, Zanzibar, Tanzania
Yahya Hamad Sheikh, State University of Zanzibar, Zanzibar, Tanzania

ISSN 1867-8211  ISSN 1867-822X (electronic)
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
ISBN 978-3-031-34895-2  ISBN 978-3-031-34896-9 (eBook)
https://doi.org/10.1007/978-3-031-34896-9

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

We are delighted to introduce the proceedings of EAI AFRICOMM 2022 – the 14th EAI International Conference on e-Infrastructure and e-Services for Developing Countries. This conference brought together researchers, developers and practitioners from around the world who are leveraging and developing e-Infrastructure and e-Services for Developing Countries. The theme of AFRICOMM 2022 was “e-Infrastructure and e-Services for Developing Countries”. Developing countries are embracing the use of ICT in order to improve the performance of both the public and private sectors. A great challenge, however, is resource availability. Researchers and practitioners alike, from both the global north and south, have developed an interest in the means and ways of efficient diffusion of ICT in resource-constrained environments. AFRICOMM 2022 aimed to bring together researchers, practitioners, and policy makers in ICT to discuss issues and trends, recent research, innovation advances and in-the-field experiences related to e-Infrastructure and e-Services, along with their associated policy and regulations, with a deep focus on developing countries.

The technical program of AFRICOMM 2022 consisted of 31 full papers, including 4 invited papers, in oral presentation sessions at the main conference tracks. The conference tracks were: Track 1 - ICT Infrastructures for Critical Environmental Conditions; Track 2 - Cloud Computing and Internet of Things; Track 3 - ICT Infrastructures and Services Based on Alternative Energies; and Track 4 - Technological Development on Developing the Blue Economy for Sustainable Development. Aside from the high-quality technical paper presentations, the technical program also featured four keynote speakers: Mohammed S. Elbasheir, from Etihad Etisalat Co., U.A.E., whose keynote was titled “Role of Mobile Network coverage in e-Service”; Bharat S. Chaudhari, from MIT World Peace University, Pune, India, whose topic was “Low-power wide-area network technologies in developing countries”; Hisham Ahmed, from Sudan University of Science and Technology (SUST), Khartoum, Sudan, whose topic was “ICT-Enabled FinTech Innovation: A Catalyst for Inclusive Growth in Developing Countries”; and lastly Amitava Mukherjee, from Amrita Vishwa Vidyapeetham University, Amritapuri, Kerala, India, whose topic was “5G IoT Implementation and Management—From the Edge to the Cloud in Industry 4.0”.

Coordination with Abubakar Diwani, co-general chair, and the technical program chairs, Yahya Hamad Sheikh, Rania A. Mokhtar, Abdi T. Abdalla, Diery Ngom, and Elmustafa Sayed Ali, was essential for the success of the conference. We sincerely appreciate their constant support and guidance. It was also a great pleasure to work with such an excellent organizing committee team; we thank them for their hard work in organizing and supporting the conference. In particular, we are grateful to the Conference Manager, Ivana Bujdakova, for her support, and to all the authors who submitted their papers to the AFRICOMM 2022 conference and workshops.

We strongly believe that AFRICOMM 2022 provided a good forum for all researchers, developers and practitioners to discuss all science and technology aspects that are relevant to e-Infrastructure and e-Services for Developing Countries. We also expect that future editions of the AFRICOMM conference will be as successful and stimulating, as indicated by the contributions presented in this volume.

Rashid A. Saeed
Abubakar D. Bakari
Yahya Hamad Sheikh

Organization

General Chair
Rashid A. Saeed, Taif University, Saudi Arabia

General Co-chairs
Abubakar Diwani Bakar, State University of Zanzibar, Tanzania
Abdi Talib Abdalla, University of Dar es Salaam, Tanzania

Technical Program Committee Chairs
Rania Abdulhaleem Mokhtar, Taif University, Saudi Arabia
Yahya Hamad Sheikh, State University of Zanzibar, Tanzania

Technical Program Committee Co-chairs
Ndeye Massata Ndiaye, Universite Virtuelle du Senegal, Senegal
Christelle Scharff, Pace University, USA

Web Chair
Ramadhan Ahmada Rai, State University of Zanzibar, Tanzania

Publicity and Social Media Chairs
Umayra Mohammed Said, State University of Zanzibar, Tanzania
Raya Idrissa Ahmada, State University of Zanzibar, Tanzania

Workshops Chair
Maryam Massoud Khamis, State University of Zanzibar, Tanzania

Sponsorship and Exhibits Chair
Bernd Westphal, German Aerospace Center, Germany

Publications Chairs
Yahya Hamad Sheikh, State University of Zanzibar, Tanzania
Rania Abdulhaleem Mokhtar, Taif University, Saudi Arabia

Panels Chair
Ali Idarous Adnan, State University of Zanzibar, Tanzania

Local Chair
Said Yunus, State University of Zanzibar, Tanzania

Local Co-chair
Khairiya Mudrik Massoud, State University of Zanzibar, Tanzania

Technical Program Committee
Bharat S. Chaudhari, MIT World Peace University, India
Amitava Mukherjee, Amrita Vishwa Vidyapeetham University, India
Elmustafa Sayed Ali, Sudan University of Science and Technology (SUST), Sudan
Idris Ahmada Rai, State University of Zanzibar, Tanzania
Abdi Talib Abdalla, University of Dar es Salaam, Tanzania
Mohammed S. Elbasheir, Sudan University of Science and Technology (SUST), Sudan
Christelle Scharff, Pace University, USA
Yahya Hamad Sheikh, State University of Zanzibar, Tanzania
Hisham Ahmed, Sudan University of Science and Technology (SUST), Sudan
Bernd Westphal, German Aerospace Center, Germany
Antero Jarvi, Turku University, Finland
Rashid Abdulhaleem Saeed, Taif University, Saudi Arabia
Rania Abdulhaleem Mokhtar, Taif University, Saudi Arabia
Abubakar Diwani Bakar, State University of Zanzibar, Tanzania
Abdellah Boulouz, Ibn Zohr University, Morocco
Alemnew Sheferaw Asrese, Aalto University, Finland
Ali Idarous Adnan, State University of Zanzibar, Tanzania
Amar Kumar Seeam, Middlesex University London, UK
Andre Muhirwa, University of Rwanda, Rwanda
Antoine Bagula, University of the Western Cape, South Africa
Avinash Mungur, University of Mauritius, Mauritius
Ben Daniel, University of Otago, New Zealand
Clarel Catherine, University of Technology, Mauritius
David Baume, University of London, UK

Contents

E-infrastructure

Centrality Based Algorithms for Controller Placements in Software Defined Wide Area Networks . . . . 3
Isaiah O. Adebayo, Matthew O. Adigun, and Pragasen Mudali

Modelling and Multi-agent Simulation of Urban Road Network . . . . 18
Aurel Megnigbeto, Arsène Sabas, and Jules Degila

A Fuzzy System Based Routing Protocol to Improve WSN Performances . . . . 33
Bakary Hermane Magloire Sanou, Mahamadi Boulou, and Tiguiane Yélémou

A Lightweight and Robust Dynamic Authentication System for Smarthome . . . . 50
Elisée Toé, Tiguiane Yélémou, Doliére Francis Somé, Hamadoun Tall, and Théodore Marie Yves Tapsoba

Anchor-Free Localization Algorithm Using Controllers in Wireless Sensors Networks . . . . 64
Ahoua Cyrille Aka, Satchou Gilles Armel Keupondjo, Bi Jean Baptiste Gouho, and Souleymane Oumtanaga

Assessing the Impact of DNS Configuration on Low Bandwidth Networks . . . . 76
J. A. Okuthe and A. Terzoli

Mathematical Analysis of DDoS Attacks in SDN-Based 5G . . . . 87
B. O. S. BIAOU, A. O. Oluwatope, and B. S. Ogundare

A 5G-Enabled E-Infrastructure for Multipoint Videoconferencing in Higher Education Institutions of Burkina Faso . . . . 101
Bernard Armel Sanou, Abdoul-Hadi Konfé, and Pasteur Poda

E-Services (Farming)

Virtual Fences: A Systematic Literature Review . . . . 115
Mahamat Abdouna, Daouda Ahmat, and Tegawendé F. Bissyandé

Digital Technologies for Tailored Agronomic Practices for Small-Scale Farmers . . . . 149
Dieu-Donné Okalas Ossami, Henri Bouityvoubou, Augusto Akira Hecke Kuwakino, Octave Moutsinga, and Ousmane Sall

Disjoint Routing Algorithms: A Systematic Literature Review . . . . 160
Adoum Youssouf, Daouda Ahmat, and Mahamat Borgou

Construction of a Core Ontology of Endogenous Knowledge on Agricultural Techniques: OntoEndo . . . . 193
Halguieta Trawina, Sadouanouan Malo, Ibrahima Diop, and Yaya Traore

An Architecture of a Data Lake for the Sharing of Agricultural Knowledge in Burkina Faso . . . . 209
Safiatou Sore, Yaya Traore, Moustapha Bikienga, and Frédéric T. Ouedraogo

E-Services (Health)

Deciphering Barriers and Facilitators of eHealth Adoption in Uganda Using the Systems Thinking Approach - A Systematic Review . . . . 221
Hasifah Kasujja Namatovu and Mark Abraham Magumba

Barriers and Facilitators of eHealth Adoption Among Patients in Uganda – A Quantitative Study . . . . 247
Hasifah Kasujja Namatovu and Mark Abraham Magumba

Survey of Detection and Identification of Black Skin Diseases Based on Machine Learning . . . . 268
K. Merveille Santi Zinsou, Idy Diop, Cheikh Talibouya Diop, Alassane Bah, Maodo Ndiaye, and Doudou Sow

Rebuilding Kenya’s Rural Internet Access from the COVID-19 Pandemic . . . . 285
Leonard Mabele, Kennedy Ronoh, Joseph Sevilla, Edward Wasige, Gilbert Mugeni, and Dennis Sonoiya

E-Services (Social)

Community Networks in Kenya: Characteristics and Challenges . . . . 301
Kennedy Ronoh, Thomas Olwal, and Njeri Ngaruiya

Determinants of Cybercrime Victimization: Experiences and Multi-stage Recommendations from a Survey in Cameroon . . . . 317
Jean Emmanuel Ntsama, Franklin Tchakounte, Dimitri Tchakounte Tchuimi, Ahmadou Faissal, Franck Arnaud Fotso Kuate, Joseph Yves Effa, Kalum Priyanath Udagepola, and Marcellin Atemkeng

E-Services (Education)

A Review of Federated Learning: Algorithms, Frameworks and Applications . . . . 341
Lutho Ntantiso, Antoine Bagula, Olasupo Ajayi, and Ferdinand Kahenga-Ngongo

Intelligent Tutoring System to Learn the Transcription of Polysemous Words in Mooré . . . . 358
Pengwendé Zongo and Tounwendyam Frédéric Ouedraogo

Advanced ICT

Assessing the Quality of Acquired Images to Improve Ear Recognition for Children . . . . 369
Sthembile Ntshangase, Lungisani Ndlovu, and Akhona Stofile

Autonomous Electromagnetic Signal Analysis and Measurement System . . . . 381
Mohammed Bakkali and Abdi T. Abdalla

Digital Transformation of the Textile and Fashion Design Industry in the Global South: A Scoping Review . . . . 391
A. A. Ogunyemi, I. J. Diyaolu, I. O. Awoyelu, K. O. Bakare, and A. O. Oluwatope

Electrical Big Data’s Stream Management for Efficient Energy Control . . . . 414
Jean Gane Sarr, Ndiouma Bame, and Aliou Boly

Mobile Money Phishing Cybercrimes: Vulnerabilities, Taxonomies, Characterization from an Investigation in Cameroon . . . . 430
Alima Nzeket Njoya, Franklin Tchakounté, Marcellin Atemkeng, Kalum Priyanath Udagepola, and Didier Bassolé

Software Vulnerabilities Detection Using a Trace-Based Analysis Model . . . . 446
Gouayon Koala, Didier Bassole, Telesphore Tiendrebeogo, and Oumarou Sie

Subscription Fraud Prevention in Telecommunication Using Multimodal Biometric System . . . . 458
Freddie Mathews Kau and Okuthe P. Kogeda

Use of Artificial Intelligence in Cardiology: Where Are We in Africa? . . . . 473
Fatou Lo Niang, Vinasetan Ratheil Houndji, Moussa Lô, Jules Degila, and Mouhamadou Lamine Ba

Towards ICT-Driven Tanzania Blue Economy: The Role of Higher Learning Institutions in Supporting the Agenda . . . . 487
Abdi T. Abdalla, Kwame Ibwe, Baraka Maiseli, Daudi Muhamed, and Mahmoud Alawi

Author Index . . . . 499

E-infrastructure

Centrality Based Algorithms for Controller Placements in Software Defined Wide Area Networks

Isaiah O. Adebayo, Matthew O. Adigun, and Pragasen Mudali

Department of Computer Science, University of Zululand, Private Bag X1001, Kwadlangezwa 3886, South Africa
[email protected]

Abstract. Most controller placement algorithms require as input the estimated number of controllers to be placed in software defined wide area networks (SDWANs). However, determining the correct number of controllers is NP-hard, as it requires selecting the set of nodes whose propagation capabilities exceed a given threshold. To this end, we propose in this study, a number of centrality based algorithms for estimating the capability of nodes to propagate end-to-end traffic flow. Specifically, we explore the Dempster-Shafer (D-S) Theory of evidence as a framework for estimating the capability of nodes by combining the properties of multiple centralities together to derive new joint properties. Nodes whose estimated capabilities exceed a given probability threshold are then selected as controllers. Based on the set of selected controller locations, we evaluate the performance of each joint evidential centrality (JEC) algorithm in terms of latency-related metrics. Experimental results show the superior performance of the combination of degree, node-strength and betweenness centralities in estimating the number of controllers required when worst and average case latencies are to be minimized. Keywords: Centrality · Controller placements · D-S theory of evidence · Latency · Propagating capability
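The selection step summarized in the abstract — turn each centrality into a body of evidence over {significant, insignificant}, fuse the evidences with Dempster's rule of combination, and keep the nodes whose combined belief of significance exceeds a probability threshold — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the 6-node topology, the use of degree and closeness centralities (the paper's best-performing JEC combines degree, node-strength and betweenness), the max-normalized BPA with a 0.9 discount, and the 0.9 threshold are all assumptions.

```python
from collections import deque

# Hypothetical 6-node SD-WAN topology as an adjacency list.
GRAPH = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}

def degree_centrality(g):
    n = len(g) - 1
    return {v: len(nbrs) / n for v, nbrs in g.items()}

def closeness_centrality(g):
    # Closeness = (n - 1) / sum of hop distances, via BFS from each node.
    out = {}
    for src in g:
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        out[src] = (len(g) - 1) / sum(dist[v] for v in g if v != src)
    return out

def bpa(scores, eps=0.9):
    # Basic probability assignment over {S (significant), I (insignificant)}
    # plus the uncertain mass T = {S, I}; max-normalization and the eps
    # discount are illustrative modelling choices, not the paper's.
    hi = max(scores.values())
    return {v: {"S": eps * s / hi, "I": eps * (1 - s / hi), "T": 1 - eps}
            for v, s in scores.items()}

def dempster(m1, m2):
    # Dempster's rule of combination on the two-hypothesis frame.
    k = m1["S"] * m2["I"] + m1["I"] * m2["S"]  # conflicting mass
    return {
        "S": (m1["S"] * m2["S"] + m1["S"] * m2["T"] + m1["T"] * m2["S"]) / (1 - k),
        "I": (m1["I"] * m2["I"] + m1["I"] * m2["T"] + m1["T"] * m2["I"]) / (1 - k),
        "T": m1["T"] * m2["T"] / (1 - k),
    }

def select_controllers(g, threshold=0.9):
    # Fuse the two centrality evidences per node and keep the nodes whose
    # combined belief of being significant exceeds the threshold.
    m_deg, m_clo = bpa(degree_centrality(g)), bpa(closeness_centrality(g))
    joint = {v: dempster(m_deg[v], m_clo[v]) for v in g}
    return sorted(v for v in g if joint[v]["S"] > threshold)

print(select_controllers(GRAPH))  # → [1, 2, 3]
```

On the toy topology the threshold keeps the three best-connected nodes as controller locations; the point of the construction is that the threshold, rather than a caller-supplied k, determines how many controllers are placed.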

1 Introduction Until recently the management and configuration of computer networks was mostly conducted manually because of the rigid manner networking devices were wired. However, with the emergence of software defined networking (SDN), the underlying infrastructure has been split into two planes namely the control and data planes. The control plane contains the network’s control logic responsible for managing and con-trolling traffic as well as several decision-making tasks [1, 14]. An important aspect of the decision-making tasks for control plane is the selection of the appropriate number of controller locations required for optimal performance. To this end, several algorithms have been proposed in the literature for estimating the number of controllers required. However, most of these algorithms require as an input the number of controllers to be placed beforehand. Thus, © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023 Published by Springer Nature Switzerland AG 2023. All Rights Reserved R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 3–17, 2023. https://doi.org/10.1007/978-3-031-34896-9_1


I. O. Adebayo et al.

making the procedure cumbersome and computationally intensive, due to the curse of dimensionality, when the number of controllers to be placed is unknown [2, 3]. Therefore, we consider in this study centrality measures for estimating the propagation capability of nodes, with the aim of selecting the set of nodes that minimizes average and worst case latencies simultaneously. However, given the numerous centralities proposed in the literature for identifying significant nodes in social networks, there is a need to explore the centralities relevant to controller placement optimization [4–6]. Moreover, given the number of available latency related optimization metrics, finding a balanced trade-off is a gap that requires further investigation. We envisage that one way to address the issue of multi-objective optimization in controller placements is to sort and rank nodes based on their estimated significance using several centralities. Hence, in this study, we evaluate the selection of controller locations based on multiple centralities, with the aim of determining the propagation capability of nodes in minimizing both average case (AC) and worst case (WC) latencies. Essentially, in SD-WANs, minimizing worst and average case latencies is critical to the responsiveness of the control plane to different applications [7, 8]. Moreover, for a consistent and global view of the network's state, both latencies need to be optimized simultaneously. Hence, a strategy that offers balanced trade-offs between both latencies needs to be developed. One way to do this is to explore the centralities of networks with the view of drawing inferences for determining the appropriate number of controllers. To this end, we explore in this study the Dempster-Shafer theory of evidence as a framework for combining multiple centralities with the view of estimating the propagation capability of nodes. The rest of the paper is structured as follows.
The centralities relevant to controller placements are presented in Sect. 2. The formulation of the joint evidential centrality algorithm is described in Sect. 3. The evaluation of results is discussed in Sect. 4, with conclusions drawn in Sect. 5.

2 Related Works

In this section, we discuss six centrality based algorithms that have been used to address the problem of significant node identification in social networks, which is similar to the controller placement problem. Hence, the algorithms used for determining the most significant set of nodes in social networks can also be explored for selecting the set of nodes suitable for hosting controllers in SD-WANs. However, the NP-hardness of both problems requires that an efficient algorithm be developed for large instances of the problem. Therefore, we consider using centrality measures as an alternative and efficient method for identifying suitable nodes for controller placements in SD-WANs. Centrality is a term used to describe the significance of nodes in social networks [9]. To this end, the literature is replete with algorithms designed for evaluating the propagating capability of nodes in complex networks. These algorithms can be classified based on the scope of the network properties used to measure the significance of each node. Whilst centralities such as degree (D) and step-neighborhood (N) only utilize local properties, centralities such as betweenness (B), closeness (C), eccentricity (E) and average current flow (A) use global properties. Specifically, within the context

Centrality Based Algorithms for Controller Placements


of controller placements, few scholars have adopted the use of centrality measures for determining the set of nodes in SDNs that are suitable for hosting controllers [4–6]. We provide a brief description of the centralities relevant to controller placements in the sub-sections below, drawing inspiration from the study carried out in [10]:

2.1 Local Centralities for Controller Placements

2.1.1 Degree Centrality (D)
This is the measure of the number of nodes directly connected to a given node. The degree centrality of node i is defined mathematically as follows:

D(i) = \sum_{j}^{N} v_{i,j}    (1)

where v_{i,j} indicates connectivity between a pair of observed nodes i and j in the network: generally, v_{i,j} = 1 to indicate connectivity, otherwise v_{i,j} = 0. The degree of a node shows its ability to propagate information to its immediate neighboring nodes. Hence, the higher the degree centrality of a node, the greater its capability to propagate flows in the network. In this study, we consider degree centrality as a measure for estimating the probability of each node's propagating capability with regard to its immediate neighboring nodes.

2.1.2 Weighted Neighborhood Centrality (N)
This is the measure of the sum of the weights associated with each link L connected to a node up to the kth step degree. The weighted neighborhood centrality of node i is defined mathematically as follows:

N_k(i) = \sum_{i=1}^{N} L_i^{(k)} \cdot W_i, \quad \forall i \in L    (2)

where L_i^{(k)} denotes the set of links connected to the observed node i up to the kth step degree. For example, when k = 1, the set of nodes covered equals the value returned by the degree centrality without considering the weights on the links. The weighted neighborhood centrality is indicative of the clustering capacity of each node. Hence, the higher the weighted neighborhood centrality of a node, the greater its capacity to be a cluster head. In this study, we consider this centrality as a measure for estimating the probability of each node's clustering capability.

2.2 Global Centralities for Controller Placements

2.2.1 Betweenness Centrality (B)
This is the measure of the number of times a node i lies on the shortest paths between other nodes in the network. The betweenness centrality of node i is defined mathematically as follows:

B(i) = \sum_{j \neq l \neq i} \frac{n_{jl}(i)}{n_{jl}}    (3)


where n_{jl} denotes the set of binary shortest paths between nodes j and l, and n_{jl}(i) denotes the subset of those paths that pass through node i. The betweenness centrality is indicative of the nodes in the network that act as bridges between other nodes. Hence, the higher the betweenness centrality of a node, the greater its capacity to serve as a hotspot. In this study, we consider this centrality as a measure for estimating the probability of each node's capability to serve as a bridge between nodes in the network.

2.2.2 Closeness Centrality (C)
This is the measure of the proximity of node i to every other node in the network. The closeness centrality of node i is defined mathematically as follows:

C(i) = \frac{1}{\sum_{j}^{N} d_{ij}}    (4)

where d_{ij} denotes the distance between nodes i and j in the network. It is indicative of nodes with efficient propagating capability, where efficiency is a function of average case latency. Hence, nodes with higher closeness centrality have more efficient propagating capability. In this study, we consider this centrality as a measure for estimating the timeliness with which a node can propagate flows across the network.

2.2.3 Eccentricity Centrality (E)
This is the measure of a node's proximity to the furthermost node in the network. It is similar to closeness centrality; the difference is that whilst closeness centrality considers the average distance, eccentricity considers the longest distance. The eccentricity centrality of node i is defined mathematically as follows:

E(i) = \frac{1}{\max\{dist(i, j)\}}, \quad \forall i, j \in N    (5)

where max{dist(i, j)} is the maximum distance between node i and its furthermost node. It is therefore also a function of latency and determines to a large extent the efficiency of a node in propagating flows with regard to worst case latency. Hence, nodes with higher eccentricity (longer maximum distance) have lower efficiency, whilst those with lower values have higher efficiency. In this study, we consider this centrality as a measure for estimating the timeliness with which a node propagates flows with regard to the furthermost node.

2.2.4 Approximate Current Flow Betweenness Centrality (A)
This is the measure of the amount of traffic flowing through node i with regard to source-target node pairs. The approximate current flow betweenness centrality of node i is defined mathematically as follows:

A(i) = \frac{1}{\frac{1}{2} n(n-1)} \sum_{j \neq l \in n} I_n^{(jl)}    (6)


where (n − 1) is the number of nodes reachable from node i, n(n − 1)/2 is the normalizing constant, and I_n^{(jl)} is the traffic flowing through node i between nodes j and l. Although a variant of betweenness centrality, it allows for traffic estimation in networks. Hence, the higher the approximate current flow betweenness centrality of a node, the greater its utilization in the network. In this study, we consider this centrality as a measure for estimating the significance of nodes with regard to traffic flowing across all pairs of source-target nodes in the network.
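To make the indicators above concrete, the sketch below computes four of them directly from a weighted adjacency list. The topology, node names and weights are hypothetical, and this is an illustration rather than the authors' implementation; betweenness (B) and current-flow (A) are omitted for brevity and are in practice obtained from a graph library.

```python
# Sketch: degree (D), node-strength (N with k = 1), closeness (C) and
# eccentricity (E) of Sect. 2 on a small hypothetical weighted topology.
graph = {
    "a": {"b": 1.0},
    "b": {"a": 1.0, "c": 2.0, "d": 1.0},
    "c": {"b": 2.0, "d": 1.5},
    "d": {"b": 1.0, "c": 1.5, "e": 2.5},
    "e": {"d": 2.5},
}
nodes = list(graph)

# All-pairs shortest path distances (Floyd-Warshall) over link weights.
INF = float("inf")
dist = {i: {j: (0.0 if i == j else graph[i].get(j, INF)) for j in nodes}
        for i in nodes}
for k in nodes:
    for i in nodes:
        for j in nodes:
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

degree = {i: len(graph[i]) for i in nodes}                  # D, Eq. (1)
strength = {i: sum(graph[i].values()) for i in nodes}       # N, k = 1, Eq. (2)
closeness = {i: 1.0 / sum(dist[i][j] for j in nodes if j != i)   # C, Eq. (4)
             for i in nodes}
eccentricity = {i: 1.0 / max(dist[i][j] for j in nodes if j != i)  # E, Eq. (5)
                for i in nodes}
```

On this toy topology, node d scores highest on closeness and eccentricity, matching the intuition that centrally placed nodes propagate flows with the least average and worst case delay.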

3 Joint Evidential Centrality

In this section, we present new centrality measures called joint evidential centralities (JECs) for estimating the propagating capability of nodes in SD-WANs. With JECs, the attributes of multiple centralities are combined to provide new insights for the estimation of node significance, thus making it possible for nodes with high propagating capability to be selected as controllers. The steps for developing JEC algorithms are given below and follow the steps shown in [11–13].

3.1 Input Indicators of Centrality

As input indicators of centrality, we consider the topological layout of SD-WANs, represented here as a weighted graph:

G = (V_{|n|}, E_{|m|}, W)    (7)

where V = {v_1, v_2, v_3, …, v_{|n|}} represents the set of switches and |n| the total number of switches present in the network. Conversely, E = {e_1, e_2, e_3, …, e_{|m|}} represents the set of links connecting nodes and |m| the total number of links. W represents a weighted adjacency matrix; in this study, its entries give the shortest path distance between connected nodes.

3.2 Determine the Reference Value

Based on the indicators of centrality inputted in the first step, reference values are determined for each network. These values are simply the maximum and minimum centrality values of nodes in the network. For this study, we consider six centralities, namely degree (D), node-strength (N), betweenness (B), closeness (C), eccentricity (E) and average traffic flow (A). Given that n is the number of nodes in a network, this means that for each considered centrality, the maximum and minimum values over the n nodes in the network are to be determined. Let D_Max = max{D_1, D_2, D_3, …, D_{|n|}} and D_Min = min{D_1, D_2, D_3, …, D_{|n|}} represent the maximum and minimum reference values for degree centrality respectively. In like manner, N_Max and N_Min, B_Max and B_Min, C_Max and C_Min, E_Max and E_Min, and A_Max and A_Min represent the maximum and minimum reference values for node-strength, betweenness, closeness, eccentricity and average current flow centralities.
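This step amounts to a per-centrality max/min scan over the node scores. A minimal sketch, using hypothetical scores for two of the six centralities on five nodes:

```python
# Sketch of Sect. 3.2: reference values are simply the per-centrality
# maximum and minimum scores over all nodes. Scores below are hypothetical.
centrality_scores = {
    "D": {"a": 1, "b": 3, "c": 2, "d": 3, "e": 1},            # degree
    "N": {"a": 1.0, "b": 4.0, "c": 3.5, "d": 5.0, "e": 2.5},  # node-strength
}

reference = {
    name: {"max": max(scores.values()), "min": min(scores.values())}
    for name, scores in centrality_scores.items()
}
```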


3.3 Ascertain the Domain of Influence

With the reference values for each network determined, a domain for evaluating the level of each centrality's influence on the propagation capability of nodes is to be ascertained. In this study, the domain of influence θ indicates the level of uncertainty associated with each node's propagation capability. However, since the propagation capability of a node is a function of the reference values, an estimate of the level of uncertainty associated with each centrality needs to be ascertained. Specifically, there are two possible evaluation indices for estimating the propagation capability of a node with regard to the considered centralities. These indices are denoted as high (h) and low (l):

θ = {high, low}    (8)

Thus the evaluation indices estimate the probability of a centrality's influence as either high or low. Hence, p_{D(i)}(h) and p_{D(i)}(l) denote the probabilities that the influence of degree centrality in finding the propagation capability of node i is considered high and low respectively. In like manner, p_{N(i)}(h) and p_{N(i)}(l), p_{B(i)}(h) and p_{B(i)}(l), p_{C(i)}(h) and p_{C(i)}(l), p_{E(i)}(h) and p_{E(i)}(l), and p_{A(i)}(h) and p_{A(i)}(l) denote the probabilities that the influence of node-strength, betweenness, closeness, eccentricity and average current flow centralities is considered high and low respectively, where i = 1, …, |n|. Next, we normalize, for both the high and low probabilities, the influence each centrality exerts on each node i in the network:

p_{D(i)}(h) = \frac{|D_i - D_{Min}|}{D_{Max} - D_{Min} + \alpha}    (9)

p_{D(i)}(l) = \frac{|D_i - D_{Max}|}{D_{Max} - D_{Min} + \alpha}    (10)

p_{N(i)}(h) = \frac{|N_i - N_{Min}|}{N_{Max} - N_{Min} + \alpha}    (11)

p_{N(i)}(l) = \frac{|N_i - N_{Max}|}{N_{Max} - N_{Min} + \alpha}    (12)

p_{B(i)}(h) = \frac{|B_i - B_{Min}|}{B_{Max} - B_{Min} + \alpha}    (13)

p_{B(i)}(l) = \frac{|B_i - B_{Max}|}{B_{Max} - B_{Min} + \alpha}    (14)

p_{C(i)}(h) = \frac{|C_i - C_{Min}|}{C_{Max} - C_{Min} + \alpha}    (15)

p_{C(i)}(l) = \frac{|C_i - C_{Max}|}{C_{Max} - C_{Min} + \alpha}    (16)

p_{E(i)}(h) = \frac{|E_i - E_{Min}|}{E_{Max} - E_{Min} + \alpha}    (17)

p_{E(i)}(l) = \frac{|E_i - E_{Max}|}{E_{Max} - E_{Min} + \alpha}    (18)

p_{A(i)}(h) = \frac{|A_i - A_{Min}|}{A_{Max} - A_{Min} + \alpha}    (19)

p_{A(i)}(l) = \frac{|A_i - A_{Max}|}{A_{Max} - A_{Min} + \alpha}    (20)

where α is a user defined parameter within the range [0, 1]. The value of α has been shown to have a negligible effect on the sorting order of nodes in networks.

3.4 Estimate Propagation Capability of Nodes

To estimate the propagation capability of nodes we consider a triple set:

P^{(k)}(i) = \{p^{(k)}(h), p^{(k)}(l), p^{(k)}(\theta)\}    (21)

where k is the kth centrality considered and p^{(k)}(θ) = 1 − (p^{(k)}(h) + p^{(k)}(l)). In this instance, θ = {high, low} captures the level of uncertainty associated with the propagation capability of each node with regard to multiple centralities. Hence, the propagation capability value of the ith node is derived by combining the elements of Eq. (21) using the Dempster-Shafer combination rule. For each node i the joint centrality is denoted as:

P(i) = \{p_i(h), p_i(l), p_i(\theta)\}    (22)

However, the complexity of the algorithm is affected by the number of centralities combined. We illustrate the combination of three and four centralities in the following sub-sections. For simplicity, in this section let a = degree centrality, b = node-strength centrality, c = betweenness centrality, d = closeness centrality, e = eccentricity centrality and f = average current flow centrality.

3.4.1 Estimating the Propagation Capability Based on Three Centralities

We consider the combination of degree (a), node-strength (b) and betweenness (c) in this section to illustrate the Dempster-Shafer combination rule for three centralities:

p_i(h) = \frac{a(h) \cdot b(h) \cdot c(h) + a(h) \cdot b(\theta) \cdot c(\theta) + c(h) \cdot b(h) \cdot a(\theta)}{1 - [a(h) \cdot b(l) \cdot c(l) + a(l) \cdot b(h) \cdot c(h)]}    (23)

p_i(l) = \frac{a(l) \cdot b(l) \cdot c(l) + a(l) \cdot b(\theta) \cdot c(\theta) + c(l) \cdot b(l) \cdot a(\theta)}{1 - [a(h) \cdot b(l) \cdot c(l) + a(l) \cdot b(h) \cdot c(h)]}    (24)

p_i(\theta) = \frac{a(\theta) \cdot b(\theta) \cdot c(\theta)}{1 - [a(h) \cdot b(l) \cdot c(l) + a(l) \cdot b(h) \cdot c(h)]}    (25)
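The pipeline so far can be sketched end to end: each raw centrality score is turned into a mass triple over {high, low, θ}, and the triples are then fused. Note that the sketch below applies the standard pairwise Dempster combination rule on the frame {high, low}, as a generic alternative to the closed-form three-way expressions above; all scores, the choice of α, and the (score, min, max) triples are illustrative, not values from the paper.

```python
from functools import reduce

ALPHA = 0.5  # illustrative choice; the paper only requires alpha in [0, 1]

def mass(score, s_min, s_max):
    """Eqs. (9)-(21): one centrality score -> masses on {h}, {l}, theta."""
    denom = (s_max - s_min) + ALPHA
    h = abs(score - s_min) / denom
    l = abs(score - s_max) / denom
    return {"h": h, "l": l, "t": 1.0 - (h + l)}  # "t" stands for theta

def combine(m1, m2):
    """Dempster's rule on the frame {high, low} with ignorance mass t."""
    conflict = m1["h"] * m2["l"] + m1["l"] * m2["h"]
    norm = 1.0 - conflict
    return {
        "h": (m1["h"] * m2["h"] + m1["h"] * m2["t"] + m1["t"] * m2["h"]) / norm,
        "l": (m1["l"] * m2["l"] + m1["l"] * m2["t"] + m1["t"] * m2["l"]) / norm,
        "t": (m1["t"] * m2["t"]) / norm,
    }

# Hypothetical (score, min, max) triples for one node's degree,
# node-strength and betweenness centralities.
evidence = [mass(3, 1, 3), mass(4.0, 1.0, 5.0), mass(0.5, 0.0, 0.5)]
joint = reduce(combine, evidence)  # the joint centrality P(i) of Eq. (22)
```

Because Dempster's rule is associative, the same `reduce` call extends directly to the four-centrality case of Sect. 3.4.2.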


3.4.2 Estimating the Propagation Capability Based on Four Centralities

We consider the combination of degree (a), node-strength (b), betweenness (c) and closeness (d) centralities in this section to illustrate the Dempster-Shafer combination rule for four centralities:

p_i(h) = \frac{a(h) \cdot b(h) \cdot c(h) \cdot d(h) + a(h) \cdot b(\theta) \cdot c(\theta) \cdot d(\theta) + d(h) \cdot c(h) \cdot b(h) \cdot a(\theta)}{1 - [a(h) \cdot b(l) \cdot c(l) \cdot d(l) + a(l) \cdot b(h) \cdot c(h) \cdot d(h)]}    (26)

p_i(l) = \frac{a(l) \cdot b(l) \cdot c(l) \cdot d(l) + a(l) \cdot b(\theta) \cdot c(\theta) \cdot d(\theta) + d(l) \cdot c(l) \cdot b(l) \cdot a(\theta)}{1 - [a(h) \cdot b(l) \cdot c(l) \cdot d(l) + a(l) \cdot b(h) \cdot c(h) \cdot d(h)]}    (27)

p_i(\theta) = \frac{a(\theta) \cdot b(\theta) \cdot c(\theta) \cdot d(\theta)}{1 - [a(h) \cdot b(l) \cdot c(l) \cdot d(l) + a(l) \cdot b(h) \cdot c(h) \cdot d(h)]}    (28)

To normalize the values of p_i(h) and p_i(l) for both the three- and four-centrality cases, the residual uncertainty p_i(θ) of Eq. (25) or Eq. (28) is distributed equally between the high and low hypotheses. The normalized values are given as:

P_i(h) = p_i(h) + \frac{1}{2} p_i(\theta)    (29)

P_i(l) = p_i(l) + \frac{1}{2} p_i(\theta)    (30)

Equation (29) and Eq. (30) represent the probability of high and low influence for each node i with respect to the centralities being considered. In D-S theory, it is assumed that the significance of a node i is proportional to the probability that a set of centralities has high influence and inversely proportional to the probability that the set of centralities has low influence. Hence, the propagation capability of each node i is given as (Table 1):

PC(i) = P_i(h) - P_i(l)    (31)
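The final scoring and ranking step can be sketched as follows. The joint mass triples below are illustrative values, not outputs of the paper's experiments:

```python
# Sketch of Eqs. (29)-(31): collapse residual uncertainty and rank nodes by
# propagation capability PC(i) = P_i(h) - P_i(l). Node names are hypothetical.
joint_masses = {
    "a": {"h": 0.25, "l": 0.65, "t": 0.10},
    "b": {"h": 0.75, "l": 0.15, "t": 0.10},
    "c": {"h": 0.50, "l": 0.40, "t": 0.10},
}

def propagation_capability(m):
    p_high = m["h"] + 0.5 * m["t"]   # Eq. (29)
    p_low = m["l"] + 0.5 * m["t"]    # Eq. (30)
    return p_high - p_low            # Eq. (31)

ranking = sorted(joint_masses,
                 key=lambda v: propagation_capability(joint_masses[v]),
                 reverse=True)
# Nodes whose PC exceeds a chosen probability threshold are then
# selected as controller locations.
```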


Table 1. JEC algorithm illustrating steps explained in Sect. 3.

4 Evaluation of Results

The essence of developing centrality based algorithms for controller placements is to identify the set of nodes capable of propagating traffic flow across networks within the shortest possible time. To this end, identifying the correct number of nodes suitable for hosting controllers depends on the centralities considered for determining each node's significance.

Table 2. Joint Evidential Centralities (JEC) considered

Symbol   Details of each JEC
DNB      Degree, weighted neighborhood & betweenness
DNBC     Degree, weighted neighborhood, betweenness & closeness
DNBE     Degree, weighted neighborhood, betweenness & eccentricity
DNA      Degree, weighted neighborhood & average current flow betweenness
DNAE     Degree, weighted neighborhood, average current flow betweenness & eccentricity
DNCA     Degree, weighted neighborhood, closeness & average current flow betweenness

Table 2 shows the details of each considered Joint Evidential Centrality (JEC) used for evaluating the placement of controllers in four SD-WANs with regard to average and worst case latencies.


Table 3. Indicators of centrality for network topologies

Network name   Nodes/links   Controllers selected by JEC (min/max)
Abilene        11/14         2/5
Agis           25/30         3/6
Internet2      34/42         8/9
Bell Canada    48/64         9/11

Table 3 gives a description of the properties of the four network topologies obtained from the Internet Topology Zoo and used in this study. The networks are Abilene, Agis, Internet2 and Bell Canada [15]. The JEC algorithms have been implemented in Python using the evidential based controller placement (EBCP) framework.

4.1 Performance of JEC Based on Number of Controllers Selected

Fig. 1. Cost of JEC algorithms with regard to the number of selected nodes

Figure 1 shows how the six variants of joint evidential centralities affect the selection of controller locations in SD-WANs. Determining the appropriate number of controllers is critical and implies different trade-offs between the cost and performance of controller placement algorithms. To evaluate the cost of implementation, we find the ratio of controllers selected to the number of nodes present in the network. Specifically, DNBE outperforms the other JECs as it selects about 20% of the entire set of nodes as controllers, thus reducing the search space by more than 75%.
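The cost metric above is simply the selection ratio. A minimal sketch using the node counts and minimum JEC selections from Table 3:

```python
# Sketch of the Sect. 4.1 cost metric: ratio of controllers selected to
# network size, using Table 3's node counts and minimum JEC selections.
networks = {
    "Abilene":     {"nodes": 11, "controllers": 2},
    "Agis":        {"nodes": 25, "controllers": 3},
    "Internet2":   {"nodes": 34, "controllers": 8},
    "Bell Canada": {"nodes": 48, "controllers": 9},
}

cost = {name: net["controllers"] / net["nodes"] for name, net in networks.items()}
search_space_reduction = {name: 1.0 - c for name, c in cost.items()}
```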


4.2 Performance of JEC Algorithms in SD-WANs

Fig. 2. Trade-off between average case controller to controller and switch to controller latencies in Abilene network.

Fig. 3. Trade-off between worst case controller to controller and switch to controller latencies in Abilene network.

Figure 2 and Fig. 3 show the performance of the six JEC algorithms in the Abilene network. Here, for average case controller to controller and switch to controller latencies, DNAE and DNB have the least values of 0.27 km/s and 0.07 km/s respectively, whilst for the worst case, DNBE and DNB have the least latencies of 0.39 km/s and 0.28 km/s respectively. Figure 4 and Fig. 5 show the performance of the six JEC algorithms in the Agis network. Here, for average case controller to controller and switch to controller latencies, DNB, DNA and DNAE behave similarly with the least latencies of 0.33 km/s and 0.8 km/s respectively, whilst for worst case controller to controller latency DNCA returns the least value of 1.1 km/s, closely followed by DNAE with 1.3 km/s. Conversely, for worst case switch to controller latency, the first five algorithms have similar latencies of 0.84 km/s. Figure 6 and Fig. 7 show the performance of the six JEC algorithms in the Internet2 network. Here, for average case controller to controller latency DNB returns the least value of 0.19 km/s, whereas for average case switch to controller latency DNB and DNA behave similarly with a latency of 0.02 km/s. For worst case controller to controller latency DNB, DNBE, DNA and DNAE behave similarly with the least latency of 1.35 km/s, whereas for worst case switch


Fig. 4. Trade-off between average case controller to controller and switch to controller latencies in Agis network

Fig. 5. Trade-off between worst case controller to controller and switch to controller latencies in Agis network

Fig. 6. Trade-off between average case controller to controller and switch to controller latencies in internet2 network

to controller latency, DNCA returns the least value of 0.27 km/s, followed by DNB and DNA, each having a latency of 0.31 km/s.


Fig. 7. Trade-off between worst case controller to controller and switch to controller latencies in internet2 network

Fig. 8. Trade-off between average case controller to controller and switch to controller latencies in Bell Canada network

Fig. 9. Trade-off between worst case controller to controller and switch to controller latencies in Bell Canada network.

Figure 8 and Fig. 9 show the performance of the six JEC algorithms in the Bell Canada network. Here, for average case controller to controller latency DNB returns the least value of 0.16 km/s, whereas for average case switch to controller latency DNA and DNCA behave similarly with a latency of 0.01 km/s. For worst case controller to controller latency DNB outperforms the other algorithms with the least latency of 1.55 km/s, whilst for worst case switch to controller latency DNCA performs best with the least latency of 0.16 km/s.


5 Conclusion

Determining the correct number of controllers to be placed is critical to the management of large scale networks like SD-WANs. In this study, we investigate the use of centrality measures to address controller placements in order to minimize average and worst case latencies between controllers as well as between switches and controllers. We evaluate the performance of six joint evidential centralities on four real network topologies. To this end, we propose that combining multiple centralities provides an opportunity for nodes to be sorted based on their propagation capability. This way, nodes capable of propagating traffic across the network can be identified as possible controller locations. Hence, based on the set of joint evidential centralities considered, we determine the appropriate number of controllers required for optimizing average and worst case latencies. Results obtained from the experiments conducted indicate that combining the right set of centralities enhances the optimal selection of controller locations, balancing average and worst case latencies simultaneously.

References

1. Rasol, K.A., Domingo-Pascual, J.: Joint placement latency optimization of the control plane. In: 2020 International Symposium on Networks, Computers and Communications (ISNCC), Canada, pp. 1–6. IEEE (2020)
2. Hock, D., Hartmann, M., Gebert, S., Jarschel, M., Zinner, T., Tran-Gia, P.: Pareto-optimal resilient controller placement in SDN-based core networks. In: Proceedings of the 2013 25th International Teletraffic Congress (ITC), China, pp. 1–9. IEEE (2013)
3. Sallahi, A., St-Hilaire, M.: Optimal model for the controller placement problem in software defined networks. IEEE Commun. Lett. 19, 30–33 (2014)
4. Alhazmi, K., Moubayed, A., Shami, A.: Distributed SDN controller placement using betweenness centrality & hierarchical clustering. In: Proceedings of the 8th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications, USA, pp. 15–20 (2018)
5. Alowa, A., Fevens, T.: Combined degree-based with independent dominating set approach for controller placement problem in software defined networks. In: 22nd Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN), France, pp. 269–276. IEEE (2019)
6. Alowa, A., Fevens, T.: Towards minimum inter-controller delay time in software defined networking. Procedia Comput. Sci. 175, 395–402 (2020)
7. Rasol, K.A.R., Domingo-Pascual, J.: Joint latency and reliability-aware controller placement. In: 2021 International Conference on Information Networking (ICOIN), South Korea, pp. 197–202. IEEE (2021)
8. Mohanty, S., Shekhawat, A.S., Sahoo, B., Apat, H.K., Khare, P.: Minimizing latency for controller placement problem in SDN. In: 2021 19th OITS International Conference on Information Technology (OCIT), India, pp. 393–398. IEEE (2021)
9. Liu, D., Nie, H., Zhao, J., Wang, Q.: Identifying influential spreaders in large-scale networks based on evidence theory. Neurocomputing 359, 466–475 (2019)
10. Cao, W., Feng, X., Jia, J., Zhang, H.: Characterizing the structure of the railway network in China: a complex weighted network approach. J. Adv. Transp. 2019 (2019)
11. Mo, H., Gao, C., Deng, Y.: Evidential method to identify influential nodes in complex networks. J. Syst. Eng. Electron. 26(2), 381–387 (2015)


12. Bian, T., Deng, Y.: A new evidential methodology of identifying influential nodes in complex networks. Chaos Solitons Fractals 103, 101–110 (2017)
13. Mo, H., Deng, Y.: Identifying node importance based on evidence theory in complex networks. Phys. A: Stat. Mech. Appl. 529, 121538 (2019)
14. Mbodila, M., Isong, B., Gasela, N.: A review of SDN-based controller placement problem. In: 2020 2nd International Multidisciplinary Information Technology and Engineering Conference (IMITEC), South Africa, pp. 1–7. IEEE (2020)
15. The Internet Topology Zoo dataset. http://www.topology-zoo.org/dataset.html. Accessed 12 Mar 2022

Modelling and Multi-agent Simulation of Urban Road Network

Aurel Megnigbeto(B), Arsène Sabas, and Jules Degila

Institut de Mathématiques et de Sciences Physiques, Université d'Abomey-Calavi, Dangbo, Benin
{aurel.megnigbeto,jules.degila}@imsp-uac.org, [email protected]

Abstract. In many African cities, there is a constant increase in road network usage, while we notice an absence of proper means of traffic regulation. Decision-makers should make decisions about road network improvement to ease the management and availability of the road network. They must consider the smart city concept because of its features that will enable smart mobility. The most popular and tedious method of providing mobility data is the traditional traffic count, which is used in many cities. However, this method does not make it possible to assess mobility according to travel scenarios, and it requires significant financial and human resources compared to a computer simulation-based approach. We propose a simulation tool based on multi-agent technology to facilitate the testing of mobility scenarios and to help in decision-making about traffic regulation. The tool we designed has been applied to the city of Cotonou to simulate mobility and to obtain a fully functional representation of the existing road network. This virtual representation can help to identify the key metrics decision-makers can leverage to improve traffic and get an insight into the city road network. To test it, we simulated the traffic by considering a travel scenario: the home-work journey of the citizens of Cotonou. The results help in decision-making to improve mobility under this scenario. Even though our example is applied to the city of Cotonou, our model is, by design, flexible enough to support the peculiarities of African cities.

Keywords: Smart mobility · Multi-agent system · Road network simulation

1 Introduction

According to the World Bank report [9], African cities are getting increasingly crowded. This increase in city inhabitants requires better infrastructure management in different sectors such as transport, energy, finance, and governance. The concept of smart cities offers a framework for improvement in each

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 18–32, 2023. https://doi.org/10.1007/978-3-031-34896-9_2


of those sectors, promising better and more sustainable use of existing resources to improve the quality of life and even reduce the impact on the environment. To improve transportation by following the features of smart mobility, avoiding traffic jams, and optimizing the usage of the infrastructures involved in transport, policy-makers must have a deep understanding of the traffic. This understanding can be made available using one MAS application field: simulation. However, it requires knowledge of the existing traffic and a MAS model that is extensible enough to represent the actual traffic scenarios. With a good model and good modelling of the existing transportation system, we can predict a city's transportation needs and its inhabitants' behaviour toward the transportation infrastructure for each transportation activity. Unfortunately, the current MAS transport simulation models are not extensive enough or not built to account for the quirks of transportation in African cities. The motivation for our work is the difficulty of foreseeing the impacts of regulatory policies on the current mobility of cities like Cotonou. The lack of good regulatory policies and road congestion strongly impact the economies of countries. Also, most of the software that helps measure the impact of mobility regulation policies needs to be adapted to the singular mobility in cities like Cotonou. In our work, we set up a multi-agent system model which makes it possible to describe the traffic in the city of Cotonou. We have considered the unique features of mobility in a town like Cotonou. To show the feasibility of our system, we set up a simulation considering a simple scenario which can serve as a basis for setting up other scenarios. Our paper is organized as follows. Section 2 reviews related work. Section 3 presents our approach to designing a MAS model targeted toward transportation simulation, through data gathering and model design. We dedicate Sect. 4 to implementing our system with the MAS simulation framework GAMA. We discuss the results of our experiment modelling the commute in Cotonou in Sect. 5 and conclude in Sect. 6.

2 Related Work

A remarkable amount of work has been done in the field of traffic simulation, using either agent-based models or finite state machines. We focus on multi-agent systems because of their ability to model complex systems, especially for traffic simulation. For example, Transim [14] began in 1995 with the intention of providing a software suite that uses simulation to set up a realistic environment for analyzing regional transportation. Transim [14] uses an activity-based approach and microscopically simulates traffic; that is, it considers the individual behavior of vehicles and their local interactions. The activity-based method [8] associates trips with a need and a reason. It also allows for the analysis of the reasons for travel in the event of congestion. Transim [14] does not model motorcycle cab drivers or street vendors. Its major problem is that it is no longer maintained.

20

A. Megnigbeto et al.

In 2013, the authors of [15] created a traveller mobility simulation tool to help understand and predict road network usage. In order to achieve this, their multi-agent system consists of the following main components:

– Planner Agent
– Car Agent
– Traveler Agent
– Public Transport Vehicle Agent
– Itinerary Monitoring Agent

It uses a multi-agent approach but does not model motorcycle cab drivers and the entire ecosystem around them. Like our study, the paper [7] aims to provide a decision-support tool. To operate, the simulator takes as input data describing:

– the public transport network
– the possible stops
– the pedestrian network (roads that can be taken)
– the displacement model
– traveler profiles

However, the initial motivation and aim of this study differ from ours in that the authors were interested in the impact of real-time information provision on the quality of transit trips. The objective of the work [11] is to provide a decision-support tool to help better study the impact of regulatory decisions. However, the paper did not address some aspects of the possible scenarios. For example, it does not offer the possibility of adding a road to the transport network or of modelling the kinds of transport encountered in Cotonou. Also, the research did not consider the ecological impacts of preparing a framework for intelligent mobility.

3

Methodology

3.1

Data Gathering

The mobility of the citizens of Cotonou, and more generally of West African cities, differs from that of developed countries. It is mainly characterized by a high share of motorcycles in the mobility demand, rather than car taxis. To create a model of such a mobility schema, we gathered data from the ministry in charge of transport in Benin (MIT). The data we collected consists of the following:

– The road network map of Cotonou
– The reasons for travelling inside the city
– The number of motorcycles, cars, and other vehicles, grouped by reason for travel
– The driving behaviour of the drivers

The road network is GIS data that can be visualized as a map, as shown in Fig. 5.

Modelling and Multi-agent Simulation of Urban Road Network

21

All the other data we collected are road usage statistics, which help us understand the drivers' behaviour, the reasons for travel and the expected travel scenarios in Cotonou. Understanding how the inhabitants of Cotonou handle mobility is essential to providing a model that can describe transport. 3.2

Design

Like other modelling paradigms, multi-agent based systems have their own methodologies, tools, and frameworks to help the modeller in this task. Through the MUCCMAS [10] methodology comparison framework and the comparison work in the paper [3], we compared four multi-agent based methodologies according to the needs of our target domain, transport system modelling. We compared Gaia [5,6], Mase [1], Passi [4] and Prometheus [2] through the different criteria in Table 1.

Table 1. Comparison of MAS methodologies

Criterion                                                    Gaia  Prometheus  Passi  Mase
Easy identification of modelling phases                      Yes   Yes         Yes    Yes
Top-down method                                              Yes   No          No     No
Requirements specification phase                             Yes   Yes         Yes    Yes
Model reusability                                            Yes   Yes         Yes    Yes
Graphical representation of models                           Yes   Yes         Yes    Yes
Autonomy and heterogeneity of agents                         Yes   Yes         Yes    Yes
Mobility of agents                                           Yes   No          Yes    Yes
Representation of organisation                               Yes   Yes         Yes    Yes
Modelling of communication types and communication channel   Yes   Yes         Yes    Yes
Comprehensive documentation                                  No    Yes         Yes    Yes

The criteria are selected based on the requirements of travel modelling. We need to represent mobile, heterogeneous and collaborative agents in such a system. Gaia [5,6] is a methodology that suits such a system well. In the Gaia methodology [5,6] specifications, the analysis phase following the requirements definition should be characterized by two abstract models:

– System roles specification through the role model
– System interaction specification through the interaction model

a. Role and interaction model – The role model helps to identify the different features of the agents that should be represented in the system, while the interaction model helps to describe how the different roles communicate to perform the global system task. Figure 1 shows the different roles that are useful for our system and how they communicate to achieve the global system task.


Fig. 1. Role hierarchy of our system

To be able to simulate a transportation system, our model should consist of roles such as:

Drivers: This role represents those who drive; a vehicle is used on their journey.

Taxi: This role outlines drivers who make a business from their trips. It can be for public or private transportation. Any vehicle used to help anyone on a trip is part of the Taxi role.

Traffic regulator: In current transportation systems, the drivers and all the transport actors are managed by a centralized or decentralized entity. This entity can either consist of passive objects, such as road signs, or active tools, such as the traffic light system or police officers in some situations. This role represents the actors that are part of traffic regulation.

Police officers: In addition to standard traffic regulation, police officers are entitled to enforce the respect of traffic regulations by performing further arbitrary controls on the drivers. We represent their actions through this role.

Gas and fuel station: Most vehicle engines are either gas or fuel powered. Those essential products must be available to the drivers before and during a trip; here come the gas and fuel stations. Even though Cotonou vehicles are mostly fuel powered, we represent both possibilities through the gas and fuel station role.

GPS: When travellers do not know how to reach their destination, they use a GPS to find the best itinerary, or request help from pedestrians or people around them during their travel. Those people are represented in our system through the role named GPS.

Pedestrians: This role consists of those who request taxi services to perform their travel.

Troublemaker: In the traffic of cities such as Cotonou, many external actors are involved in the disorder that causes traffic trouble. This role is the representation of that reality in our system.


Apart from the roles that help represent real traffic scenarios, we need some meta-roles to help us collect statistics about what is happening in the simulation and to determine, from actual data, which agents should be created. The meta-role Stat manager in our role model is responsible for data collection in the system. On the other hand, the meta-role Population generator is responsible for creating agents from actual data of real-world traffic. To efficiently implement the feature of agent generation, we divided the latter meta-role into two meta-roles:

– The Activity scheduling mapper which, as its name suggests, assigns activities to the agents. An activity is a trip together with its planned reason.
– The Population generator, which creates the agents and adds them to our simulator. It determines which agents should be created and added to the running simulation.

Table 2. Liveness properties operators

  Operator   Description
  x.y        x followed by y
  x|y        x or y happens
  x*         x happens 0 or more times
  x+         x happens 1 or more times
  xω         x happens infinitely often
  [x]        x is optional
  x||y       x and y happen simultaneously
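The interplay of the two meta-roles can be illustrated with a small, hypothetical sketch (class names, field names and the activity proportions below are ours, not taken from the paper or from MIT data): the activity scheduling mapper draws a reason for travel from observed statistics, and the population generator turns it into an agent added to the simulation.

```python
import random

# Illustrative travel-reason statistics (made-up proportions, not MIT data).
ACTIVITY_STATS = {"commute_to_work": 0.6, "shopping": 0.25, "leisure": 0.15}

def schedule_activity(rng):
    """Activity scheduling mapper: pick a travel reason by its observed share."""
    reasons = list(ACTIVITY_STATS)
    weights = [ACTIVITY_STATS[r] for r in reasons]
    return rng.choices(reasons, weights=weights, k=1)[0]

def generate_population(n, rng):
    """Population generator: create n driver agents, each with an assigned activity."""
    return [{"id": i, "activity": schedule_activity(rng)} for i in range(n)]

agents = generate_population(10, random.Random(42))
```

In a real run the generated agents would be handed to the simulator's processing unit rather than kept in a plain list.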

The Gaia methodology [5,6] requires us to describe a role using a formal template. The template comprises the role description, permissions, protocols, activities and responsibilities. The permissions of a role represent the different resources the role may access to accomplish its responsibilities. A responsibility is a function of the role, an accomplishment expected from the role's presence in the system. There are two categories of responsibilities: those that define a constraint on the behaviour of a role (safety properties) and those that specify the functions of a role (liveness properties). A liveness-type responsibility comprises "activities" or "protocols". An activity is a task that does not require interaction or cooperation with the other roles of the system, while a protocol is a task that involves negotiation with different roles in the system to be completed. In the Gaia methodology [5,6], we represent a responsibility of the liveness type as a composition of activities and protocols with the operators illustrated in Table 2. For simplicity, we will describe only the interaction model behind the Driver role. Those interactions are shown in Fig. 2.
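The composition of activities and protocols with the Table 2 operators can be sketched in a few lines of code. The operator rendering below follows the table; the activity names and the final Driver formula are our own illustrative assumptions, not the paper's actual liveness specification.

```python
# Minimal sketch of Gaia-style liveness expressions (illustrative only).
class Expr:
    def __init__(self, text): self.text = text
    def __repr__(self): return self.text
    def then(self, other): return Expr(f"{self}.{other}")      # x.y : x followed by y
    def alt(self, other):  return Expr(f"({self} | {other})")  # x|y : x or y happens
    def par(self, other):  return Expr(f"({self} || {other})") # x||y: simultaneously
    def star(self):        return Expr(f"{self}*")             # x*  : 0 or more times
    def plus(self):        return Expr(f"{self}+")             # x+  : 1 or more times
    def opt(self):         return Expr(f"[{self}]")            # [x] : x is optional

drive, buy_fuel, ask_gps = Expr("Drive"), Expr("BuyFuel"), Expr("AskGps")

# A plausible (hypothetical) liveness formula for the Driver role: optionally
# ask the GPS role, then drive or buy fuel, one or more times.
driver = ask_gps.opt().then(drive.alt(buy_fuel).plus())
print(driver)  # [AskGps].(Drive | BuyFuel)+
```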


Fig. 2. Description of the role driver

A driver interacts with other system roles by buying fuel for his vehicle, asking the GPS role where he should park or how to reach his destination, and producing data about his travel. That is what we represent in the protocols of the Driver role. In some scenarios, such as plain driving or searching for a gas station, he does not need to interact with other roles; the activities listed in Fig. 2 represent those tasks. As permissions, the Driver needs to be able to use the roads, to have some activities and, finally, to use fuel. The liveness responsibility sums up well what is expected from a Driver role in our model.

b. Agent model – Having described the abstract models that are part of our system, we now derive the concrete model that we can extract from them (Fig. 3). In reality, there is more than one type of taxi driver in Cotonou: those that use a car, a motorcycle or a bus. Hence, three different agent types: MotorcycleTaxiAgent, BusTaxiAgent and CarTaxiAgent. In addition, a PedestrianAgent is made of a Pedestrian role, a GPS role and a Troublemaker role. This helps to represent the scenarios in which they can be involved. As with the Taxi role, there is more than one type of gas station in Cotonou: we have legal gas stations and illegal ones in the city. That is why we define two agent types for the Gas station role. Police officers serve both as traffic regulators and in their traditional police role, such as checking the drivers' activities. We have only one type of agent for each meta-role: PopulationGeneratorAgent, ActivityMapperAgent, and StatManagerAgent.


Fig. 3. Agent model: Roles are bordered by a red rectangle while a black rectangle borders Agent Types. (Color figure online)

4

Implementation

Table 3. Comparison of simulation tools
(Specialized tools: MATSim, MITSimlab; generic tools: GAMA, Repast Simphony.)

Features                       MATSim  MITSimlab  GAMA  Repast Simphony
Can model agents               Yes     Yes        Yes   Yes
Can model agent organisation   -       -          Yes   Yes
Can model agent environment    -       -          Yes   Yes
Can model interactions         -       -          Yes   Yes
Open source                    Yes     Yes        Yes   Yes
Comprehensive documentation    Yes     Yes        Yes   Yes
Community                      Yes     No         Yes   Yes
Urban mobility                 Yes     -          Yes   Yes


Fig. 4. Our system architecture

Our simulator is made up of two main modules (Fig. 4): the interface, which allows the user to interact with our simulator, and the kernel, which is responsible for interpreting the queries issued via the graphical interface (Fig. 5). The simulation kernel consists of a configuration loader which reads the data about the environment that should be simulated. In our case, we need a map of Cotonou city described in a Shapefile format and data relating to the drivers' behaviour. The Agent Species module is the actual definition of the agents we discussed earlier, while the processing unit is responsible for creating the environment, the agents of the environment, the roads, and the buildings. For our implementation, we used GAMA [12,13], a simulation platform that makes it easy to implement spatial agent-based simulations. In addition to being convenient for mobility-driven multi-agent systems, it is open source and well documented. We made an initial benchmark of pertinent simulation tools in Table 3. To test the operation of our model, we used our simulator to represent a scenario in which we simulated the commute of the residents of Cotonou. For example, a DriverAgent in our model starts a day at 6 a.m. and ends it at 9 p.m., travels to work, and then returns home. This scenario, even though it seems simple, allows us to represent an activity which constitutes an essential part of the reasons for travel in the city of Cotonou.
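The daily schedule driving a DriverAgent in this scenario can be sketched as follows. The 6 a.m. start and 9 p.m. end come from the text; the intermediate state names and the 8 a.m.–5 p.m. working hours are our illustrative assumptions.

```python
# Sketch of the commute scenario: map an hour of the simulated day to a
# driver's activity (state names and work hours are illustrative).
def driver_state(hour, work_start=8, work_end=17):
    """Return a driver's activity for a given hour of the simulated day."""
    if hour < 6 or hour >= 21:
        return "at_home"            # outside the simulated day (6 a.m. - 9 p.m.)
    if hour < work_start:
        return "commuting_to_work"  # morning rush
    if hour < work_end:
        return "at_work"
    return "commuting_home"         # evening rush

day = {h: driver_state(h) for h in range(24)}
```

In the real simulator this logic would live inside the DriverAgent species and drive its movement on the road network rather than return a label.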

Fig. 5. Simulator overview

Figure 5 shows the GAMA [12,13] simulator interface. The left side shows the configuration panel (Fig. 6) that can be used to change the simulation parameters. On the right side, we have the variables that describe the state of the simulation; this module is configured to display the current date inside our simulated environment. Our implementation with GAMA [12,13] allows us to run the simulation and collect data about the running system. We discuss the results in the next section.

5

Results and Discussion

Our simulation consists of 10,000 driver agents, with both car and motorcycle drivers. This is the average number of drivers for the scenario we plan to run. From the data that we collected from the ministry in charge of transport in Benin (MIT), we have the different reasons for travel within the city as well as the statistics related to these reasons. These statistics contain the destinations of drivers for each reason and their itineraries. After analyzing these data, we can assume that 10,000 drivers for the home-work journey scenario is a representative number.


Fig. 6. Simulator configuration panel

The agents used in our scenario are the Drivers and the Police agents; Roads and Buildings are artefacts of the simulation. The Drivers represent the inhabitants of Cotonou who are going to work or coming back home.

Fig. 7. Running simulation

In Fig. 7, we have an overview of the simulation, which represents a complete map of the road network of the city of Cotonou and its division into two large parts connected by three bridges. The road-free spaces are uninhabited places, either because they are a lake, a lowland or an area where traffic is prohibited, such as Cotonou airport, located in the southwest of the figure.


Fig. 8. Most used roads

Fig. 9. Road usage per hour

In the simulation perspective, the blue trails represent the commuting drivers, while the red dots represent the buildings that are their final destinations. The building points are placed according to the traffic data we collected from the MIT, and the drivers are placed in such a way that they represent the observations we made. This figure provides an interactive and real-time display of the running simulation. It helps to visualize, on a large scale, the transportation network of the whole city. Even though this kind of view is not as detailed as the data we gathered, it helps the viewer to see the traffic behaviour at a large scale. For example, we can notice the traffic density on two of the bridges that connect the two parts of the city. Those bridges connect two densely populated areas of Cotonou, where the population is heading to one side or the other. A moment before the traffic is dense on the bridges, the traffic is dense on one of the main roundabouts of the city: the roundabout of Etoile Rouge. This view is a good example of visualizing the traffic density on a large scale before going into more detail about the traffic density on a smaller scale. Such information is useful for a quick insight into the traffic behaviour and represents a good starting point for further analysis.

While the simulation was running, we collected data about the traffic behaviour using the GAMA [12,13] platform's built-in data collection tools. A CSV file that dumps all the events that happened during the simulation was generated. With the help of the Python programming language, we were able to extract the data from the CSV file and process it to get the data we needed. The result of this processing is two useful graphs that help to understand the traffic behaviour in Cotonou. The first one, Fig. 8, shows the most used roads in the city, whereas the second one, Fig. 9, shows the road usage per hour of the day. As can be seen in Fig. 8, the most used roads are the ones that connect the two parts of the city together and the main road that goes through the city.
This result is coherent with the hypothesis made in [16] that the high traffic density faced by the city is due to its geographical aspect, which increases the mean time to reach the city center from the outskirts. In addition, the most used roads are the most convenient ones, as they are the shortest and most direct routes to reach the city center. The interests of having such combined data are multiple:

1. First, it helps to understand the traffic behaviour in the city, especially to get a deeper understanding of the traffic density on the most used roads and on the bridges.
2. Second, it helps to plan the road network of the city according to specific improvement needs. For example, if the goal of the regulatory authority is to reduce the traffic density on the bridges for the workers, it can be done by designing new roads or improving the existing ones to be suitable for the workers. Not only will this decision help to reduce the traffic density for this scenario, but it can also reduce the traffic density for other scenarios, such as the school journey scenario, or even reduce the carbon footprint of the city. Reducing the footprint of a city is one of the axes of improvement required by the smart city concept.


3. Third, it helps to reduce the human workload and the errors generated by the current traditional data collection methods.

As we can see in Fig. 9, the traffic density is highest during the morning and evening rush hours. A couple of hours before 8 a.m., which is the conventional office opening hour, the traffic density increases. This is because drivers are leaving their homes for work, and some, aware of the traffic density, leave earlier to avoid the traffic. The traffic density also increases a couple of hours before 5 p.m., which is the conventional office closing hour. We can clearly see that, because the workers are at work, there is almost no traffic during the work hours (8 a.m. to 5 p.m.). The availability of such data is very useful for regulatory authorities in many ways. For example, this simple information can help to know at which hours the traffic density is almost zero, so that road maintenance can be done during this time. A more useful use case of this information is to know at which hours emergency vehicles and services can use the roads without being stuck in traffic. Such information is interesting when planning the economic activities of the city, especially when it comes to the transportation of goods and people. Both graphs in Fig. 9 and Fig. 8 help us understand the road usage of the network in this scenario. One of the main aspects of the simulation is the reusability of the model and the ability to have an overview of the traffic scenario. Multiple scenarios can be run on the same model, and the results can be compared to each other and to real-life data to gain valuable insight into the traffic behaviour of the city.
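The CSV post-processing step described in this section can be sketched as follows. The column names ("hour", "road") and the sample rows are our assumptions; the real GAMA event dump may use a different schema.

```python
# Hedged sketch: aggregate a GAMA-style event dump into road usage per hour
# and per road (the two quantities behind Fig. 9 and Fig. 8 respectively).
import csv
import io
from collections import Counter

def road_usage(csv_text):
    """Count simulation events per hour of day and per road segment."""
    usage_per_hour, usage_per_road = Counter(), Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        usage_per_hour[int(row["hour"])] += 1
        usage_per_road[row["road"]] += 1
    return usage_per_hour, usage_per_road

# Tiny made-up event dump, standing in for the real simulation output.
events = "hour,road\n7,bridge_1\n7,bridge_2\n8,bridge_1\n17,etoile_rouge\n"
per_hour, per_road = road_usage(events)
# per_road.most_common(1)[0] == ("bridge_1", 2): the most used road.
```

Plotting the two counters (e.g. with matplotlib) would reproduce graphs of the kind shown in Fig. 8 and Fig. 9.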

6

Conclusion and Future Work

In this paper, we have described a MAS model that helps to represent traffic in cities like Cotonou. We used the well-known MAS methodology Gaia to ease the modelling process. The model we created has been implemented and used in the simulator we built on top of GAMA. The simulation outcome gives some understanding of the traffic in the different scenarios. We simulated Cotonou inhabitants' commutes, and we have been able to gather data related to road usage and traffic jam hours. The data we collected are only a small part of what is possible. Our work is original, as we designed a model specifically to target the traffic in West African cities (taking data from Cotonou). Due to the peculiarity of their inhabitants' mobility, African cities require a traffic model designed explicitly for them. Because it is built with that problem at its core, our model is generic enough to be extended and adapted for other kinds of mobility. In future work, we plan to simulate more complex scenarios with our simulator and to optimize its performance. For example, we could represent the shopping and leisure activities of citizens along with the home-work commute to evaluate the interrelation between those reasons for travel. In addition, we are planning with the ministry in charge of transport to make it commercial-ready and available for use by enterprises.


References

1. Wood, M., DeLoach, S.: An overview of the multiagent systems engineering methodology. In: International Workshop on Agent-Oriented Software Engineering, pp. 207–221 (2000)
2. Winikoff, M., Padgham, L.: The Prometheus methodology. In: Methodologies and Software Engineering for Agent Systems, pp. 217–234 (2004)
3. Dam, K., Winikoff, M.: Comparing agent-oriented methodologies. In: International Bi-Conference Workshop on Agent-Oriented Information Systems, pp. 78–93 (2003)
4. Picard, G.: Méthodologie de développement de systèmes multi-agents adaptatifs et conception de logiciels à fonctionnalité émergente (2004)
5. Wooldridge, M., Jennings, N., Kinny, D.: The Gaia methodology for agent-oriented analysis and design. Auton. Agents Multi-agent Syst. 3, 285–312 (2000)
6. Blanes, D., Insfran, E., Abrahão, S.: RE4Gaia: a requirements modeling approach for the development of multi-agent systems. In: International Conference on Advanced Software Engineering and its Applications, pp. 245–252 (2009)
7. Othman, A.: Simulation multi-agent de l'information des voyageurs dans les transports en commun (2016)
8. Bhat, C., Koppelman, F.: Activity-based modeling of travel demand. In: Handbook of Transportation Science, pp. 35–61 (1999)
9. Somik, V., Vernon, H., Anthony, V.: Africa's Cities: Opening Doors to the World (2017)
10. Sabas, A., Badri, M., Delisle, S.: A multidimensional framework for the evaluation of multiagent system methodologies. In: Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI-2002), pp. 211–216 (2002)
11. Nguyen, Q.: Plate-forme de simulation pour l'aide à la décision: application à la régulation des systèmes de transport urbain (2015)
12. Taillandier, P., Drogoul, A.: From grid environment to geographic vector agents, modeling with the GAMA simulation platform. In: 25th International Cartographic Conference ICC 2011 (2011)
13. Taillandier, P., Grignard, A., Gaudou, B., Drogoul, A.: Des données géographiques à la simulation à base d'agents: application de la plate-forme GAMA. Cybergeo: Eur. J. Geogr. (2014)
14. Smith, L., Beckman, R., Baggerly, K.: TRANSIMS: Transportation analysis and simulation system. Los Alamos National Lab, NM (United States) (1995)
15. Zargayouna, M., Zeddini, B., Scemama, G., Othman, A.: Agent-based simulator for travelers multimodal mobility. In: KES-AMSTA, pp. 81–90 (2013)
16. Briod, P.: Les Zemidjans de Cotonou, un obstacle à une mobilité urbaine plus durable? Cotonou face à la contrainte énergétique et environnementale. Séminaire de Politiques Urbaines et Écologies, Institut des Hautes Études Internationales et du Développement, Genève (2011)

A Fuzzy System Based Routing Protocol to Improve WSN Performances

Bakary Hermane Magloire Sanou, Mahamadi Boulou, and Tiguiane Yélémou(B)

Nazi BONI University, Bobo-Dioulasso, Burkina Faso
[email protected]

Abstract. Wireless sensor networks (WSN) have become very popular in recent years. Once deployed, WSNs are very rigid in terms of reconfiguration. Software Defined Networking (SDN) technology is being explored by several researchers to facilitate the reconfiguration of WSN nodes. Several architectures have been proposed, among them SDN-WISE. SDN-WISE separates the data plane, executed by the sensor nodes, and the control plane, executed by a software program hosted in a controller. In SDN-WISE, the data transmission path chosen is the best path in terms of hop count. One problem with this approach is that the chosen path is used until one of its nodes exhausts its energy before a path change process may be initiated. This impacts network efficiency and reduces the network lifetime. We then propose the Fuzzy Routing Protocol (FRP), which relies on a fuzzy system to compute the cost of each node based on the metrics residual energy, RSSI, number of packets in the queue (buffer) and number of hops to reach the sink. Nodes with the highest cost that are close to the sink are chosen to form the path. When a node in the path used for data transmission has its cost decreased by K% compared to its previous cost, a new path is computed even if it is less optimal in terms of number of hops than the previous one. This approach allows a better distribution of energy consumption in the network and better congestion management.

Keywords: WSN · SDN · Energy consumption · Fuzzy inference system · Routing protocol

1


Introduction

Wireless Sensor Networks (WSN) are becoming more and more indispensable in our daily life. These networks composed of sensor nodes are of great use in many areas for various applications. These autonomous sensor nodes are most often deployed in thousands in environments that are difficult to access. Unlike traditional networks that are directly connected to electricity and with fairly high resources, these nodes are limited in terms of resources such as memory capacity, storage space, computing power and especially on-board energy [8]. However, once deployed, WSNs typically face a difficulty in reconfiguring and managing

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 33–49, 2023. https://doi.org/10.1007/978-3-031-34896-9_3

34

B. H. M. Sanou et al.

all of these sensor nodes. To intervene on the sensor nodes in the WSN, the administrator had to go to each sensor node and make changes. This method is very difficult to carry out if there is a large number of sensors to reconfigure or troubleshoot. To facilitate this process, a new technology has appeared: software defined networking (SDN) [1]. The principle of this technology is the separation of the control plane from the data plane. The control of the data is thus transferred to a software entity called a controller. Thus, with a centralized view of the network, it allows an automatic, flexible and dynamic reconfiguration of the network. This technology is mainly intended for networks with an infrastructure, to facilitate the administration of network equipment, including simplifying their configuration. In order to solve the problem of reconfiguration complexity in WSNs, researchers proposed to adopt this new SDN technology. Several architectures have emerged, including SDN-WISE. SDN-WISE is an SDN solution adapted to WSNs [2]. Unlike existing SDN solutions for wireless sensor networks, SDN-WISE is a stateful solution. One of its goals is to reduce the amount of information exchanged between the sensor nodes and the network controller. This approach uses Dijkstra's algorithm for route calculation. The nodes on the shortest paths in terms of number of hops are more stressed and therefore discharge their energy faster than the others. This also leads to congestion at these nodes and consequently increases the packet loss rate and packet transmission delays. To improve the efficiency of the SDN-WISE architecture, we propose the Fuzzy Routing Protocol (FRP). Based on the metrics residual energy, Received Signal Strength Indicator (RSSI), number of packets in the queue and number of hops to reach the sink, FRP relies on a fuzzy logic system to determine the cost of each node.
Thus, the nodes with the highest cost, which are likely to be close to the sink, are chosen as the next hops to form the path. This approach not only allows a better distribution of energy consumption in the network, but also manages congestion. It reduces the risk of packet loss and decreases transmission delays. The remainder of the paper is organized as follows. In Sect. 2, we present related work. A brief overview of SDN-WISE is given in Sect. 3. In Sect. 4, we present and analyze the performance of our FRP routing solution compared to standard SDN-WISE. Finally, we conclude in Sect. 5.
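The node-cost idea behind FRP can be illustrated with a rough sketch. Note that this is a crisp weighted sum with made-up weights and normalisation bounds, NOT the paper's actual fuzzy inference system; only the four input metrics and the K% path-change trigger come from the text.

```python
# Rough illustration of FRP's node cost: combine residual energy, RSSI,
# queue occupancy and hop count into one score (higher = better next hop).
# Weights, maxima and the RSSI range are illustrative assumptions.
def node_cost(energy, rssi, queue_len, hops,
              e_max=100.0, rssi_min=-100.0, q_max=64, h_max=20):
    e = energy / e_max          # more residual energy is better
    r = 1 - (rssi / rssi_min)   # stronger signal (closer to 0 dBm) is better
    q = 1 - (queue_len / q_max) # emptier buffer means less congestion
    h = 1 - (hops / h_max)      # fewer hops to the sink is better
    return 0.4 * e + 0.2 * r + 0.2 * q + 0.2 * h

# A new path is computed when a node's cost drops by K% compared to its
# previous cost (K is the protocol parameter mentioned in the abstract).
def needs_new_path(prev_cost, new_cost, k_percent=20):
    return new_cost < prev_cost * (1 - k_percent / 100)
```

A real FRP implementation would replace the weighted sum with fuzzy membership functions and inference rules over the same four metrics.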

2

Previous Work

The adoption of SDN has changed the way wireless sensor networks are designed. It is involved in architecture, deployment, operation and maintenance. Software defined networking is an emerging research topic. It focuses on facilitating network management and scalability. In these types of networks, the data plane is clearly separated from the control plane. That is, the traditional data transmission logic is defined in the control plane and implemented in the network devices. Therefore, the controller is responsible for making routing decisions and sending them to the network nodes. Angelos-Christos G. Anadiotis et al. [3] proposed SD-WISE, an entire software-defined wireless sensor network solution. Their solution integrates


Openflow features adapted to WSNs. In their work, energy consumption is reduced by efficiently managing the duty cycle in the sensor nodes and controlling the transmission power of their radio resources. They use the operating system of the wireless sensor nodes to virtualize network functions (routing protocol). Also, they ensure regulatory behaviour compliance according to the remotely recognized authority (through software and mechanisms that attest the sensor nodes, supported by Trusted Platform Modules (TPM)), based on the context of the nodes, exploiting only the interaction between the hardware and the trusted software. Their results showed a high packet sending delay due to the large proportion between the data size and the header size. A new approach based on fuzzy logic, called Fuzzy Topology Discovery Protocol (FTDP), has been proposed by N. Abdolmaleki et al. [4]. This approach was built on an SDN-WISE architecture to discover the network topology and routes. It takes into account several properties of the nodes in the network in order to choose the best routing node and thus improve the network performance. These are: number of neighbors, workload level (queue length), and remaining energy. Their proposed solution uses fuzzy systems in the control plane to increase the packet transfer rate, reduce packet loss and improve the energy consumption of the network. Because data and control traffic share the channel bandwidth in SDN-based WSNs, control messages are a bottleneck that disrupts network performance and controller responsiveness. So, a flow configuration request control algorithm (FR CMQ) was proposed by M. Ndiaye et al. [5] to reduce the duplication of flow configuration request control messages. The results showed a significant reduction of control messages, thus leading to a decrease in energy consumption. But FR CMQ affects the packet delivery rate and delay.
This could be due to the lack of a loss control or delivery success mechanism for initial flow configuration request messages. The implementation of QoS brings challenges such as the difficulties of adaptability and implementation in traditional network architectures. For this purpose, X. Tan et al. [6] presented QSDN-WISE, a hierarchical software defined network architecture for wireless sensor networks. It enables complex network management and makes the system more adaptable. The DCHUC clustering algorithm in QSDN-WISE uses a non-uniform clustering mechanism. It ensures that the cluster head nodes are as close as possible to the sink node. This avoids energy holes and increases the lifetime of the network. In addition, DCHUC determines two cluster head nodes, one with the least congestion and the other with good link stability, based on criteria of congestion, link stability and residual energy of the nodes. These two types of cluster heads not only form two hierarchical network topologies, but also reduce the load on each cluster head and maintain the quality of intra-cluster services. The QSDN-WISE centralized routing algorithm considers residual energy, node congestion, link stability and distance between nodes as parameters for choosing the next hop. Based on the two topologies and data classification, it constructs two heterogeneous routing paths for nodes to


B. H. M. Sanou et al.

meet the requirements of different data classes. The results show that QSDN-WISE can not only balance the energy consumption of WSNs, but also provide QoS support for data with different QoS requirements, and that it performs better in energy saving, end-to-end delay, packet loss rate and control message count compared to SDN-WISE, SDN-DMRP and IRPL. To improve traffic distribution, J. Schaerer et al. [1] implemented the dynamic traffic aware routing protocol DTARP. It takes into account the centrality and dynamic traffic statistics of nodes when computing a path. With this information, the more active central nodes are recognized and become less eligible for retransmissions. Less active nodes are therefore chosen even if they have a slightly higher hop count. In addition, several paths of the same length may exist between two nodes, so a cost is computed and associated to each link by a cost function defined in the algorithm, based on the average traffic on the link and the received signal strength indicator (RSSI). The link with the lowest cost is chosen. The results showed that DTARP can reduce the network activity of the most active node by 25% compared to the SDN-WISE protocol. This traffic distribution increases the overall network lifetime, as the lifetime of the most active nodes is extended. N. Q. Hieu et al. [7] proposed the idea of using an automatic timer mechanism that adjusts the communication rate between the controller and the sensor nodes. The generation of control packets (Beacon and Report) is regulated by this timer, called the "Trickle Timer", in order to optimize network performance. The Trickle Timer allows the sensor nodes to exchange only a few packets during a given time if the network state is stable. It is inspired by the Trickle algorithm in RPL and runs over defined time intervals. During the first half of each interval, called the listening period, each node observes the transmissions of its neighbors.
If the information received from the neighbors is consistent, a counter is incremented. In the second half of the time interval, the node sends its information to its neighbors only if the counter value is below a predefined redundancy constant; otherwise, the packets in its queue are dropped. When an inconsistent transmission (e.g. discovery of a new neighbor) or an external event (defined by the controller) is detected, the Trickle algorithm resets. The results show that the implementation of the Trickle timer in SDN-WISE provides better performance in terms of power consumption and of transmission and reception duty cycle.
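As an illustration, the timer logic described above can be sketched as follows (a simplified model in the spirit of the Trickle algorithm; the interval bounds and redundancy constant are illustrative choices, not values taken from [7]):

```python
import random

class TrickleTimer:
    """Simplified Trickle-style timer for pacing control packets."""

    def __init__(self, i_min=1.0, i_max=8.0, redundancy_k=2):
        self.i_min, self.i_max, self.k = i_min, i_max, redundancy_k
        self.reset()

    def reset(self):
        # On an inconsistency (e.g. a new neighbor), restart from I_min
        self.interval = self.i_min
        self._start_interval()

    def _start_interval(self):
        self.counter = 0
        # Transmission instant picked in the second half of the interval;
        # the first half is the listening period
        self.t_send = random.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        # A neighbor transmitted consistent information
        self.counter += 1

    def hear_inconsistent(self):
        self.reset()

    def on_expiry(self):
        # Send only if fewer than k redundant messages were overheard,
        # then double the interval up to I_max
        send = self.counter < self.k
        self.interval = min(2 * self.interval, self.i_max)
        self._start_interval()
        return send
```

In a stable network the counter frequently reaches the redundancy constant, so transmissions are suppressed and the interval grows, which is exactly the energy-saving behavior reported for SDN-WISE with the Trickle timer.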

3 SDN-WISE Approach

In this approach, the exchange and processing of a dedicated packet called the TD packet is paramount [2]. This packet carries information about the battery level and the number of hops to the nearest sink. Each time a node receives this packet, it compares it with the current best hop and then chooses the next best hop. The choice is made with priority given to the number of hops, then to the RSSI value received with the message, and finally to the residual energy level. This information also feeds a list containing the WISE neighbors.

A Fuzzy System Based Routing Protocol to Improve WSN Performances


In this list we have the addresses of the neighbor nodes, their RSSI and their battery levels. It is sent periodically to the topology management (TM) layer in order to establish a graphical representation of the network. Then the table is completely emptied and rebuilt with the incoming TD packets, so as to always have an up-to-date view of the network topology. One of the controllers acts as a proxy between the physical network and the other controllers. It is called WISE-Visor, like FlowVisor in traditional OpenFlow networks. The controllers define the network management policies to be implemented by the WSN, and these are often application dependent. As a result, controllers can interact with the application. Sensor nodes have limited memory capacity, which makes the size of the different data structures an important choice. This size depends on several deployment-specific characteristics defined by WISE-Visor during the initialization phase. To extend the life of the network, traffic should be distributed as evenly as possible across the network. With SDN-WISE, packets are sent on the shortest path; this approach uses the Dijkstra algorithm for route calculation. Nodes on the shortest paths in terms of hop count are therefore more stressed and deplete their energy faster than nodes on the edge of the network.
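The lexicographic next-hop choice described in this section (hop count first, then RSSI, then residual energy) can be sketched as follows; the dictionary field names are assumptions for illustration:

```python
def best_next_hop(neighbors):
    """Pick the next hop from the WISE neighbor list: fewest hops to
    the sink first, then strongest RSSI, then highest battery level."""
    return min(
        neighbors,
        key=lambda n: (n["hops"], -n["rssi"], -n["battery"]),
    )

candidates = [
    {"addr": "A", "hops": 2, "rssi": -60, "battery": 5},
    {"addr": "B", "hops": 1, "rssi": -80, "battery": 3},
    {"addr": "C", "hops": 1, "rssi": -70, "battery": 9},
]
# A loses on hop count; C beats B on RSSI (-70 dBm > -80 dBm)
print(best_next_hop(candidates)["addr"])  # -> C
```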

4 Fuzzy Routing Protocol Based on SDN-WISE

4.1 Principle of Our FRP Approach

Our FRP approach is a new packet routing approach that enables load balancing, optimization of sensor node energy consumption, and reduction of transmission delay and packet loss rate. To estimate the shortest routing path, the Dijkstra algorithm in SDN-WISE prioritizes the number of hops as the routing metric, followed by the received RSSI value and finally the residual energy level. Instead of this protocol, we propose a centralized routing approach called Fuzzy Routing Protocol (FRP) that determines the best paths by combining several metrics. With FRP, the choice of a path is conditioned by a cost computed for each node of this path. Thus, a relatively long path may be chosen for data transmission in preference to a shorter one. To balance energy consumption efficiently and achieve optimal network load balancing, the best cost (the highest-cost node) is required for route selection. FRP uses a fuzzy system that calculates a cost for each node. This cost is determined from the combination of four decision parameters: the residual energy, the number of packets in the queue of the node, the RSSI between the node and each of its neighbors, and the number of hops to reach the sink node. These four parameters define the performance level of a node for its eligibility as an intermediate node of a data transmission path. Once these costs are calculated, the controller establishes the different routing paths in the network to reach the destination. This information is sent back to the source nodes and the other intermediate nodes, so that the intermediate nodes simply relay the packets along pre-calculated paths. The state of the nodes varies over time due to changes in the values of the decision parameters, often at a high frequency (the number of packets in the queue, etc.). To mitigate the impact of



these dynamic changes on routing decisions, a threshold K defining the level of performance degradation of the nodes is set. If the newly calculated cost of a node on a path has decreased by K% or more compared to its current cost, the node is considered to have lost performance. The controller then searches again, among the possible paths, for a path whose cost degradation has not yet reached K%, and chooses it as the new best path.

4.2 FRP Flowchart

The following variables provide a better understanding of the proposed flowchart:

– K: constant of the degradation level of a node, in %;
– Ni: a given node;
– Nv: a neighboring node of Ni;
– V: the set of neighbors of a node Ni, i.e. the candidate next hops;
– P: the set of nodes in a given path;
– C(Ni): cost of a node Ni;
– C(Nv): cost of a neighboring node Nv.

After the initialization of the constant K, the controller performs the following sequence of actions to find an optimal routing path:

– First, it looks for the neighbor with the highest cost C(Nv) among the neighbors V of the source node. This neighbor node Nv is chosen and becomes the first intermediate node in the set P of the routing path for the source node.
– Then, for this first intermediate node, it again searches among its neighbors V for the node with the highest cost C(Nv). This new neighbor node Nv is chosen as the second intermediate node of the routing path set P.
– The process is repeated in this way until the set P of the complete path between the source node and the sink is established.
– The fuzzy inference system is executed each time the control packets from the sensor nodes are received.
– If the cost of a node belonging to a routing path decreases by K%, the node is considered to have lost performance. The controller then searches again for the best path among the possible paths including the nodes with the highest costs. Otherwise, the same path is maintained for data routing.

The FRP flow chart is illustrated in Fig. 1.
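The steps above can be sketched as follows (a hypothetical controller-side sketch; `neighbors` and `costs` stand for the topology view and the fuzzy costs held by the controller, and the sample values are illustrative, not those of Fig. 2):

```python
def build_path(source, sink, neighbors, costs):
    """Greedy FRP-style path construction: at each step, extend the
    path with the highest-cost neighbor of the last chosen node."""
    path = [source]
    current = source
    while current != sink:
        # exclude nodes already on the path to avoid loops
        candidates = [n for n in neighbors[current] if n not in path]
        if not candidates:
            return None  # no route towards the sink
        current = max(candidates, key=lambda n: costs[n])
        path.append(current)
    return path

def path_degraded(path, old_costs, new_costs, k=0.25):
    """True if some node on the path lost at least K% of its cost,
    which triggers a path recomputation by the controller."""
    return any(new_costs[n] < (1 - k) * old_costs[n] for n in path)

neighbors = {
    "N1": ["N2", "N3"], "N2": ["N4"], "N3": ["N5"],
    "N4": ["sink"], "N5": ["sink"], "sink": [],
}
costs = {"N1": 0.5, "N2": 0.8, "N3": 0.6, "N4": 0.7, "N5": 0.4, "sink": 1.0}
print(build_path("N1", "sink", neighbors, costs))  # -> ['N1', 'N2', 'N4', 'sink']
```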

4.3 Calculating the Cost of a Node

Fuzzy Inference System. The fuzzy logic reasoning system allows us to transform several input metrics (residual energy, RSSI, number of packets in the queue and number of hops) into a single output value (cost). The operation of the fuzzy inference system can be summarized in these main steps: fuzzification, fuzzy inference, aggregation and defuzzification. We use the fuzzy inference model of Mamdani because of its simplicity and efficiency [8,9].

A Fuzzy System Based Routing Protocol to Improve WSN Performances

39

To illustrate the FRP routing mechanism, we consider the network architecture shown in Fig. 6. Node N1 wants to send data to the sink. We assume that the values of the decision parameters taken from node N1 are as follows: energy 7, RSSI 80, number of packets in the queue 5 and number of hops 2.

Architecture of the Fuzzy Inference Engine. Directly combining the four linguistic input variables into a single output would entail managing 3 * 3 * 3 * 4 = 108 possible combinations of the different membership functions of the inference engine. To avoid this complexity, we organize the inference system into three functional blocks, as illustrated in Fig. 7. In the first step, energy and RSSI are combined to produce the linguistic variable ER (Energy and RSSI). In the second step, the number of packets in the queue and the number of hops are combined to produce the linguistic variable FS (Queue and Hops). Subsequently, these two variables are combined to produce a single "cost" output. This last linguistic variable provides information about the ability of a node to serve as a relay.

Fuzzification. Fuzzification is the step in which we make sense of, or interpret, the input variables of our decision model [8]. Instead of belonging to the "true" or "false" sets of traditional binary logic, fuzzy logic admits degrees of membership in a given fuzzy set. Several membership functions are available (triangular, sinusoidal, trapezoidal, etc.); we use the trapezoidal function to measure the degree of membership of the input variables in the corresponding fuzzy sets.

First Step Fuzzification. In the first step, we combine the residual energy and the RSSI to calculate ER, which is in turn used in the third step. The initial energy of each node is considered to be 10 and the RSSI varies between 0 and 100.
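A trapezoidal membership function, together with hypothetical breakpoints for the energy sets on the 0 to 10 scale (the actual breakpoints are those of Fig. 8), can be sketched as:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c],
    linear on the rising and falling edges."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy sets for residual energy (initial energy = 10);
# the breakpoints are assumptions chosen so that an energy of 7 is
# 100% "full" and 0% "medium", as in the node N1 example
energy_sets = {
    "low":    lambda e: trapmf(e, -1.0, 0.0, 2.0, 4.0),
    "medium": lambda e: trapmf(e, 2.0, 4.0, 5.0, 7.0),
    "full":   lambda e: trapmf(e, 5.0, 7.0, 10.0, 11.0),
}

print({name: f(7) for name, f in energy_sets.items()})
```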
Depending on the value of the scalar input, the energy variable can belong to the fuzzy sets low, medium and full (for a low, medium or full battery, respectively). The RSSI variable can belong, according to the value of the scalar input, to the fuzzy sets very low, low, medium, high and very high (from a very weak to a very strong signal). Their membership functions are represented in Fig. 8 and Fig. 9. As assumed for node N1, with an energy equal to 7, the projections on the fuzzy sets give a membership of 100% in full and of 0% in medium. For RSSI = 80, a similar reasoning gives a degree of membership of 50% in each of the very high and high fuzzy sets. Table 1 summarizes the relationship between these two input variables and the output ER (Energy and RSSI), which is described by the fuzzy sets very low, low, medium, high and very high. The table is based on the idea that "the lower the RSSI and the energy, the lower the ER output".


Table 1. Output linguistic variable ER

RSSI \ Energy | LOW      | MEDIUM | FULL
Very low      | Very low | Low    | Low
Low           | Low      | Medium | Medium
Medium        | Low      | Medium | High
High          | Medium   | High   | High
Very high     | Medium   | High   | Very high

To determine the output, we use Mamdani's inference model, with the probabilistic operator "AND" as the composition function and the maximum as the aggregation operator. For node N1, the controller computes two non-zero ER membership functions: very high (ER) = 0.50 and high (ER) = 0.50. These values are defuzzified (using the procedure described below) into a single ER output (ER = 0.81), which is then used in step 3 of the fuzzification phase.

Second Stage Fuzzification. In the second step, we combine the number of packets in the queue and the number of hops to calculate FS, which is also used in the third step. We assume that the buffer of our nodes can hold only 10 packets and that the number of hops is at most 100. The variable number of packets in the queue ranges over the fuzzy sets small, medium and large, and the number of hops over short, medium and large. Fig. 10 and Fig. 11 represent their membership functions. For the number of packets in the queue and the number of hops of node N1, we obtain degrees of membership of 100% in the medium and short fuzzy sets, respectively. The relationship between these two input variables and the FS output (Queue and Hops) is shown in Table 2 below.

Table 2. Output linguistic variable FS

Queue \ Hops | Short     | Medium   | Large
Small        | Very high | High     | Low
Medium       | High      | Medium   | Low
Large        | Low       | Very low | Very low

Here the controller calculates one non-zero FS output membership function: high (FS) = 1. This value is defuzzified (according to the defuzzification procedure described below) into a single FS output (FS = 0.70), which is then used in step 3 of the fuzzification phase.

Third Stage Fuzzification. In this step, we combine ER (Energy and RSSI) and FS (Queue and Hops) according to the relationships established in Table 3 below to determine the node cost. The fuzzification of the variables ER and FS corresponds to degrees of membership of 50% in very high (ER), 50% in high (ER) and 100% in high (FS), respectively. These variables range over the fuzzy sets very low, low, medium, high and very high.

Table 3. Output linguistic variable cost

FS \ ER   | Very low | Low      | Medium    | High      | Very high
Very low  | Very low | Very low | Low       | Low       | Medium
Low       | Very low | Low      | Low       | Medium    | Medium
Medium    | Low      | Medium   | High      | High      | Medium
High      | Low      | Medium   | High      | High      | Very high
Very high | Medium   | High     | Very high | Very high | Very high
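One inference step of the Mamdani scheme used in these three stages (probabilistic "AND", i.e. the product, for composition; the maximum for aggregation) can be sketched as follows. The rule base shown is only the corner of Table 1 relevant to node N1:

```python
# (energy set, RSSI set) -> ER output set, taken from Table 1
RULES_ER = {
    ("full", "very high"): "very high",
    ("full", "high"): "high",
    ("medium", "very high"): "high",
    ("medium", "high"): "high",
}

def infer(rules, mu_a, mu_b):
    """One Mamdani step: rule strength = product of the two input
    memberships; per-output-set aggregation by maximum."""
    out = {}
    for (set_a, set_b), out_set in rules.items():
        strength = mu_a.get(set_a, 0.0) * mu_b.get(set_b, 0.0)
        if strength > 0:
            out[out_set] = max(out.get(out_set, 0.0), strength)
    return out

# Node N1: energy is "full" at 1.0; RSSI is 0.5 "very high", 0.5 "high"
print(infer(RULES_ER, {"full": 1.0}, {"very high": 0.5, "high": 0.5}))
# -> {'very high': 0.5, 'high': 0.5}
```

This reproduces the two non-zero ER membership functions computed for node N1 in the text.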

Defuzzification. As the name suggests, defuzzification is the reverse operation of fuzzification. All fuzzy values obtained after the inference and aggregation steps are converted into a single concise result [4]. Among the defuzzification methods proposed in the literature (mean of maxima, center of gravity, etc.), the center of gravity method is the most widespread. The defuzzification process of the linguistic variable "cost" is presented in Fig. 12. The output value indicates the cost level of a node for being chosen as the next hop when computing routing paths, according to the selected metrics. For the proposed topology, two cost output membership functions are activated at node N1: very high at 0.50 and high at 0.50. The center of gravity of the resulting region is 0.81.
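A discrete center-of-gravity computation over a sampled output universe can be sketched as follows (the aggregated membership values below are illustrative, not the exact curves of Fig. 12):

```python
def centroid(xs, mu):
    """Discrete center of gravity of an aggregated fuzzy output:
    xs are sample points of the universe, mu the aggregated
    membership value at each point."""
    denom = sum(mu)
    if denom == 0:
        return 0.0
    return sum(x * m for x, m in zip(xs, mu)) / denom

# Toy aggregated output: membership 0.5 over the high end of [0, 1]
xs = [i / 10 for i in range(11)]
mu = [0.0] * 6 + [0.5] * 5        # mass on 0.6 .. 1.0
print(round(centroid(xs, mu), 2))  # -> 0.8
```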

4.4 Analytical Study of Our Approach Performance

In this section, we consider a WSN where similar routing cost calculations were performed for all nodes in the network, as illustrated in Sect. 4.3; the calculated costs are shown in Fig. 2. As a preliminary evaluation, we carry out an analytical study comparing FRP with the standard SDN-WISE mode of operation. Node N1 must transmit data to the sink. To do so, we apply the two routing approaches, standard SDN-WISE and FRP, on an SDN-WISE network architecture, and then present the chosen paths. Case of Dijkstra-based SDN-WISE routing: in this first case, the choice of the route is based on the shortest path in terms of number of hops. The path used by node N1 would therefore be N1-N2-N6-sink. For the case of FRP-based routing, we assume K = 25% as the level of node performance degradation; that is, if a node on the routing path were to have a cost lower than its previous cost by 25%, a new routing path is computed. FRP looks for paths consisting of nodes close to the sink in terms of number of hops and with a high cost. Step 1: the calculated path from node N1 to reach the sink would be N1-N2-N4-N7-sink (see Fig. 3). To choose this path, the controller



first checks which of the neighbors of node N1 has the highest cost. Thus, N2 is chosen as the first node of the path. The same process is performed to choose node N4 as the next hop in the path. At this level, the choice of the next hop is between nodes N6 and N7, which are closest to the sink; N7, having the highest cost, is chosen to reach the sink. Step 2: at this step, the newly calculated cost of node N2 is 0.59. This cost is lower than the initial cost by more than 25% (a difference of 0.22). The controller therefore recomputes a new path for node N1. Thus, the new path would be N1-N3-N5-N7-sink (see Fig. 4).

Fig. 1. FRP flow chart



Fig. 2. Calculated costs of nodes

In standard SDN-WISE, when an optimal (shortest) path is used to send data, that path is kept until a node on the path exhausts all of its energy, before moving to another path. In this approach, core nodes discharge faster since they belong to the shortest paths, while other nodes with a large amount of energy are used less for data transmission because they belong to less optimal paths. This route selection method reduces the lifetime of the network. Indeed, we assume that if one of the nodes of the used path fails (runs out of energy), the system is no longer efficient and we consider the network mission ended.



Fig. 3. Initial routing path for N1

In FRP routing, the paths chosen for data transmission are not necessarily the shortest. Instead, the nodes in the paths are the ones with the highest costs among the eligible nodes during the formation of the path. When a node in the path has its cost reduced by K% compared to the old one, then a new path is computed even if it is less optimal in terms of number of hops compared to the previous one.


Fig. 4. New routing path for N1

Fig. 5. Fuzzy inference system




Fig. 6. Network architecture

From this analytical study, we can state that FRP increases the network lifetime compared to the standard SDN-WISE routing mode. However, intensive simulations of its enhanced route selection process should confirm this claim (Figs. 5, 6 and 7).


Fig. 7. Fuzzy inference engine

Fig. 8. Residual energy membership function

Fig. 9. RSSI membership function




Fig. 10. Membership function for the number of packets in the queue

Fig. 11. Number of hops membership function

Fig. 12. Cost defuzzification

5 Conclusion

In our work, we have focused on the issue of optimizing energy efficiency in routing and congestion management of nodes in WSNs, especially in SDN-WISE. Wireless sensor networks consist of sensor nodes with energy, storage and processing constraints. We have also highlighted the uneven distribution of energy consumption, the network instability and the risk of overloading the sensor nodes in the centralized routing approach based on SDN-WISE. In view of these issues, we then proposed the Fuzzy Routing Protocol (FRP), which relies on



the fuzzy system to compute a cost based on the metrics energy, RSSI, number of packets in the queue and number of hops to reach the sink. The nodes with high costs that are close to the sink are chosen as the next hops constituting the path. The analytical study shows that this new approach improves the lifetime of the network. It not only allows a better balancing of energy consumption in the network, but also manages congestion; it thus reduces the risk of packet loss and decreases transmission delays. Simulation tests are needed to confirm the performance of this new approach.

References

1. Schaerer, J., Zhao, Z., Braun, T.: DTARP: a dynamic traffic aware routing protocol for wireless sensor networks. In: RealWSN 2018 - Proceedings of the 7th International Workshop on Real-World Embedded Wireless Systems and Networks, Part of SenSys 2018, pp. 49–54 (2018)
2. Galluccio, L., Milardo, S., Morabito, G., Palazzo, S.: SDN-WISE: design, prototyping and experimentation of a stateful SDN solution for WIreless SEnsor networks. In: Proceedings - IEEE INFOCOM, vol. 26, pp. 513–521 (2015)
3. Anadiotis, A.-C., Galluccio, L., Milardo, S., Morabito, G., Palazzo, S.: SD-WISE: a software-defined wireless sensor network. Comput. Netw. 159, 84–95 (2019). https://www.sciencedirect.com/science/article/pii/S1389128618312192
4. Abdolmaleki, N., Ahmadi, M., Malazi, H.T., Milardo, S.: Fuzzy topology discovery protocol for SDN-based wireless sensor networks. Simul. Model. Pract. Theory 79, 54–68 (2017)
5. Ndiaye, M., Abu-Mahfouz, A.M., Hancke, G.P., Silva, B.: Exploring control-message quenching in SDN-based management of 6LoWPANs. In: IEEE International Conference on Industrial Informatics (INDIN), pp. 890–893 (2019)
6. Tan, X., Zhao, H., Han, G., Zhang, W., Zhu, T.: QSDN-WISE: a new QoS-based routing protocol for software-defined wireless sensor networks. IEEE Access 7, 61070–61082 (2019)
7. Hieu, N.Q., Thanh, N.H., Huong, T.T., Thu, N.Q., Van Quang, H.: Integrating trickle timing in software defined WSNs for energy efficiency. In: 2018 IEEE 7th International Conference on Communications and Electronics, ICCE 2018, pp. 75–80 (2018)
8. Kamgueu, P.O.: Configuration dynamique et routage pour l'internet des objets. Ph.D. thesis, HAL Id: tel-01687704 (2018)
9. Kipongo, J., Esenogho, E.: Efficient topology discovery protocol for software defined wireless sensor network. Int. J. Electr. Comput. Eng. (IJECE) 9, 19 (2020)

A Lightweight and Robust Dynamic Authentication System for Smarthome

Elisée Toé1, Tiguiane Yélémou1(B), Doliére Francis Somé2, Hamadoun Tall1, and Théodore Marie Yves Tapsoba1

1 Nazi BONI University, Bobo-Dioulasso, Burkina Faso
[email protected]
2 CISPA-Stanford Center for Cybersecurity, Saarbrücken, Germany
[email protected]

Abstract. The advent of smarthomes improves comfort in homes. In this work, we are interested in the automatic opening of a gate on the arrival of legitimate vehicles, so that the occupants do not have to wait at the door. Indeed, due to the rise of insecurity in our cities, cases of aggression in front of doors are regularly reported. Moreover, the noise needed to be noticed at the door disturbs the neighborhood. The major challenge of gate automation is security. Most of the proposed solutions do not sufficiently take into account the robustness of the vehicle authentication mechanisms. We propose a security enhancement for vehicle access control, between a sensor node on board the vehicle and a sensor node at the gate. Our mutual authentication protocol between the two sensor nodes takes into account the resource limitations of the objects used. It is based on dynamic one-time passwords. We exploit random number generation and processing functions, the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key exchange principle and the HMAC-SHA256 hash function. The lightweight system, with only one message exchanged in total, offers a very low reaction time for the gate. Its dynamic nature allows it to resist cryptanalysis and spoofing attacks.

Keywords: Internet of Things · Smarthome · Cybersecurity · dynamic authentication · one-time password · Elliptic Curve Diffie Hellman Ephemeral (ECDHE)

1 Introduction

More and more objects of everyday life are connected to the Internet or are accessible remotely through an intranet. Many of these objects can thus be controlled automatically according to the physico-chemical conditions of their environment or at the will of their owners [1]. However, problems related to the security of these connected objects are considerably slowing down the evolution and deployment of the Internet of Things. The use of connected objects often meets the needs of users who are not sufficiently aware of security issues. The manufacturers of this type of object are often companies with little or no expertise in the field of security, and they focus on the functionalities and ease of use of objects to the detriment of security. In smarthomes, flaws in the authentication mechanisms of connected door locks are at the origin of several cyber attacks, because most of these mechanisms are used in their classic, unsecured form. Moreover, such objects cannot use classical security protocols (e.g. TLS): most of the existing robust solutions either do not ensure good performance or are not adapted to the limited computing capacities of the objects. Beyond the services and comfort offered by connected portals, for mobile nodes such as cars that connect discontinuously to the system, a robust authentication mechanism is needed to govern the exchanges between the vehicle and the connected portal. This work aims at implementing a security mechanism adapted to resource-limited contexts that prevents any malicious or unauthorized object from accessing a home automation system, in particular the control of gate access by vehicles. We propose a mutual authentication protocol between the vehicle and the portal that is suitable for resource-constrained objects. This protocol faces the above-mentioned problems and challenges, and thereby:

– it provides a mechanism light enough to be implemented by our resource-constrained objects;
– it allows quick authentication through a single exchange of information, so that the drivers of the vehicles are not exposed to assault while waiting for the opening of the gate;
– it provides dynamic authentication with one-time passwords to resist brute force, replay, cryptanalysis and man-in-the-middle attacks.

The rest of the paper is organized as follows. Section 2 is devoted to related work on authentication and access control mechanisms applied in IoT. We present our contribution in Sect. 3. The performance evaluation of our solution is discussed in Sect. 4. We conclude in Sect. 5.

c ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 50–63, 2023. https://doi.org/10.1007/978-3-031-34896-9_4

2 Previous Work

Security is one of the major issues in the increasing use of connected objects. The designers of these objects are often much more focused on innovation in functionalities and on miniaturization. Also, due to their low capacities (memory, CPU, storage, embedded energy), the traditional robust security building blocks cannot be deployed in these objects. However, sensitive information is often transmitted or stored in these IoT devices, and it can be exposed to illicit exploitation. Resource-efficient (CPU, memory, energy) solutions must therefore be developed to face the multiple threats on these objects. In the last few years, many research works have focused on these security issues in the IoT, but many of them struggle to be efficient in contexts with strong constraints on reaction delay or resources. Communications with wireless connected objects are very exposed to sniffers. To make the authentication process of these objects efficient, one-time passwords are recommended.


E. Toé et al.

HMAC-based one-time password (HOTP) [2] is one of the standards developed by the Initiative for Open AuTHentication (OATH), an international collaborative group aiming to promote strong open-source authentication. The HOTP algorithm is based on a counter and a static symmetric key known only to the token and the validation service. Since the output of the HMAC-SHA [3] calculation is 160 bits, this value must be truncated to something that can be easily entered by a user (see Eq. 1).

HOTP(K, C) = Truncate(HMAC-SHA-1(K, C))    (1)

– Truncate represents the function that converts an HMAC-SHA-1 value into an HOTP value.
– The key (K), the counter (C) and the data values are hashed high-order byte first.

Another technique for generating one-time passwords is the Time-Based One-Time Password (TOTP) [4], which appeared in 2011. It is also part of the standards developed by OATH. It is based on HOTP, with the only difference that the change factor is time and not a counter; it relies on POSIX time. The initial sharing of the "secret" between the two entities remains the same. However, the OTP is generated from the pair "secret" and time (more precisely, a timestamp) over a defined period (usually 30 to 60 s). This means that TOTP uses time incrementally and each OTP is valid for the duration of the time interval. Basically, TOTP = HOTP(K, T), where T is an integer representing the number of time steps between the initial counter time T0 and the current time. These two mechanisms often face synchronization problems and rely on a shared secret, and their change factors can be broken by cryptanalysis. Mohamed Tahar Hammi et al. [5,6] implement the OTP principle in a customized key management mechanism. In this solution, mutual authentication of nodes is enforced by key management, and the AES algorithm guarantees the integrity and confidentiality of exchanges. First, the device makes an association request, receives an authentication request, generates and sends otp1, and is finally authenticated by the Personal Area Network Coordinator (CPAN) if it is legitimate. Then a key ku (derived from the identifier) is generated using the Pseudo Random Function (PRF) defined in RFC 5246 [7], which allows very robust keys to be obtained. The keys are generated only after the authentication operation has been successfully completed, in order to avoid unnecessary calculations.
The next step is a secure broadcast key (kb) exchange mechanism called hidden key broadcast, together with the calculation of a second OTP (otp2) for CPAN authentication. The hidden key broadcast mechanism is realized in two phases:

1. A value named signature is generated by calculating HMAC(ku, otp1); then,
2. the result is XORed with kb.
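For reference, the generic HOTP and TOTP constructions discussed above can be written directly from RFC 4226 and RFC 6238 (this is the standard algorithm, not Hammi's customized scheme):

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA-1 over the big-endian counter,
    then dynamic truncation to `digits` decimal digits."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP where the counter is the number of
    `step`-second intervals since the Unix epoch."""
    return hotp(key, int(time.time()) // step, digits)

# RFC 4226 test vector: counter 0 with the ASCII key below gives 755224
print(hotp(b"12345678901234567890", 0))  # -> 755224
```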



This solution brings security building blocks into wireless sensor networks. The works of Esfahani et al. [8,9] follow almost the same logic as Hammi's. However, the key management technique used in this authentication algorithm requires quite a lot of computing power, which is a limitation for our objects, especially since it completes the authentication process only after several exchanges on the channel. Research has been conducted to find cryptographic systems suitable for resource-constrained objects, known as lightweight cryptography. These mechanisms are based on elliptic curves. Badis Hammi et al. [10] have proposed an Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key exchange algorithm based on elliptic curves. It is an algorithm that allows two entities to establish a shared secret. It is actually an adaptation of the Diffie-Hellman (DH) key exchange protocol that uses elliptic curve cryptography to minimize key length and improve performance. Elliptic curves are widely used in various key exchange methods, including DH key agreement. Elliptic Curve Cryptography (ECC) [11] provides a level of security similar to RSA, but with smaller key sizes that allow fast computations and lower power consumption for IoT devices. Previous authentication methods (such as RSA) use a certificate to authenticate devices. Although using a certificate provides a high level of security, signing and verifying the certificate requires a heavy computational process that increases CPU and power consumption. Therefore, instead of using certificate-based authentication algorithms, the proposed mechanism uses the Pre-Shared Key (PSK) [12] authentication algorithm. The pre-shared key is used to authenticate the other party, together with the ECDHE key exchange algorithm. In this process, to use ECC, each party must agree on the domain parameters (p, a, b, G, n, h) that define the elliptic curve.
In addition, the client and server must each have an elliptic-curve key pair, consisting of a private key d (a random integer) and a public key Q (Q = dG). The client holds the private key dc and the public key Qc (Qc = dcG), and the server holds ds and Qs (Qs = dsG). The client and the server exchange their public keys; each then computes the secret key S using its own private key and the other party's public key: the client computes S = dcQs and the server computes S = dsQc. The shared secret key S is the same for both parties since S = dcQs = dcdsG = dsdcG = dsQc. The ECDHE algorithm does not provide authentication per se, since the key is different each time and neither party can be sure that the key comes from the intended party. Therefore, the PSK authentication algorithm is used with the ECDHE algorithm to authenticate both parties. The PSK algorithm uses a string of characters (64 hexadecimal digits) as an authentication key (shared secret), exchanged between the client and server in advance. Once the secret key is shared, they authenticate each other through the four-step procedure of the shared-key authentication algorithm. The advantage of the PSK algorithm is that it avoids heavy public-key computations for authentication. The only problem is that if an attacker finds the shared secret key, previous and future sessions are compromised.

54

E. Toé et al.

When the proposed mechanism uses PSK with the ECDHE key exchange algorithm, it provides the Perfect Forward Secrecy (PFS) feature, which protects past sessions from future compromise by providing a separate key for each session. Even if an attacker somehow obtains a session secret, he only compromises that specific session; previous and future sessions are not compromised. For mobile nodes such as vehicles that connect discontinuously to the core system, basic authentication methods are easily defeated, especially by cryptanalysis and spoofing attacks. Conversely, overly robust methods [13,14] do not provide the desired speed of action. As a reminder, the desired operational objective is for the gate to open as quickly as possible so that the vehicle does not have to stop at the gate, which could expose the occupants to aggression.
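The ECDHE shared-secret derivation can be illustrated on a toy curve (y² = x³ + 2x + 2 over GF(17), generator G = (5, 1)); a real deployment uses standardized domain parameters (p, a, b, G, n, h) as described above. This Python sketch only demonstrates that both parties derive the same S:

```python
# Toy ECDH on y^2 = x^3 + 2x + 2 (mod 17) with generator G = (5, 1).
p, a, G = 17, 2, (5, 1)

def ec_add(P, Q):
    """Elliptic-curve point addition; None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication kP."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

dc, ds = 3, 7                          # private keys of client and server
Qc, Qs = ec_mul(dc, G), ec_mul(ds, G)  # public keys, exchanged in the clear
S_client, S_server = ec_mul(dc, Qs), ec_mul(ds, Qc)
assert S_client == S_server            # S = dc*ds*G on both sides
```

The assertion holds because scalar multiplication commutes: dc(dsG) = ds(dcG).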

3 Contributions

Authentication in the Internet of Things environment is very important to ensure that each legitimate node communicates with the target devices. Despite the resource constraints (computing power, storage space and bandwidth), the authentication mechanism for these objects must be strong. The desired functional requirements are:

– Efficiency: in a smart home aiming, on the one hand, at improving comfort, authentication must take place as quickly as possible so that the user does not wait too long in front of the gate and become a target of aggression. On the other hand, to ensure efficient communications during the mutual authentication phase, the number of messages and the size of the data exchanged must be minimized.
– Lightweight: to respect the resource constraints of smart objects, the authentication protocol must be lightweight. The primitives and functions used on smart objects must have a lower computational cost than traditional cryptographic algorithms (e.g. RSA, AES), and the data stored on smart objects must be minimized.
– A distributed solution: for machine-to-machine (M2M) communication, smart objects must be able to communicate directly, without the intervention of a central server during the authentication phase.

Our approach combines the One-Time Password (OTP) mechanism with a password integrity check, exploiting the principle of the Elliptic Curve Diffie-Hellman (ECDH) key exchange protocol to generate session keys for the hash function (HMAC-SHA256), in order to dynamize the hash computation. We exploit the Perfect Forward Secrecy feature of ECDH key management to exchange these session keys and session information for future connections.


By definition, an OTP is a password that is valid only once; it is therefore very robust against replay and cryptanalysis attacks. We opt for the asynchronous mode, based on the challenge/response method, because unlike the synchronous mode it does not require any prior agreement between the different nodes. In order to propose a robust and lightweight authentication, our approach has the following characteristics:

– a dynamic and mutual authentication;
– multi-factor authentication: biometrics, OTP, session key, location;
– authentication in a single information exchange;
– a mechanism for authenticating exchanges.

Some tricks are used to reduce the complexity of this mechanism. Since we manage to integrate the identification and authentication information into a single message, privacy management is not a concern for the operation of the portal, because its implementation would require computing power and additional steps. A single authentication step after an approved verification is enough to operate the portal without sending any other information, and the dynamic nature of the information used for authentication is a great asset. Figure 1 shows our solution for strengthening the portal access control.

Fig. 1. Authentication protocol

– The Fotp function is used to generate the OTP. It takes as input a random number dv, the authentication session number seq, and a challenge C. The session number seq is incremented at each valid session.


– Ui represents the unique identifier of the node and is a number between 41 and 49. The Tr(Ui) function generates a four-digit random number using the median-square method so that the two digits representing Ui are in the middle. This generated number forms the first four characters of the OTP.
– When the gate-side node receives the OTP, the Ui is retrieved by the M(OTP) function, which extracts the first four characters of the OTP, i.e. the number generated to mask the Ui. Retrieving the Ui allows the portal node to identify the node, take up the challenge of calculating the OTP for the car, and verify its compliance.
– C represents the challenge pre-exchanged between the two nodes during the last session. This challenge is dynamic and unique per session; it corresponds to the Ui for the first communication session between the two nodes. The challenge exchange between the two nodes is based on the ECDH key exchange principle via the challenge(dv) function, which allows us not to transmit the real challenge on the channel. This function is executed after a valid authentication to update the shared secret for the next communication. The challenge is used to create the session key for the next authentication request and, at the same time, to dynamize the key of the hash function. On the car side, another function, Save(ch, dv), computes the real challenge using the received answer (ch) and the previously generated random number (dv).
– The OTP is sent together with its hash, generated by the HMAC-SHA256 function, which takes as input the OTP and the session key. Note that before this whole OTP generation process is triggered, a valid biometric (fingerprint) authentication of the driver is required.
– If authentication fails, there is no response from the gate side and the process ends without performing a data update.
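The masking of Ui by Tr and its recovery by M can be sketched in Python; the way the two padding digits are drawn is an assumption, since the paper does not specify it:

```python
import random

def tr(ui: int) -> int:
    """Tr(Ui): four-digit number with the two digits of Ui (41..49) in the middle."""
    assert 41 <= ui <= 49
    d1 = random.randint(1, 9)   # leading digit kept non-zero so the result has 4 digits
    d4 = random.randint(0, 9)   # trailing digit
    return d1 * 1000 + ui * 10 + d4

def m(otp: str) -> int:
    """M(OTP): extract the first four characters and recover Ui from the middle."""
    return (int(otp[:4]) // 10) % 100

masked = tr(45)                            # e.g. 3457 for Ui = 45
otp = str(masked) + "129password0"         # hypothetical remainder of the OTP
assert m(otp) == 45
```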
Figure 2 shows the details of the functionalities at the entity level and the exchanges between the two entities. This algorithm authenticates the vehicle with a single message. We exploit several functions to achieve this:

– the HMAC message authentication mechanism with the SHA-256 hash function (HMAC-SHA256) for hash generation;
– the ECDH (Elliptic Curve Diffie-Hellman) key generation principle for HMAC-SHA256 key exchange, implemented by the challenge() function; by this trick we manage to securely exchange the session keys for the next connection;
– the median-square method for the transformation of the Ui identifier; the portal identifies the vehicle by performing the inverse operation on the number extracted from the OTP.
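As an illustration of the integrity check, a Python sketch of an HMAC-SHA256 tag over the OTP keyed by the session key, with a salt prepended and appended to the hashed message as described in Sect. 4.3; the salt value, key, and function names here are hypothetical (the actual implementation is the C++ hmacsha256 function shown in Fig. 5):

```python
import hmac
import hashlib

SALT = b"s@lt"  # hypothetical salt, prepended and appended to the OTP

def hash_otp(otp: str, session_key: bytes) -> str:
    """Integrity tag sent along with the OTP."""
    message = SALT + otp.encode() + SALT
    return hmac.new(session_key, message, hashlib.sha256).hexdigest()

def verify_otp(otp: str, tag: str, session_key: bytes) -> bool:
    """Gate-side check: recompute the tag and compare in constant time."""
    return hmac.compare_digest(hash_otp(otp, session_key), tag)

key = b"session-key-derived-from-challenge"
tag = hash_otp("3457129password0", key)
assert verify_otp("3457129password0", tag, key)
assert not verify_otp("3457129password1", tag, key)
```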


Fig. 2. Description of functions

4 Performance Evaluation of Our Solution

In this section, we present the experimental setup and the main results.

4.1 Experimental Setup

Our connected gate access control system is an exchange of authentication information between the vehicle and the gate. Each sensor node, in general, communicates via radio transmission modules. We focus our study on a connected portal whose architecture is as follows (see Fig. 3):

– the vehicle's sensor node consists of an Arduino Uno board equipped with an hc05 ultrasonic sensor and an nRF24L01 transmission module;
– at the gate, another Arduino Uno board is equipped with a motor controller, an nRF24L01 reception module, an hc05 ultrasonic sensor, and two (02) motors to operate the gate.


Fig. 3. Experimental architecture

Figures 4 and 5 present extracts of the source code of the different functions, written in C++. They were edited and uploaded with the Arduino IDE.

Fig. 4. OTP generation (fotp) and data recovery (M) functions

We captured the output of the algorithm through the IDE's serial monitor. Figure 6 shows an example of execution and generation of OTPs, followed by their hashes computed with the session keys.


Fig. 5. The challenge exchange and hash (hmacsha256) functions

4.2 Responses to Functional Requirements

We analyze the performance of our authentication protocol in terms of computational costs, communication costs and storage requirements.

– Computational costs: we determine the computational cost of our algorithm by evaluating its complexity. We have a complexity of O(1), i.e. constant time, which is hard to beat since the execution time is always the same regardless of the input value. In contrast, the complexity of the solution proposed by Hammi (Sect. 2) is of order O(n), depending on the size of the chosen session key. We use our session key only to ensure the integrity of the sent OTP through the hash calculation.
– Communication costs: to determine the communication costs, we calculated the total bit size of the OTP transmitted during the authentication phase and counted the number of exchanges. At the output of our algorithm (see Fig. 6), the OTP is the result of the concatenation K || dv || seq || pass, with:
K = xxxx
dv = xx
seq = x
pass = yyyyyyxyyy
Hence OTP = xxxxxxxyyyyxyyyy, with x and y representing respectively an integer and a character. We can deduce a size of at least 16 characters (128 bits) for the one-time passwords we generate. Moreover, compared to the mutual authentication solution via OTPs and session keys seen in Sect. 2, our solution has a lower number of exchanges: two (02) against four (04) in general.
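The concatenation above can be sketched as follows; the concrete field values are hypothetical, and only the field widths follow the paper:

```python
def build_otp(k_part: str, dv: int, seq: int, password: str) -> str:
    """OTP = K || dv || seq || pass, with K on 4 characters, dv on 2, seq on 1."""
    otp = f"{k_part:>4}{dv:02d}{seq:1d}{password}"
    assert len(otp) >= 16, "at least 16 characters, i.e. 128 bits at 8 bits/char"
    return otp

otp = build_otp("3457", 12, 3, "password0")   # hypothetical field values
assert len(otp) == 16
```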


Fig. 6. result of a test: OTP + Hash

– Storage requirement: it is defined as the space used to store the data of the communicating entities in the system. In any protocol, this represents the domain parameters and additional data (such as credentials, shared keys and identities) stored at the end of the configuration phase. In our case, we store only three pieces of information: the identifier (Ui), the challenge (Ci) and the authentication sequence number (seq). These data occupy little memory space, and only the challenge and the sequence number are updated after each authentication session.

4.3 Resistance to Attacks

Our security system must meet the requirements against various possible attacks. As discussed above (see Sect. 2), the attacks to which connected portals may be vulnerable are taken as our evaluation criteria in order to test the robustness of our solution. Our solution resists these attacks:

– Spoofing attack: the two communicating entities authenticate each other using OTPs. These are based on pairs of secret information (not known to the intruder) and on a unique random or pseudo-random number valid for a single use (challenge C, dv, the hash key). In addition, the median-square transformation is linked to the object identifier (Ui). Therefore, a node without knowledge of this personalized information can neither be authenticated nor impersonate a legitimate user.


– Replay attack: since the OTP is valid for only one use, the system is protected against malicious users trying to resend (replay) the same messages in order to make unauthorized use of the system. The OTP generation includes the sequence number (seq), and since no two CONNECT packets can have the same number, no replay attack is possible.
– Man-in-the-middle attack: our authentication protocol cannot protect the system when an intruder interrupts traffic during a communication. However, as explained in the description, even if an intruder obtains all the exchanged messages, he cannot exploit them to obtain secret information or forge a message, because we ensure the integrity of the OTP through its hash, which is sent to the portal. The HMAC-SHA256 generation is robust because we use the dynamic session key, in addition to the salt that we add at the beginning and at the end of the OTP hash.
– Cryptanalysis attack: the use of the ECDH key exchange principle, the random information transformation method, and an irreversible hash function with session keys protects the system against any cryptanalysis attack deployed to recover the keys or the exchanged data, because the attacker would have to reverse and discover all the processing methods used by our algorithm.
– Brute force attack: the password is only valid for one node and lasts only one session. Retrieving keys or one-time passwords within a communication session of a few seconds by brute force is almost impossible. According to [15], finding a 12-character (96-bit) password using high-performance hardware can take two centuries; our OTP has a length of 16 characters. Despite the evolution of computing power, the dynamic character of the exchanged passwords is an asset.
– Physical attack: an adversary can gain physical access to conduct attacks such as copying collected data (confidentiality breach) or modifying or deleting collected data (integrity breach). The nodes themselves can be cloned and/or modified without their owner's knowledge. To mitigate this, our home application requires physically protecting the nodes in strong boxes and changing the default addresses of the radio module used for the pipes.

5 Conclusion and Perspectives

Home automation networks are increasingly targeted by attackers. IoT attacks are increasing because of the ease with which hackers can penetrate IoT systems, where functionality is generally privileged at the expense of security. It is essential to strengthen their security through techniques adapted to the resource constraints of the interconnected objects. Authentication is the first line of defense of any system. We propose a robust and efficient authentication solution for smart home connected portals. After analyzing a connected portal solution, we identified security breaches in the basic authentication mechanisms used by most IoT solutions, as well as the application constraints of some existing robust solutions, namely authentication by ID/password or by SSL/TLS certificates.


We propose a lightweight and robust dynamic authentication solution based on one-time passwords. These OTPs are generated by exploiting the ECDHE key exchange principles for the exchange of the session keys used by a hash function. Our solution ensures mutual authentication adapted to a context where decisions must immediately follow the authentication step, and it uses a single information exchange to trigger the action. It relies on elliptic curve algorithms, which are less resource-intensive (CPU, memory). The effectiveness of the encryption mechanisms, coupled with the dynamic nature of the passwords, gives us a robust and efficient solution. Our solution would be even more robust if we managed to reinforce the confidentiality of the exchanged OTP by exploiting or adapting lightweight cryptography mechanisms for objects with limited computing power. This would allow our solution to be applied in all types of IoT networks, strengthening the authentication mechanisms of lightweight IoT protocols.

References

1. Hammi, B., Khatoun, R., Zeadally, S., Fayad, A., Khoukhi, L.: IoT technologies for smart cities. IET Netw. 7(1), 1–13 (2018)
2. M'Raihi, D., Bellare, M., Hoornaert, F., Naccache, D., Ranen, O.: HOTP: An HMAC-Based One-Time Password Algorithm. RFC 4226, Internet Engineering Task Force (2005)
3. Krawczyk, H., Bellare, M., Canetti, R.: HMAC: Keyed-Hashing for Message Authentication. RFC 2104, RFC Editor (1997)
4. M'Raihi, D., Machani, S., Pei, M., Rydell, J.: TOTP: Time-Based One-Time Password Algorithm. RFC 6238, Internet Engineering Task Force (2011)
5. Hammi, M.T., Livolant, E., Bellot, P., Serhrouchni, A., Minet, P.: A lightweight IoT security protocol. In: 2017 1st Cyber Security in Networking Conference (CSNet), pp. 1–8 (2017)
6. Hammi, M.T., Livolant, E., Bellot, P., Serhrouchni, A., Minet, P.: A lightweight mutual authentication protocol for the IoT. In: Kim, K.J., Joukov, N. (eds.) ICMWT 2017. LNEE, vol. 425, pp. 3–12. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-5281-1_1
7. Rescorla, E., Dierks, T.: The Transport Layer Security (TLS) Protocol Version 1.2. RFC 5246, Internet Engineering Task Force (2008)
8. Han, J.-H., Kim, J.: A lightweight authentication mechanism between IoT devices. In: 2017 International Conference on Information and Communication Technology Convergence (ICTC), pp. 1153–1155 (2017)
9. Makhtoum, H.E.L., Bentaleb, Y.: An improved IoT authentication process based on distributed OTP and Blake2. IJWMT 11, 1–8 (2021)
10. Hammi, B., Fayad, A., Khatoun, R., Zeadally, S., Begriche, Y.: A lightweight ECC-based authentication scheme for Internet of Things (IoT). IEEE Syst. J. 14(3), 3440–3450 (2020)
11. Igoe, K., McGrew, D., Salter, M.: Fundamental Elliptic Curve Cryptography Algorithms. RFC 6090, Internet Engineering Task Force (2011)


12. Blumenthal, U., Goel, P.: Pre-Shared Key (PSK) Ciphersuites with NULL Encryption for Transport Layer Security (TLS). RFC 4785, Internet Engineering Task Force (2007)
13. Yan, S.C.S., et al.: Authentication of IoT device with the enhancement of one-time password (OTP). JITA 9, 29–40 (2021)
14. Esfahani, A., et al.: A lightweight authentication mechanism for M2M communications in industrial IoT environment. IEEE Internet Things J. 6, 288–296 (2019)
15. Estimating Password Cracking Times. https://www.betterbuys.com/estimating-password-cracking-times/

Anchor-Free Localization Algorithm Using Controllers in Wireless Sensors Networks

Ahoua Cyrille Aka1(B), Satchou Gilles Armel Keupondjo1, Bi Jean Baptiste Gouho2, and Souleymane Oumtanaga1

1 Laboratoire de Recherche en Informatique et Télécommunication (LARIT), Institut National Polytechnique Félix Houphouët-Boigny, Yamoussoukro, Côte d'Ivoire
{ahoua.aka18,armel.keupondjo,souleymane.oumtanga}@inphb.ci
2 Laboratoire de Mathématique Informatique (LMI), UFR SFA Université Nangui Abrogoua, Abidjan, Côte d'Ivoire
[email protected]

Abstract. Localization of wireless sensors is a very important aspect of Wireless Sensor Networks (WSNs), as it determines the proper functioning and lifetime of the wireless sensors that constitute the network. The precise localization of fixed or mobile sensors in the network is a challenging problem that has attracted the attention of many researchers. Indeed, because of the constraints of wireless sensors and the limitations of localization systems such as GPS, equipping each sensor with a localization system is not a viable solution. To solve the localization problem, several localization algorithms have been proposed in the literature. These algorithms fall into two broad categories, range-based and range-free algorithms, which may be either anchor-based or anchor-free. However, these algorithms are energy-intensive, less accurate, and suffer from a low rate of localized nodes. To improve on the existing solutions in the literature, a localization algorithm called AFLAC (Anchor Free Localization Algorithm using Controllers) has been proposed. Regardless of the communication range between wireless sensors, AFLAC estimates the distance between them to derive their positions with low energy consumption and good localization accuracy.

Keywords: Localization algorithm · Wireless sensor network · Anchor-free · Controllers

1 Introduction

The emergence of new technologies, wireless communications, and advances in networking have enabled the development of small devices called wireless sensors. These sensors can communicate with each other via radio waves, and when they are deployed in an area, they form an infrastructure called a Wireless Sensor Network (WSN). Typically, WSNs have a very large number of nodes and are considered a special type of ad hoc network whose nodes are made up of many micro-sensors capable of collecting, processing, and transmitting environmental data in an autonomous way.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 64–75, 2023. https://doi.org/10.1007/978-3-031-34896-9_5

Anchor-Free Localization Algorithm Using Controllers

65

WSNs are deployed randomly in hostile environments (places where human presence is undesirable) and substitute for the processing that humans would otherwise perform on local data. In recent years, WSNs have been used in several application areas: military, civil security, transportation, industrial, environmental, and many others. These networks come with many challenges, such as security, routing, localization, etc. Although several localization systems, such as GPS (Global Positioning System) [1], exist in the literature, the energy constraints of wireless sensors and the limitations of these systems make equipping each sensor with a localization system economically unviable. Hence the need to develop a localization algorithm that does not rely on any localization system. In this work, we propose an anchor-free localization algorithm that aggregates the wireless sensors in a cluster network and uses a controller to estimate the distances between intra-cluster nodes that are not within communication range. A controller is a Cluster Head elected among the set of Cluster Heads; it is able to communicate with all the wireless sensors of its adjacent clusters. Our algorithm is executed in three steps:

• The first step is to partition the network into clusters considering the 1-hop neighborhood. To do so, we rely on the work of C. Lin et al. [2]. The objective of this step is to reduce the number of communications to minimize energy consumption; it also reduces the impact of environmental factors on the signals when the sensors estimate the distances between them.
• The second step is the intra-cluster position derivation phase, using the Cluster Head as a reference.
• The third step is to group the clusters into zones and elect a controller among the Cluster Heads to manage the zones and support distance calculations between wireless sensors that are not in communication range.

The rest of our work is organized as follows.
Section 2 presents a review of existing anchor-free range-based localization algorithms. Section 3 gives the problem specification. We describe our anchor-free localization algorithm in detail in Sect. 4. Section 5 presents the evaluation criteria, the simulation results and discussions are presented in Sect. 6, and Sect. 7 concludes with some perspectives.

2 Literature Review

In this section, we survey the different anchor-free localization approaches that exist in the literature. But first, it is necessary to specify the different steps involved in executing a localization algorithm.

2.1 The Steps Involved in Executing a Localization Process

As shown in Fig. 1 below, a localization process is executed in three steps:

• Distance estimation phase
• Position computing phase

66

A. C. Aka et al.

Fig. 1. Localization process

• Localization algorithm phase

The distance estimation phase is very important because the position calculation depends on the estimated distance. Indeed, an accurate estimation between nodes allows deriving their positions with less error. The methods used by localization algorithms to estimate distances can be divided into two groups:

– Distance-based methods such as AoA (Angle of Arrival), ToA (Time of Arrival) [3], TDoA (Time Difference of Arrival) [4], and RSSI (Received Signal Strength Indicator) [5, 6].
– Connectivity-based methods such as DV-Hop (Distance Vector Hop) [7], WCL (Weighted Centroid Localization) [8], and APIT (Approximate Point in Triangulation) [9].

Connectivity-based methods generally require anchor nodes; they have the advantage of being cheaper and simpler to implement, but they are less accurate than distance-based methods and therefore unsuitable for applications where the localization of a node is critical. Among the distance-based methods, AoA, TDoA, and ToA can achieve better localization accuracy than RSSI, because environmental factors greatly affect the amplitude of the RSSI signal, which is very sensitive to noise and obstacles. However, the RSSI method is the most common, least expensive, and simplest technique, as it does not require additional equipment (e.g. infrared or ultrasound) [6], and it is suitable for networks with a high density of nodes. The RSSI method uses the received signal strength to estimate the distance between the sender and receiver nodes. There are several channel models for RSSI signal transmission, the most popular being log-normal shadowing [5]. Depending on the method used in the distance estimation phase, localization algorithms can be divided into two main groups: range-based and range-free localization algorithms [9].
These algorithms can be either anchor-based or anchor-free.
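As an illustration of the log-normal shadowing model mentioned above, the sketch below generates an RSSI value at a given distance and inverts the deterministic part of the model to estimate that distance; the reference power, path-loss exponent, and shadowing deviation are illustrative values, not measurements from the paper:

```python
import math
import random

def rssi_at(d, rssi_d0=-40.0, d0=1.0, n_pl=2.7, sigma=2.0):
    """Log-normal shadowing: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) + X_sigma,
    where X_sigma is zero-mean Gaussian noise modelling shadowing."""
    return rssi_d0 - 10 * n_pl * math.log10(d / d0) + random.gauss(0, sigma)

def distance_from_rssi(rssi, rssi_d0=-40.0, d0=1.0, n_pl=2.7):
    """Invert the deterministic part of the model to estimate the distance."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n_pl))

true_d = 8.0
estimate = distance_from_rssi(rssi_at(true_d))   # noisy estimate around 8 m
```

The sensitivity to the noise term X_sigma is exactly why RSSI-based ranging is less accurate than AoA/ToA/TDoA in obstructed environments.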


In the most hostile environments, anchor-free approaches are preferred for three reasons. First, anchors are deployed in the network manually, which can be difficult or impossible. Second, the numerical stability of the positions estimated by anchor-based approaches is questionable, since the accumulated anchor errors carry a very high weight and can considerably affect the overall solution. Finally, anchor-based approaches are less scalable: to combat the instability described above, many anchors would be needed to cover a working area that can evolve over time.

2.2 Anchor-Free Localization Algorithms

The anchor-free localization problem has been widely explored in the literature. Youssef et al. [10] proposed an anchor-free localization algorithm called AAFL (Accurate Anchor-free Localization) based on clustering: all nodes are grouped into clusters, and each cluster has an edge node that communicates with the other edge nodes. Although the distributed nature of AAFL reduces computational complexity, its localization accuracy is very low; moreover, the network needs high connectivity to remain flexible, and the communication load of the edge nodes is too high, which affects the lifetime of the network.

Moore et al. [11] proposed an anchor-free distributed localization algorithm called RODL (RObust Distributed network Localization with noisy range measurements) based on distance measurement. It adopts a cluster-based localization technique that uses quadrilaterals to avoid inconsistencies. The formation of clusters optimises communication costs and therefore energy, which makes their algorithm scalable.
Teijiro Isokawa et al. [12] proposed an anchor-free localization algorithm in which no geometric information about the sensors is required; it relies on a link quality indicator and the number of hops to estimate the distances between nodes, using a Kalman filter to reduce the errors induced by the link quality indicator. The performance of their protocol is demonstrated only on a network of four wireless sensors, and it requires all sensors to be equipped with an angle measurement device.

Zhe Qu et al. [13] proposed an energy-efficient anchor-free localization algorithm that uses a single sink node, whose transport capacity and energy are not limited. This sink node is taken as a reference position for global localization of the system: it sends an active packet throughout the network, and each sensor that receives this packet is activated to calculate the angle and distance between itself and the sink node. This algorithm requires all sensors in the network to be equipped with an angle measuring device and is less accurate in hostile environments.

Chen Liu et al. [14] proposed an anchor-free localization algorithm that uses the node with the highest connectivity as a virtual anchor and relies on the asynchronous change of learning factor adaptive weights particle swarm optimization algorithm (SAAPSO) to estimate the positions; to obtain an accurate position, they use the Taylor algorithm. The resulting algorithm reduces the accumulated error and increases localization accuracy.

Wang Ming et al. [15] proposed a distributed cluster-based anchor-free node localization algorithm (CDAP) which is executed in three steps. The first step is the clustering of one-hop nodes according to the technique used by ICAND (1-hop node selection method), using the ToA technique to estimate the distances between nodes. The second step is the cluster synchronization phase, in which all nodes of a cluster are synchronized and the local coordinates are established using the angle and distance information. The third step is the global localization phase. Although this approach is scalable and accurate, all sensors need to be equipped with an angle measuring device.

Tao Du et al. [16] proposed an anchor-free localization algorithm called LDLA (Ladder Diffusion node Localization Algorithm) in which each node calculates its position relative to the sink node based on the principle of the algorithm proposed by Zhe et al., but only the 1-hop nodes are activated to calculate angles and distances to the sink node, iteratively, until all the sensors discover their position. Their algorithm optimizes the energy management of the sensors but does not take into account the scalability of the network, and it requires all sensors to be equipped with an angle measuring device.

Although the existing anchor-free localization algorithms solve the localization problem without anchor nodes, some shortcomings remain, such as high computational complexity, low localization accuracy, high deployment cost, and network scalability over time. Most of the existing research focuses either on improving localization accuracy or on the energy consumption of the nodes.

3 Problem Specification

3.1 Assumption

We assume that all wireless sensors in the network:
• are not equipped with a localization system,
• are deployed randomly in an area of interest,
• are static after deployment,
• have sufficient energy to perform the localization task,
• have an omnidirectional antenna,
• have a unique identifier (ID).

3.2 Problem Formulation

In general, a wireless sensor network is represented by a random geometric graph G = (V, E), where V = {v1, v2, ..., vn} is the set of wireless sensors deployed in a Euclidean space and E = {(vi, vj) ∈ V² | di,j ≤ ri + rj} is the set of links, with ri, rj the communication ranges of the sensors vi and vj and n the number of wireless sensors in the network. The localization problem we solve is to make a good estimate of the distances di,j between the wireless sensors vi and vj in order to derive their positions Pi(xi, yi) and Pj(xj, yj) in a 2D Euclidean plane, where no element of V has knowledge of its position, i.e. no element of V is equipped with a localization system.
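The graph model above can be made concrete with a short sketch. The plain-Python snippet below (node positions and communication ranges are illustrative values, not taken from the paper) builds the edge set E = {(vi, vj) : di,j ≤ ri + rj}:

```python
import math

def build_links(positions, ranges):
    """Edge set E of the random geometric graph G = (V, E): a link (i, j)
    exists when the Euclidean distance d_ij is at most r_i + r_j."""
    edges = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = positions[i], positions[j]
            d_ij = math.hypot(xi - xj, yi - yj)
            if d_ij <= ranges[i] + ranges[j]:
                edges.append((i, j))
    return edges

# Three sensors with 6 m ranges: only the first two are within joint range.
print(build_links([(0, 0), (10, 0), (40, 40)], [6.0, 6.0, 6.0]))  # -> [(0, 1)]
```

Note that, following the paper's definition, a link only requires di,j ≤ ri + rj; a stricter symmetric-link condition would use min(ri, rj) instead.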

Anchor-Free Localization Algorithm Using Controllers

69

3.3 Objectives

Coverage, complexity, scalability, robustness, cost (in energy and hardware) and localization accuracy are the major challenges that existing localization algorithms in the literature have difficulty meeting. A good localization algorithm must consider all of these performance criteria. The specific objective we set ourselves in this work is to propose a localization algorithm that achieves:
• a high rate of localized nodes,
• low energy consumption,
• good localization accuracy.

4 Contribution

Network Model
Consider a wireless sensor network represented by the architecture below (Fig. 2):

Fig. 2. Cluster and zone network architecture

Let us designate all the components of the network by the following parameters:
– CHk: the k-th cluster head of the network (k ∈ {1, ..., m}), with m the number of clusters in the network.
– Contp: the p-th controller of the network (p ∈ {1, ..., q}), with q the number of controllers in the network. A controller is a Cluster Head elected among the set of Cluster Heads; it is able to communicate with all the wireless sensors of its adjacent clusters.
– P(xi, yi): the position of a sensor vi in a 2D Euclidean plane.
– ri: the communication range of a wireless sensor vi.
– dk,i: the distance between a Cluster Head CHk and a wireless sensor vi, with k ∈ {1, ..., m} and i ∈ {1, ..., n}.


4.1 Proposed Localization Algorithm

In this section, we describe our proposed anchor-free localization algorithm, called AFLAC (Anchor-Free Localization Algorithm using Controllers), which is executed in four steps.

Clustering Phase of the Network
– Nodes are deployed randomly in the network.
– A group leader is elected and clusters are formed from 1-hop sensors. Each node vi has a unique identifier id(vi) and knows the identifiers of its 1-hop neighbours, all stored in a set Γ(vi). If a node has the lowest identifier in Γ(vi), it becomes Cluster Head and the Cluster Head identifier id(CH) is set to id(vi). It then broadcasts a CLUSTER(id, id(CH)) message to inform its neighbours of its decision, and id(CH) is removed from the set Γ. If a node does not yet know its cluster head, id(CH) is set to unknown; each node vi executes this series of instructions until Γ becomes the empty set.

Distance Estimation Phase
The log-normal shadowing model (RSSI) is used to estimate the distances between all nodes of a cluster that are within communication range. When two nodes of a cluster are not within communication range, the controller steps in to calculate the distance between these wireless sensors.

Position Calculation Phase
Each Cluster Head is taken as the coordinate reference position CHk(0, 0). Each Cluster Head chooses a sensor vi and a sensor vj in its cluster from among its peers to form its reference axes (the x-axis and the y-axis). With the positions CHk(0, 0), Si(dki, 0) and Sj(0, dkj) taken as references, the relative positions of all wireless sensors in all clusters of the network are determined iteratively by the trilateration method. Once the position of a sensor is calculated, it is in turn taken as a reference.
Global Localization Phase
– The network is divided into zones (a zone consists of a set of clusters).
– The Cluster Head that has the most energy and can communicate with all the wireless sensors of its adjacent clusters is taken as the zone leader (controller). Each zone is managed by a controller that has a global view of the zone assigned to it.
– Each cluster head communicates its cluster ID to its dedicated controller.
– Each controller can communicate with its nearest neighbouring controllers for global localization.
Figure 3 summarises the steps of our localization algorithm, called AFLAC.
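The distance-estimation and position-calculation phases can be sketched as below. The log-normal shadowing parameters (reference power P(d0), path-loss exponent η) are illustrative placeholders, not values from the paper; the trilateration step solves for (x, y) against the reference positions CHk(0, 0), Si(dki, 0) and Sj(0, dkj) described above:

```python
import math

def rssi_to_distance(rssi, p_d0=-40.0, d0=1.0, eta=2.7):
    """Phase 2 (sketch): mean distance from the log-normal shadowing model,
    d = d0 * 10^((P(d0) - RSSI) / (10 * eta)); parameters are illustrative."""
    return d0 * 10 ** ((p_d0 - rssi) / (10 * eta))

def trilaterate(d_ki, d_kj, d_ch, d_i, d_j):
    """Phase 3: position of an unknown node from its distances to the three
    references CH_k(0, 0), S_i(d_ki, 0) and S_j(0, d_kj)."""
    x = (d_ch ** 2 - d_i ** 2 + d_ki ** 2) / (2 * d_ki)
    y = (d_ch ** 2 - d_j ** 2 + d_kj ** 2) / (2 * d_kj)
    return x, y

# A node actually at (3, 4): distances to (0,0), (5,0) and (0,5) recover it.
x, y = trilaterate(5, 5, 5, math.sqrt(20), math.sqrt(10))
print(round(x, 6), round(y, 6))  # -> 3.0 4.0
```

The closed-form x and y follow from subtracting the circle equations around the x-axis and y-axis references from the one around CHk(0, 0).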


Fig. 3. Flowchart of the AFLAC algorithm

5 Performance Evaluation

MATLAB was used as the simulation tool to evaluate the performance of the proposed localization technique; MATLAB has a large library of functions, particularly for manipulating vectors and matrices. We assume that the wireless sensor nodes are randomly deployed in an area of 50 × 50 m². At the beginning of the simulation, all nodes have the same energy. To evaluate the performance of our algorithm, we considered the following criteria.

5.1 Positioning Accuracy

Equation (1) computes the position error of a wireless sensor vi:

ei = √((x̂i − xi)² + (ŷi − yi)²)          (1)


where (xi, yi) and (x̂i, ŷi) denote respectively the actual and estimated position of a wireless sensor vi.

5.2 Rate of Localized Nodes

Equation (2) calculates the ratio of localized nodes to the total number of nodes deployed in the network:

%N = (Number of nodes localized / Total number of nodes) × 100          (2)
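Equations (1) and (2) amount to the two helpers below (a minimal sketch, not the authors' MATLAB code):

```python
import math

def position_error(actual, estimated):
    """Eq. (1): Euclidean distance between actual and estimated positions."""
    (x, y), (xe, ye) = actual, estimated
    return math.hypot(xe - x, ye - y)

def localized_rate(n_localized, n_total):
    """Eq. (2): percentage of nodes that obtained a position."""
    return n_localized / n_total * 100

print(position_error((0, 0), (3, 4)))  # -> 5.0
print(localized_rate(45, 50))          # -> 90.0
```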

5.3 Energy Consumption Model

The energy expended to transmit and to receive a message of β bits over a distance d is defined by Eq. (3) and Eq. (4), respectively:

ETX(β, d) = β·Eelec + β·εfs·d²,   if d < d0
ETX(β, d) = β·Eelec + β·εamp·d⁴,  if d ≥ d0          (3)

ERX(β) = β·Eelec          (4)

where Eelec is the energy consumed by the radio electronics, εfs and εamp are the amplification energies for the free-space and multipath models, and d0 = √(εfs / εamp).
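The first-order radio model of Eqs. (3) and (4) can be sketched as follows; the parameter values (Eelec = 50 nJ/bit, εfs = 10 pJ/bit/m², εamp = 0.0013 pJ/bit/m⁴) are typical defaults from the radio-model literature, not values reported in this paper:

```python
def e_tx(beta, d, e_elec=50e-9, eps_fs=10e-12, eps_amp=0.0013e-12):
    """Eq. (3): energy (J) to transmit beta bits over d metres, switching
    between the free-space (d^2) and multipath (d^4) amplifier models."""
    d0 = (eps_fs / eps_amp) ** 0.5   # crossover distance, ~87.7 m here
    if d < d0:
        return beta * e_elec + beta * eps_fs * d ** 2
    return beta * e_elec + beta * eps_amp * d ** 4

def e_rx(beta, e_elec=50e-9):
    """Eq. (4): energy (J) to receive beta bits."""
    return beta * e_elec

print(e_tx(4000, 50))  # 4000-bit packet over 50 m, free-space regime (~3e-4 J)
```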

6 Results and Discussions

The localization algorithm we propose achieves a very high localized-node rate. Figure 4 shows that when the number of nodes varies between 10 and 50, AFLAC keeps its performance in terms of localized-node rate, unlike the algorithm of Wang Ming et al. This performance is due to the controller nodes, which are able to calculate the distance between nodes that are not within communication range. Figure 5 shows that during the localization process, the AFLAC localization algorithm yields a low estimated error on the positions of the nodes compared to the proposed CDAP. Figure 6 shows that as the number of nodes in the network increases, the energy consumed by the CDAP localization algorithm increases sharply, while the energy consumed by the proposed AFLAC algorithm remains almost constant regardless of the number of nodes in the network.

Fig. 4. Rate of localized nodes (AFLAC vs. CDAP; x-axis: number of nodes, 10–50; y-axis: rate of nodes localized, %)

Fig. 5. Error in the estimated position of nodes (AFLAC vs. CDAP; x-axis: number of nodes, 10–50; y-axis: localization error, %)

Our experiments have shown that when more nodes are within communication range, the possibility of calculating distances between nodes favours a good rate of localized nodes for both the AFLAC and CDAP localization algorithms, which results in a low estimated error on the positions of the nodes. When fewer nodes are within communication range, the AFLAC localization algorithm keeps its performance while the performance of the CDAP algorithm decreases.

Fig. 6. Energy consumed as a function of the number of nodes (AFLAC vs. CDAP; x-axis: number of nodes, 10–50; y-axis: energy consumed, %)

7 Conclusion and Perspective

In this work, we have addressed the problem of anchor-free localization in wireless sensor networks. Major challenges such as scalability, energy consumption, robustness and accuracy are still to be addressed by existing localization algorithms. This inspired us to propose a localization algorithm called AFLAC (Anchor-Free Localization Algorithm using Controllers), which uses a controller to compute the distance between two sensors that are not within communication range. To evaluate the performance of AFLAC against the algorithm proposed by Wang Ming et al., we relied on performance criteria such as the rate of localized nodes, the positioning error and the energy consumed in the localization process. The results of our experiments show that the AFLAC localization algorithm performs well compared to the CDAP algorithm. The clusters formed in the first phase can have different topologies; in our future work we plan to propose a localization algorithm that takes the topology of the clusters into account.

References
1. Anjasmara, I.M., Pratomo, D.G., Ristanto, W.: Accuracy analysis of (GNSS, GPS, GLONASS and BEIDOU) observation of positioning, vol. 01019, pp. 1–6 (2019)
2. Lin, C.R., Gerla, M.: Adaptive Clustering for Mobile Wireless Networks, vol. 15, no. 7, pp. 1265–1275 (1997)
3. Zhang, L., Yang, Z., Zhang, S., Yang, H.: Three-Dimensional Localization Algorithm of WSN Nodes Based on RSSI-TOA and Single Mobile Anchor Node, vol. 2019 (2019)
4. Liu, Z., Zhao, Y., Hu, D., Liu, C.: A Moving Source Localization Method for Distributed Passive Sensor Using TDOA and FDOA Measurements, vol. 2016 (2016)
5. Zhang, X., Fang, J., Meng, F.: An Efficient Node Localization Approach with RSSI for Randomly Deployed Wireless Sensor Networks, vol. 2016 (2016)


6. Ding, X., Dong, S.: Improving positioning algorithm based on RSSI. Wirel. Pers. Commun. 110(4), 1947–1961 (2019). https://doi.org/10.1007/s11277-019-06821-0
7. Labraoui, N., Gueroui, M., Aliouat, M.: Secure DV-Hop localization scheme against wormhole, no. December 2011, pp. 303–316 (2012). https://doi.org/10.1002/ett
8. Zhang, C.W., Zhao, X.: The wireless sensor network (WSN) triangle centroid localization algorithm based on RSSI, vol. 05008 (2016)
9. Mesmoudi, A., Feham, M., Labraoui, N.: Wireless Sensor Networks Localization Algorithms: A Comprehensive Survey, vol. 5, no. 6 (2013)
10. Youssef, A., Agrawala, A.: Accurate Anchor-Free Node Localization in Wireless Sensor Networks (2005)
11. Moore, C.: Robust Distributed Network Localization with Noisy Range Measurements (2006)
12. Isokawa, T., et al.: An Anchor-Free Localization Scheme with Kalman Filtering in ZigBee Sensor Network, vol. 2013 (2013)
13. Qu, Z., et al.: An Energy Efficient Anchor-Free Localization Algorithm for No-Identity Wireless Sensor Networks, vol. 2015 (2015). https://doi.org/10.1155/2015/595246
14. Liu, C.: A localization algorithm based on anchor-free wireless sensor network, vol. 1056, pp. 221–226 (2014). https://doi.org/10.4028/www.scientific.net/AMR.1056.221
15. Ming, W.: Distributed Node Location Algorithm Using Non-anchor Node Clustering, no. iccse (2016)
16. Du, T., Qu, S., Guo, Q., Zhu, L.: A simple efficient anchor-free node localization algorithm for wireless sensor networks, vol. 13, no. 4 (2017). https://doi.org/10.1177/1550147717705784

Assessing the Impact of DNS Configuration on Low Bandwidth Networks

J. A. Okuthe1,2(B) and A. Terzoli2

1 Walter Sisulu University, Potsdam, East London 5200, South Africa

[email protected]

2 Rhodes University, Drosty Road, Grahamstown 6139, South Africa

[email protected]

Abstract. Domain name system (DNS) is an essential enabler for connecting users and services on the Internet. DNS translates human readable domain names into IP addresses and precedes client connection to a server via a domain name. The DNS service is therefore expected to consume network bandwidth even though it offers no direct benefit to the user. Having observed the large component of DNS traffic on the community network local loop in our previous study, we migrated the DNS service from a server on the LAN to the gateway router and reconfigured the cache time-to-live. Results from the analysis of network traffic captured from the gateway router interface show a 26% decrease in downlink bandwidth utilization and a 46% decline in uplink bandwidth utilization. The DNS component of the local loop traffic reduces from 45.28% to 4.11%. On the other hand, the Web component of the local loop traffic increases from 49.42% to 95.49%. Data collected from a mirroring port on the LAN switch indicate a decrease in the DNS portion of the internal traffic from 0.38% to 0.18%. Although the DNS reconfiguration helps alleviate network bandwidth constraints and reduces the DNS component of the traffic, the Web portion increases. The implementation of effective, efficient and sustainable Web traffic management is therefore required.

Keywords: Bandwidth Utilization · Local Loop · Cache · Time-to-Live

1 Introduction

The common thread in low-resourced community networks is limited bandwidth capacity. This necessitates careful use of the available bandwidth to avoid network throughput degradation and possible poor user experience [1]. It therefore becomes essential to deploy network bandwidth conservation techniques to counter the ever-increasing bandwidth affinity of user applications. Since community networks operate within stringent budget constraints, it is not always feasible to enhance bandwidth in response to user requirements. Instances of high levels of DNS traffic traversing low bandwidth community networks have been reported [2]. In this study, we attempt to reduce the high proportion of DNS traffic by relocating the DNS service from the Linux Terminal server to the gateway router. We subsequently monitor the local loop bandwidth utilization to establish the accrued impact. We also analyze the traffic composition of both the local loop and the LAN to ascertain whether the DNS reconfiguration results in less DNS traffic on the network. Since DNS service is a precursor to Web access, we examine the relationship between DNS reconfiguration and the Web component of the network traffic.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023 Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 76–86, 2023. https://doi.org/10.1007/978-3-031-34896-9_6

The Linux Terminal Server Project (LTSP) helps in the booting of LAN clients from a single template installation that resides in a virtual machine image housed on the server [3]. This makes LTSP a strong candidate for deployment in a diskless client environment. Network services that can be provisioned on the LTSP server include DHCP, DNS, TFTP and SSH. For community networks, the LTSP server offers cost effectiveness, a secure environment and ease of maintenance. However, a standard LTSP server installation is inefficient due to its limited ability to act as a recursive caching name server [4]. With the DNS service provisioned on the LTSP server, we analyzed the network bandwidth utilization for one month and thereafter migrated the DNS service from the LTSP server to the gateway router. We then carried out a similar evaluation over a three-month period. The results show that the inbound bandwidth utilization dropped by 26%, from 9.57 Mbps to 7.07 Mbps, while the DNS component of the network traffic fell from 45.28% to 4.11%, a drop of over 41 percentage points.

2 Related Work

DNS is the naming system of the Internet that translates human readable domain names into IP addresses. When a client connects to a web server, a query resolved by the DNS server maps the server's domain name to an IP address. DNS clients often initiate requests to DNS servers over UDP but may forward requests over TCP [5]. UDP is the preferred mode for DNS message transfer because setting up TCP connections requires additional network utilization and necessitates keeping TCP connections open for longer durations, leading to additional consumption of client and server resources. Danzig [6] established that implementation errors can cause DNS to consume 20 times more WAN bandwidth than necessary. According to Danzig [6], 14% of all wide-area packets were DNS in 1990, compared to 8% in 1992. As reported by Frazer [7], by 1995 the corresponding proportion of DNS traffic from the NSFNET study was 5%. In the 1997 study of the MCI backbone, the DNS portion of the WAN traffic had reduced to 3% [8], indicating consistent improvement in DNS service configuration architecture. However, considering that TCP traffic continues to consist mainly of Web traffic, which produces four connections for each successful DNS mapping [9], the reduction in the DNS component of WAN traffic is justifiable. DNS servers can be categorized as either authoritative servers or recursive resolvers [10]. While authoritative name servers have the root information for a domain, recursive resolvers temporarily save mapping results in a cache. Although each host has the capacity to run its own resolver, the common practice is to host a central resolver within an organization [11]. To improve network performance, users tend to opt for recursive resolvers situated outside their organizations [12]. A study by Muller [13] established that recursive resolvers consult all authoritative servers over time and that half of recursive resolvers prefer low-latency service from authoritative servers. The preference is noticeable when there are significant latency differences between the offerings of authoritative servers. High latency results in higher network bandwidth requirements. Efficient DNS service requires extensive caching of responses based on the predefined time-to-live (TTL) [14]. While longer TTL values result in higher cache hit rates and less DNS traffic, lower TTL values give service providers more dynamic IP address to host name mappings and enhanced traffic flow. In mobile networking, dynamic DNS in tandem with low-TTL bindings is provisioned for host mobility support [15]. Jung [16] established that when popular web sites were commissioned, the percentage of TCP connections made to DNS servers with low TTL values increased from 12% to 25% due to the increased server selection for popular sites. Results from this study suggest that caching is critical to DNS scalability since it helps reduce load on the root servers and generic top-level domain servers. The Akamai Content Delivery Network attempts to provide media resources from nearby servers, using DNS to drive clients to close-proximity data centers [17]. In the Akamai setup, the first IP address received in a DNS response emanates from the closest server; this IP address has a higher likelihood of being adopted than subsequent addresses. In standard DNS configurations, however, the IP address rotation performed by the recursive DNS servers distributes load across the advertised servers [18]. A study by Callahan [14] reports that 75% of hostnames map to a single IP address while 8% map to two IP addresses. Results from the study further indicate that 11% of hostnames are associated with at least five IP addresses, affirming the existence of IP address replicas for specific hostnames.
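The TTL trade-off described above can be illustrated with a toy cache simulation (the workload below is synthetic, not traffic from any study cited here): entries stay valid for `ttl` seconds after being filled, so a longer TTL turns repeat queries into cache hits.

```python
import random

def cache_hit_rate(ttl, queries):
    """Fraction of (time, name) queries, taken in time order, answered from
    a cache whose entries expire ttl seconds after being filled."""
    expires = {}              # name -> expiry time
    hits = 0
    for t, name in queries:
        if expires.get(name, -1.0) > t:
            hits += 1
        else:
            expires[name] = t + ttl   # miss: resolve upstream and cache
    return hits / len(queries)

# Synthetic hour of traffic: 1000 queries spread over 20 popular names.
random.seed(0)
workload = sorted((random.uniform(0, 3600), f"host{random.randrange(20)}.example")
                  for _ in range(1000))
for ttl in (60, 86400):       # 60 s vs. 24 h, the two defaults discussed later
    print(ttl, round(cache_hit_rate(ttl, workload), 2))
```

Longer TTLs raise the hit rate at the cost of staler mappings, which is the trade-off described in [14].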

3 Network Environment

3.1 Local Loop

The Assumption Development Center (ADC) community network local loop consists of a unidirectional wireless system stationed at Khula Technology Solutions and an antenna deployed at the community center. The terrain view of the local loop from Khula Technology Solutions to ADC is shown in Fig. 1. Khula Technology Solutions is the Wireless ISP (WISP). The aerial distance covered by the local loop is approximately 4.31 km. At the commencement of this study, the bandwidth of the local loop was 10 Mbps; the capacity was subsequently increased to 40 Mbps two months later. Figure 2 represents the satellite view of the local loop from the WISP to the community center. A clear line of sight exists between the two locations. The signal strength, shown in red shading, is sufficient to cater for the requirements of the local community.

Fig. 1. Terrain view of the local loop from WISP to the Community Centre (ADC)

Fig. 2. Satellite view of the local loop from WISP to the Community Centre

3.2 Local Area Network

For community networks, thin clients offer a viable and sustainable option. The deployment of thin clients is best achieved using the Linux Terminal Server Project (LTSP) architecture [19]. It is for this reason that the ADC community network uses a thin client solution. LTSP allows thin clients in a LAN to boot through the network from a single server. The communication between clients and the server is relayed through SSH tunnels, making transactions secure. In our initial LAN configuration, shown in Fig. 3, the LTSP server is responsible for booting the thin clients and for providing both DNS and DHCP services. We subsequently relocate the DNS service to the gateway router and analyze the traffic to establish the impact on bandwidth utilization. The community center has a computer laboratory where 10 thin clients with access to the Internet are situated. A wireless access point is available for staff and community members who prefer to use their own devices.


Fig. 3. Local Area Network Topology for the Community Centre

4 Network Usage

For this study, we collect packets from both the ingress gateway router interface and a mirroring port on the LAN switch using the ntop Deep Packet Inspection (nDPI) library [20]. While the router interface data provides traffic information to and from the Internet, the mirroring port on the switch gives an indication of the LAN traffic composition. We dump packets received from the two measurement points to disk for offline analysis. Initially, we configure the DNS service on the LTSP server with the default TTL value of 24 h [21] and collect data from the 1st of March 2021 to the 31st of March 2021. We subsequently relocate the DNS service to the gateway router and collect packets from the 1st of April to the 30th of June 2021, adopting the default DNS TTL of the MikroTik RouterBoard RB750GR3 hEx, 60 s [22].

4.1 DNS Hosted by LTSP Server

The weekly router interface traffic graph with DNS provisioned on the LTSP server is shown in Fig. 4. The maximum incoming traffic rate is 9.57 Mbps while the outgoing is 8.01 Mbps. The composition of the monthly traffic collected at the router interface is shown in Fig. 5 and highlighted in Table 1. The highest proportion of traffic traversing the community network is web based at 49.42%, followed closely by DNS with a contribution of 45.28%. The analysis of data captured from the switch mirroring port, shown in Fig. 6, indicates that the DNS component accounts for 0.38% of the total LAN traffic. The focus of the analysis is on DNS, and contributions to the LAN traffic from some applications are omitted.


Fig. 4. Router interface weekly traffic data collected on the 26th of March 2021

Fig. 5. Monthly traffic distribution for data collected from the 1st to 31st of March 2021

Table 1. Percentage distribution of the top five applications

Application       | Web   | DNS   | ICMP | NetBIOS | Unknown
Distribution (%)  | 49.42 | 45.28 | 4.63 | 0.03    | 0.64

Fig. 6. Analysis of data collected from switch mirroring port – Focus is on DNS


4.2 DNS Hosted by Gateway Router

The weekly router interface traffic with DNS provisioned at the default gateway router is shown in Fig. 7. The maximum incoming traffic rate is 7.07 Mbps while the outgoing is 4.29 Mbps.

Fig. 7. Router interface weekly traffic data collected on the 13th of April 2021

The composition of the traffic collected at the router interface for a period of three months is shown in Fig. 8 and highlighted in Table 2. The highest proportion of traffic traversing the community network is web based at 95.49%. DNS portion of the traffic is 4.11%.

Fig. 8. Composition of traffic collected from the 1st of April to 30th of June 2021

Table 2. Percentage distribution of the top five applications immediately after network upgrade

Application       | Web   | DNS  | ICMP | NetBIOS | Unknown
Distribution (%)  | 95.49 | 4.11 | 0.24 | 0.00    | 0.16

The analysis of data captured from the switch mirroring port, shown in Fig. 9, indicates that the DNS component accounts for 0.18% of the total LAN traffic. The focus of the analysis is on DNS, and contributions to the LAN traffic from some applications are omitted.

Fig. 9. Analysis of data collected from switch mirroring port – Focus is on DNS

4.3 Comparison

Migrating the DNS service from the LTSP server to the gateway router lowers the downlink bandwidth utilization by 26%, from 9.57 Mbps to 7.07 Mbps. The uplink bandwidth utilization reduces by 46%, from 8.01 Mbps to 4.29 Mbps. The DNS component of the traffic traversing the community network decreases from 45.28% to 4.11%. On the other hand, the web portion of the traffic increases significantly from 49.42% to 95.49%, as indicated in Fig. 10.

Fig. 10. Composition of traffic traversing the community network

When DNS is provisioned on the LTSP server, the DNS component of the LAN traffic is 0.38% whereas with the DNS hosted by the gateway router, the contribution of DNS to the LAN traffic is 0.18% reflecting a decrease of 52.6%.
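The reported percentages follow directly from the measured figures; a quick arithmetic check of the relative drops:

```python
def pct_drop(before, after):
    """Relative decrease, in percent, from `before` to `after`."""
    return (before - after) / before * 100

print(round(pct_drop(9.57, 7.07), 1))  # downlink Mbps: ~26.1 %
print(round(pct_drop(8.01, 4.29), 1))  # uplink Mbps: ~46.4 %
print(round(pct_drop(0.38, 0.18), 1))  # LAN DNS share: ~52.6 %
```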


5 Discussion

With DNS provisioned on the LTSP server, the DNS component of the traffic traversing the community network is relatively high. We lower the DNS component of the traffic to the WISP by relocating the DNS service to the gateway router; incidentally, this action also reduces the DNS component of the LAN traffic. The advantages the gateway router has over the LTSP server are threefold. Firstly, its DNS cache TTL is 60 s instead of the 24 h used on the LTSP server. Secondly, the gateway router is a hop closer to the WISP than the LTSP server and is therefore able to contact authoritative DNS servers faster. Lastly, the gateway router has fewer tasks to contend with than the LTSP server, which takes care of DHCP services and is also responsible for booting the thin clients. Reducing the DNS traffic component has the desirable effect of lowering the network bandwidth utilization, which is our main goal. Since the majority of community network users are interested largely in Web services, which require DNS resolution beforehand, lowering DNS traffic leads to an improved experience. As a result, a significant increase in the Web component of the traffic traversing the community network occurs; the implication is that the community network is able to accommodate more users. The composition of Web traffic observed after relocating the DNS service from the LTSP server to the gateway router is 95.49%. We intend to investigate how this can be lowered. One way of achieving this is by caching web pages at either the LTSP server or the gateway router and establishing the impact on network bandwidth utilization. Any solution that helps mitigate bandwidth consumption would be beneficial to the community and remains our number one objective.

6 Future Work

To ensure the community network service offering remains of acceptable quality, we intend to implement Web caching capabilities so as to reduce the Web traffic. We will then establish the impact of Web caching on bandwidth utilization. Having observed that network usage is lower when the DNS service is provisioned at the gateway router, better performance may accrue from configuring Web caching on this device as well. We intend to limit the maximum inbound bandwidth utilization to 80% of the available network capacity.

7 Conclusion

The change in DNS configuration resulted in lower bandwidth utilization and reduced the DNS component of the traffic traversing both the local loop to the WISP and the LAN. However, the large Web component of the local loop traffic is an issue that still needs to be resolved. Our next effort will therefore be devoted to the implementation of effective, efficient and sustainable Web traffic management. Our goal is to enforce prudent network bandwidth management based on the observed traffic profile, so as to avoid frequent bandwidth upgrade requirements.


Acknowledgements. The research reported in this paper was in part supported by Telkom SA and Infinera SA. We are thankful and appreciate their input which enabled successful completion of this work.

References
1. Johnson, D.L., Pejovic, V., Belding, E.M., van Stam, G.: Traffic characterization and internet usage in rural Africa. In: 20th International World Wide Web Conference, March/April 2011, Hyderabad, India (2011)
2. Okuthe, J.A., Terzoli, A.: Quantifying the shift in network usage upon bandwidth upgrade. In: Sheikh, Y.H., Rai, I.A., Bakar, A.D. (eds.) E-Infrastructure and E-Services for Developing Countries, pp. 340–354. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-06374-9_22
3. McQuillan, J.: The Linux terminal server project: thin clients and Linux. In: 4th Annual Linux Showcase and Conference, 10–14 October 2000, Atlanta, Georgia (2000)
4. Van der Toorn, O., Müller, M., Dickinson, S., Hesselman, C., Sperotto, A., van Rijswijk-Deij, R.: Addressing the challenges of modern DNS: a comprehensive tutorial. Comput. Sci. Rev. 45, 100469 (2022)
5. Mockapetris, P.: Domain Names - Implementation and Specification, RFC 1035, RFC Editor (1987). http://tools.ietf.org/html/rfc1035
6. Danzig, P., Obraczka, K., Kumar, A.: An analysis of wide-area name server traffic: a study of the Internet domain name system. In: Proceedings of ACM SIGCOMM, Baltimore, MD, August 1992, pp. 281–292 (1992)
7. Frazer, K.: NSFNET: a partnership for high-speed networking (1995). http://www.merit.edu/merit/archive/nsfnet/final.report/. Accessed Aug 2022
8. Thompson, K., Miller, G., Wilder, R.: Wide-area traffic patterns and characteristics. IEEE Netw. 11, 10–23 (1997)
9. Balakrishnan, H., Padmanabhan, V., Seshan, S., Stemm, M., Katz, R.: TCP behavior of a busy web server: analysis and improvements. In: Proceedings of IEEE INFOCOM, San Francisco, CA, vol. 1, pp. 252–262 (1998)
10. Koch, P., Larson, M., Hoffman, P.: Initializing a DNS Resolver with Priming Queries, RFC 8109, RFC Editor (2017). https://tools.ietf.org/html/rfc8109
11. Mockapetris, P., Dunlap, K.J.: Development of the domain name system. In: Symposium Proceedings on Communications Architectures and Protocols, pp. 123–133 (1988)
12. Gao, H., et al.: Reexamining DNS from a global recursive resolver perspective. IEEE/ACM Trans. Netw. 24(1), 43–57 (2016)
13. Muller, M., Moura, G.C.M., Schmidt, R. de O., Heidemann, J.: Recursives in the wild: engineering authoritative DNS servers. In: Proceedings of the ACM Internet Measurement Conference, London, United Kingdom, November 2017 (2017)
14. Callahan, T., Allman, M., Rabinovich, M.: On modern DNS behavior and properties. ACM SIGCOMM Comput. Commun. Rev. 43(3), 7–15 (2013)
15. Snoeren, A., Balakrishnan, H.: An end-to-end approach to host mobility. In: Proceedings of the 6th ACM MOBICOM, Boston, MA, pp. 155–166 (2000)
16. Jung, J., Sit, E., Balakrishnan, H., Morris, R.: DNS performance and the effectiveness of caching. IEEE/ACM Trans. Netw. 10(5), 589–603 (2002)
17. Dilley, J., Maggs, B., Parikh, J., Prokop, H., Sitaraman, R., Weihl, B.: Globally distributed content delivery. IEEE Internet Comput. 6(5), 50–58 (2002)
18. Adhikari, V.K., Jain, S., Chen, Y., Zhang, Z.L.: Vivisecting YouTube: an active measurement study. In: INFOCOM, pp. 2521–2525 (2012)

86

J. A. Okuthe and A. Terzoli

19. Martínez-Mateo, J., Munoz-Hernandez, S., Pérez-Rey, D.: A discussion of thin client technology for computer labs. In: International Multi-Conference on Innovative Developments in ICT, INNOV 2010, Athens, Greece, 29–31 July 2010 (2010) 20. Becchi, M., Franklin, M., Crowley, P.: A workload for evaluating deep packet inspection architectures. In: IEEE International Symposium on Workload Characterization, Seattle, Washington, USA, pp. 79–80 (2008) 21. IONOS: Digital Guide. DNS TTL best practices: Understanding and configuring DNS TTL (2022). https://www.ionos.com/digitalguide/server/configuration/understanding-and-config uring-dns-ttl/. Accessed Aug 2022 22. MikroTik Documentation: Dynamic DNS (2022). https://wiki.mikrotik.com/wiki/Manual:IP/ Cloud. Accessed Aug 2022

Mathematical Analysis of DDoS Attacks in SDN-Based 5G

B. O. S. BIAOU1(B), A. O. Oluwatope1, and B. S. Ogundare2

1 Comnet Laboratory, Department of Computer Science and Engineering, OAU, Ile-Ife, Nigeria
[email protected]
2 Department of Mathematics, Obafemi Awolowo University, Ile-Ife, Nigeria

Abstract. High-speed data transmission is what every internet user requests most. The implementation of the fifth-generation (5G) cellular network, carried out by Verizon in 2019, was meant to ameliorate and overcome some challenges of the 4G cellular networks. Software-defined networking (SDN), a leading networking paradigm of the hour, welcomed 5G to render its full performance. Unfortunately, the integration of SDN and 5G is seriously confronted by Distributed Denial of Service (DDoS) attacks day in and day out. The goal of this paper is to analyse DDoS attacks in SDN-based 5G technology. Hence, a VIS (Vulnerable-Infected-Secured) epidemic model is proposed to investigate the security issues posed by DDoS attacks in SDN-based 5G. In this paper, the mathematical formulation of the epidemic VIS model was developed. The equilibria points, the DDoS-free equilibrium, the basic reproduction number and the stabilities of the DDoS-free equilibrium were provided, with numerical simulations in MATLAB.

Keywords: SDN · 5G · DDoS · VIS epidemic model · SDN-based 5G

1 Introduction

5G telecom is picking up impressive speed in numerous areas like business, industry, IoT, e-Health, smart cities and academia [1,2]. In the middle of 2021, the Global mobile Suppliers Association (GSA) reported more than 800 5G devices by May, a figure that reached 822 a month later. In addition, it reported that about 443 telecom operators in 70 nations had invested in 5G [3]. With the spread of 5G, IDC projects the number of IoT tools connected per minute at 152,200 by 2025 [4]. Likewise, SDN allows network administrators to handle network services through the abstraction of lower-level functionality. [5] recognized SDN as an emerging and powerful technology that provides global visibility of the network by decoupling the control logic from the forwarding devices; the abstraction of network services in SDN

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 87–100, 2023. https://doi.org/10.1007/978-3-031-34896-9_7


architecture provides more flexibility for network administrators to execute various applications. The DDoS attack is one of the most dangerous attacks, smashing industries, businesses and private systems every day. The principal resolution of DDoS attacks is to interrupt services by flooding superfluous massive traffic over the network. DDoS attacks proceed in three major ways: volume-based attacks (in bits per second), protocol attacks (in packets per second) and application-layer attacks (in requests per second). The main goal of this research is to analyse mathematically the DDoS attacks in SDN-enabled 5G. The infection of nodes behaves in a similar way to a human epidemic such as the novel COVID-19. In SDN-enabled 5G, a single centralized controller can manage multiple forwarding nodes, which could lead to a faster propagation process of the DDoS attack. Therefore, an epidemic model can be adapted to SDN-enabled 5G since the forwarding nodes and the controllers themselves can be infected and become part of a botnet which could later be used to perform DDoS attacks. Consequently, a clear overview of the joint area of 5G and SDN will be presented, including the security issues and the damages of DDoS in SDN-enabled 5G. Moreover, the VIS model will be designed with the mathematical formulations to evaluate the proposed VIS epidemic model. Furthermore, the solutions and the stabilities of the model will be analysed for local and global asymptotic stability. Finally, numerical simulations of the proposed model will be carried out using MATLAB. This manuscript is arranged in seven sections. Sections one and two present the introduction and the state of the art respectively, while section three presents the mathematical modelling of the VIS model. Section four covers the solutions and the stabilities of the system.
The VIS model is simulated in section five, followed by the discussion in section six, while the manuscript is concluded in section seven.

2 Conceptualization of Related Works

In an Accenture survey of 2,600 5G consumers, 35% voiced concerns about the security of 5G, whereas 62% considered 5G an open door to more attacks [6]. The author in [7] presented 5G as a wide area of research. Even though the transition from 4G to 5G SA can extraordinarily boost the quantity of mobile devices and the data rate, it can at the same time raise DDoS threats. The more devices are connected, the more the network is exposed to threats. Besides, the multitude of links among the devices in 5G can pose significant security issues [8]. The author in [9] presented a DDoS attack detection scheme for 5G networks using statistical and higher-order statistical features, while the paper [10] presented DDoS detection and mitigation mechanisms in 5G mobile technologies. Nevertheless, according to researchers, DDoS in SDN can be caused by the centralized control [11], the limited size of flow tables in switches [12], the separation of planes [13] and even the single point of failure [14].


Although SDN-enabled 5G has modernized wireless networks, the numerous threats that come with it, as presented in Fig. 1, require tremendous exploration [15].

Fig. 1. Paradigm of SDN-enabled 5G [15].

Figure 1 illustrates the overall view of the integration of SDN and 5G. Several opportunities, including edge computing, satellite networks, health care, drones, aviation, mobile and many others, have taken off with the achievement of SDN-enabled 5G. On the other hand, several challenges, such as threats and attacks, have emerged in the paradigm of 5G and SDN. Delivering a high level of QoS in SDN for 5G networks is more perplexing and represents a genuine issue that should be tended to [16]. Furthermore, the bit error rate caused by network congestion and latencies in SDN-based 5G is raised in [17]. Hence, research has demonstrated that each layer of SDN-enabled 5G, including the connections between the layers, is exposed to threats and DDoS attacks, as shown in Fig. 2. [18] proposed a new method to equalize the processing burden among the dispersed controllers in SDN-based 5G networks. Figure 2 displays the architecture of SDN-based 5G and the security issues related to it. The application layer, including its services, communicates through the northbound interface (NBI) with the control layer, which is responsible for the management of the entire traffic flow. The southbound interface (SBI) connects the control layer to the infrastructure layer. In addition, the figure presents the security issues at each level of the architecture, including the two interfaces (NBI and SBI).


Fig. 2. SDN-based 5G: Architecture and Security Issues.

3 Mathematical Modeling of VIS System

3.1 Presentation of VIS Model

The epidemic model is known as one of the most illustrative mathematical tools for investigating viruses or infections in a static population [19]. The spread of the DDoS attack over time is analysed by a proposed epidemic model VIS (Vulnerable, Infected, and Secured). The vulnerable class of nodes (V) is the class that has never been attacked by DDoS, the infected class of nodes (I) has been attacked by DDoS, and the secured class of nodes (S) has recovered from the attack. A node can migrate from one compartment to another at different transmission rates: the rate β from V to I, the rate μ from V to S and the rate ϕ from I to S. A node in the vulnerable class can thus move directly to the secured class without being infected. The proposed VIS epidemic model is built with an open population: nodes freely enter the system at rate α, and there is no migration of nodes from secured back to vulnerable. The transition rate δ at which a node is disconnected from the system is the same for all classes, while a node in the infected class can additionally be disconnected at rate ω due to the DDoS attack.


Hence, the illustration of the proposed VIS epidemic model of the DDoS attack in SDN-based 5G technology is presented in Fig. 3.

Fig. 3. Representation of the Proposed VIS Model.

3.2 Mathematical Model of VIS

The matching ordinary differential equations (ODE) for the proposed VIS epidemic model are presented in the system (1):

dV/dt = α − βV I − μV − δV
dI/dt = βV I − ϕI − δI − ωI        (1)
dS/dt = ϕI + μV − δS

In the system (1), U represents the feasible region for the VIS model, given by U = {(V, I, S) ∈ R³ : V > 0, I ≥ 0, S ≥ 0}.
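For concreteness, the right-hand side of the system (1) can be transcribed directly into code. The paper's numerical work uses MATLAB; the snippet below is a minimal Python sketch (function and variable names are illustrative), evaluated at the initial state and parameter values used later in Sect. 5.

```python
def vis_rhs(V, I, S, alpha, beta, mu, delta, phi, omega):
    """Right-hand side of the VIS system (1)."""
    dV = alpha - beta * V * I - mu * V - delta * V       # vulnerable nodes
    dI = beta * V * I - phi * I - delta * I - omega * I  # infected nodes
    dS = phi * I + mu * V - delta * S                    # secured nodes
    return dV, dI, dS

# Derivatives at the initial state (V, I, S) = (0.1, 0.1, 0.1) of Sect. 5
print(vis_rhs(0.1, 0.1, 0.1,
              alpha=0.25, beta=0.15, mu=0.35, delta=0.2, phi=0.4, omega=0.6))
```

With these particular values dI/dt is already negative, since β is small relative to ϕ + δ + ω.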

4 Solutions and Stabilities Analysis of the Model

The proposed VIS model will be mathematically analysed at the local and global equilibrium in this section.

4.1 Equilibrium Points

Every equation in the system (1) is set equal to zero to find the equilibria points (E) of the VIS model. For E = (V, I, S) ∈ Ω and dV/dt = dI/dt = dS/dt = 0, we have:

α − βV I − μV − δV = 0
βV I − ϕI − δI − ωI = 0        (2)
ϕI + μV − δS = 0

4.2 DDoS-Free Equilibrium

At a point where there is no attack in the integration of SDN and 5G, that particular state is epidemically called the attack-free equilibrium point. Let E° = (V°, I°, S°) represent the attack-free equilibrium in R³. We initially obtain:

α − βV°I° − μV° − δV° = 0
βV°I° − ϕI° − δI° − ωI° = 0        (3)
ϕI° + μV° − δS° = 0

In absence of attack, I° = 0. Consequently, the system (3) becomes:

V° = α/(μ + δ)
I° = 0        (4)
S° = μα/(δ(μ + δ))

Therefore, the VIS model is free from attack at

E° = (V°, I°, S°) = (α/(μ + δ), 0, μα/(δ(μ + δ)))        (5)
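As a quick numerical sanity check, the coordinates given in (4) and (5) do annihilate the right-hand side of the system (1). The following Python snippet is illustrative only and uses the parameter values of the simulations in Sect. 5.

```python
alpha, beta, mu, delta, phi, omega = 0.25, 0.15, 0.35, 0.2, 0.4, 0.6

# Attack-free equilibrium from equations (4)-(5)
V0 = alpha / (mu + delta)                  # V° = α/(μ+δ)
I0 = 0.0                                   # I° = 0
S0 = mu * alpha / (delta * (mu + delta))   # S° = μα/(δ(μ+δ))

# Right-hand side of system (1) evaluated at E°
dV = alpha - beta * V0 * I0 - mu * V0 - delta * V0
dI = beta * V0 * I0 - phi * I0 - delta * I0 - omega * I0
dS = phi * I0 + mu * V0 - delta * S0

print(V0, S0)      # V° ≈ 0.4545, S° ≈ 0.7955
print(dV, dI, dS)  # numerically zero: E° is an equilibrium
```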

4.3 Local Stability of DDoS-Free Equilibrium

In this section, we derive sufficient conditions for the local stability of the attack-free equilibrium of the model.


We have:

dV/dt = α − βV I − μV − δV = 0
dI/dt = βV I − ϕI − δI − ωI = 0        (6)
dS/dt = ϕI + μV − δS = 0

Let J be the Jacobian matrix of the system (6), evaluated at E°:

       | −βI° − μ − δ    −βV°                 0  |
J° =   | βI°              βV° − ϕ − δ − ω     0  |
       | μ                ϕ                  −δ  |

By considering the equation (5), we get:

       | −μ − δ    −βα/(μ + δ)                 0  |
J° =   | 0          βα/(μ + δ) − ϕ − δ − ω     0  |
       | μ          ϕ                         −δ  |

With the 3 × 3 Jacobian matrix, three eigenvalues are obtained by solving the determinant |J° − ψI| = 0. Then:

ψ1 = −δ
ψ2 = −(μ + δ)        (7)
ψ3 = (βα − (μ + δ)(ϕ + δ + ω))/(μ + δ)

Therefore, the three eigenvalues are ψ1 = −δ, ψ2 = −(μ + δ) and ψ3 = (βα − (μ + δ)(ϕ + δ + ω))/(μ + δ).

4.4 Basic Reproduction Number (BRN)

The BRN is found from the eigenvalue that is not obviously negative out of the three eigenvalues (ψ1, ψ2 and ψ3); in this case, the BRN is determined from ψ3. We establish βα − (μ + δ)(ϕ + δ + ω) < 0, i.e. βα/((μ + δ)(ϕ + δ + ω)) < 1, where the left-hand side is the basic reproduction number. For the asymptotic behaviour of the system, there exists λ0 (∃λ0 > 0) such that

β(α/(μ + δ) + λ0) − (ϕ + δ + ω) < 0        (11)

From the system (1), we get

dV(t)/dt ≤ α − (μ + δ)V        (12)

so that, for some t0 > 0,

V(t) ≤ α/(μ + δ), ∀t ≥ t0        (13)

From the equation (13) and the second equation in the system (1), we obtain

dI(t)/dt ≤ β(α/(μ + δ) + λ0)I − (ϕ + δ + ω)I        (14)
         = [β(α/(μ + δ) + λ0) − (ϕ + δ + ω)]I(t)        (15)

From the Eqs. (11) and (15), we obtain

lim_{t→∞} I(t) = 0        (16)

From the equation (16) and the first equation in the system (1), we get

dV(t)/dt = α − (μ + δ)V        (17)

From the equation (17), we get

lim_{t→∞} V(t) = α/(μ + δ)        (18)

By considering Eqs. (18), (16) and the third equation in the system (1), it follows that S(t) is asymptotic to the following system:

dS(t)/dt = μα/(μ + δ) − δS        (19)

Then, by the theory of asymptotically autonomous semiflows,

lim_{t→∞} S(t) = μα/(δ(μ + δ))        (20)

By considering all the Eqs. (16), (18) and (20), the proof is concluded.

Fig. 4. General Stability of the VIS Model.

5 VIS Model Simulation

In this section, we evaluate the proposed VIS model to support the mathematical analysis of the DDoS attack in SDN-based 5G. The numerical simulations were carried out in MATLAB. The local stability of the attack-free equilibrium, as shown in Fig. 4, is considered over a time span of 25 min with the following numerical


values: V0 = 0.1; I0 = 0.1; S0 = 0.1; ω = 0.6; δ = 0.2; ϕ = 0.4; μ = 0.35; β = 0.15 and α = 0.25. We then analyse each class of nodes (V, I and S) under five different variations of the open population rate (α = 0.25, α = 0.3, α = 0.4, α = 0.5 and α = 0.7), as shown in Figs. 5, 6, 7 and 8. The variations of the open population rate (α) are related to the reality of SDN for 5G in terms of Ultra-Reliable Low Latency Communication (URLLC) services, which will increasingly affect the open population rate (α).
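Under these settings, both the stability condition of Sect. 4 and the limits established in Eqs. (16), (18) and (20) can be cross-checked numerically. The Python sketch below mirrors the MATLAB simulation with a simple forward-Euler scheme; the step size and time horizon are illustrative choices, not taken from the paper.

```python
V, I, S = 0.1, 0.1, 0.1
alpha, beta, mu, delta, phi, omega = 0.25, 0.15, 0.35, 0.2, 0.4, 0.6

# Basic reproduction number and the eigenvalues (7) of Sect. 4
R0 = beta * alpha / ((mu + delta) * (phi + delta + omega))
psi = (-delta,
       -(mu + delta),
       (beta * alpha - (mu + delta) * (phi + delta + omega)) / (mu + delta))
print(R0)   # ≈ 0.057 < 1: the attack-free equilibrium is stable
print(psi)  # all three eigenvalues are negative

# Forward-Euler integration of system (1)
dt, steps = 0.001, 100_000
for _ in range(steps):
    dV = alpha - beta * V * I - mu * V - delta * V
    dI = beta * V * I - phi * I - delta * I - omega * I
    dS = phi * I + mu * V - delta * S
    V, I, S = V + dt * dV, I + dt * dI, S + dt * dS

# The trajectory settles at the attack-free equilibrium (5)
print(V, I, S)  # ≈ (0.4545, 0.0, 0.7955)
```

A standard solver such as MATLAB's ode45 yields the same limiting values; the simple Euler scheme suffices here because the transition rates are small.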

6 Discussion

Figure 4 presents the graphical representation of the three classes of nodes (V, I and S). The class V, followed by the class I, increases over time while the class S decreases considerably. The gap observed between the three classes of nodes demonstrates the state of the nodes when the DDoS attack occurs in any of the SDN-based 5G devices.

Fig. 5. General Stability of the Vulnerable Class when α = 0.25, α = 0.3, α = 0.4, α = 0.5, α = 0.7.


Fig. 6. General Stability of the Infected Class when α = 0.25, α = 0.3, α = 0.4, α = 0.5, α = 0.7.

Fig. 7. General Stability of the Secured Class when α = 0.25, α = 0.3, α = 0.4, α = 0.5, α = 0.7.


Fig. 8. General Stability of the VIS Model when α = 0.25, α = 0.3, α = 0.5.

Figures 5 and 6 show the gaps by which the numbers of vulnerable nodes and infected nodes, respectively, increase under the five different variations of the open population rate (α = 0.25, α = 0.3, α = 0.4, α = 0.5 and α = 0.7) over time. Unlike the results shown in Figs. 5 and 6, Fig. 7 presents a different outcome: there is approximately no gap between the numbers of secured nodes even though the five different variations of the open population rate were applied over time. As well, Fig. 8 presents a general graphical view of the VIS model for three different variations of the open population rate (α = 0.25, α = 0.3 and α = 0.5) over time. The result shown in Fig. 8 compiles the results of Figs. 5, 6 and 7 to confirm the full analysis of the proposed VIS epidemic model in the paradigm of SDN-based 5G technology.

7 Conclusion

In this research, a VIS epidemic model was proposed to analyse the DDoS attacks behind the major security issues in SDN-based 5G technology. The mathematical formulation of the proposed model was numerically analysed using MATLAB. Furthermore, it was demonstrated that DDoS attacks atrociously disrupt the well-being of the targeted SDN-based 5G resource.


Besides, this manuscript shows clearly that, as great as the advantages of SDN-based 5G technology are, the exposure to DDoS attacks is just as high in the great playground opened by the implementation of 5G technology. Consequently, the proposed VIS epidemic model lays down the problematic of the security issues of the DDoS attack in an environment where 5G technology is implemented, referred to as SDN-based 5G. In the future, the research will look at the global stability of the DDoS-free equilibrium, the endemic equilibrium, the global Hopf bifurcation and the optimal control analysis of the delayed VIS model. As well, it will implement the proposed VIS model in a real telecom (5G) infrastructure under DDoS attacks.

Acknowledgement. The authors appreciate the support received from the Africa Centre of Excellence, ICT-Driven Knowledge Park (ACE OAK-Park), OAU, Ile-Ife in funding this research.

References
1. Santos, G.L., Endo, P.T., Sadok, D., Kelner, J.: When 5G meets deep learning: a systematic review. Algorithms 13(9), 208 (2020)
2. Dutta, A., Hammad, E.: 5G security challenges and opportunities: a system approach. In: 2020 IEEE 3rd 5G World Forum (5GWF), pp. 109–114. IEEE (2020)
3. GSA: 794 organisations are deploying LTE or 5G Private Mobile Networks worldwide. Press release published by the Global mobile Suppliers Association, under 5G, Manufacturing, Market Research/Analysis, Small Cells, 15 June 2022. https://www.cambridgewireless.co.uk/news/2022/jun/15/gsa-794organisations-are-deploying-lte-or-5g-priv/
4. Rosen, M.: Driving the digital agenda requires strategic architecture (2015)
5. Aujla, G.S., Singh, M., Bose, A., Kumar, N., Han, G., Buyya, R.: BlockSDN: blockchain-as-a-service for software defined networking in smart city applications. IEEE Netw. 34(2), 83–91 (2020)
6. Accelerating the 5G future of business, 25 February 2020. https://www.accenture.com/us-en/insights/communications-media/accelerating-5g-future-business
7. Dzik, S.: COVID-19 convalescent plasma: now is the time for better science. Transfus. Med. Rev. 34(3), 141 (2020)
8. Gündoğan, C., Amsüss, C., Schmidt, T.C., Wählisch, M.: Content object security in the internet of things: challenges, prospects, and emerging solutions. IEEE Trans. Netw. Serv. Manage. 19(1), 538–553 (2021)
9. Dahiya, D.: DDoS attacks detection in 5G networks: hybrid model with statistical and higher-order statistical features. Cybern. Syst. (2022). https://doi.org/10.1080/01969722.2022.2122002
10. Dahiya, D.: DDoS attacks detection in 5G networks: hybrid model with statistical and higher-order statistical features. Cybern. Syst. 1–26 (2022)
11. Wu, P., Yao, L., Lin, C., Wu, G., Obaidat, M.S.: FMD: a DoS mitigation scheme based on flow migration in software-defined networking. Int. J. Commun. Syst. 31(9), e3543 (2018)
12. Durner, R., Lorenz, C., Wiedemann, M., Kellerer, W.: Detecting and mitigating denial of service attacks against the data plane in software defined networks. In: 2017 IEEE Conference on Network Softwarization (NetSoft), pp. 1–6. IEEE, July 2017
13. Mohammadi, R., Javidan, R., Conti, M.: SLICOTS: an SDN-based lightweight countermeasure for TCP SYN flooding attacks. IEEE Trans. Netw. Serv. Manage. 14(2), 487–497 (2017)
14. Wang, T., Chen, H., Qi, C.: MinDoS: a priority-based SDN safe-guard architecture for DoS attacks. IEICE Trans. Inf. Syst. 101(10), 2458–2464 (2018)
15. Kazmi, S.H.A., Qamar, F., Hassan, R., Nisar, K., Chowdhry, B.S.: Survey on joint paradigm of 5G and SDN emerging mobile technologies: architecture, security, challenges and research directions (2022)
16. Chen, M., Qian, Y., Mao, S., Tang, W., Yang, X.: Software-defined mobile networks security. Mob. Netw. Appl. 21(5), 729–743 (2016)
17. Duan, X., Liu, Y., Wang, X.: SDN enabled 5G-VANET: adaptive vehicle clustering and beamformed transmission for aggregated traffic. IEEE Commun. Mag. 55(7), 120–127 (2017)
18. Sheibani, M., Konur, S., Awan, I.: DDoS attack detection and mitigation in software-defined networking-based 5G mobile networks with multiple controllers. In: 2022 9th International Conference on Future Internet of Things and Cloud (FiCloud), pp. 32–39. IEEE, August 2022
19. Yang, L., et al.: Analysis of psychological state and clinical psychological intervention model of patients with COVID-19. MedRxiv (2020)

A 5G-Enabled E-Infrastructure for Multipoint Videoconferencing in Higher Education Institutions of Burkina Faso

Bernard Armel Sanou, Abdoul-Hadi Konfé(B), and Pasteur Poda

Laboratoire d'Algèbre, de Mathématiques Discrètes et d'Informatique (LAMDI), Université Nazi BONI, Bobo-Dioulasso, Burkina Faso
[email protected], {ahkonfe,pasteur.poda}@u-naziboni.bf

Abstract. Many actors in public institutions of higher education in Burkina Faso and around the world have used videoconferencing applications during the Covid-19 pandemic to conduct courses and pedagogical activities, and to participate in conferences and many other scientific activities. However, in the context of Burkina Faso, where most actors use mobile Internet, the comfort of use of the different videoconferencing applications is often not satisfactory due to connectivity issues. In this paper, we propose a cutting-edge e-infrastructure to improve the usability of multipoint videoconferencing while preserving the mobility of users. The proposed e-infrastructure consists of a logical extension of the National Research and Education Network (FasoREN) access network such that multipoint videoconference calls initiated within a 5G network are routed towards FasoREN. It can a priori be applied to any network of the FasoREN type, and in particular to those whose actors are more inclined to use the mobile Internet.

Keywords: mobile Internet · 5G · multipoint videoconferencing · web videoconferencing · higher education · National Research and Education Network

1 Introduction

Classically, point-to-point videoconferencing required only a few hundred kbps, typically over a dedicated link. However, this traditional mode of use has the disadvantage of being doubly restrictive: not only does it require adequate infrastructure and institutional equipment, but also physical presence in the room dedicated to this service. With the development of mobile Internet, but also because of the coronavirus disease (covid-19), we have witnessed an intensive use and an accelerated popularization of multipoint videoconferencing applications (Zoom, Jitsi Meet, Google Meet, ...). In higher education in Burkina Faso, these multipoint videoconferencing tools have been widely used to mitigate the effects of

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 101–112, 2023. https://doi.org/10.1007/978-3-031-34896-9_8


covid-19 on academic calendars. The work of Pilabré et al. [1] can serve as an illustration. Although no formal survey data are available, it is safe to assume that multipoint videoconferencing is becoming a permanent fixture in the pedagogical system and materials of higher education in Burkina Faso, and not only for distance learning but also for all types of meetings and academic activities involving geographically distant actors. On the other hand, a serious constraint remains the low speed of available connections. Indeed, public institutions of higher education in Burkina Faso do not have institutional access to the Internet. The various actors of the Burkinabè university mostly resort to individualized access to the 3G/4G mobile Internet offered by telecommunications operators. This ultimate recourse to mobile Internet has very quickly shown its limits due to the low traffic capacity of these networks in Mbps per km². Throughout the world, the answer to the problem of high-speed Internet connectivity for the world of education and research lies in the NREN (National Research and Education Network) policy. NRENs are grouped together to form continental-scale networks: for Europe, there is GEANT (Gigabit European Advanced Network), grouping together 39 NRENs with a backbone bandwidth that was to be upgraded to 500 Gbps in 2020 [2], and for Africa, there is AFRICACONNECT, with 29 NRENs to be connected [3]. AFRICACONNECT is made up of three regional groups, namely WACREN (West and Central African Research and Education Network) for the NRENs of Western and Central Africa, UbuntuNet Alliance for the NRENs of Eastern and Southern Africa, and ASREN (Arab States Research and Education Network) for the Middle East and North Africa [4]. Burkina Faso is a member of WACREN with its NREN called FasoREN. FasoREN, like many other neighboring NRENs in the WACREN zone, is not yet operational but still at the project stage [4].
Based on data from a consultant's report1, Bashir [4] reported that bandwidth requirements per student and per device are in the range of 600 kbps to 8 Mbps for various forms of videoconferencing and webinar services. Focusing on the needs of WACREN users, the work of Kashefi et al. [5] found that over 87% of those surveyed expressed a need to access a number of services including videoconferencing. The same study reported that 96% of the respondents would like to access services using laptops and 81% using mobile devices. There is an extensive literature on the use of videoconferencing systems for higher education around the world; among the most recent studies, we mention a few. Indeed, authors have been interested in evaluating how students perceive a course delivered via an interactive videoconferencing system and whether they are satisfied [6]. Other studies [7,8] have focused on the effectiveness of videoconferencing in education, to find out the factors that can influence the learning experience. Al-Samaraie [9] was interested in comparing the three types of videoconferencing systems, namely desktop videoconferencing, interactive videoconferencing and web videoconferencing (multipoint videoconferencing). To do so, he based his work on the results of various previous studies on

1 Source: data from Fox and Jones (2016); Brookings 2016 (https://www.brookings.edu); Arney, 2019.


the use of videoconferencing in higher education. These videoconferencing systems have been analyzed in relation to the different approaches to knowledge acquisition (constructivism and cognitivism), and their importance in terms of learning opportunities, benefits but also challenges for higher education actors has been highlighted. More recently, an evaluation of the quality of the learning and teaching experience for web videoconferencing systems such as Zoom, Skype, Teams and WhatsApp has been made [10]. From these existing works, we note the relevance of using videoconferencing in higher education and the need to build networks dedicated to the world of education and research. It is of definite interest that the technologies for implementing NRENs be constantly upgraded. There have been no specific studies of extended NREN architectures that account for user mobility in an ecosystem of heterogeneous digital networks where mobile Internet use predominates. In a higher-education context such as that of Burkina Faso, marked by delays in concretizing its FasoREN project, dominated by recourse to individualized access to the mobile Internet, and where the use of videoconferencing for pedagogical and research activities now occupies a non-negligible place, the present proposal is intended as a forward-looking response: an innovative digital network architecture able to allow a more comfortable use of multipoint videoconferencing via the mobile Internet for the institutions of higher education and research in Burkina Faso. The objective here is the design of a cutting-edge e-infrastructure architecture optimized for the use of multipoint videoconferencing for academic activities via the mobile Internet. By leveraging the immense potential of the 5G mobile network and opting for a dedicated NREN-type network, it is possible to design an e-infrastructure that optimizes the QoS required by multipoint videoconferencing applications.

In the rest of this paper, we discuss the theoretical support of the proposed e-infrastructure (Sect. 2), then present and discuss the proposed e-infrastructure (Sect. 3), and end with the conclusion (Sect. 4).

2 Background Technologies and Protocols

2.1 3GPP 5th Generation Mobile Network

After 4G, 5G is the fifth generation of the 3GPP mobile network standard. It is characterized by impressive technological advances over its predecessor. Compared to 4G with a bandwidth of 100 MHz, 5G comes with a bandwidth of 1–2 GHz and promises speeds up to 10 Gbps in uplink and 20 Gbps in downlink. A ten times lower latency of 1 ms is promised. In terms of density of connected objects, 5G is ten times denser than 4G, with 10^6 connected objects per km². Its traffic capacity per area is a hundred times greater, at 10 Mbps per m² [11,12].


5G also comes with three major sets of use cases. The first is mMTC (massive machine-type communications), the second provides more mobile broadband Internet (eMBB: enhanced mobile broadband), and the third is referred to as URLLC (ultra-reliable low latency communications). This third set supports isochronous applications such as videoconferencing. 5G offers a variety of connectivity services, including the PDU (protocol data unit) connectivity service. Through this service, the 5G network allows a mobile terminal to become a member of a data network (e.g., an NREN). A PDU session, an association between a mobile terminal and a data network, is then established to allow the exchange of data by PDU transfer [13].

2.2 Physical and MAC Layers Technologies

In the era of 5G, suitable technology solutions must be offered to meet the objectives of high bandwidth and low latency. In this sense, cable suppliers are much more interested in innovative fiber optic cables for both core and access networks. Optical fiber is a physical medium made of glass or plastic wires that uses light to transport information over very long distances. It allows a faster connection speed, a very high throughput and a lower latency. ITU-T recommendations describe the geometrical, mechanical and transmission properties of multimode and single-mode optical cables and fibers. The current ITU-T recommendations are G.651.1, G.652, G.653, G.654, G.655, G.656 and G.657 [14]. Among them, three versions are proposed by Corning [15], the world leader in the fiber optics market. They are specified for the installation and use of innovative optical fibers. They include ITU-T G.652.D for SMF-28 Ultra fiber, ITU-T G.654.E for TXF-type fiber and ITU-T G.657.A1 for SMF-28 Ultra 200 fiber. ITU-T G.652 describes the characteristics of a single-mode optical fiber and cable, with zero-dispersion wavelength in the vicinity of 1310 nm, and 1550 nm when optimized [16]. The ITU-T G.652.D version is around 1310 nm. The ITU-T G.654 recommendation describes the characteristics of a single-mode optical fiber and cable with a shifted cut-off, where the zero-dispersion wavelength is in the vicinity of 1300 nm and the cut-off wavelength is shifted to the vicinity of 1550 nm [17]. Optimized, the interval of 1530 nm to 1625 nm can be reached. The ITU-T G.654.E version is around 1550 nm. ITU-T G.657 describes the characteristics of a single-mode optical fiber and cable insensitive to bending losses, with a zero-dispersion wavelength between 1260 nm and 1625 nm [18]. The ITU-T G.657.A1 version is around 1310 nm. Recommendation G.654.E is designed for high-speed, long-distance submarine and terrestrial optical networks.
It is optimized for next-generation ultra-high-speed optical transmissions and is therefore an ideal candidate for backbone networks. ITU-T G.652.D and ITU-T G.657.A1 are used for broadband interconnection of buildings and homes. ITU-T G.657 is the latest of the single-mode optical fiber recommendations and specifies the characteristics of bend-insensitive single-mode optical fibers. It is therefore the ideal choice for connecting access networks through its ITU-T G.657.A1 version.
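As a rough illustration of the latency stakes behind these fiber choices, one-way propagation delay can be estimated from the group refractive index of the fiber. The value of about 1.468 used below is a typical figure for standard single-mode fiber; the exact value depends on the fiber type and wavelength.

```python
# One-way propagation delay over optical fiber.
# Assumption: a typical group refractive index of ~1.468 for
# standard single-mode fiber at 1550 nm (varies per fiber type).
C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s
GROUP_INDEX = 1.468          # assumed group index of the fiber

def fiber_delay_ms(distance_km: float) -> float:
    """Return the one-way propagation delay in milliseconds."""
    speed_km_s = C_VACUUM_KM_S / GROUP_INDEX  # ~204,000 km/s in fiber
    return distance_km / speed_km_s * 1000.0

if __name__ == "__main__":
    # e.g. a 500 km backbone link adds about 2.45 ms one way
    print(round(fiber_delay_ms(500), 2))
```

This kind of back-of-the-envelope figure shows why low-latency use cases also depend on shortening the path to servers, not only on faster links.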

A 5G-Enabled E-Infrastructure for Multipoint Videoconferencing

2.3 Upper Layer Protocols

Several protocols are associated with each layer of the standard OSI and TCP/IP models. Here we focus on the upper-layer protocols involved in the operation of videoconferencing. At the Internet layer, the IPv6 protocol, specified in RFC 8200, is designed to allow extended addressing capability with a 128-bit format, improve support for extensions and options, and increase flow labeling, authentication and privacy capabilities [19]. At the transport layer, the main protocols are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). The TCP protocol, specified in RFC 9293, allows two hosts to connect and exchange data [20]. It is a reliable protocol suitable for applications with connection-oriented association semantics. The UDP protocol, specified in RFC 768, allows the transmission of data in the form of datagrams without a connection [21]. This protocol provides a minimal transport, unreliable compared to TCP. At the application layer, the Session Initiation Protocol (SIP), specified in RFC 3261, is a signaling protocol used to create, modify and terminate sessions with one or more participants [22]. These sessions include voice and video calls over IP. HTTP (HyperText Transfer Protocol) is the communication protocol between a client and a server on the World Wide Web. Its variant HTTPS (HyperText Transfer Protocol Secure), specified by RFC 2818, adds a layer of encryption with the TLS (Transport Layer Security) or SSL (Secure Sockets Layer) protocols in order to offer more secure exchanges on the Internet [23].
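As a minimal sketch of the connectionless transport described above, the following sends a single UDP datagram over the loopback interface with no prior handshake; the payload is illustrative only.

```python
# Minimal illustration of UDP's connectionless datagram transport (RFC 768):
# a datagram is sent to a loopback socket without any connection setup.
import socket

def udp_echo_once(payload: bytes) -> bytes:
    """Send one datagram to a local receiver socket and return what arrives."""
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))          # the OS picks a free port
    addr = recv.getsockname()

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(payload, addr)           # no handshake, no delivery guarantee

    data, _ = recv.recvfrom(2048)
    send.close()
    recv.close()
    return data

if __name__ == "__main__":
    print(udp_echo_once(b"voice/video media often rides on datagrams like this"))
```

Real-time media commonly prefers this kind of transport: losing a datagram is cheaper than the retransmission delays a reliable protocol such as TCP would impose.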

2.4 High Availability and Computing Environments

A data center is an infrastructure based on a network of computers and storage spaces. This storage space is used to organize, process, store and warehouse large amounts of data. A data center comprises a set of servers set up by an organization for its own use or made available to other organizations or individuals as a service. In the latter case, it is called a cloud, and one pays to benefit from these services. Modern networks tend to bring data centers closer to the users; this trend allows for better performance. Cloud computing is a technology and a business model that offers computing resources to individuals or organizations on demand. These shared computing resources can be servers, storage, applications, etc. It is an infrastructure that enables the main digital trends, such as mobile computing, the Internet of Things, big data and artificial intelligence, for a successful digital transition [24]. There are three types of clouds: the public cloud, accessible via the Internet; the private or enterprise cloud, accessible in a private network; and the hybrid or intermediate cloud, which is a combination of the two previous types.


B. A. Sanou et al.

Multi-access Edge Computing (MEC), formerly known as Mobile Edge Computing, is a standard of the European Telecommunications Standards Institute (ETSI). ETSI GS MEC 003 defines its reference architecture [25]. The standard ETSI GR MEC 031 V2.1.1 (2020-10) covers the deployment of MEC technology in 5G networks. When deployed in a 5G network, MEC implements the functional entity called Application Function (AF) of the 5G network [26]. As such, MEC can ensure, in cooperation with the PCF (Policy Control Function) entity, particular management of the data flows of an application session, e.g. for multipoint videoconference calls. By its principle, MEC acts as an enabler for 5G URLLC use cases: it brings the storage and computing power of the network closer to user equipment [27]. Thus, it can be deployed in such a manner that its resources share the same location with either the base stations or the backhaul equipment of the 5G network [28]. This technology makes it possible for 5G operators to open their RAN (Radio Access Network) edge to partners (e.g. universities), giving them the opportunity to extend their services to the air interface part of the 5G network [29].

3 Proposed E-Infrastructure

3.1 Architecture and Operation

The protocol stack of the e-infrastructure as seen from the NREN end is shown in Fig. 1. Since our objective is to propose a cutting-edge e-infrastructure architecture, a technological upgrade of the FasoREN network should be made to meet this protocol stack. The backbone network should be upgraded with TXF (Corning distribution) fiber according to ITU-T recommendation G.654.E. For the upgrade of the access network, which will principally serve university campuses and halls of residence, SMF-28 Ultra 200 fiber of ITU-T recommendation G.657.A1 may be selected. Mainly made of FTTH (fiber to the home), the access network ecosystem should be completed with WiFi wireless access to take into account the mobility of users. The proposed e-infrastructure is shown in the diagram in Fig. 2. FasoREN is provided with its own videoconferencing server installed on a virtualized data center. We propose the installation of open-source, free and secure videoconferencing server software such as Jitsi Meet [30]. The choice of a solution like Jitsi Meet aims at satisfying cost requirements. As this is an academic environment, it also makes it possible to carry out additional developments on the software through academic projects in order to meet specific needs. An interconnection of FasoREN to WACREN takes place in the proposed e-infrastructure in order to align with the general spirit of NRENs around the world. The proposed architecture also interconnects FasoREN to a 5G network, which integrates MEC servers.


Fig. 1. Protocol stack of the FasoREN part of the e-infrastructure.

Now let us explain how this e-infrastructure will work when a multipoint videoconferencing application session is in progress with geographically distributed actors, some using the 5G network and others on university campuses, for example. When a multipoint videoconferencing call is made by a user (a mobile terminal) on the 5G network air interface, the call is supported by the network. The network implements the PDU connectivity service to configure the mobile terminal so that it is identified according to the addressing scheme implemented in FasoREN. A PDU session is then established and the mobile terminal can transfer the PDUs of the call to the videoconferencing server of FasoREN, first via a gNodeB base station and then via one or more UPFs (User Plane Function) in the 5G network. The UPF is a node in the 5G network responsible for routing user data. The multipoint videoconferencing service requires a guaranteed, near-constant jitter to provide a satisfactory user experience. The traffic capacity per m² offered by the 5G network is considerably larger than that of 4G networks,


Fig. 2. Proposed e-infrastructure for multipoint videoconferencing within FasoREN.

so that PDUs could simply be forwarded following the PDU connectivity scheme. However, connectivity failures can occur for a variety of reasons and impact the overall quality of the call in progress. To maximize the likelihood of a good-quality videoconference call, the MEC server steps in to ensure that the PDUs of the videoconference call are treated appropriately according to the required quality of service. Multipoint videoconference calls initiated from FasoREN access networks (e.g. on a university campus) are routed directly to the videoconference server using IP packet transport techniques, i.e., successive hops.
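The near-constant jitter requirement mentioned above can be made concrete with the interarrival jitter estimator that RTP endpoints maintain per RFC 3550: a running mean of the deviation between packet spacing at the sender and at the receiver. The timestamps in this sketch are hypothetical.

```python
# RFC 3550 interarrival jitter estimator: the smoothed mean deviation
# between packet spacing at the sender and at the receiver.
def rtp_jitter(send_times, recv_times):
    """Return the running jitter estimates for a packet sequence.

    send_times/recv_times are timestamps in the same unit (e.g. ms).
    """
    jitter = 0.0
    estimates = []
    for i in range(1, len(send_times)):
        # D(i-1, i): change in transit spacing between consecutive packets
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0   # smoothing gain of 1/16 per RFC 3550
        estimates.append(jitter)
    return estimates

if __name__ == "__main__":
    sent = [0, 20, 40, 60, 80]           # packets paced every 20 ms
    received = [5, 26, 44, 69, 85]       # arrival times with network variation
    print([round(j, 3) for j in rtp_jitter(sent, received)])
```

A videoconferencing service aims to keep this estimate low and stable; a rising value signals the kind of connectivity degradation the MEC server is meant to mitigate.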

3.2 Discussion

Let us recall that the purpose of this paper is to propose a cutting-edge digital network architecture to improve the comfort of use of multipoint videoconferencing via mobile Internet for higher education and research actors in Burkina Faso. The approach has been to draw on the technologies and protocols of the TCP/IP stack as well as the opportunities offered by 5G. An e-infrastructure is ultimately proposed. It includes two digital networks that are distinct and autonomous in terms of their management: the 5G mobile network of a telecom operator and the FasoREN network. The 5G network provides anytime,


anywhere mobile Internet access for users in the higher education and research community. It offers a PDU connectivity service allowing 5G mobile Internet users to be redirected to the FasoREN network dedicated to them for their academic activities. The choice of the MEC solution maximizes the convenience of using the multipoint videoconferencing service. An upgrade of the FasoREN network is proposed, consisting of all-fiber connectivity to the end user with additional WiFi access options. The proposed e-infrastructure converges, in its FasoREN part, with technical studies [2,31] that address the backbone topology and have proposed technological upgrades. It differs, however, in that an interconnection with the 5G network is proposed to extend NREN access to mobile Internet users. The proposed e-infrastructure therefore preserves user mobility in an increasingly heterogeneous digital ecosystem. Seen from the point of view of the multipoint videoconferencing service, it acts as an enabler to further boost the adoption of this service for academic activities; existing studies dealing with its use, e.g. [7,10], have largely demonstrated its interest and relevance. At this stage of the study, it is clear that the proposed e-infrastructure is technically capable of ensuring the mobility of users and allowing them more comfort in using the videoconferencing service. The geographical proximity of the videoconferencing server to the users, coupled with MEC technology, is a major asset for guaranteeing a sufficiently constant jitter even when the number of participants in a multipoint videoconferencing session increases. On the other hand, at this stage of the study, it is premature to provide figures on, for example, the maximum number of participants that maintains a comfort level considered good, or the evolution of certain QoS parameters as this number grows.
A next step in this study should therefore be to design a simulation model to answer these questions. The proposed e-infrastructure is based on the FasoREN user profile. Nevertheless, it has the advantage of being replicable for any NREN, especially in countries where the higher education and research actors have a similar user profile. This advantage results from the universality of the selected technologies. The same analysis applies to the videoconferencing service on which we focused in this study; in other words, any other type of digital service with the same QoS requirements can be supported. Ultimately, this study reinforces the relevance of national and regional policies to build dedicated networks for education and research. It offers an interesting opening of existing or future NRENs to the wealth of digital services and use cases that 5G provides. It could also be a harbinger of future studies re-evaluating whether research and education actors are better served by this digital environment than by lecture halls and offices for face-to-face interaction.

4 Conclusion

In the context of Burkina Faso, where actors of higher education and research have widely adopted multipoint videoconferencing for various academic and scientific activities and where mobile Internet is the most widely used access mode, the purpose of this paper was to design a cutting-edge, optimized e-infrastructure to improve the comfort of use of videoconferencing. We have proposed a digital network architecture consisting of two components: the first is an upgrade of the FasoREN backbone and access network with, at its core, a videoconferencing server deployed in a cloud environment; the second is a 5G operator network integrating MEC servers. The 5G network, through its PDU connectivity service, allows the transfer of user data from the 5G network to the FasoREN network, thus ensuring an extension of the FasoREN access network. The proposed architecture, by geographically bringing the videoconference server closer to the users and implementing an enabling technology favorable to URLLC-type use cases, effectively improves the user experience of the videoconference service. This e-infrastructure provides an ideal framework for the intensive use of information and communication technologies in education and research. Thus, it will undoubtedly reduce the existing digital divide and accelerate the digital transition, particularly in the education and research sector. Future work in the framework of this study will focus on proposing and implementing a simulation protocol to quantitatively evaluate how well the proposed e-infrastructure performs.

References

1. Pilabré, A.H., Ngangue, P., Barro, A., Pafadnam, Y.: An imperative for the national public health school in Burkina Faso to promote the use of information and communication technologies in education during the COVID-19 pandemic: critical analysis. JMIR Med. Educ. 7(2), e27169 (2021). https://doi.org/10.2196/27169
2. Castillo-Velazquez, J.I., Revilla-Melo, L.C.: Management emulation of advanced network backbones in Africa: 2019 topology. In: Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–4 (2020). https://doi.org/10.1109/CCECE47787.2020.9255779
3. AfricaConnect2 map, AfricaConnect2 project, TENET (2015)
4. Bashir, S.: Connecting Africa's Universities to Affordable High-Speed Broadband Internet. World Bank, Washington, DC (2020). https://openknowledge.worldbank.org/handle/10986/34955
5. Kashefi, A., et al.: User requirements for national research and education networks for research in West and Central Africa. Inf. Dev. 35(4), 575–591 (2018). https://doi.org/10.1177/0266666918774113
6. Klibanov, O.M., Dolder, C., Anderson, K., Kehr, H.A., Woods, J.A.: Impact of distance education via interactive videoconferencing on students' course performance and satisfaction. Adv. Physiol. Educ. 42(1), 21–25 (2018). https://doi.org/10.1152/advan.00113.2016


7. Ghazal, S., Al-Samarraie, H., Aldowah, H.: "I am Still Learning": modeling LMS critical success factors for promoting students' experience and satisfaction in a blended learning environment. IEEE Access 6, 77179–77201 (2018). https://doi.org/10.1109/ACCESS.2018.2879677
8. Malinovski, T., Vasileva-Stojanovska, T., Trajkovik, V., Caporali, E.: The educational use of videoconferencing for extending learning opportunities. In: Video Conference as a Tool for Higher Education: The TEMPUS ViCES Experience, pp. 37–51. Firenze University Press (2010)
9. Al-Samarraie, H.: A scoping review of videoconferencing systems in higher education: learning paradigms, opportunities, and challenges. Int. Rev. Res. Open Distrib. Learn. 20(3) (2019). https://doi.org/10.19173/irrodl.v20i4.4037
10. Correia, A.P., Liu, C., Xu, F.: Evaluating videoconferencing systems for the quality of the educational experience. Distance Educ. 41(4), 429–452 (2020). https://doi.org/10.1080/01587919.2020.1821607
11. Eluwole, O.T., Udoh, N., Ojo, M., Okoro, C., Akinyoade, A.J.: From 1G to 5G, what next? IAENG Int. J. Comput. Sci. 45(3) (2018)
12. Imoize, A.L., Adedeji, O., Tandiya, N., Shetty, S.: 6G enabled smart infrastructure for sustainable society: opportunities, challenges, and research roadmap. Sensors 21(5), 1709 (2021). https://doi.org/10.3390/S21051709
13. Lagrange, X.: "Explorer la 5G". Institut Mines Telecom (2020). https://www.fun-mooc.fr/fr/cours/explorer-la-5g/
14. ITU: Transmission systems and media, digital systems and networks. https://www.itu.int/rec/T-REC-G
15. Corning. https://www.corning.com/
16. ITU: G.652: Characteristics of a single-mode optical fibre and cable. https://www.itu.int/rec/T-REC-G.652-201611-I/en
17. ITU: G.654: Characteristics of a cut-off shifted single-mode optical fibre and cable. https://www.itu.int/rec/T-REC-G.654-202003-I/en
18. ITU: G.657: Characteristics of a bending-loss insensitive single mode optical fibre and cable. https://www.itu.int/rec/T-REC-G.657-201611-I/en
19. RFC 8200 - Internet Protocol, Version 6 (IPv6). https://datatracker.ietf.org/doc/rfc8200/
20. RFC 9293 - Transmission Control Protocol (TCP). https://datatracker.ietf.org/doc/html/rfc9293
21. RFC 8085 - UDP Usage Guidelines. https://datatracker.ietf.org/doc/rfc8085
22. RFC 3261 - SIP: Session Initiation Protocol. https://www.rfc-editor.org/rfc/rfc3261#section-1
23. RFC 2818 - HTTP Over TLS. https://www.rfc-editor.org/rfc/rfc2818
24. Sunyaev, A.: Cloud computing. In: Internet Computing, pp. 195–236. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-34957-8-7
25. ETSI: ETSI GS MEC 003 V2.2.1 (2020-12): Multi-access Edge Computing (MEC); Framework and Reference Architecture. https://fr.scribd.com/document/522784055/gs-MEC003v020201p
26. ETSI: ETSI GR MEC 031 V2.1.1 (2020-10): Multi-access Edge Computing (MEC); MEC 5G Integration. https://www.etsi.org/deliver/etsi-gr/MEC/001099/031/02.01.0-60/gr-MEC031v020101p.pdf
27. Wang, X., Ji, Y., Zhang, J., Bai, L., Zhang, M.: Low-latency oriented network planning for MEC-enabled WDM-PON based fiber-wireless access networks. IEEE Access 7, 183383–183395 (2019). https://doi.org/10.1109/ACCESS.2019.2926795


28. Habibi, M.A., Nasimi, M., Han, B., Schotten, H.D.: A comprehensive survey of RAN architectures toward 5G mobile communication system. IEEE Access 7, 70371–70421 (2019). https://doi.org/10.1109/ACCESS.2019.2919657
29. ETSI: Multi-access Edge Computing. https://www.etsi.org/technologies/multi-access-edge-computing
30. Jitsi: Free Video Conferencing Software for Web & Mobile. https://jitsi.org/
31. Palanga, E.T.G., Sagna, K., Kodjo, K.M., Kondo-Adi, A.M., Bédja, K.S.: Architecture of an education and research network: case of TogoRER. Am. J. Mod. Phys. 8(1), 5–13 (2019). https://doi.org/10.11648/j.ajmp.20190801.12

E-Services (Farming)

Virtual Fences: A Systematic Literature Review

Mahamat Abdouna1, Daouda Ahmat1,2(B), and Tegawendé F. Bissyandé3,4

1 Virtual University of Chad, N'Djamena BP: 5711, Chad
{mahamat.abdouna,daouda.ahmat}@uvt.td
2 University of N'Djamena, 1117 N'Djamena, Chad
3 University of Luxembourg, Luxembourg 4365, Luxembourg
[email protected]
4 University Ouaga I Pr Joseph Ki-Zerbo, Ouagadougou 7021, Burkina Faso

Abstract. Virtual fencing is a technique for animal control in which no physical infrastructure is needed to implement a fence. Control is achieved by modifying the behavior of the animal by means of one or more sensory signals, which may be auditory and/or electrical. These signals are transmitted to the animal when it tries to cross an electronically constructed boundary. This demarcation can be of any shape that respects geometric properties. While invisible to the naked eye, it is detectable by an electronic device worn by the animal. Due to its potential, this notion of virtual fencing for the management of free-range livestock is attracting growing interest in the literature. First, it advances ecological management by transforming physical labor into cognitive labor. A considerable number of methods in the development of virtual fences rely on stimuli; these can be classified into three classes: the first focuses on auditory stimuli, the second depends on electrical stimuli and the third merges both. These three categories can further be divided into two classes: the first relates to static virtual fences and the second to dynamic virtual fences. The purpose of this work is first to provide an overview of the existing approaches inherent to virtual fences, noting their technical characteristics, advantages and disadvantages. Then, we compare the different virtual fencing approaches and the associated localization/delimitation techniques. Finally, we discuss the remaining challenges for optimal animal control.
Keywords: Virtual fencing · Auditory stimulus · Electrical stimulus · Auditory and electrical stimulus

1 Introduction

Approximately 25% of the Earth's surface is exploited by pastoralists [1]. The optimization and management of pastures requires resources and labor. In recent years, the growth of virtual fencing systems has facilitated change in livestock management [2,3], as it has provided herders with the
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 115–148, 2023. https://doi.org/10.1007/978-3-031-34896-9_9


M. Abdouna et al.

ability to supervise livestock in a flexible manner. Having the ability to keep animals in desired areas and exclude them from other areas is crucial to effective animal husbandry in the face of expanding urban spaces [4,5]. The word "fence" is used in many senses [6]; however, they all have in common the fact that the barrier or boundary is implemented by relying on non-physical objects in the landscape to modify the behavior of the animals. The concept of the virtual fence appears more and more in scientific research [7]. The virtual fence does not alter the landscape by introducing physical infrastructure [8]. It is a system that allows a given perimeter to be defined and controlled without any physical barrier. The movement of the animal within the virtual fence can be governed by decision algorithms [9]. Conservationists give specific credit to this technology because of its benefits and potential. The virtual fence overcomes some of the drawbacks of conventional fences [10], including the lack of flexibility and the costs of erecting and maintaining physical fences. Some virtual fence approaches not only make it possible to define and readjust grazing-area boundaries quickly and at low cost, but also allow remote administration, with commands sent to the deployed devices through an interface. It is also possible for animals to be controlled, moved or rounded up remotely [11]. The article [12] provides a summary dealing with the virtual fence and a very interesting overview of this field of research. The purpose of this study is to provide a more detailed overview of the different concepts and technologies related to virtual fences found in the literature. In this article, we study virtual fences while emphasizing the different stimulus techniques used to control animals.
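As an illustration of the kind of decision logic such a system might use (the decision algorithms cited in [9] are not specified here), a minimal boundary test can be a ray-casting point-in-polygon check; the paddock coordinates below are hypothetical.

```python
# A virtual-fence boundary check: ray-casting point-in-polygon test.
# The fence is any simple polygon of (x, y) vertices; an animal's collar
# position is tested against it. Coordinates are illustrative, not a real GIS.
def inside_fence(point, polygon):
    """Return True if `point` lies inside the simple polygon `polygon`."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray going right from the point
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    paddock = [(0, 0), (100, 0), (100, 60), (0, 60)]  # rectangular virtual paddock
    print(inside_fence((50, 30), paddock))   # animal inside the fence
    print(inside_fence((120, 30), paddock))  # animal beyond the boundary
```

Because the boundary is just data, "readjusting the fence" amounts to replacing the vertex list, which is exactly the flexibility advantage over physical fences described above.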
The main contributions of our work are summarized as follows. We present a specific classification of virtual fences taking into account the physical and logical requirements, the characteristics of the fences, and the diversity of areas and seasons; driven by advances in the field of IoT, virtual fencing covers almost every aspect of animal life, and we expect a clear taxonomy to provide readers with the information needed to better understand these innovative technologies. We give an overview of localization algorithms and the most widely used proactive protocols, showing their strengths and weaknesses [9,13]. Finally, we present a technical comparison and the evolution (cf. Fig. 1) of the different virtual fences to put their properties into perspective.

Fig. 1. Evolution of cattle fences

1.1 Study Process

It is important to give an overview of how this research was carried out in order to better understand the documents used in the context of this review. We enumerate the research questions below; Sect. 1.2 explains the search process and the selection criteria for the relevant literature. The main objective of this review is to provide an overview of existing approaches to the development of virtual fences. For an article to be considered relevant, it must meet a number of criteria formulated by the following questions:
Question 1: What are the recent studies on this topic? Are there any previous studies showing the benefits of virtual fencing?
Question 2: What can be achieved by monitoring animals through technologies associated with virtual fencing?
Question 3: What are the limitations of monitoring animals using virtual fences? Are there still opportunities to explore for optimizing livestock management through virtual fencing?
The purpose of this study is to demonstrate the potential of virtual fences to achieve better management of domestic and free-ranging animal resources.

1.2 Method

To obtain articles deemed relevant for this literature review, we performed search queries on major databases (IEEE Xplore, ACM, Springer, Google Scholar, ResearchGate and ScienceDirect). Table 1 shows the queries used.

Table 1. Search queries for scientific databases

Databases: IEEE Xplore, ACM, Springer, Google Scholar, ResearchGate, ScienceDirect
Query (identical for all databases): (“virtual fencing”) AND (“Livestock Tracking Or Livestock Tracking Algorithm”) AND (“Scope of virtual fencing”)

During our searches, we found it necessary to remove certain keywords from the queries when the results obtained were not related to the theme sought. For this review, searches were conducted from January 2022. Another criterion for retrieving relevant information was to select only papers published in indexed and abstracted journals with a high impact factor, or in academic repositories. Google Scholar was mainly used to retrieve records indexed in IEEE Xplore, ACM and Springer.

1.2.1 Selection of Publications: The search was successful to varying degrees on the various databases mentioned above. These results are presented in Table 2. For each article encountered, we read and analyzed its title, abstract and results in order to assess its relevance and produce a summary. We also systematically checked the impact factor of the journals in which the articles were published. The process of selecting or rejecting a scientific article follows the rules below and is illustrated in Fig. 2.
Rule 1: Select only publications whose primary purpose is to locate or monitor animals;
Rule 2: Consider only articles written in English;
Rule 3: Select articles that rely on virtual fencing technology, excluding those not published in serious journals or sources;
Rule 4: Do not consider virtual fencing technology used for anything other than animal monitoring or control, so as not to confuse the concepts and thus prejudice the results of this study.

1.2.2 Collection of Articles: Once we have read the title and abstract of an article and it seems relevant, the next step is to read it in its entirety (introduction, results, discussion and conclusion).
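For illustration only, the four selection rules could be encoded as a filter over paper metadata; the record fields used here (language, venue_indexed, topic) are hypothetical stand-ins for the manually assessed criteria, not an actual tool used in this review.

```python
# Illustrative filter applying the four selection rules to paper metadata.
# The field names are hypothetical stand-ins for manually assessed criteria.
def passes_rules(paper: dict) -> bool:
    animal_focus = paper.get("topic") in {"animal localization", "animal monitoring"}
    return (
        animal_focus                            # Rule 1: locate/monitor animals
        and paper.get("language") == "en"       # Rule 2: English only
        and paper.get("venue_indexed", False)   # Rule 3: serious journals/sources
        # Rule 4 follows from Rule 1: non-animal uses of virtual fencing are excluded
    )

if __name__ == "__main__":
    papers = [
        {"title": "GPS collars for cattle", "language": "en",
         "venue_indexed": True, "topic": "animal monitoring"},
        {"title": "Geofencing for delivery drones", "language": "en",
         "venue_indexed": True, "topic": "drone logistics"},
    ]
    print([p["title"] for p in papers if passes_rules(p)])
```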

Fig. 2. Process for identifying relevant articles


Table 2. Summary of research results

Database / journal | Number of search results | Documents selected for complete reading | Documents deemed relevant
IEEE Xplore | 37 | 32 | 29
ACM | 21 | 13 | 6
Springer | 26 | 16 | 11
Google Scholar | 29 | 22 | 20
ResearchGate | 16 | 11 | 7
ScienceDirect | 12 | 7 | 6
Total | 141 | 101 | 79

2 Related Works

Related work is reviewed in this section. We emphasize the research of the existing literature on the different types of virtual fences and the most important localization algorithms. Additionally, the articles explored in this study provide practical implementations of animal management models. We review static and dynamic models that have been tested on cattle and others on sheep. To address the problems of livestock management, the studies carried out in the articles [11,14–20] on virtual fences are presented to show how these are essential for grazing management. With this in mind, we summarize recent comparative studies of virtual fences [21]. Ayesha et al. [22,23] discuss location-based protocols for livestock management. They explain that data-transfer problems arise when these are used in mountainous or riverine areas. In addition, they point out performance issues with each deployment method. Highland et al. [17,24,25] conducted a comprehensive study of positioning algorithms based on GPS (Global Positioning System) or personal navigation assistance systems. They categorized them according to their goals and methods and thus provide a detailed classification of virtual fences based on existing research. In the articles [17,26], the authors make an in-depth study of virtual fences in various applications. They describe the objectives of virtual fences for good animal management and elaborate a new classification. The first point concerns the properties of virtual fences with auditory and electrical stimulation, virtual fences with acoustic stimulation and virtual fences based on the auditory signal. The second concerns the characteristics of virtual fences using the Internet of Things. The third concerns dynamic and static virtual fences. They also ranked localization algorithms by area to show that virtual fences are far more valuable than traditional fences.
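To make the GPS-based positioning concrete, the sketch below combines a haversine distance computation with a staged response (auditory stimulus near the boundary, electrical stimulus past it), a pattern commonly described in the surveyed systems. The circular fence model, thresholds, coordinates and function names are illustrative assumptions, not any specific system's design.

```python
# Sketch of a staged-stimulus collar controller: as the animal approaches
# the virtual boundary, the collar first emits an audio cue, then an
# electric pulse if the boundary is crossed. The fence is modeled as a
# circle around a center point; all values are hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def collar_action(pos, center, radius_m, warn_m=20.0):
    """Return 'none', 'audio' or 'electric' for a GPS fix `pos`."""
    d = haversine_m(pos[0], pos[1], center[0], center[1])
    if d > radius_m:
        return "electric"            # boundary crossed: electrical stimulus
    if d > radius_m - warn_m:
        return "audio"               # warning zone: auditory stimulus only
    return "none"

if __name__ == "__main__":
    center = (12.3686, -1.5275)      # hypothetical paddock center
    print(collar_action((12.3686, -1.5275), center, radius_m=100))
    print(collar_action((12.3694, -1.5275), center, radius_m=100))
```

The warning zone is what lets animals learn to respond to the audio cue alone, which is the welfare argument made for sound-first designs in the literature.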


Zach Butler et al. [21,27–31] describe the solutions offered by virtual fences in animal management, taking their requirements into account. They classified the applications of virtual fences according to their properties: the range of the signal emitted by the electronic device, the quality of service, security, and mobility. Considering these properties, they propose a new classification and divide virtual fences into two categories: virtual fences with a physical marker and virtual fences without a physical marker. Aquilani et al. [32,33] presented the advantages and disadvantages of virtual fences and recommend considering both the different localization techniques and mobility when designing an algorithm. Danila et al. [34] examined the question of the localization system and showed that, in the context of livestock management, virtual fencing technology can be very effective for animal monitoring. The authors also listed virtual fence approaches based on their applications and implementation, and analyzed virtual fence characteristics as a function of the battery life of the device worn by the animal [35]. Dana LM et al. [21,36] provide a detailed study comparing virtual fencing approaches based on localization techniques and warning signals. Otabek et al. [13,29] conducted an enriching review covering tracking protocols and algorithms, with a comprehensive performance-based comparison and application hypotheses showing their effects on animal welfare; they classified these protocols according to geolocation techniques. To highlight the uniqueness of this study, a comparison of related works is summarized in Table 3.

Table 3. Comparative table of related works

| Ref. | Brief description | Controlled animal | Animal welfare |
|---|---|---|---|
| Marini et al. [21] | The use of a virtual fence to restrict sheep access to pastures for effective pasture management is presented in this work. | Sheep | ++ |
| Fogarty et al. [35] | A quantitative and systematic methodology study is conducted to examine how sensors have been applied in precision technology to revolutionize livestock management. | Sheep | ++ |
| Umstatter et al. [10] | The article critically analyzes the progress made to date and highlights the benefits and challenges of virtual fences. | All | −−− |

Virtual Fences: A Systematic Literature Review

(+): more benefits or fewer concerns; (−): fewer benefits or more concerns; (?): requires more research; (√): accepted or supported; (×): not accepted or supported.

In light of the comparison of the most recent related works presented in Table 3 above, it appears that the above-mentioned studies are very enriching with respect to this theme. There are other slightly older works, the majority of which are already taken into account in [10].

2.1 Contributions

In this work:
1◦ We review the various works on virtual fencing, in particular for the management and monitoring of livestock, that have been proposed in the literature, highlighting the evolution, advantages, and disadvantages of the different virtual approaches.
2◦ We provide an extensive and detailed discussion of the various techniques and technologies that can be used for livestock tracking, highlighting their advantages and disadvantages while specifying the relevance and challenges associated with animal localization techniques.
3◦ Given recent concepts related to livestock management, we introduce and discuss some emerging technologies that optimize animal management. We conclude, based on our analysis, that emerging virtual fencing technologies can provide innovative solutions for pastoral management.
4◦ This work also discusses the applications of virtual fences and the various challenges they currently face.

2.2 Acronyms

The acronyms used in this article are presented in Table 4 to facilitate reading.

Table 4. List of acronyms used in this review

| Acronym | Description |
|---|---|
| GPS | Global Positioning System |
| IoT | Internet of Things |
| WGS84 | World Geodetic System 1984 |
| DTN | Delay Tolerant Networks |
| MANET | Mobile Ad hoc Network |
| ToA | Time-of-Arrival |
| TDoA | Time-Difference-of-Arrival |
| RSSI | Received Signal Strength Indicator |
| AoA | Angle-of-Arrival |
| RToF | Return Time of Flight |
| NB-IoT | Narrowband Internet of Things |
| LoRa | Long Range |
| LoRaWAN | Long Range Wide Area Network |
| MAC | Media Access Control |
| DBSCAN | Density-Based Spatial Clustering of Applications with Noise |
| WSN | Wireless Sensor Networks |
| NFC | Near Field Communication |

2.3 Advantages of Virtual Fencing Compared to Conventional Fencing

Virtual fences offer the possibility of improving the efficiency of grazing management. Their major advantages are the flexibility and optimal management of stocking density they can provide. Conventional fencing is limited and expensive when it comes to managing large areas of pasture. Conventional paddocks or electric fences are the most common primary means of livestock control in developed countries. Conventional fences are not scalable: this inflexibility does not accommodate seasonal changes or times when animals must be excluded from certain areas. We offer a comparison in Table 5 between the different paradigms used in animal control.


Table 5. Advantages and disadvantages of fencing

| Type of fence | Advantages | Disadvantages |
|---|---|---|
| Conventional fencing | Perceptible by animals. | Difficult to modulate; little flexibility in its management; implementation is expensive; maintenance is expensive; possibility of harming wild animals; barbed wire can cause serious health issues. |
| Electric fence | More flexible than wire fences; easy to implement on steep areas; suitable for many species; reduced risk of injury to animals. | High maintenance cost; absence of a remote monitoring mechanism; electric wires can trap animals, which can die. |
| Virtual fence | Can be very flexible; modular and manageable remotely; controls and monitors animals remotely; opens interesting perspectives. | Difficult for animals to see; hazardous to humans; presents risks for animals. |

Table 6. Technical comparison of animal fences. Conventional fencing [10], electric fencing [37,38], and virtual fences [39,40] are compared in terms of stimulus (sound, sound and electric, electric), remote monitoring, implementation cost, localization technology, need for Internet, visibility to the naked eye, and animal welfare. (+): more benefits or fewer concerns; (−): fewer benefits or more concerns; (?): needs more research; (√): accepted or supported; (×): not accepted or supported.

Virtual fences are applied in several areas, including precision agriculture and pet management [41]. We present in Table 6 the main techniques used to control animals and compare their features.

2.4 Types of Virtual Fences

A virtual fence is a logical boundary with no physical infrastructure fixed to the ground. It is a particularly interesting concept because of its potential to transform livestock management, making it simpler and more flexible. There are multiple approaches to virtual fencing that provide an effective means of animal control. In the following points, we describe the different techniques used to erect virtual fences.


A) Stimulus-based virtual fences
Virtual fences have the potential to significantly change animal behavior and improve pasture management. Traditional enclosures such as chain-link fences and electric fences are widely used in many countries around the world [42]. Vidya NL et al. explain that these fences are expensive, difficult to implement, and vulnerable to weather damage. They point out that virtual fencing offers producers not only a more cost-effective, low-maintenance solution, but also an efficient way to move animals from one area to another, simply by logically readjusting the boundaries of the fence [7,14,43]. For their part, Juliana Ranches and Rory O'Connor report that, in the different experiments, the animals are trained in advance to distinguish between the different stimuli: the auditory stimulus is applied when the animal approaches the fence, and the electrical stimulus is applied only if the animal continues to move forward past the boundary despite having received the first stimulus [44]. Danila Marini et al. studied the importance of the auditory warning signal in an open pen and tested whether temperament can affect the ability of sheep to learn virtual fencing [45]. They demonstrated that virtual fence training impacted the ability of sheep to learn to respond correctly to an auditory cue associated with an electrical stimulus. However, they were not able to confirm in their experiment that temperament changes learning. They report that during electrical stimulus training, more than 70% of sheep appeared not to learn to respond correctly to auditory cues after several trials [31]. They conclude that animals that learned to avoid the virtual fence through auditory cues associated with electrical stimuli displayed interesting behaviors compared to animals that received only the electrical stimulus [46,47]. For their part, Marek Doniec et al. [48] carried out an experiment to herd cows with auditory and electrical stimuli. They proposed an algorithm for gathering by an auditory stimulus.

Algorithm 1: Gathering by an auditory stimulus
    while ‖P_cow,i − P_goal‖ > ε do
        SOUND(t_sound)
        WAIT(t_wait)

Algorithm 2: Gathering by auditory and electrical stimuli
    while ‖P_cow,i − P_goal‖ > ε do
        SOUND(t_sound)
        if cow_speed == 0 then
            SHOCK(t_shock)
            WAIT(t_wait)
        else
            WAIT(t_wait)

The authors proposed a second algorithm for herding cows with auditory and electrical stimuli. This second algorithm extends Algorithm 1, which uses only auditory signals: if the cow does not move after an auditory signal is broadcast, Algorithm 2 applies an electrical stimulus to it.


Still in their experiment, the authors point out that it is difficult to predict the behavior of cattle during a thunderstorm using Algorithm 2: in such a case, the electrical stimuli can drive the cows together, increasing their stress level. Based on this observation, they propose a third algorithm as an extension of the first two (Algorithms 1 and 2). Algorithm 3 adapts the frequency and intensity of the signals to the behavior of the animal, and is able to stop both signals as soon as the animal starts to react.

Algorithm 3: Gathering by adaptive sound and electrical stimuli
    intensity ← 0.1
    while ‖P_cow,i − P_goal‖ > ε do
        if cow_speed == 0 then
            SOUND(t_sound)
            if intensity > 0.3 then
                SHOCK(t_shock, intensity)
            intensity ← min(1.0, intensity + 0.1)
        else
            intensity ← max(0.1, intensity − 0.1)
        WAIT(t_wait)
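The adaptive loop of Algorithm 3 can be sketched in ordinary code. The sketch below is illustrative only: the collar hooks sound() and shock() are hypothetical stand-ins for the real actuators, and each call advances the controller by a single iteration.

```python
import math

# Hypothetical collar actuators -- stand-ins, not part of the cited system.
def sound(duration):
    pass  # play the auditory cue for `duration` seconds

def shock(duration, intensity):
    pass  # deliver an electrical pulse scaled by `intensity`

def adaptive_gather_step(pos_cow, pos_goal, speed, intensity, eps=0.1):
    """One iteration of Algorithm 3. Returns the updated stimulus
    intensity, or None once the cow is within eps of the goal."""
    if math.dist(pos_cow, pos_goal) <= eps:
        return None                      # goal reached: stop stimulating
    if speed == 0:                       # cow is not reacting: escalate
        sound(1.0)
        if intensity > 0.3:              # shock only above the threshold
            shock(0.5, intensity)
        return min(1.0, intensity + 0.1)
    return max(0.1, intensity - 0.1)     # cow is moving: back off
```

As in the paper's third algorithm, both stimuli cease as soon as the animal starts to react, because the intensity decays towards its floor while the cow is moving.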

In the article [33], the authors carried out an experiment on a group of ten cattle wearing an electronic device in the form of a collar in order to exclude them from a riparian zone. In this experiment on an automated virtual fence, they observed the interaction between the animals and the fence: responses to auditory signals and pulsed electrical stimuli vary from individual to individual [4]. The authors show that the animals learned to react differently to auditory and electrical stimuli. In the work of Néstor Acosta et al. [11], the authors presented the procedures for designing a virtual animal confinement platform based on auditory and tactile stimuli, which make the platform compatible with animal welfare. They maintain that the main contribution of their work is the creation of this research platform on virtual animal confinement techniques, and they performed tests to verify that the entire system works properly. They conclude that they are confident that, after a period of training, the combination of auditory and tactile stimuli can cause cattle to remain confined inside the virtual fence.
B) IoT-based virtual fences
The IoT is a concept that provides interconnection of devices via the Internet to collect, share, and analyze data for innovative applications [49]. Bernard et al. assert that the IoT is considered an evolution of the Internet: it collects, transmits, distributes, detects, and analyzes data at large scale [50]. They point out that all these possibilities offered by the IoT can be exploited for good livestock management. The IoT enables remotely controlled operations with sensors to monitor animals [51]. They claim that virtual fences can be


reconciled with this technology for better livestock management [19]. Luís Nóbrega et al. [52] set out an animal behavior monitoring platform based on IoT technology. This platform contains a local IoT network to collect animal data, and can perform processing and storage while providing machine learning capabilities.
C) Virtual fences using GPS
Virtual fences have been studied for several years as a replacement for the physical barriers used for animal confinement, and they have considerable advantages. They are generally implemented with an electronic device carried by the animal, equipped with a GPS receiver and a means to deliver a sensory stimulus (auditory or electrical) to prevent the animal from crossing the pre-established limits. Azamjon Muminov et al. [17] showed in their work that virtual fences can be added or removed at any time and that several of them can be created at once from information stored in a database. When GPS readings indicate that an animal has crossed the boundary of the fence, an auditory or electrical stimulus is triggered. They introduced an algorithm for measuring the distance between the animal and the virtual fence, and they use Google Maps and the Spherical Mercator projection method based on WGS84 in the virtual fence system [53,54]:

    AB = √((x1 − x2)² + (y1 − y2)²)    (1)

Otabek Sattarov et al. [55] introduced a notion of distance estimation using GPS data. To achieve their objective, they point out that it is necessary to have a technique for measuring the distance between two GPS points. In general, the closest distance between two points A(x1, y1) and B(x2, y2) is calculated by the formula of Eq. (1) presented in [17]. To solve this problem over the Earth's curved surface, they turn to a frequently used algorithm called Haversine [56], and to the Google Maps API, which provides out-of-the-box functions.
The Haversine technique is applied to measure the distance between two geolocations on the Earth's surface. To determine the distance between two points, we use the principle of the Haversine method illustrated in Fig. 3:

    hav(d/r) = hav(ϕ1 − ϕ2) + cos ϕ1 × cos ϕ2 × hav(λ2 − λ1)    (2)

where hav is the haversine function, d is the central angle between two points located on the great circle, r is the radius of the sphere, ϕ1 and ϕ2 are the latitudes of the first and second points in radians, and λ1 and λ2 are the longitudes of the first and second points in radians [57,58].


Fig. 3. Illustration of the haversine law

– u, v, and w: three points on the sphere;
– a: length from u to v;
– b: length from u to w;
– c: length from w to v;
– C: the angle of the corner opposite c.

The ratio d/r in Eq. (2) allows the distance d to be recovered [57,58]:

    d = 2r × arcsin(√(sin²(Δlat/2) + cos(lat1) × cos(lat2) × sin²(Δlong/2)))    (3)
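Equation (3) translates directly into code. The sketch below is a minimal illustration; the function name and the default Earth radius are our own choices, not taken from the cited systems:

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance between two points given in decimal degrees,
    following Eq. (3); r defaults to Earth's mean radius in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)   # Δlat
    dlam = math.radians(lon2 - lon1)   # Δlong
    a = math.sin(dphi / 2) ** 2 \
        + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

As a sanity check, one degree of latitude comes out near 111.2 km, in line with the geodetic value, which is the level of accuracy a collar needs to compare an animal's GPS fix against a fence boundary.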

D) Virtual fences based on wireless sensor networks
Real-time data collection requires the use of wireless sensor networks [59,60]. Virtual fences replace physical barriers and gather cows using stimuli (auditory or electrical signals). In the paper [61], the authors provided an overview of a wireless sensor network system to collect and analyze data on cow behavior. They implemented Algorithm 4, which runs locally on each device.

Algorithm 4: Data processing on each device
    procedure
        Update bin counts: ∀i ∈ B: x_{i,t} ← γ · x_{i,t−1} + b(i, z_t)
        Update bin distribution (simplified): ∀i ∈ B: y_{i,t} ← x_{i,t} / Σ_{i∈B} x_{i,t}
        y′ ← previous distribution at time t′
        // significant?
        if ∃i ∈ B : |y_{i,t} − y′_i| > ε or t − t′ ≥ t_heartbeat then
            Update y′ ← y_t and t′ ← t
            Estimate new state s_t ← f(DT, y_t)
            // eventful? yes, if the state differs from the last update
            Store state s_t and time t in flash

To validate their proposed method, the authors collected data on three axes at a sampling rate of 1 Hz: the X and Y components along the animal's longitudinal axis, and the Z component when the device is placed horizontally on a flat surface. These data are used to calculate the total acceleration of each animal along the X, Y, and Z axes with the following formula:

    a_total = √(a_X² + a_Y² + a_Z²)    (4)
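Equation (4) is just the Euclidean norm of the three accelerometer components; a minimal sketch (our own helper, not the authors' code):

```python
import math

def total_acceleration(ax, ay, az):
    """Overall acceleration magnitude of Eq. (4) from per-axis readings."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)
```

For a device at rest the magnitude stays near 1 g regardless of orientation, so deviations from 1 g are a convenient cue for activity detection on the collar.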

Ryo Yamamoto et al. [62] introduced a new localization method for the dissemination of geolocation data with ad hoc communication. They also proposed a new local storage architecture to accomplish information dissemination using the geo-fence concept [63], relying on DTN and MANET technologies [64]. Each terminal moves according to a specific policy while being able to communicate with the others; in addition, each terminal contributes to the development of the network and regularly exchanges HELLO messages with the other terminals.
E) Dynamic virtual fences vs static virtual fences
Virtual fences are still little-explored concepts. They are a form of computer-animal interaction based on many concepts [29]: devices that deliver stimuli (auditory or electrical signals) to an animal based on its position relative to one or more fence lines. Virtual fences are established with a computing device sensitive to the movement of the animal, called a smart collar. Apart from the dynamic virtual fences, which ensure continuity of the pasture management service, and the static ones, which are fixed to the ground, there are also virtual fences that support nomadism; unlike dynamic virtual fences, these do not allow continuity of service when changing location.
(a) Dynamic virtual fences
In the work of Zack Butler et al. [29], the authors explain that the real power of virtual fences emerges when they are dynamic and can move in the landscape, which they frame as a problem of animal movement planning. They tested dynamic virtual fences, and their tests yielded interesting prospects; the results they obtained are confirmed by the work of Magnus Fjord Aaser et al. [65]. Z. Butler et al. [15] described how a virtual fence is created by applying a warning stimulus to an animal when it approaches a predefined boundary, implemented by a small computer system carried by the animal with a GPS receiver.
This approach makes it possible to consider animals as agents with controllable natural mobility and to plan animal movement. It should be noted that dynamic virtual fences can have variable dimensions and several geometric shapes [66].
(b) Static virtual fences
Virtual fences are technologies that use the GPS device built into an electronic collar worn by an animal to keep it in a given area. MO Monod et al. [4] described a virtual fence based on the principle of electromagnetic coupling. The fence is a looped insulated wire unrolled on the ground around the animals. An electromagnetic field is created and coupled to


a device (collar) worn by each animal. When the signal from the fence is detected by the collar, a behavioral algorithm decides what action to apply to the animal to prevent it from crossing the wire. They point out that they have found that the animals need to be trained for half a day in order to familiarize them with the fence. In order to put related work into perspective and determine their strengths and weaknesses, we propose in the two Tables 7 and 8 a technical comparison of different types of fences and a summary of the advantages and disadvantages of virtual fences compared to the different notions mentioned above in Sect. 2.4 (Fig. 4).

Table 7. Technical comparison of different types of fences

| Ref. | Type of virtual fence | Localization technology | Data storage | Ground | Visibility | Type of stimulus |
|---|---|---|---|---|---|---|
| [46,67] | Sound stimulus | GPS | Local | No | No | Auditory signals |
| [67] | Electrical stimulus | GPS | Local | No | No | Electrical signals |
| [42,67,68] | Sound and electrical stimuli | GPS | Local | No | No | Auditory or electrical signals |
| [69] | IoT | GPS | Local | No | No | Auditory or electrical signals |
| [11,22] | GPS | GPS | Central server | No | No | Auditory and electrical signals |
| [15,16] | Dynamic | GPS | Local/cloud | No | No | Auditory and electrical signals |
| [16,70] | Static | GPS | Local/cloud | No | No | Electrical signals |
| [61,62] | WSN | GPS | Cloud | No | No | Auditory or electrical signals |

Table 8. Advantages and disadvantages of fences

| Type | Advantages | Disadvantages |
|---|---|---|
| Stimulus | Easy implementation | Sometimes immobile |
| IoT | Remote control: flexible management | Need for Internet |
| GPS | No physical barrier | Animal must wear an electronic device |
| Mobility | Adaptable to the landscape | Animal must wear an electronic device |
| WSN | Remote control: flexible management | Internet required |


Fig. 4. Taxonomy of virtual fences

3 Location of Livestock

Locating livestock using sensor nodes is an essential process for tracking and monitoring animals. Several techniques and technologies are used in livestock tracking, especially in pasture management; however, we focus on the major and most cited ones in the literature. Fig. 5 represents the taxonomy of the concepts and technologies used in the localization of animals or mobile objects.


Fig. 5. Taxonomy of approaches for locating animals and/or mobile objects

3.1 Conceptual Approaches to Locating Livestock

Localization techniques have made rapid progress, and several localization systems have been developed for various needs. One of the best-known systems is GPS, an extremely efficient technology that has been evolving continuously for years.
(a) Time-of-Arrival (ToA)
Sebastian Sadowski et al. [74–78] used the so-called Time-of-Arrival (ToA) localization technique, which calculates the distance between the transmitter and the receiver based on a clock synchronized between the transmitted and received signals. They point out that ToA is one of the most accurate techniques available and can provide localization with high accuracy [76]:

    √((x2 − x)² + (y2 − y)²) = v(t2 − t0)    (5)
    √((x3 − x)² + (y3 − y)²) = v(t3 − t0)    (6)
    √((x4 − x)² + (y4 − y)²) = v(t4 − t0)    (7)

The sender broadcasts a time-stamped message whose header indicates the time and date the message was sent. They illustrate the ToA technique in Fig. 6.


Fig. 6. Illustration of the ToA scheme

(b) Time-Difference-of-Arrival
In the articles [53,79], the authors present the so-called Time-Difference-of-Arrival (TDoA) concept, based on the differences in arrival times between anchor nodes:

    √((x2 − x)² + (y2 − y)²) − √((x3 − x)² + (y3 − y)²) = v(t2 − t3)    (8)
    √((x2 − x)² + (y2 − y)²) − √((x4 − x)² + (y4 − y)²) = v(t2 − t4)    (9)

In Fig. 7, the distances are calculated as a function of the propagation times. The efficiency of this approach can be affected by delays in the transmitted signal.

Fig. 7. Time-Difference-of-Arrival (TDoA)

(c) Received Signal Strength Indicator (RSSI)
Rajika Kumarasiri et al. [75,80] propose a localization technique based on the Received Signal Strength Indicator (RSSI) concept, which has received a lot of attention in recent years. RSSI is commonly used for target detection, but also for locating and tracking animals [81]. However, it must be specified that the measurement of the intensity of the


received signal is very sensitive to interference and can therefore undergo significant deviations from one measurement to another. To relate RSSI values to distance, the following equation is used:

    RSSI = −10n log10(d) + C    (10)
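Equation (10) can be inverted to turn a raw RSSI reading into a distance estimate. In the sketch below the path-loss exponent n and the reference power c (the RSSI measured at 1 m) are assumed, environment-dependent values, not constants from the cited work:

```python
def rssi_to_distance(rssi, n=2.0, c=-40.0):
    """Invert Eq. (10), RSSI = -10 n log10(d) + C, to estimate d in metres.
    n is the path-loss exponent; c is the RSSI at the 1 m reference."""
    return 10 ** ((c - rssi) / (10 * n))
```

With n = 2 and c = −40 dBm, a reading of −60 dBm maps to 10 m. In practice, the sensitivity to interference noted above makes filtering (e.g. averaging several readings) advisable before the inversion.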

Fig. 8. RSSI - Received Signal Strength Indicator

In the paper [74], the authors presented a schematic of a trilateration experiment, shown in Fig. 8:

    e² = x² + y²    (11)
    f² = (x − p)² + y²    (12)
    g² = (x − q)² + (y − r)²    (13)

By solving the system of equations, we get:

    x = (e² − f² + p²) / (2p)    (14)
    y = (e² − g² + q² + r²) / (2r) − (q/r)x    (15)

(d) Angle-of-Arrival (AoA)
In the articles [79,82], the authors introduced the Angle-of-Arrival (AoA) technique, which is based on the triangulation method and requires two anchor nodes. The distance d between the two anchor nodes is known, as shown in Fig. 9. Each anchor node calculates the angle of arrival of the received signal. The height h of the received signal is calculated using the following equation:

    h = d sin(∅1) sin(∅2) / sin(∅1 + ∅2)    (16)

The distances between the target and the reference points are determined as follows:


Fig. 9. Angles of arrival (AoA)

    d1 = h / sin(∅1)    (17)
    d2 = h / sin(∅2)    (18)
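Eqs. (16)-(18) can be checked numerically. The helper below is our own illustration (the name and argument conventions are assumptions); angles are in radians:

```python
import math

def aoa_range(d, phi1, phi2):
    """From the known baseline d between two anchors and the arrival
    angles phi1, phi2, recover the height h (Eq. 16) and the distances
    d1, d2 from the target to each anchor (Eqs. 17-18)."""
    h = d * math.sin(phi1) * math.sin(phi2) / math.sin(phi1 + phi2)
    d1 = h / math.sin(phi1)
    d2 = h / math.sin(phi2)
    return h, d1, d2
```

For a target directly above the midpoint of a 2-unit baseline, both angles are 45° and the helper returns h = 1 with d1 = d2 = √2, matching plane geometry.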

The authors conclude that the position is estimated at the intersection of these distances in the directions of the two calculated angles, i.e. a triangulation is performed to determine the position of the receiver.
(e) Return Time of Flight (RToF)
Faheem Zafari et al. [78] presented the Return Time of Flight (RToF) localization technique. It measures the round-trip propagation time of the signal (transmitter-receiver) to estimate the distance between Tx and Rx. Upon receiving a signal from the transmitter, the receiver responds, and the total round-trip ToA is computed. The main advantage of RToF over ToA is that tight clock synchronization between Tx and Rx is not required. The main problem of RToF-based systems is the response delay at the receiver, which strongly depends on the technical characteristics of the receiver. This factor can be neglected if the propagation delay between transmitter and receiver is large compared to the response time, but it cannot be ignored in short-range systems (Fig. 10).

Fig. 10. Localization based on RToF


Let t1 be the time at which Txi sends a message to Rxj, which receives it at t2 = t1 + tp. At time t3, j transmits a signal back to i, which receives it at t4. Thus, the distance between i and j can be calculated using Eq. (19):

    Dij = ((t4 − t1) − (t3 − t2)) / 2 × v    (19)
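Equation (19) in code form (a trivial helper of our own; v defaults to the speed of light for radio signals):

```python
def rtof_distance(t1, t2, t3, t4, v=3.0e8):
    """Eq. (19): half the round-trip time, net of the receiver's
    response delay (t3 - t2), scaled by the propagation speed v."""
    return ((t4 - t1) - (t3 - t2)) / 2 * v
```

With a 4 µs round trip and a 2 µs response delay, the one-way propagation time is 1 µs, i.e. 300 m at the speed of light. As the section notes, once ranges get short, the response delay dominates the measurement.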

(f) Triangulation
Song Chai et al. [83] proposed a positioning algorithm implemented in Java using the Android SDK and executable on a mobile phone. They used the Kalman filtering method [84] and the RSSI to determine the distance between the target and the reference point. Once the mobile device connects to the three remote beacons, triangulation is performed to determine its coordinates: three circles are drawn, centered on each beacon, as shown in Fig. 11. The triangulated location is the centroid of the triangle ABC formed by the chords of intersection of the three circles.

Fig. 11. Triangulation using three beacons

The positioning algorithm based on Kalman filtering is as follows:

Algorithm 5: Kalman filtering
    Initialize A, H, r, p, q and d
    while true do
        Input dist
        d = A × d
        p = A × A × p + q
        gain = p × H / (p × H × H + r)
        p = (1 − gain × H) × p
        d = d + gain × (dist − H × d)
        Output d
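Algorithm 5 is a scalar Kalman filter applied to the stream of RSSI-derived distances. The sketch below mirrors its steps in plain code; the noise parameters q and r are assumed values for illustration, not those of the cited system:

```python
def kalman_distance_filter(measurements, A=1.0, H=1.0, q=1e-3, r=0.25,
                           d0=0.0, p0=1.0):
    """Run Algorithm 5 over a list of raw distance readings and return
    the filtered estimates (d = state, p = error covariance)."""
    d, p = d0, p0
    out = []
    for dist in measurements:
        d = A * d                        # predict the state
        p = A * A * p + q                # predict the covariance
        gain = p * H / (H * p * H + r)   # Kalman gain
        p = (1 - gain * H) * p           # update the covariance
        d = d + gain * (dist - H * d)    # correct with the reading
        out.append(d)
    return out
```

Fed a constant true distance corrupted by noise, the estimate settles on the true value while individual readings keep jittering; this smoothing is what makes the subsequent triangulation stable.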

In the article [85], Thurmond et al. note that localization techniques have evolved over time; however, each has advantages and disadvantages, which are presented in the table.


(g) Multilateration
Asif Iqbal Baba et al. [86] have shown the importance of the multilateration technique. They explain that localization is a process of obtaining the location of an object or animal using nodes that already have this information. To measure distances between nodes, the authors use the time-of-flight (ToF) ranging technique [78]. These distance measurements are then used by the multilateration technique, which requires distance measurements to three or more nodes to estimate the location. Multilateration consists of calculating a position from the distances to reference positions. They illustrate the basic idea of multilateration in Fig. 12.

Fig. 12. Illustration of multilateration



    position = (AᵀA)⁻¹ Aᵀ b    (20)
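Eq. (20) can be realized with a small least-squares solver. The sketch below (our own illustration, not the authors' code) linearizes the range equations by subtracting the first one, then solves the 2×2 normal equations (AᵀA)⁻¹Aᵀb directly:

```python
def multilaterate(anchors, dists):
    """Least-squares 2D position from >= 3 anchors and measured distances,
    via the normal equations of Eq. (20)."""
    (x1, y1), d1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        # Subtracting the first range equation removes the quadratic terms.
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)
    # Form AᵀA (2x2) and Aᵀb (2x1), then invert the 2x2 system.
    ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * v for r, v in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    if abs(det) < 1e-12:  # anchors are collinear: AᵀA is singular
        raise ValueError("anchors must not lie on a line")
    x = (ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det
    y = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return x, y
```

With anchors at (0,0), (10,0) and (0,10) and ranges measured from (3,4), the solver returns (3,4); collinear anchors trigger the singular-matrix failure case the text discusses.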

The authors specify that the position is determined by Eq. (20). If AᵀA is a singular matrix, meaning the anchor nodes are placed on a line, the equation becomes invalid. The fewer the anchor points used, the greater the error [87].

3.2 Localization Algorithms

Davide Cannizzaro et al. [75,88–92] examined a distance-based algorithm and three fingerprint-based algorithms. According to the authors, these algorithms provide the most consistent results in different types of environments. The algorithms presented in this subsection are among the most used methods for locating animals and moving objects.
A) Trilateration algorithm
The trilateration algorithm shown in Fig. 13 is a refined version of triangulation that directly calculates the distance to the target object,


rather than indirectly through the angles. Still in Fig. 13, U is the known distance between beacons B1 and B2, while Vx and Vy are the coordinates of beacon B3 with respect to B1. The radii of the three circles, obtained from the RSSI measurements, are r1, r2 and r3. With this notation, the coordinates of the object P in 2D space are calculated using the following equations:

    V² = Vx² + Vy²    (21)
    x = (r1² − r2² + U²) / (2U)    (22)
    y = (r1² − r3² + V² − 2Vx·x) / (2Vy)    (23)
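Eqs. (21)-(23) give the position in closed form once the three radii are known. A small sketch under the same beacon layout (B1 at the origin, B2 at (U, 0), B3 at (Vx, Vy)); the function name is ours:

```python
def trilaterate(U, Vx, Vy, r1, r2, r3):
    """Closed-form solution of Eqs. (21)-(23) for the 2D position of P."""
    V2 = Vx ** 2 + Vy ** 2                                 # Eq. (21)
    x = (r1 ** 2 - r2 ** 2 + U ** 2) / (2 * U)             # Eq. (22)
    y = (r1 ** 2 - r3 ** 2 + V2 - 2 * Vx * x) / (2 * Vy)   # Eq. (23)
    return x, y
```

Because the radii come from RSSI, in practice the three circles rarely meet in a single point; the closed form then yields the solution of the linearized system rather than an exact intersection.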

Fig. 13. Trilateration algorithm for locating objects using three tags B1, B2 and B3

In the same vein, on the concept of trilateration, Xiantao Zeng et al. [93] proposed a trilateral localization algorithm (Algorithm 6). The trilateral localization algorithm is the most common distance-based method; it is simple and efficient, and when the distance measurement error between nodes is small it achieves high localization accuracy. The trilateral localization algorithm proposed by the authors is as follows:

Algorithm 6: Trilateral location
    Input: neighbor anchor list (alias: node)
    Initialize: index = 0, count = size(node)
    for i = 1 to count − 2 do
        for j = i + 1 to count − 1 do
            for k = j + 1 to count do
                candidate_list(index++) = triposition(node(i), node(j), node(k))


B) Herd tracking algorithm based on the peak method
In the articles [94,95], the authors proposed an algorithm based on the peak method and considered it a dynamic model. They report that the algorithm has been tested on both confined and free-ranging herds.

Algorithm 7: Calculation of φ(δT)
    Input: f(δT), f((δ − 1)T)
    Output: φ(δT)
    if f(δT) − f((δ − 1)T) < 0 then
        φ(δT) ← φ((δ − 1)T) + φ0
    else
        φ(δT) ← φ((δ − 1)T) − φ0

C) The DBSCAN algorithm
Xiaohui Li and Li Xing [96] deployed drones to minimize the average drone-animal distance based on information about the locations of the targeted animals, with the objective of covering a maximum number of targeted animals. For this, they adopted DBSCAN, a density-based clustering algorithm. The DBSCAN algorithm presented by the authors is Algorithm 8. They apply it by taking the locations of all target animals as the input data set; the output is a set of targeted animal clusters.

Algorithm 8: DBSCAN
    Input: U⁰ = {U⁰_1, U⁰_2, ..., U⁰_N}, ε, ρ_th
    Output: a set of density-based targeted animal clusters
    mark all data points in U⁰ as unvisited
    repeat
        randomly select an unvisited point U⁰_i and mark U⁰_i as visited
        if the ε-neighbor density of U⁰_i is larger than ρ_th then
            create a new cluster C_n, and add U⁰_i to C_n
            let N_p be the set of ε-neighbors of U⁰_i
            for each point p in N_p do
                if p is unvisited then
                    mark p as visited
                if p is not yet a member of any cluster then
                    add p to C_n
            output C_n
        else
            mark U⁰_i as noise
    until no point in U⁰ is unvisited
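A compact, dependency-free rendering of this density-based clustering idea (our own illustration; `rho_th` plays the role of the minimum ε-neighbour count, and noise points are labelled −1):

```python
import math

def dbscan(points, eps, rho_th):
    """Plain-Python sketch of density-based clustering: groups 2D animal
    positions into clusters and returns one label per point; -1 = noise."""
    labels = [None] * len(points)              # None = unvisited

    def neighbours(i):
        return [j for j in range(len(points))
                if j != i and math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < rho_th:                 # density too low
            labels[i] = -1                     # provisional noise
            continue
        cluster += 1                           # open a new cluster C_n
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                           # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:                # border point: adopt it
                labels[j] = cluster
            if labels[j] is not None:
                continue                       # already claimed
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= rho_th:              # j is a core point
                seeds.extend(jn)
    return labels
```

Two tight groups of collar positions come out as clusters 0 and 1, and an isolated animal is flagged as noise; the per-cluster centroids can then serve as drone waypoints, as in [96].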


D) Real-time livestock tracking system
Nthetsoa Alinah M. et al. [97] implemented a real-time livestock tracking system. This system performs a localization function based on estimating the distance between two nodes. The distance d is calculated using the following equation:

    d = c × (T_total − T_process) / 2    (24)

where c represents the speed of the signal transmitted between beacon nodes, T_total represents the time between the sender sending a packet and receiving a response, and T_process is the time it takes the receiver to process a response (cf. Fig. 14). To estimate the position of a node, the authors used the trilateration method [93]. They point out that two conditions must be met: P1 must be on the y-axis and P2 on the x-axis. The x and y coordinates of P4 are determined using Eqs. (25) and (26):

    P4x = (r1² − r2² + e²) / (2e)    (25)
    P4y = (r1² − r3² + f² + g²) / (2g) − (f/g)·P4x    (26)

Fig. 14. Trilateration method with 3 beacon nodes
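Equations (24)–(26) can be checked with a short numerical sketch. It assumes the usual beacon arrangement consistent with the formulas: P1 at the origin, P2 = (e, 0) on the x-axis and P3 = (f, g); the speed constant and function names are illustrative, not from [97]:

```python
C = 3e8  # assumed signal propagation speed in m/s (radio signal)

def two_way_ranging(t_total, t_process, c=C):
    """Eq. (24): distance from round-trip time minus processing delay."""
    return c * (t_total - t_process) / 2.0

def trilaterate(e, f, g, r1, r2, r3):
    """Eqs. (25)-(26): beacons at P1 = (0, 0), P2 = (e, 0), P3 = (f, g);
    r1, r2, r3 are the measured ranges from the unknown node P4 to each
    beacon. Returns the (x, y) coordinates of P4."""
    x = (r1**2 - r2**2 + e**2) / (2 * e)
    y = (r1**2 - r3**2 + f**2 + g**2) / (2 * g) - (f / g) * x
    return x, y
```

For instance, with beacons at (0, 0), (6, 0) and (2, 5) and ranges measured from the point (3, 4), the function recovers (3, 4) exactly, since the three range circles intersect at a single point in the noise-free case.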


M. Abdouna et al.

Algorithm 9: Localization with activation of beacon nodes
Input: accelerometer
Output: animal state and position

initialise nRF24L01+ as a primary transmitter
for i ← 0 to 3 do
    beacon[i] ← payload
    start timer
    while response = 0 do
        restart timer
        beacon[i] ← payload
    stop timer
    tag ← accelerometer data
    if accelerometer data ≠ previous accelerometer data then
        change[i] ← 1
    else
        change[i] ← 0
if change[0] = 1 or change[1] = 1 or change[2] = 1 then
    animal ← active
else
    animal ← inactive
calculate position of node
basestation ← position
basestation ← animal
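The activity test at the heart of Algorithm 9 can be sketched as follows, under the assumption (suggested by the pseudocode) that a change in the accelerometer reading during any of the three beacon exchanges marks the animal as active; the function and variable names are illustrative:

```python
def animal_state(readings, previous):
    """Sketch of the activity test in Algorithm 9: one accelerometer
    reading is captured per beacon exchange; if any of the three rounds
    shows a change from the previous reading, the animal is active."""
    change = [1 if r != p else 0 for r, p in zip(readings, previous)]
    return "active" if any(change) else "inactive"
```

In the full algorithm this flag is reported to the base station together with the position computed by trilateration, so the base station receives both where the animal is and whether it is moving.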

3.2.1 Analysis of Localization Algorithms

A great deal of research has been devoted to localization in WSNs over the past few years. Locating a node amounts to determining its position relative to several landmarks, using specific computational methods and location technologies. Guangjie Han et al. [98–100] explained that, in today's WSNs, an unknown node can calculate its own position based on connectivity information between nodes and landmarks.

Table 9. Technical comparison of localization algorithms

Property      Case under consideration
              Algorithm 6    Algorithm 7    Algorithm 8    Algorithm 9
Complexity
  Worst case  O(n³)          O(φ) + O(f)    O(n²)          O(n)
  Best case   O(n³)          O(φ) + O(f)    O(n²)          O(1)

Table 9 shows a technical comparison of the time complexity of the localization algorithms commonly used in the context of animal localization.

4 Discussion

Over the past two centuries, the development of fencing systems has revolutionized grazing management [2]. Success in livestock management is directly linked to the ability to retain animals in some areas, exclude them from others, or move them from one area to another [4]. Following the historical evolution of virtual fences [7], the term virtual fence is used in various fields. What these uses have in common is that the barriers are imperceptible in the landscape yet alter animal behavior. The concept of virtual fencing also appears more and more in wildlife rehabilitation, which involves the management of animals in the wild.

4.1 Strengths and Limitations

The literature reports multiple advantages of virtual fences, especially in the area of free-range animal management. Because livestock can be gathered at different speeds, time savings can be achieved. The disadvantages relate to the fact that the animals must wear a device, which has some impact on them, especially in the case of collars, which are sometimes bulky [7,16,17,26,101]. In addition, accidents can occur if the animal receives a strong stimulus while a person is nearby. On the other hand, progress has been made in the miniaturization of the collars worn by animals [4,11]. Another problem with virtual fencing is the application of electrical stimuli to the animal, which raises animal welfare concerns. However, some work adapts the electrical stimuli to the animal's reactions, i.e., the intensity decreases when the animal moves in the desired direction [48]. It is important to develop a virtual fence system that allows the animals to learn how the technology works. Overall, the literature review showed that it is possible to develop a virtual fence that improves animal management and creates opportunities for farmers.

4.2 Future Challenges and Opportunities

In this subsection, we outline challenges in current work and future possibilities. Much work has been devoted to virtual fences, in particular to the battery life of the device carried by the animal and to the range of the signal emitted by the devices. Nevertheless, some fundamental questions have not yet been addressed. For example, can a dynamic virtual corridor be set up that takes into account the characteristics of different periods of the year, so as to provide a computing solution to conflicts over resource management between farmers and pastoralists? If so, to what extent, and how should the performance of the different techniques and technologies involved in such an innovation be assessed? Finding a computing solution to these farmer-pastoralist resource conflicts constitutes a direction for our future work.

5 Conclusion

Virtual fencing has several advantages, as shown in the literature. Livestock management takes full advantage of virtual fencing technology, which gives access to a large amount of information about animal grazing behavior and activities, without human intervention, over long periods of time and in remote, hard-to-reach locations. The ability to monitor animals wherever they are is an invaluable benefit to farmers, who can be alerted immediately to unusual animal behavior and therefore respond quickly. The technology also has the potential to revolutionize pasture management. This literature review makes it evident that there are many approaches to the development of virtual fencing. In this paper, we reviewed recent research efforts, including the tracking techniques and technologies that have contributed enormously to the implementation of virtual fences. We presented a general taxonomy that classifies existing virtual fencing improvements according to the specific aspects of grazing management they aim to optimize. We then performed a comparative study of the different localization techniques and technologies, proposed a taxonomy of the different types of virtual fences, and reviewed a number of localization algorithms. Finally, we identified some potential directions for future research that we hope will serve as a useful guide for the development and implementation of virtual fence technologies.

References

1. Gerber, J.S., et al.: Increasing importance of precipitation variability on global livestock grazing lands. Nat. Climate Change (2018)
2. Medeiros, I., Fernandez-Novo, A., Astiz, S., Simões, J.: Historical evolution of cattle management and herd health of dairy farms in OECD countries. Veterinary Sci. 9(3), 125 (2022)
3. Anderson, D.M., Estell, R.E., Holechek, J.L., Ivey, S., Smith, G.B.: Virtual herding for flexible livestock management - a review. Rangeland J. 36(3), 205–221 (2014)
4. Monod, M.O., Faure, P., Moiroux, L., Rameau, P.: A virtual fence for animals management in rangelands. In: MELECON 2008 - The 14th IEEE Mediterranean Electrotechnical Conference, pp. 337–342. IEEE (2008)
5. Terrasson, G., Villeneuve, E., Pilniere, V., Llaria, A.: Precision livestock farming: a multidisciplinary paradigm. In: Proceedings of the SMART (2017)
6. Chan, H.T., Rahman, T.A., Arsad, A.: Performance study of virtual fence unit using wireless sensor network in IoT environment. In: 2014 20th IEEE International Conference on Parallel and Distributed Systems (ICPADS), pp. 873–875. IEEE (2014)
7. Anderson, D.M.: Virtual fencing - past, present and future. Rangeland J. 29(1), 65–78 (2007)


8. McSweeney, D., et al.: Virtual fencing without visual cues: design, difficulties of implementation, and associated dairy cow behaviour. Comput. Electron. Agric. 176, 105613 (2020)
9. Sattarov, O., et al.: Virtual fence moving algorithm for circulated grazing. In: 2019 International Conference on Information Science and Communications Technologies (ICISCT), pp. 1–6. IEEE (2019)
10. Umstatter, C.: The evolution of virtual fences: a review. Comput. Electron. Agric. 75(1), 10–22 (2011)
11. Acosta, N., Barreto, N., Caitano, P., Marichal, R., Pedemonte, M., Oreggioni, J.: Research platform for cattle virtual fences. In: 2020 IEEE International Conference on Industrial Technology (ICIT), pp. 797–802. IEEE (2020)
12. Verdon, M., Langworthy, A., Rawnsley, R.: Virtual fencing technology for intensive grazing of lactating dairy cows. II: Effects on cow welfare and behaviour. J. Dairy Sci. 104, 7084–7094 (2021)
13. Gonçalves, P., Nóbrega, L., Monteiro, A., Pedreiras, P., Rodrigues, P., Esteves, F.: SheepIT, an e-shepherd system for weed control in vineyards: experimental results and lessons learned. Animals 11(9), 2625 (2021)
14. Vidya, N.L., Meghana, M., Ravi, P., Kumar, N.: Virtual fencing using YOLO framework in agriculture field. In: 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), pp. 441–446. IEEE (2021)
15. Butler, Z., Corke, P., Peterson, R., Rus, D.: Dynamic virtual fences for controlling cows. In: Ang, M.H., Khatib, O. (eds.) Experimental Robotics IX. STAR, vol. 21, pp. 513–522. Springer, Heidelberg (2006). https://doi.org/10.1007/11552246_49
16. Correll, N., Schwager, M., Rus, D.: Social control of herd animals by integration of artificially controlled congeners. In: Asada, M., Hallam, J.C.T., Meyer, J.-A., Tani, J. (eds.) SAB 2008. LNCS (LNAI), vol. 5040, pp. 437–446. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-69134-1_43
17. Muminov, A., Na, D., Lee, C., Kang, H.K., Jeon, H.S.: Modern virtual fencing application: monitoring and controlling behavior of goats using GPS collars and warning signals. Sensors 19(7), 1598 (2019)
18. Jurdak, R., et al.: Energy-efficient localization for virtual fencing. In: Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks, pp. 388–389 (2010)
19. Akhigbe, B.I., Munir, K., Akinade, O., Akanbi, L., Oyedele, L.O.: IoT technologies for livestock management: a review of present status, opportunities, and future trends. Big Data Cogn. Comput. 5(1), 10 (2021)
20. de Marcos, J.M.F., Muñoz, G.R., Tarifa, J.M.M., Stewart, B.G.: Survey on the performance of source localization algorithms (2017)
21. Marini, D., Cowley, F., Belson, S., Lee, C., Wilson, C.: Comparison of virtually fencing and electrically fencing sheep for pasture management. Animal Production Science (2022)
22. Llaria, A., Terrasson, G., Arregui, H., Hacala, A.: Geolocation and monitoring platform for extensive farming in mountain pastures. In: 2015 IEEE International Conference on Industrial Technology (ICIT), pp. 2420–2425. IEEE (2015)


23. Naureen, A., Zhang, N., Furber, S., Shi, Q.: A GPS-less localization and mobility modelling (LMM) system for wildlife tracking. IEEE Access 8, 102709–102732 (2020)
24. Farooq, M.S., Riaz, S., Abid, A., Abid, K., Naeem, M.A.: A survey on the role of IoT in agriculture for the implementation of smart farming. IEEE Access 7, 156237–156271 (2019)
25. Mohamed, S.A.S., et al.: A survey on odometry for autonomous navigation systems. IEEE Access 7, 97466–97486 (2019)
26. Bishop-Hurley, G.J., Swain, D.L., Anderson, D.M., Sikka, P., Crossman, C., Corke, P.: Virtual fencing applications: implementing and testing an automated cattle control system. Comput. Electron. Agric. 56(1), 14–22 (2007)
27. Butler, Z., Corke, P., Peterson, R., Rus, D.: Virtual fences for controlling cows. In: Proceedings of IEEE International Conference on Robotics and Automation, ICRA 2004, vol. 5, pp. 4429–4436. IEEE (2004)
28. Verdon, M., Horton, B., Rawnsley, R.: A case study on the use of virtual fencing to intensively graze Angus heifers using moving front and back-fences. Front. Animal Sci. 2 (2021)
29. Butler, Z., Corke, P., Peterson, R., Rus, D.: From robots to animals: virtual fences for controlling cattle. Int. J. Robot. Res. 25(5–6), 485–508 (2006)
30. Anderson, D.M., et al.: Gathering cows using virtual fencing methodologies (2009)
31. Brunberg, E.I., Bøe, K.E., Sørheim, K.M.: Testing a new virtual fencing system on sheep. Acta Agric. Scand. Sect. A Animal Sci. 65(3–4), 168–175 (2015)
32. Performance study of a virtual fence unit using a wireless sensor network in an IoT environment. In: 2014 20th IEEE International Conference on Parallel and Distributed Systems (ICPADS), pp. 873–875 (2014)
33. Campbell, D.L.M., Haynes, S.J., Lea, J.M., Farrer, W.J., Lee, C.: Temporary exclusion of cattle from a riparian zone using virtual fencing technology. Animals 9(1) (2019)
34. Langrock, R., et al.: Modelling group dynamic animal movement. Methods Ecol. Evol. 5(2), 190–199 (2014)
35. Fogarty, E.S., Swain, D.L., Cronin, G., Trotter, M.: Autonomous on-animal sensors in sheep research: a systematic review. Comput. Electron. Agric. 150, 245–256 (2018)
36. John, K., Philip, M., Mathew, M.M., Rajesh, P., Roby, R., Swathy, S.: Comparative study on different techniques for fencing and monitoring moisture content of soil. In: 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), vol. 1, pp. 54–58. IEEE (2019)
37. Gehring, T.M., VerCauteren, K.C., Cellar, A.C.: Good fences make good neighbors: implementation of electric fencing for establishing effective livestock-protection dogs. Human-Wildlife Interact. 5(1), 106–111 (2011)
38. Dodd, C.L., Pitchford, W.S., Hocking Edwards, J.E., Hazel, S.J.: Measures of behavioural reactivity and their relationships with production traits in sheep: a review. Appl. Animal Behav. Sci. 140(1–2), 1–15 (2012)
39. Friha, O., Ferrag, M.A., Shu, L., Maglaras, L.A., Wang, X.: Internet of things for the future of smart agriculture: a comprehensive survey of emerging technologies. IEEE/CAA J. Autom. Sinica 8(4), 718–752 (2021)
40. Odintsov Vaintrub, M., Levit, H., Chincarini, M., Fusaro, I., Giammarco, M., Vignola, G.: Precision livestock farming, automats and new technologies: possible applications in extensive dairy sheep farming. Animal 15(3), 100143 (2021)


41. Sharma, A., Jain, A., Gupta, P., Chowdary, V.: Machine learning applications for precision agriculture: a comprehensive review. IEEE Access 9, 4843–4873 (2020)
42. Quigley, T.M., Sanderson, H.R., Tiedemann, A.R., McInnis, M.L.: Livestock control with electrical and audio stimulation. Rangelands Arch. 12(3), 152–155 (1990)
43. Jachowski, D.S., Slotow, R., Millspaugh, J.J.: Good virtual fences make good neighbors: opportunities for conservation. Anim. Conserv. 17(3), 187–196 (2014)
44. Ranches, J., et al.: Effects of virtual fence monitored by global positioning system on beef cattle behavior. Transl. Animal Sci. 5(Suppl. S1), S144–S148 (2021)
45. Marini, D., Llewellyn, R., Belson, S., Lee, C.: Controlling within-field sheep movement using virtual fencing. Animals 8(3), 31 (2018)
46. Marini, D., Cowley, F., Belson, S., Lee, C.: The importance of an audio cue warning in training sheep to a virtual fence and differences in learning when tested individually or in small groups. Appl. Anim. Behav. Sci. 221, 104862 (2019)
47. Umstatter, C., Morgan-Davies, J., Waterhouse, T.: Cattle responses to a type of virtual fence. Rangeland Ecol. Manage. 68(1), 100–107 (2015)
48. Doniec, M., Detweiler, C., Vasilescu, I., Anderson, D.M., Rus, D.: Autonomous gathering of livestock using a multi-functional sensor network platform. In: Proceedings of the 6th Workshop on Hot Topics in Embedded Networked Sensors, pp. 1–5 (2010)
49. Anderson, D.M., Nolen, B., Fredrickson, E., Havstad, K., Hale, C., Nayak, P.: Representing spatially explicit directional virtual fencing (DVF™) data. In: 24th Annual ESRI International User Conference Proceedings, San Diego, CA (2004)
50. Muja, M., Lowe, D.G.: Scalable nearest neighbor algorithms for high dimensional data. IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2227–2240 (2014)
51. Ghosh, R.K., Das, S.K.: A survey on sensor localization. J. Control Theory Appl. 1 (2010)
52. Deepa, S., Vitur, H., Navaneeth, K., Vijayrathinam, S.: Animal monitoring based on IoT technologies. Waffen-und Kostumkunde J. 11, 332–336 (2020)
53. Gao, L., Sun, H., Liu, M.-N., Jiang, Y.: TDOA collaborative localization algorithm based on PSO and Newton iteration in WGS-84 coordinate system. In: 2016 IEEE 13th International Conference on Signal Processing (ICSP), pp. 1571–1575. IEEE (2016)
54. Benamrani, N.: Vers un système de projection icosaédral hiérarchique global sans distorsions pour cartographie Web. Ph.D. thesis, Université Laval (2015)
55. Muminov, A., et al.: Reducing GPS error for smart collars based on animal's behavior. Appl. Sci. 9(16), 3408 (2019)
56. Safitri, R.R., Pratiarso, A., Zainudin, A.: Mobile-based smart parking reservation system with rate display occupancy using heuristic algorithm and Haversine formula. In: 2020 International Electronics Symposium (IES), pp. 332–339. IEEE (2020)
57. Hartono, S., Furqan, M., Siahaan, A.P.U., Fitriani, W.: Haversine method in looking for the nearest masjid. Int. J. Recent Trends Eng. Res. (IJRTER) 3(8), 187–195 (2017)
58. Haversine formula. Wikipedia
59. Ahmed, A.J., et al.: A review of wireless sensor network. In: 2022 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1–5. IEEE (2022)


60. Schwager, M., Detweiler, C., Vasilescu, I., Anderson, D.M., Rus, D.: Data-driven identification of group dynamics for motion prediction and control. J. Field Robot. 25(6–7), 305–324 (2008)
61. Bhargava, K., Ivanov, S., Kulatunga, C., Donnelly, W.: Fog-enabled WSN system for animal behavior analysis in precision dairy. In: 2017 International Conference on Computing, Networking and Communications (ICNC), pp. 504–510. IEEE (2017)
62. Yamamoto, R., Ohzahata, S., Kato, T.: Adaptive geo-fencing with local storage architecture on ad hoc networks. In: 2018 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1–4. IEEE (2018)
63. Abbas, A.H., Habelalmateen, M.I., Jurdi, S., Audah, L., Alduais, N.A.M.: GPS based location monitoring system with geo-fencing capabilities. In: AIP Conference Proceedings, vol. 2173, p. 020014. AIP Publishing LLC (2019)
64. Kang, M.W., Chung, Y.W.: An improved hybrid routing protocol combining MANET and DTN. Electronics 9(3), 439 (2020)
65. Aaser, M.F., et al.: Is virtual fencing an effective way of enclosing cattle? Personality, herd behaviour and welfare. Animals 12(7), 842 (2022)
66. Lipschitz, F.: Expanding the field: virtual fencing as responsive landscape technology
67. Muminov, A., Na, D., Lee, C., Jeon, H.S.: Virtual fences for controlling livestock using satellite-tracking and warning signals. In: 2016 International Conference on Information Science and Communications Technologies (ICISCT), pp. 1–7. IEEE (2016)
68. Marsh, R.E.: Fenceless animal control system using GPS location information. US Patent 5,868,100, February 9, 1999
69. Suseendran, G., Balaganesh, D.: Cattle movement monitoring and location prediction system using Markov decision process and IoT sensors. In: 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), pp. 188–192. IEEE (2021)
70. Campbell, D.L.M., Lea, J.M., Farrer, W.J., Haynes, S.J., Lee, C.: Tech-savvy beef cattle? How heifers respond to moving virtual fence lines. Animals 7(9), 72 (2017)
71. Berthelot, G., Saïd, S., Bansaye, V.: How to use random walks to model the movement of wild animals. bioRxiv
72. Nóbrega, L., Tavares, A., Cardoso, A., Gonçalves, P.: Animal monitoring based on IoT technologies. In: 2018 IoT Vertical and Topical Summit on Agriculture - Tuscany (IOT Tuscany), pp. 1–5. IEEE (2018)
73. Kearton, T., Marini, D., Cowley, F., Belson, S., Lee, C.: The effect of virtual fencing stimuli on stress responses and behavior in sheep. Animals 9(1), 30 (2019)
74. Sadowski, S., Spachos, P.: RSSI-based indoor localization with the internet of things. IEEE Access 6, 30149–30161 (2018)
75. Cannizzaro, D., et al.: A comparison analysis of BLE-based algorithms for localization in industrial environments. Electronics 9(1), 44 (2019)
76. Guvenc, I., Chong, C.-C.: A survey on TOA based wireless localization and NLOS mitigation techniques. IEEE Commun. Surv. Tutor. 11(3), 107–124 (2009)
77. Amundson, I., Koutsoukos, X.D.: A survey on localization for mobile wireless sensor networks. In: Fuller, R., Koutsoukos, X.D. (eds.) Mobile Entity Localization and Tracking in GPS-less Environments. MELT 2009. LNCS, vol. 5801, pp. 235–254. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04385-7_16


78. Zafari, F., Gkelias, A., Leung, K.K.: A survey of indoor localization systems and technologies. IEEE Commun. Surv. Tutor. 21(3), 2568–2599 (2019)
79. Khelifi, F., Bradai, A., Benslimane, A., Rawat, P., Atri, M.: A survey of localization systems in internet of things. Mobile Netw. Appl. 24(3), 761–785 (2019)
80. Kumarasiri, R., Alshamaileh, K., Tran, N.H., Devabhaktuni, V.: An improved hybrid RSS/TDOA wireless sensors localization technique utilizing Wi-Fi networks. Mob. Netw. Appl. 21(2), 286–295 (2016)
81. Santos, V.D.N., Neves, B., Fonseca Ferreira, N.M.: Novel RSSI-based localization system for cattle and animal tracking. In: 2019 International Conference in Engineering Applications (ICEA), pp. 1–7. IEEE (2019)
82. Ojo, M.O., Viola, I., Baratta, M., Giordano, S.: Practical experiences of a smart livestock location monitoring system leveraging GNSS, LoRaWAN and cloud services. Sensors 22(1), 273 (2022)
83. Chai, S., An, R., Du, Z.: An indoor positioning algorithm using Bluetooth low energy RSSI. In: 2016 International Conference on Advanced Materials Science and Environmental Engineering, pp. 274–276. Atlantis Press (2016)
84. Lehmann, F., Pieczynski, W.: Suboptimal Kalman filtering in triplet Markov models using model order reduction. IEEE Signal Process. Lett. 27, 1100–1104 (2020)
85. Halcomb, E.J., Andrew, S.: Triangulation as a method for contemporary nursing research. Nurse Res. 13(2) (2005)
86. Baba, A.I., Wu, F.: Energy-accuracy trade-off in wireless sensor network localization. Int. J. Handheld Comput. Res. (IJHCR) 6(4), 1–18 (2015)
87. Dargie, W., Poellabauer, C.: Fundamentals of Wireless Sensor Networks: Theory and Practice. John Wiley & Sons (2010)
88. Mitilineos, S., Kyriazanos, D.M., Segou, O.E., Goufas, J.N., Thomopoulos, S.: Indoor localisation with wireless sensor networks. Progr. Electromagn. Res. 109, 441–474 (2010)
89. Jondhale, S.R., Jondhale, A.S., Deshpande, P.S., Lloret, J.: Improved trilateration for indoor localization: neural network and centroid-based approach. Int. J. Distrib. Sens. Netw. 17(11), 15501477211053997 (2021)
90. Liu, R., et al.: Selective AP-sequence based indoor localization without site survey. In: 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring), pp. 1–5. IEEE (2016)
91. Goldoni, E., Savioli, A., Risi, M., Gamba, P.: Experimental analysis of RSSI-based indoor localization with IEEE 802.15.4. In: 2010 European Wireless Conference (EW), pp. 71–77. IEEE (2010)
92. Félix, G., Siller, M., Alvarez, E.N.: A fingerprinting indoor localization algorithm based deep learning. In: 2016 Eighth International Conference on Ubiquitous and Future Networks (ICUFN), pp. 1006–1011. IEEE (2016)
93. Zeng, X., Yu, B., Liu, L., Qi, X., He, C.: Advanced combination localization algorithm based on trilateration for dynamic cluster network. IEEE Access 7, 180965–180975 (2019)
94. Gnanasekera, M., Katupitiya, J., Savkin, A.V., Eranga De Silva, A.H.T.: A range-based algorithm for autonomous navigation of an aerial drone to approach and follow a herd of cattle. Sensors 21(21), 7218 (2021)
95. Koh, K.C., Cho, H.S.: A smooth path tracking algorithm for wheeled mobile robots with dynamic constraints. J. Intell. Robot. Syst. 24(4), 367–385 (1999)


96. Li, X., Li, X.: Reactive deployment of autonomous drones for livestock monitoring based on density-based clustering. In: 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 2421–2426. IEEE (2019)
97. Molapo, N.A., Malekian, R., Nair, L.: Real-time livestock tracking system with integration of sensors and beacon navigation. Wirel. Pers. Commun. 104(2), 853–879 (2019)
98. Han, G., Xu, H., Duong, T.Q., Jiang, J., Hara, T.: Localization algorithms of wireless sensor networks: a survey. Telecommun. Syst. 52(4), 2419–2436 (2013)
99. Buehrer, R.M., Wymeersch, H., Vaghefi, R.M.: Collaborative sensor network localization: algorithms and practical issues. Proc. IEEE 106(6), 1089–1114 (2018)
100. Han, G., Jiang, J., Shu, L., Xu, Y., Wang, F.: Localization algorithms of underwater wireless sensor networks: a survey. Sensors 12(2), 2026–2061 (2012)
101. Xia, F., Liu, J., Nie, H., Fu, Y., Wan, L., Kong, X.: Random walks: a review of algorithms and applications. IEEE Trans. Emerg. Top. Comput. Intell. 4(2), 95–107 (2019)

Digital Technologies for Tailored Agronomic Practices for Small-Scale Farmers

Dieu-Donné Okalas Ossami(1,2)(B), Henri Bouityvoubou(1), Augusto Akira Hecke Kuwakino(2), Octave Moutsinga(1), and Ousmane Sall(3)

1 Université des Sciences et Techniques de Masuku (USTM), Franceville, Gabon
[email protected], [email protected], [email protected]
2 E-TUMBA, Avenue de l'Europe, 34830 Clapiers, France
[email protected]
3 Université virtuelle du Sénégal (UVS), Dakar, Senegal
[email protected]
http://www.e-tumba.com

Abstract. The rapid spread of digital technologies over the past decades has been changing the way agricultural extension services are delivered to farmers in rural areas of Africa. This shift is driven by digital agricultural advisory initiatives, which provide farmers with improved knowledge and practices in order to increase their production and thus their income. Although promising, these initiatives often have a limited impact on agricultural practices or farm-gate prices for three main reasons: (1) the advice is too general and does not match local farming processes; (2) scaling up is difficult, because in-person agricultural extension efforts are expensive and fraught with accountability problems; and (3) cost. In this context, it becomes interesting to investigate how to turn the widespread adoption of mobile technology into real agricultural development opportunities. This paper presents a tool-supported approach that overcomes these difficulties. In our approach, agronomic extension services are science-based, locally customized and individualised at plot level. Advice is delivered at the appropriate time during the agricultural season by an automated crop management plan designed with local extension service support. The advice is thus specific, and the extension officers can reach many more farmers than through field visits alone. Finally, as the implementation service is cloud-based, costs are reduced.

Keywords: AgTech · Agronomic extension services · digital tools · Smart agriculture · Smallholder farmers

c ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023  Published by Springer Nature Switzerland AG 2023. All Rights Reserved R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 149–159, 2023. https://doi.org/10.1007/978-3-031-34896-9_10

1 Introduction

1.1 The Context

Sub-Saharan Africa faces a dramatic dilemma: its current population of 1.1 billion is projected to reach 2.1 billion by 2050, while food production is rising at a slower rate and remains below the world average. In 1960, Eastern and Western Africa yielded 884 and 674 kg/ha respectively, against a world average of 1353 kg/ha. By 2019 the world average had reached 4113 kg/ha, but these two regions reached only 2027 and 1269 kg/ha. This explains why 17.6% of Africa's population suffered from undernourishment in 2014, a figure that reached 19.1% in 2019. Although they represent more than 60% of the sub-Saharan population and 23% of their countries' GDP, smallholder farmers are those who suffer most from food insecurity. Supporting them is essential to reduce poverty and meet the growing food demand in the face of climate change.

1.2 An Underperforming Agricultural Sector

Some of the reasons why the agricultural sector is underperforming in sub-Saharan Africa - and most of these constraints are shared by the vast majority of the rural poor in Gabon - are low-quality inputs (seeds, fertilizers, ...), lack of access to agronomic knowledge and best practices, illiteracy, fragmented markets, the age of farmers, the rural exodus of young people, and the impacts of climate change [2]. As a result, Gabon imports almost 90% of the food consumed in the country. Another aspect of the problem is that 80% of local production in Gabon comes from smallholder farmers in rural areas, who also bear the burden of domestic chores and are responsible for feeding their families. Without assistance, their average food production could reach one ton per hectare, but most of them harvest only a few hundred kilograms. One of the biggest challenges they face is the lack of individualized agronomic monitoring and guidance, due to the limited number of agricultural experts: the extension officer-to-farmer ratio is approximately 1:1000, a common situation in many sub-Saharan African countries.

1.3 Improving Agricultural Extension Services to Boost Local Production

As widely reported, improved extension services and the proper use of agricultural information in developing countries can enable farmers to adopt and/or adapt new and improved practices that enhance yields and incomes [13]. This can be achieved by adapting the pedagogical model, using information and communications technology (ICT) to reach farmers directly with more tailored and timely information, incentivizing trainers based on learning outcomes, and leveraging social networks for last-mile information delivery. In fact, the extant literature shows that access to accurate agronomic information can lead to change and engender progress in the agricultural sector by empowering farmers with the ability to make informed decisions pertaining to value-adding agricultural production [7–10,12]. In support of this point, several experimental studies [1,3,6] show that mobile-based extension information improves farmers' knowledge and self-reported adoption, or planned adoption, of recommended agricultural inputs and practices [4,14–16]. This clearly implies that if one wants to see substantial development in the agricultural sector [5], farmers must have access to timely, reliable, and field-specific agricultural information. As such, prosperity in the sector significantly depends on smallholder farmers' ability not just to access and acquire reliable agricultural information but also to use it at the right time and in the right location [11].

Lessons learned from our field work with African farmers suggest that the knowledge provided to a farmer needs to be individualized down to the level of his particular plot of land for best accuracy and relevance to the local context. It also needs to be generated through dynamic crop management planning and delivered via digital tools, both for scalability and to get around the local shortage of experienced agronomists. This paper presents a tool-supported approach that overcomes these difficulties. In our approach, agronomic extension services delivered to farmers are science-based, locally customized and individualised at the plot level. Advice is delivered at the appropriate time during the agricultural season by an automated crop management plan designed with local extension service support. This solves both the relevance of the advice, which is then specific, and the problem of scale, since extension officers can reach many more farmers than through field visits alone. Finally, as the implementation service is cloud-based, costs are inevitably reduced.

The paper is organised as follows. Section 2 presents the big picture of our method for elaborating plot-level tailored advice. Section 3 introduces the case study used to illustrate the ideas presented in the paper. Section 4 illustrates the application of the proposed methodology to the case study. Section 5 concludes and introduces some future work.

2 Proposed Method

2.1 General Architecture of the Proposed Method

The assessment presented in this paper is based on research conducted in various smallholder landscapes, visiting hundreds of smallholders, cooperatives and agribusinesses in sub-Saharan Africa. We have organised the work around four dimensions:

1. Step 1: Understanding the local context and collecting data (soil profiles, observations, actual yields, farmers, traditional practices in place, crops, etc.), including the delimitation of plots. The types of agricultural activities are one of the main factors responsible for variations in the information needs of rural smallholders, and must be identified precisely in order to design or adapt the tools properly.


D.-D. O. Ossami et al.

2. Step 2: Categorization of farmers’ information needs according to context (situation-specific, internally or externally driven), frequency (recurrent or new need), predictability (anticipated or unexpected need), importance (degree of urgency) and complexity (easily solved or difficult), in order to design a tailored agronomic extension service that is easily adaptable to a particular context or location.
3. Step 3: Integration of the captured needs into the Platform, coupled with a mobile phone communication service for scalability, both to get around the local lack of experienced agronomists and to reach many more farmers than through field visits alone, especially where the extension officer-to-farmer ratio is high (approximately 1:1000), a common case in many sub-Saharan African countries.
4. Step 4: Automatic dissemination of agronomic advice through the automation of technical itineraries, to monitor farmers’ activity and send reminders of interventions to be carried out via simple text messaging (SMS) in real time.

Figure 1 illustrates the general framework of the implementation of the proposed method.

Fig. 1. The big picture of the framework of the digitalization of the proposed method

2.2 Proposed Model Stages

From the framework presented in Fig. 1, the implementation of our method results in the architecture presented in Fig. 2. In this figure, the farmer receives recommendations from the Platform via simple text messages. Literate farmers can register all actions they perform by sending text messages (through formatted commands). This allows the Platform to obtain up-to-date field information, which is then made available to the Extension Officers and authorities via action timelines and dashboards. As the Platform receives farmers’ feedback, it combines it with the information already provided on what has been done so far to calculate the appropriate actions to apply next. The use of mobile phones in a two-way communication between farmers and the Platform allows real-time monitoring and easy collection of field data. This approach gives added value to the Extension Officer, whose role is no longer only to check the compliance of farmers’ actions, but also to act proactively by giving explanations and precise recommendations throughout the crop cycle.
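The two-way SMS exchange described above can be sketched as follows. The command format (`DONE <plot> <action> <date>`), the field names and the data structures are illustrative assumptions, not the actual FieldSim protocol.

```python
# Hypothetical sketch of the two-way SMS flow: a farmer reports a completed
# action with a formatted command, and the platform records it on a per-plot
# timeline. Command grammar and codes are invented for illustration.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Timeline:
    plot_id: str
    actions: list = field(default_factory=list)  # list of (date, action_code)

    def record(self, action_code: str, when: date) -> None:
        self.actions.append((when, action_code))


def parse_command(sms_text: str):
    """Parse a formatted command such as 'DONE PLOT12 WEEDING 2022-11-03'."""
    parts = sms_text.strip().upper().split()
    if len(parts) != 4 or parts[0] != "DONE":
        raise ValueError(f"unrecognized command: {sms_text!r}")
    _, plot_id, action_code, iso_date = parts
    return plot_id, action_code, date.fromisoformat(iso_date)


timelines = {}
plot, action, when = parse_command("done plot12 weeding 2022-11-03")
timelines.setdefault(plot, Timeline(plot)).record(action, when)
print(timelines["PLOT12"].actions)  # [(datetime.date(2022, 11, 3), 'WEEDING')]
```

In a deployed system the parsed actions would feed the timelines and dashboards shown to Extension Officers, and malformed commands would trigger a help SMS back to the farmer.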

Fig. 2. Architecture of the proposed method

3 The Case Study

The objective is to bring small-scale coffee/cacao producers into a concrete process of increasing their agricultural production in a sustainable and environmentally friendly way, by making the best use of subsidized nutritional inputs. The Gabonese government hopes to have a knock-on effect on the food chains through a parallel development of the coffee/cacao sector, guaranteeing the country’s food autonomy with a controlled impact on the environment. In particular, the project will allow small farmers, who today have difficulty ensuring their food autonomy, to make a leap forward towards innovative and sustainable cultivation practices, by benefiting from digital technologies, climate-smart practices and


Fig. 3. Sample Dashboard view

distribution of information adapted to the managers of the sector, up to personalized agricultural advice to small producers. To achieve this objective, we started by collecting reliable and up-to-date field data in the region of Ngounié (southern Gabon). These data were then analysed, categorized and integrated into our Platform FieldSim (see Figs. 3 and 4). The results of this first study are presented in the next section.

Fig. 4. Sample interventions’ timeline

4 The Application on the Case Study

With the data collected in FieldSim we can propose a visualisation of this data to provide a wide view of the context, helping the decision-making process. Business Intelligence tools are a way to provide quick and easy visualisations, with filters and aggregations. We used Power BI from Microsoft to produce the graphs and maps. We grouped farmers and their plots per crop and noticed an inverse relationship between the number of farmers and cultivated surface: in Ngounié there are fewer cocoa farmers, but their surfaces are larger than those of coffee farmers (Fig. 5). Since we collected the plots’ borders and calculated their areas, we can plot their locations and see their geographical distribution (see Fig. 6).
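The plot areas mentioned above, derived from the collected plot borders, can be approximated directly from GPS coordinates. The sketch below uses a local equirectangular projection plus the shoelace formula; the coordinates are invented and the approximation is only reasonable for small plots.

```python
# Approximate the area of a plot from its (lat, lon) border vertices:
# project degrees to local metres around the mean latitude, then apply
# the shoelace formula. Illustrative sketch, not the FieldSim pipeline.
import math

EARTH_RADIUS_M = 6_371_000.0


def plot_area_m2(border: list[tuple[float, float]]) -> float:
    """border: ordered list of (lat, lon) vertices in degrees."""
    lat0 = math.radians(sum(lat for lat, _ in border) / len(border))
    # Equirectangular projection: degrees -> local metres.
    pts = [(math.radians(lon) * EARTH_RADIUS_M * math.cos(lat0),
            math.radians(lat) * EARTH_RADIUS_M) for lat, lon in border]
    # Shoelace formula over the closed polygon.
    area2 = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area2 += x1 * y2 - x2 * y1
    return abs(area2) / 2.0


# A roughly 100 m x 100 m square near the equator (about one hectare).
square = [(0.0, 10.0), (0.0, 10.0009), (0.0009, 10.0009), (0.0009, 10.0)]
print(f"{plot_area_m2(square):.0f} m^2")
```

For larger plots or higher accuracy, a proper geodesic area computation (e.g. on the WGS84 ellipsoid) would be preferable to this flat-earth approximation.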

Fig. 5. Relationship between farmers and surfaces

Fig. 7 shows, among the farmers that have more than one plot, how many grow only cocoa, only coffee, or both. Of the 15 farmers with more than one plot, 3 have both crops. We also asked about farmers’ ability to read, to anticipate difficulties in implementing the SMS system: True denotes those who answered that they can read, blank those for whom no answer was recorded, and False those who cannot read (Fig. 8). We also break this down by age, where 0 (zero) represents [0,20), 20 represents [20,40), 40 represents [40,60), 60 represents [60,80), and 80 is above 80. This way we can better develop strategies to target farmers. Most farmers from Ngounié know how to read and are between 40 and 60 years old. We also digitalized the most important agricultural practices (see Fig. 9) that farmers apply in their orchards; this way, the person responsible for a zone can track farmers’ practices and send advice matched to each practice at the right time. Eight cropping management plans were designed for cocoa and coffee: seedling, implantation (year 0), before production (years 1–3), and production (year 4 onwards). For each practice there are SMS messages that were written in advance; if needed, they can be written in a local language and sent to the farmers assigned to that language. In Gabon, people are taught to read and write in French, and local languages don’t have a formal
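The age grouping used above can be sketched as follows; the farmer records are invented for illustration, and the 20-year bins mirror the description in the text (with 80 standing for "above 80" and None for an unrecorded literacy answer).

```python
# Bucket farmers into 20-year age bins and count them together with their
# literacy answer (True / False / None). Data invented for illustration.
from collections import Counter


def age_bucket(age: int) -> int:
    """Return the lower bound of the 20-year bin: 0, 20, 40, 60 or 80."""
    return min(age - age % 20, 80)


farmers = [
    {"age": 45, "can_read": True},
    {"age": 52, "can_read": True},
    {"age": 63, "can_read": False},
    {"age": 38, "can_read": None},  # no answer recorded
]

counts = Counter((age_bucket(f["age"]), f["can_read"]) for f in farmers)
for (bucket, literacy), n in sorted(counts.items(), key=lambda kv: kv[0][0]):
    print(f"[{bucket},{bucket + 20}) can_read={literacy}: {n}")
```

Such counts are exactly what the dashboard in Fig. 8 visualises, and they drive the choice of SMS language and phrasing per target group.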


Fig. 6. Geographical farmers distribution according to the plots and surface

Fig. 7. Average number of plots cacao/coffee per farmers

writing and are not taught in schools, which disfavours the implementation of SMS in local languages in Gabon; in other African countries it is possible.


Fig. 8. Number of farmers and their ability to read text messages per age

Fig. 9. Digitalization of cacao (left) and coffee (right) technical routes

5 Conclusion and Future Work

The use of digital tools enables the collection and visualisation of data, giving a wide view of farmers and their context. They help to provide insights into how to drive agricultural advice and the implementation of a bulk SMS system. Farmers were grouped according to their age, ability to read, location, surface and crop; on this basis, it will be possible to design the SMS messages to be sent according to their needs along the season. The next steps consist in collecting farmers’ practices, mainly their execution dates, and then correlating them with soil data from the ISDA soil map, crop


models, and the yields at the end of the season. The presentation of the first results to the country’s authorities raised great interest. The data collection, analysis and digitalization of agronomic extension services, initially planned in four regions (Estuaire, Haut-Ogooué, Ngounié and Woleu-Ntem), will be extended to the whole country.

References

1. Cole, S.A., Fernando, A.N.: ‘Mobile’izing agricultural advice: technology adoption, diffusion and sustainability. Econ. J. 131(633), 192–219 (2020). https://doi.org/10.1093/ej/ueaa084
2. Eric, L., Dieu Donné, O.O., Ralf, K.: Addressing food insecurity in the Democratic Republic of the Congo (2020)
3. Fu, X., Akter, S.: The impact of mobile phone technology on agricultural extension services delivery: evidence from India. J. Dev. Stud. 52(11), 1561–1576 (2016). https://doi.org/10.1080/00220388.2016.1146700
4. Gandhi, R., Veeraraghavan, R., Toyama, K., Ramprasad, V.: Digital Green: participatory video for agricultural extension. In: 2007 International Conference on Information and Communication Technologies and Development, pp. 1–10. IEEE (2007)
5. Kaske, D., Mvena, Z.S.K., Sife, A.S.: Mobile phone usage for accessing agricultural information in southern Ethiopia. J. Agric. Food Inf. 19(3), 284–298 (2018). https://doi.org/10.1080/10496505.2017.1371023
6. Larochelle, C., Alwang, J., Travis, E., Barrera, V.H., Dominguez Andrade, J.M.: Did you really get the message? Using text reminders to stimulate adoption of agricultural technologies. J. Dev. Stud. 55(4), 548–564 (2019)
7. Lwoga, E.T., Stilwell, C., Ngulube, P.: Access and use of agricultural information and knowledge in Tanzania. Libr. Rev. 60(5), 383–395 (2011)
8. Mkenda, P.A., Mbega, E., Ndakidemi, P.A.: Accessibility of agricultural knowledge and information. J. Biodivers. Environ. Sci. 11(5), 216–228 (2017). https://www.innspub.net/wp-content/uploads/2017/12/JBES-Vol-11-No-5-p-216-228.pdf
9. Mtega, W.P., Ngoepe, M., Dube, L.: Factors influencing access to agricultural knowledge: the case of smallholder rice farmers in the Kilombero district of Tanzania. S. Afr. J. Inf. Manag. 18(1), 1–8 (2016)
10. Mwantimwa, K.: Use of mobile phones among agro-pastoralist communities in Tanzania. Inf. Dev. 35(2), 230–244 (2019)
11. Ndimbwa, T., Ndumbaro, F., Mwantimwa, K.: Delivery mechanisms of agricultural information and knowledge to smallholder farmers in Tanzania: a meta-analysis study. Univ. Dar es Salaam Libr. J. 14(2), 87–98 (2019)
12. Aina, L.O.: Towards improving information access by semi and non-literate groups in Africa: a need for empirical studies of their information-seeking and retrieval patterns. In: Bothma, T.J.D., Kaniki, A. (eds.) ProLISSA 2004: Progress in Library and Information Science in Southern Africa, Proceedings of the Third Biennial DISSAnet Conference, Farm Inn, Pretoria, South Africa, pp. 11–20. University of South Africa (2004)
13. Soyemi, O.D., Haliso, Y.: Agricultural information use as determinant of farm income of women in Benue State, Nigeria. Res. Humanit. Soc. Sci. 5(18), 1–6 (2015)


14. Tjernström, E., Lybbert, T.J., Hernández, R.F., Correa, J.S.: Learning by (virtually) doing: experimentation and belief updating in smallholder agriculture. J. Econ. Behav. Organ. 189, 28–50 (2021)
15. Van Campenhout, B., Spielman, D.J., Lecoutere, E.: Information and communication technologies to provide agricultural advice to smallholder farmers: experimental evidence from Uganda. Am. J. Agr. Econ. 103(1), 317–337 (2021)
16. Vasilaky, K., Toyama, K., Baul, T., Mangal, M., Bhattacharya, U.: Learning digitally: evaluating the impact of farmer training via mediated videos. In: Northeast Universities Development Consortium Conference, Providence, RI, vol. 7 (2015)

Disjoint Routing Algorithms: A Systematic Literature Review

Adoum Youssouf¹, Daouda Ahmat¹,²(B), and Mahamat Borgou³

¹ Virtual University of Chad, N’Djamena BP: 5711, Chad
{adoum.youssouf,daouda.ahmat}@uvt.td
² University of N’Djamena, 1117 N’Djamena, Chad
³ National School of Information and Communication Technologies, N’Djamena 5363, Chad
[email protected]

Abstract. The expansion of the Internet and technological innovations have revolutionized the world. This digital effervescence has fostered an increased connectivity of services and applications. Optimal use of these requires support for quality of service (QoS), fault resilience, reduced transmission delay, reliable packet delivery, network security, and fair sharing of the resources available in the network as needed. In order to satisfy these requirements, disjoint routing algorithms have been developed to improve network performance. These algorithms distribute the load evenly over several paths between the different nodes of the network. The objective of this article is to provide a comprehensive literature review on disjoint routing algorithms as a whole, unlike several research works in the literature which focus only on specific network architectures including MANET, SDN, WSN, ... First, we review the advantages and disadvantages of disjoint routing algorithms, which will allow us to identify future research work. Then, we present a detailed study of disjoint routing algorithms. Finally, we describe disjoint routing algorithms based on some specific network architectures.

Keywords: Multipath routing · Vertex-disjoint paths · Edge-disjoint paths

1 Introduction

Nowadays, the Internet and technological innovations continue to advance at an impressive rate in various fields, including communication networks, multimedia applications, the energy industry, and very large scale integration (VLSI), bringing in their wake an increased connectivity of services and applications. In order to ensure optimal connectivity and efficient data sharing in the network, new disjoint routing techniques have been developed to improve the quality of service (QoS) in the network.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 160–192, 2023. https://doi.org/10.1007/978-3-031-34896-9_11


On the other hand, multipath routing allows data to be sent over a set of paths leading from a source to a destination. Disjoint routing algorithms appear to be efficient solutions for ensuring traffic continuity in case of failure of a route between source and destination [1]. They also provide benefits such as improved packet delivery reliability [2], reduced network congestion, increased network security [3–5] and higher aggregate bandwidth [6]. They improve communication performance and aggregate the resources available in the network. However, disjoint routing algorithms also have several drawbacks, including special control messages, duplicate packet handling, longer paths, and a large number of route requests. Overall, this paper contributes a holistic investigation of disjoint routing algorithms as a whole, unlike several works in the literature that focus only on specific network architectures such as MANET, SDN, WSN, etc. The study attempts to encompass almost all relevant scientific publications. The specific contributions of the study are as follows:

• Exhaustive analysis of disjoint routing on specific architectures, evaluated on different metrics.
• Comparative study of the advantages and disadvantages of disjoint routing algorithms.
• Taxonomy of applications of multipath routing.
• Discussion of relevant aspects of disjoint routing approaches: rational load distribution, resilience to ensure service continuity in case of failures, packet delivery reliability, etc.

The rest of the review is organized as follows: Sect. 2 presents the body of related work on surveys of disjoint routing algorithms. Section 3 introduces the notion of disjoint path types. Section 4 describes the advantages and disadvantages of disjoint routing algorithms. Section 5 presents disjoint routing algorithms based on specific network architectures. Section 6 presents analyses and approaches to disjoint routing. Finally, Sect. 7 concludes this paper by presenting future work. For ease of reading, the acronyms used are presented in Table 1.

Table 1. Notations used

Name | Description
CandidateSet | all candidates
TraceSet | the trace set
Failprobability | the probability of end-to-end failure of the paths
m_max | upper limit of
P_u | end-to-end reliability requirement
π⁰_{S,D}(t) | the path reliability set
ETX_j | link cost function at node j
Route path | route of the itinerary
Route reply | a set of route response messages
Route request | route requests from origin to receiver
RPT | received packets throughput
S’s bucket | out of the bucket
BL | length of the bucket outlet port
PN | number of ports in the bucket
SW | OpenFlow switch
tmp_bucket | temporary bucket
f0 | a zero-value stream containing no paths
nextHop | next hop
stack.pop | pop from the stack
S_neighbors | list of unsorted neighbouring nodes
S_sorted_neighbors | list of sorted neighbouring nodes
ResBatt | residual battery
w_uv = δ_uv^(b−1) | the capacity assigned to the links
P_L | set that contains the disjoint paths in the original graph
P_N | set that contains the node-disjoint paths in the original graph
TCP | transmission control protocol
UDP | user datagram protocol

1.1 Research Sources

This method aims at listing and collecting reliable data related to the theme during research. We conducted an exhaustive search on several electronic databases, the list of which is as follows:

• IEEE Xplore
• Google Scholar
• ACM
• ScienceDirect
• ResearchGate
• Springer
• Wiley

The information retrieval procedure was performed on the above-mentioned electronic databases. These databases include the most important journals and conferences in the field of computer networking, specifically on the disjoint routing approach based on specific network architectures, which are listed in Table 2.

Table 2. Fundamental journals and conferences

Classification | Abbreviation | Description
Journal | | IEEE Access
Journal | | IEEE Transactions on Wireless Communications
Journal | | International Journal of Distributed Sensor Networks
Journal | JISE | Journal of Information Science and Engineering
Journal | IJCAT | International Journal of Computer Applications Technology and Research
Journal | JNCA | Journal of Network and Computer Applications
Journal | IEICE | Institute of Electronics, Information and Communication Engineers
Journal | SIAM | Society for Industrial and Applied Mathematics
Journal | JTCB | Journal of Combinatorial Theory, Series B
Journal | CIT | Journal of Computing and Information Technology
Journal | | International Journal of Communication Systems
Conference | IACC | IEEE International Advance Computing Conference
Conference | I-SPAN | International Symposium on Pervasive Systems, Algorithms and Networks
Conference | FOCS | IEEE Symposium on Foundations of Computer Science
Conference | | International Conference on Ubiquitous Intelligence and Computing
Conference | ICRAIE | IEEE International Conference on Recent Advances and Innovations in Engineering
Conference | ICECS | International Conference on Electronics and Communication System
Conference | | International Conference on Informatics, Electronics
Conference | ICNP | International Conference on Network Protocols
Conference | | International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery
Conference | CMCE | International Conference on Computer, Mechatronics, Control and Electronic Engineering
Conference | JCSSE | International Joint Conference on Computer Science and Software Engineering
Conference | | International Conference on Information and Telecommunication Technologies and Radio Electronics
Conference | LCN | IEEE Conference on Local Computer Networks


1.2 Search Strategy

This section mainly elaborates the search steps used to identify articles deemed relevant (journal quality, number of citations, impact factor, etc.) in accordance with the theme. The search strategy adopted is based on four principles:

• Principle 1: Select articles published in English in indexed journals.
• Principle 2: Read the title and abstract of the selected articles.
• Principle 3: List only articles dealing with disjoint routing algorithms.
• Principle 4: Categorize and reference items using disjoint routing approaches.

The four-pronged research strategy is shown in Fig. 1.

Fig. 1. Research strategy

1.3 Contributions of this Study

Unlike several works in the literature that focus only on specific network architectures (MANET, SDN, WSN, etc.), this study addresses disjoint routing algorithms in their entirety. We conduct a comprehensive review of related work on disjoint routing algorithms based on specific network architectures. We also highlight their advantages and limitations, while indicating some security challenges related to information protection and packet scheduling. In addition, we discuss the technical notions of disjoint routing and compare the effectiveness of these approaches in terms of their complexity. Finally, we conclude on the basis of our analysis that disjoint routing algorithms can improve communication performance and aggregate the resources available in the network.

2 Related Work

The search for disjoint paths has received attention from several researchers because of its interest in many applications, such as very large scale integrated circuit technology [7], communication protocols [8], secure data transmission protocols [9], robust design and optimization of telecommunication networks [10] and the reliability of network communications [11]. Disjoint routing algorithms have been widely studied for decades in various contexts in the literature. Tsai et al. [12] presented a study on disjoint routing approaches based on wireless ad hoc and mesh networks. In the case of wireless ad hoc networks, the approaches surveyed exploit properties such as mobility, interference and architecture to increase routing efficiency. The path selection protocol of the IEEE 802.11s mesh standard handles both on-demand and proactive tree-based routing. Selvam et al. [13] proposed a survey on disjoint routing algorithms related to data security in wireless sensor networks. Standard security protection mechanisms cannot be implemented directly in such networks due to low energy and low computational power; some research has shown that the security of sensor networks can be enhanced by applying new cryptographic algorithms [9]. Hassan et al. [14] conducted a survey on disjoint routing approaches oriented to wireless multimedia sensor networks; these protocols ensure network performance by providing quality of service. Fu et al. [15] studied the disjoint multipath routing algorithms inherent to software-defined networks (SDN). The authors compared the disjoint approaches to the classical single-path routing approach to achieve load balancing. Tarique et al. [16] proposed a review on disjoint routing techniques for mobile ad hoc networks; the primary objective of this review is to increase the reliability of mobile ad hoc networks based on delay and transmission rate. In Hodroj et al.
[17], the authors reviewed the literature on methods, mechanisms, and latest standards for video streaming in multipath overlay networks and multihoming.


In addition, the authors showed that the use of multi-homing increases reliability, resiliency and performance. Finally, different studies have been conducted on mathematical and statistical approaches to generate adaptive algorithms. Afzal et al. [18] presented a comprehensive review of the literature on wireless video streaming over disjoint paths, evaluating each approach at the protocol stack level from the beginning to the end of a time period. The authors presented a taxonomy of the different schemes. Myoung Lee et al. [19] proposed a study on the different mechanisms of disjoint routing algorithms. These mechanisms can be used in MPLS/GMPLS networks to enhance network performance through information-carrying capacity techniques. In addition, the discussed algorithms provide efficient solutions for computing multiple paths, reducing delay and increasing throughput. Satav et al. [20] proposed a robust and less energy-consuming disjoint routing technique for mobile ad hoc networks. The proposed techniques mainly select the best path among the available paths between the end nodes of the network. In the work of Adibi et al. [21], the authors proposed a study on various disjoint routing approaches for mobile ad hoc networks; these approaches are sensitive to security and computational power. Nazila and Raziyeh [22] investigated existing energy-efficient routing mechanisms in MANETs. The proposed approaches allow for efficient energy use and increased network performance. Qadir et al. [23] proposed a comprehensive survey of the literature on multipath routing at the network layer. This paper focuses on the problems related to multipath propagation, namely the control plane problem, which consists in computing and selecting routes, and the data plane problem, which consists in dividing the traffic flow over the computed paths. Gulati et al. [24] presented a survey on some multipath routing protocols for mobile ad hoc networks.
The approaches presented in this survey provide optimal quality of service. Radi et al. [25] presented a comprehensive analysis of multipath routing protocols for wireless sensor networks. The proposed approaches improve network performance by efficiently using the available resources. Dua et al. [26] proposed a systematic literature review on various routing schemes for vehicular ad hoc networks. The purpose of the proposed approach is to select a particular strategy based on its applicability to a particular application. There are also related works on other disjoint categories. For example, in the work of Daouda et al. [27], the authors applied Dijkstra’s algorithm to search for disjoint paths. The approach is based on graph traversal to determine the shortest path in a positively weighted graph. Then, the same procedure is applied on the previously determined residual graph, until P paths are determined, with P ≤ k and k being the minimum number of disjoint paths required. The disjoint paths thus determined are used in the routing of anonymous subpackets. Chekuri and Khanna [28] proposed a greedy non-combinatorial algorithm using a subroutine for the V-separable version of the problem. The algorithm [29] first finds the shortest paths and removes them from the graph; then, when there


are still pairs of information to route, a node v is considered and the paths disjoint at v are used as a strategy to find all disjoint paths in the residual graph G that pass through v. Most of the works above focus on specific architectures. The related work covered by this survey is summarized in Table 3 and illustrated in the form of a taxonomy (see Fig. 2).
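The successive shortest-path strategy attributed above to Daouda et al. [27] (run Dijkstra, remove the edges of the path found, and repeat on the residual graph) can be sketched as follows on a directed, positively weighted graph; the example graph is illustrative and this is not the authors' actual implementation.

```python
# Find up to k edge-disjoint shortest paths by repeated Dijkstra runs,
# deleting the edges of each path found from the residual graph.
import heapq


def dijkstra(adj, s, d):
    """adj: {u: {v: weight}}. Return the shortest s -> d path, or None."""
    dist, prev, seen = {s: 0.0}, {}, set()
    heap = [(0.0, s)]
    while heap:
        du, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == d:
            path = [d]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return path[::-1]
        for v, w in adj.get(u, {}).items():
            nd = du + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None  # d unreachable in the residual graph


def edge_disjoint_paths(adj, s, d, k):
    adj = {u: dict(nbrs) for u, nbrs in adj.items()}  # residual copy
    paths = []
    while len(paths) < k:
        path = dijkstra(adj, s, d)
        if path is None:
            break
        for u, v in zip(path, path[1:]):  # remove the edges just used
            del adj[u][v]
        paths.append(path)
    return paths


graph = {"s": {"a": 1, "b": 2}, "a": {"d": 1}, "b": {"d": 1}, "d": {}}
print(edge_disjoint_paths(graph, "s", "d", 3))  # [['s', 'a', 'd'], ['s', 'b', 'd']]
```

Note that this greedy scheme is a heuristic: removing a shortest path can disconnect later paths, which is why the survey reports the procedure stopping at P ≤ k paths.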

Fig. 2. Taxonomy of applications of the multipath routing

3 Kinds of Disjoint Paths

This section presents approaches for finding disjoint paths in given graphs. As shown in Figs. 3 and 4, vertex-disjoint and edge-disjoint cases can be considered when determining the disjoint paths. Table 4 summarizes the different problems of finding disjoint paths in graphs and their features. Disjoint routing algorithms improve, in particular, load balancing and packet delivery reliability with a low collision probability, and ensure fault tolerance in case of equipment failure.
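The two notions can be checked directly: paths are edge-disjoint if they share no edge, and vertex-disjoint if they additionally share no intermediate vertex (source and destination excluded). A minimal sketch with illustrative paths:

```python
# Check edge-disjointness and vertex-disjointness of two simple paths,
# each given as a list of vertices from source s to destination d.
def edges(path):
    """Set of directed edges traversed by the path."""
    return set(zip(path, path[1:]))


def edge_disjoint(p1, p2):
    return not (edges(p1) & edges(p2))


def vertex_disjoint(p1, p2):
    # Shared intermediate vertices; s and d are excluded by convention.
    shared = set(p1[1:-1]) & set(p2[1:-1])
    return edge_disjoint(p1, p2) and not shared


a = ["s", "1", "2", "d"]
b = ["s", "3", "4", "d"]
c = ["s", "3", "2", "5", "d"]  # shares vertex "2" with a, but no edge
print(vertex_disjoint(a, b), edge_disjoint(a, c), vertex_disjoint(a, c))
# True True False
```

This mirrors the hypotheses in Table 4: vertex-disjointness (outside {s, d}) implies edge-disjointness for simple paths, but not the other way round.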

Table 3. Summary table of related work

Network architecture based on disjoint routing | Reference
Mobile Ad hoc Network (MANET) | Majdkhyavi et al. [22], Tsai et al. [12], Adibi et al. [21], Tarique et al. [16]
Vehicular Ad hoc Network (VANET) | Dua et al. [26]
Wireless Mesh Network | Tsai et al. [12]
Wireless Sensor Network (WSN) | Selvam et al. [13]
Software Defined Network (SDN) | Fu et al. [15]
Streaming Video | Hodroj et al. [17], Afzal et al. [18]
MPLS/GMPLS | Myoung Lee et al. [19]

Table 4. Features of disjoint path routing

Properties | Vertex-disjoint paths | Edge-disjoint paths | Reference
Hypotheses | Have no common vertex, except the source and destination vertices: ∩_{i=0..n} P_i \ {s, d} = ∅ | Have no common edges but may have common vertices: for G = (V, E) with E ⊆ V × V, each path P_j = {(x, y) : x, y ∈ V} satisfies P_j ⊆ E and ∩_{j=0..n} P_j = ∅ | Tsai et al. [12], Smail et al. [30], Daouda et al. [31]
Fault tolerance | Robust | Low | Tsai et al. [12]
Reliability | Best | Average | Xie et al. [32]
Probability of collision | Weaker | Strong | Xie et al. [32]
Applications | More used | Less used | Hassan et al. [14]

Fig. 3. Vertex-disjoint paths

Fig. 4. Edge-disjoint paths

4 Advantages and Disadvantages of Disjoint Routing

Using multiple paths to route packets through the network has many advantages. Disjoint routing algorithms offer better features such as reduced transmission delay, failure resistance, bandwidth efficiency, aggregation of the resources available in the network, and continuity of services. They also present some challenges in terms of security, such as information protection and packet scheduling. Table 5 describes the advantages inherent to disjoint routing.

Table 5. Advantages of disjoint routing algorithms

Parameters | Description | Reference
Fault tolerance | Possibility of using alternative paths in case of failure of the main path. | Tsai et al. [12]
Load balancing | Possibility of using multiple routes to transmit packets from the source to the destination. This mechanism ensures bandwidth optimisation and load balancing in order to prevent link congestion. | Tsai et al. [12]
Bandwidth aggregation | Using multiple paths allows for load balancing between network nodes to achieve better bandwidth usage. | Tsai et al. [12]
Reduction in transmission time | Reduced delay in disjoint routing because backup routes are identified during route discovery. | Tsai et al. [12]
Application support | High-bandwidth multimedia applications can benefit from the resilience offered by disjoint routing techniques. | Qadir et al. [23]
Secure key/information exchange | A key exchange mechanism that explores disjoint transmission paths based on both the Diffie-Hellman protocol and the Shamir threshold. | Daouda et al. [31]

Disjoint routing algorithms not only have the many advantages mentioned in Table 5; these routing schemes also have several disadvantages, which are listed in Table 6 [12,21].

Table 6. Disadvantages of disjoint routing algorithms

Parameters | Description | Reference
Longer paths | Packets generally travel over more hops, which increases the end-to-end delay and wastes more bandwidth. | Tarique et al. [16], Wu et al. [33], Satav et al. [20]
Message control mechanism | Special messages used in multipath routing can overload the network, especially when the network is large. | Satav et al. [20], Tarique et al. [16]
Route request storm | Multipath routing causes intermediate nodes to transmit duplicate request messages, which can create a large amount of redundant overhead packets in the network. | Satav et al. [20], Tarique et al. [16]
Duplicate packet processing | Duplicate packets create redundant packets and thus take up useful bandwidth. | Satav et al. [20], Tarique et al. [16], Tsirigos et al. [34]
Dynamic environment | It is hard to keep paths disjoint in a dynamic environment (the cost of the Dijkstra algorithm is O(n²)). | Daouda et al. [27]

5 Disjoint Routing Applied on Network Architectures

Multimedia applications require high transmission rates to achieve good quality of service, reduced energy consumption, and secured traffic on networks. To meet these requirements, disjoint paths can be a solution to address bandwidth management, packet loss and data security. Many researchers have been interested in disjoint routing algorithms, especially in the areas of ad hoc networks, wireless sensor networks, software-defined networking, video streaming and wireless mesh networks, as well as other specific types of network architectures (cf. Table 7).

5.1 Disjoint Routing Applied on Mobile Ad Hoc Networks

Routing algorithms play an important role in MANETs, where all nodes act as routers, access points and servers (see Fig. 5). Several research works have focused on disjoint routing algorithms for mobile ad hoc networks [35–37]. Velusamy et al. [38] proposed a new multipath routing algorithm based on multi-objective and node-disjoint functions for mobile ad hoc networks. The approach aims to find multiple node-disjoint paths that solve a multi-objective optimization problem: minimizing energy consumption and reducing information transmission delays. Robinson et al. [39] presented a disjoint multipath routing algorithm to solve this optimization problem in a real-time network environment. The proposed method selects the best possible path using a dynamic control technique in MANETs; it provides better performance than other related methods in terms of energy efficiency, reliability and load balancing, and works well in dynamic network environments.

Disjoint Routing Algorithms: A Systematic Literature Review


Leung et al. presented a dynamic disjoint routing approach to guarantee data delivery in wireless ad hoc networks. The proposed approach transmits outgoing packets along several paths that are subject to a particular end-to-end reliability requirement. The pseudocode that formally illustrates the selection of disjoint paths is presented in Algorithm 1. Several other research works on ad hoc networks have addressed disjoint routing schemes [39–47]. Their results have shown that disjoint routing algorithms for mobile wireless networks offer the best performance in terms of minimized energy consumption, reliability, load balancing, reduced packet delivery time and considerable resiliency, and operate very well in dynamic network environments.

Fig. 5. Ad hoc architecture using disjointed paths

Algorithm 1: Disjoint path selection [48]
set CandidateSet ∈ {P_0, P_1, ..., P_{n−1}}
set PathReliabilitySet ∈ {π^0_{S,D}(t), π^1_{S,D}(t), ..., π^{n−1}_{S,D}(t)}
set TraceSet ∈ {}
int DisjointPathSelection(si, failProbability)
  if (1 − failProbability ≥ P_u) then
    return Success
  for (each feasible path P_i in subset {P_si, ..., P_{n−1}}) do
    if (P_i does not contain a node in any path of TraceSet) then
      TraceSet = TraceSet ∪ P_i
      if ((DisjointPathSelection(i + 1, failProbability × (1 − π^i_{S,D}(t))) = 1)
          and (|TraceSet| < m_max)) then
        return Success
      Remove P_i from TraceSet
  return Failure
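The backtracking selection of Algorithm 1 can be sketched in Python as follows. The data layout (candidate paths given as lists of intermediate node identifiers, so the shared endpoints do not break the disjointness test) and the function names are illustrative, not taken from [48]:

```python
def select_disjoint_paths(candidates, reliabilities, target,
                          max_paths, start=0, fail_prob=1.0, chosen=None):
    """Backtracking selection in the spirit of Algorithm 1: keep adding
    node-disjoint candidate paths until 1 - fail_prob reaches `target`.
    Paths are lists of intermediate nodes (endpoints excluded)."""
    if chosen is None:
        chosen = []
    if 1.0 - fail_prob >= target:          # end-to-end reliability met
        return list(chosen)
    for i in range(start, len(candidates)):
        path = candidates[i]
        # disjointness test against every path already in the trace set
        if any(set(path) & set(p) for p in chosen):
            continue
        if len(chosen) + 1 > max_paths:    # respect the m_max bound
            continue
        chosen.append(path)
        result = select_disjoint_paths(candidates, reliabilities, target,
                                       max_paths, i + 1,
                                       fail_prob * (1.0 - reliabilities[i]),
                                       chosen)
        if result is not None:
            return result
        chosen.pop()                       # backtrack: remove Pi from the trace set
    return None                            # no combination meets the target
```

The MP-DSR scheme of Leung et al. additionally orders candidates by reliability before the search; that ordering is omitted here for brevity.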

5.2 Disjoint Routing Applied on Wireless Sensor Networks

Several research works have proposed disjoint routing algorithms for wireless sensor networks. Junho et al. [49] presented a disjoint routing scheme for real-time data transmission in multimedia sensor systems. The approach relies on combined routing based on both Bluetooth and ZigBee to overcome the bandwidth deficit of sensor networks, and on concurrency-based, non-overlapping disjoint path parameterization methods to address the overhead produced by the disjoint path control mechanism. Sun et al. [50] presented a disjoint routing algorithm based on the lowest hop count metric, with a congestion control mechanism combined with time-slice load balancing. The algorithm achieves a higher data reception ratio, at the cost of longer latency, and is more efficient than plain disjoint paths without the throttling mechanism. Huang et al. [51] proposed a method for key sharing in WSNs using disjoint routing and Reed-Solomon codes. Marjan et al. [52] developed a disjoint routing algorithm for wireless sensor networks with reduced interference and lower power consumption. The approach was designed primarily to ensure reliable data transmission and to reduce packet latency by discovering multiple disjoint paths between the originating and destination nodes. When a group of sensor nodes detects an event, the method attempts to define multiple node-disjoint routes that minimize the interference between the originating node and the receiving node, as illustrated in Fig. 6. If, after the definition of a route and some aggregation of data, the proportion of information received at the receiving node decreases, the method disables the newly created path, stops the setup process, and distributes traffic over the previously created paths. To start the route definition mechanism, a Route request packet is routed from the origin to the receiver. On each node, the neighbor with the best cost metric is chosen as the next hop:

Cost_{i,j} = ( ETX_j + 1 / (p_{i,j} · q_{i,j}) ) × ( 1 / ResBatt_j ) × ( 1 + Interference_Level_j )    (1)
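One plausible reading of the cost metric in Eq. (1) — the neighbor's path ETX plus the link ETX 1/(p·q), scaled by the inverse residual battery and by the interference level — can be computed as follows; argument names are illustrative, not from [52]:

```python
def neighbor_cost(path_etx, p_fwd, q_bwd, residual_battery, interference):
    """One reading of the LIEMRO-style cost in Eq. (1): the neighbor's path
    ETX plus the link ETX 1/(p*q), penalized by low residual battery and by
    the neighbor's measured interference level."""
    link_etx = 1.0 / (p_fwd * q_bwd)   # expected transmissions on link i -> j
    return (path_etx + link_etx) * (1.0 / residual_battery) * (1.0 + interference)
```

The neighbor with the minimum cost is then chosen as the next hop, as in Algorithms 2 and 3.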

Other research works on multipath sensor networks that are not detailed in this paper have also been proposed [53–56]. Research results have shown that disjoint routing algorithms for wireless sensor networks improve network performance: they use the available resources effectively, handle the overhead generated by routing path configuration efficiently, and are robust in terms of security. The respective descriptions of the above-mentioned mechanisms are presented in Algorithms 2, 3 and 4.

Fig. 6. Structure of multiple paths in sensor network

Disjoint Routing Algorithms: A Systematic Literature Review

Algorithm 2: Source node's algorithm [52]
if (no Route request packet has been sent before) then
  for (all of this node's neighbors) do
    if (Route_path_i == 0) then
      Calculate cost_i for neighbor i
  Send Route request to the node which has minimum cost_i
if (Route reply for the nth path is received) then
  Transmit data packets over the n created paths, using the load balancing algorithm
if (a positive feedback is received for the nth path) then
  Continue data packet transmission over the n created paths, using the load balancing algorithm
  for (all of this node's neighbors) do
    if (Route_path_i == 0) then
      Calculate cost_i for neighbor i
  Send Route request to the node which has minimum cost_i
if (a negative feedback is received for the nth path) then
  Disable the nth path
  Transmit data packets over the n − 1 previously created paths, using the load balancing algorithm

Algorithm 3: Intermediate nodes' algorithm [52]
if (a Route request packet is received) then
  for (all of this node's neighbors) do
    if (Route_path_i == 0) then
      Calculate cost_i for neighbor i
  Send Route request to the node which has minimum cost_i
  Route_path = 1
if (a Route reply packet is received) then
  Send this packet along the reverse path to the source node
  Route_path = 2
if (a Route reply packet is overheard from node i) then
  Refer to the neighbor table and extract the backward packet reception rate to node i
  Add the extracted value to Interference_Level

Algorithm 4: Sink node's algorithm [52]
if (the first Route request packet is received) then
  Send a Route reply packet along the reverse path
if (the nth Route request packet is received) then
  Calculate the RPT of using n − 1 paths
  if ((RPT of n − 2 paths)

Input: path1.length > 0
Input: path2.length > 0
sort(path1)
sort(path2)
i, j = 0
while i < path1.length and j < path2.length do
  if path1[i] = path2[j] then
    return (false)
  else if path1[i] < path2[j] then
    i++
  else if path1[i] > path2[j] then
    j++
return (true)
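The sorted two-index disjointness test sketched in the fragment above can be made concrete in Python, under the assumption that paths are lists of comparable node identifiers:

```python
def paths_are_disjoint(path1, path2):
    """Sorted-merge disjointness test: sort both node lists, then advance
    two indices in lockstep; any equal pair means a shared node.
    Costs O(n log n) for the sorts plus one linear scan."""
    a, b = sorted(path1), sorted(path2)
    i = j = 0
    while i < len(a) and j < len(b):   # stop when either list is exhausted
        if a[i] == b[j]:
            return False               # common node: paths are not disjoint
        if a[i] < b[j]:
            i += 1
        else:
            j += 1
    return True
```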

5.5 Disjoint Routing Applied on Wireless Mesh Networks

Wireless mesh network (WMN) technology has gained momentum in recent times due to its advantages in certain application areas such as community networks and enterprise backbones [73]. Kushwaha et al. [74] applied an AOMDV protocol that relies on local repair to obtain the best metrics in terms of packet loss ratio, overhead, and packet delivery ratio in high-speed networks with mobile nodes. Zhang et al. [75] presented a route discovery method to find the shortest and second-shortest disjoint paths in a wireless mesh network. The approach achieves this result with reduced delay, which leads to better communication in a unit-path mesh network. Experimental results showed that the method finds the shortest path, and the second-shortest path disjoint from it, in wireless mesh networks. Ikenaga et al. [76] proposed a new disjoint routing technique to prevent disturbance variations and enhance the transmission speed in mesh systems. For example, in Fig. 7, the source vertex s first detects the disjoint routes P1 and P2, then records P1 in its routing structure because the vertex identifier s is smaller than t. Conversely, vertex t finds paths P1 and P2 and stores P2 in its data structure because the vertex identifier t is larger than s. The authors also described a flow-based algorithm for finding disjoint paths, where the value of the flow f is defined below (Fig. 9):

|f| = Σ_{v:(s,v)∈E} f_{(s,v)}

Fig. 9. Example of path selection [76].

The approach first determines two link-disjoint paths; if these routes are not node-disjoint, it builds node-disjoint paths from the previously defined routes by applying the flow technique. The link-disjoint search determines an appropriate flow f with |f| = 2, decomposed into two separate routes P̂1 and P̂2 [77,78]. The flow technique then derives node-disjoint routes from the link-disjoint ones. More precisely, it first constructs the residual network of the original graph imposed by f, and finds a cycle that includes a node with crossed flows, applying Dijkstra's approach. The techniques used are formally described in Algorithm 7.

Algorithm 7: FindNodeDisjointPaths(G, s, t) [76]
Input: G − the graph, s − source node, t − destination node
Output: (P̂1, P̂2) − two node-disjoint paths
(P1, P2) ← FindLinkDisjointPaths(G, s, t)
f0 ← {P1, P2}
if P1 and P2 are node-disjoint then
  return P1 and P2
f ← ImproveFlow(G, f0)
Decompose flow f into two paths, P1 and P2
Jump to step 3

Procedure FindLinkDisjointPaths(G, s, t)
  Identify path P1 in G by using Dijkstra's algorithm
  f ← {P1}
  Construct the residual network G(f) of G imposed by f:
    Add to G(f) each link in G that does not belong to P1
    foreach link (u, v) ∈ P1 do
      Add a link (v, u) to G(f) with c(v,u) = 0
  Identify path P2 in G(f) by using Dijkstra's algorithm
  Augment flow f along path P2:
    foreach link (u, v) ∈ P2 do
      if f(v,u) = 0 then f(u,v) ← 1 else f(v,u) ← 0
  Decompose flow f into two paths, P̂1 and P̂2
  return P̂1 and P̂2

Procedure ImproveFlow(G, f0)
  f ← f0
  Construct the residual network G(f) of G imposed by f:
    Add to G(f) each link l in G for which f_l = 0
    foreach link (u, v) in G for which f(u,v) = 1 do
      Add a link (v, u) to G(f) with c(v,u) = 0
  Find the cycle W including a node with crossed paths in G(f) that minimizes C(W)
  Augment flow f along W
  return f
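The flow-based construction can be sketched for unweighted graphs, where Dijkstra reduces to BFS: each of two BFS passes augments one unit of flow in the residual graph (reversed edges may cancel earlier flow), and the final flow decomposes into two link-disjoint paths. This is an illustrative simplification, not the authors' weighted version:

```python
from collections import deque

def two_link_disjoint_paths(adj, s, t):
    """Find two link-disjoint s-t paths via two unit flow augmentations,
    the idea behind FindLinkDisjointPaths: BFS in the residual graph, where
    an edge already carrying flow may be traversed backwards to cancel it."""
    flow = set()                                   # directed edges carrying flow
    for _ in range(2):
        parent, dq = {s: None}, deque([s])
        while dq:
            u = dq.popleft()
            if u == t:
                break
            for v in adj.get(u, []):               # forward residual edges
                if (u, v) not in flow and v not in parent:
                    parent[v] = u
                    dq.append(v)
            for (x, y) in flow:                    # reverse residual edges
                if y == u and x not in parent:
                    parent[x] = u
                    dq.append(x)
        if t not in parent:
            return None                            # fewer than two disjoint paths
        v = t
        while parent[v] is not None:               # augment along the BFS path
            u = parent[v]
            if (v, u) in flow:
                flow.remove((v, u))                # cancellation of earlier flow
            else:
                flow.add((u, v))
            v = u
    paths = []                                     # decompose flow into two paths
    for _ in range(2):
        path, u = [s], s
        while u != t:
            nxt = next(v for (x, v) in flow if x == u)
            flow.remove((u, nxt))
            path.append(nxt)
            u = nxt
        paths.append(path)
    return paths
```

The reverse residual edges are what let the second pass "undo" a greedy first path that would otherwise block the only remaining route.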

Many other research works on wireless mesh networks dealing with the notion of disjoint routing could not be covered here [79–86]. The research results showed that disjoint routing algorithms for wireless mesh networks offer the best performance in terms of packet loss rate, routing overhead and packet delivery rate, and effectively increase the reliability of communication. A summary of works on specific architectures is given in Table 7.

Table 7. Research works on disjoint routing according to network architectures

Network architecture based on disjoint routing | Reference
Mobile Ad hoc Network (MANET) | [6, 35–39, 42–44]
Vehicular Ad hoc Network (VANET) | [87–95]
Wireless Sensor Network (WSN) | [49, 50]
Software Defined Network (SDN) | [57, 58]
Streaming Video | [6, 63]
Wireless Mesh Network | [73–75]
Backbone Network | [96]
Hypercube based Network | [97, 98]
Torus Network | [99]

6 Routing Approaches

This section presents the various schemes of disjoint routing algorithms considered in this paper, as shown in Fig. 10.

Fig. 10. Structure of disjoint routing algorithms

6.1 Conceptual Frame

In communication networks, a packet passes through one or more intermediate nodes on its way to its destination [100]. Fig. 11 describes the process of searching for disjoint paths connecting the source s and the destination t. The source node sends a first packet to determine the path P1 (V and P1 are initialized to ∅); the packet takes the most optimal path to t. The intermediate nodes V1, V2, ..., Vn thus form P1 (P1 = {V1, V2, ..., Vn}) and are marked in the process: they are added to V (V = {V1, V2, ..., Vn}). The destination node, namely t, acknowledges receipt of the packet by sending back to the source node an acknowledgement packet containing the updated list V, as well as a subkey that it has generated. The same process is executed to determine P2, which updates V (V = V ∪ P2). Finally, P1 and P2 are disjoint (P1 ∩ P2 = ∅) [101].
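The probe-and-mark procedure of Fig. 11 can be sketched as follows: a first BFS probe finds P1 and records its intermediate nodes in the marked set V; the second probe then searches while avoiding V, which guarantees P1 ∩ P2 = ∅. Note that this greedy sketch can fail on graphs where two disjoint paths exist but every shortest P1 blocks them, a case that flow-based methods handle:

```python
from collections import deque

def find_two_disjoint_paths(adj, s, t):
    """Sketch of the marking scheme of Fig. 11: the intermediate nodes of the
    first discovered path form the set V, and the second probe must avoid V,
    so the two paths cannot share an intermediate node."""
    def bfs_avoiding(blocked):
        parent, dq = {s: None}, deque([s])
        while dq:
            u = dq.popleft()
            if u == t:                       # reconstruct the path back to s
                path = []
                while u is not None:
                    path.append(u)
                    u = parent[u]
                return path[::-1]
            for v in adj.get(u, []):
                if v not in parent and v not in blocked:
                    parent[v] = u
                    dq.append(v)
        return None

    p1 = bfs_avoiding(set())
    if p1 is None:
        return None
    marked = set(p1[1:-1])                   # V: intermediate nodes of P1
    p2 = bfs_avoiding(marked)
    return (p1, p2) if p2 else None
```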

Fig. 11. Illustration of the case of two disjointed paths

6.2 Disjoint Routing Applications

Over the past few decades, the finding of disjoint paths has been widely studied in several contexts. The use of multiple paths has many advantages in the various application areas mentioned in Table 5. These applications include:

• Routing problems
• Load balancing and reliability in networks
• Very Large Scale Integration (VLSI)
• Multimedia applications
• Electrical or hydraulic circuits
• Parallel processing systems
• Transport modeling
• Secret exchange
• ...

6.3 Deterministic Routing Through Disjoint Paths

Deterministic routing consists of routing subpackets through predetermined disjoint paths. Daouda et al. [27] proposed a disjoint deterministic routing algorithm of this kind: the subpackets of the same packet do not use the same paths to be routed from the source node to the exit node, and all hops are determined in advance, in accordance with Algorithm 8, which determines the shortest paths in the anonymous network. Initially, each subpacket carries a stack of routers (see Fig. 12) containing the routing information leading to the exit node. This mechanism is known as deterministic routing. It is not suitable for dynamic environments where topologies are constantly changing, since the initially determined disjoint paths may no longer be valid by the time they are actually used to route the subpackets.

Algorithm 8: Disjoint deterministic routing [27]
route(packet, destinationNode) returns Success | Fail
stack ← packet.StackOfSubAddresses
begin
  do
    nextHop ← stack.pop()
    if nextHop ≠ null then
      node ← nextHop
      forwardTo(packet, nextHop)
  while nextHop ≠ null
  return (node = destinationNode) ? Success : Fail

Fig. 12. Illustration of routing [27]
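A minimal sketch of this stack-driven forwarding follows; the names are hypothetical, and a real protocol would transmit the packet over the network at each pop:

```python
def route_deterministic(packet_stack, destination):
    """Algorithm 8 in miniature: every hop is pre-determined and simply
    popped from the stack carried by the sub-packet; no routing decision
    is taken en route. Returns True iff the last hop is the destination."""
    node = None
    hops = list(packet_stack)          # top of stack = last element
    while hops:
        node = hops.pop()              # next pre-computed hop
        # forward_to(packet, node) would transmit here in a real network
    return node == destination
```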

6.4 Non-deterministic Routing Through Disjoint Paths

Non-deterministic routing involves making a routing decision at each hop to redirect traffic to the next best hop. Unlike deterministic routing, it determines on the fly the hops that form each disjoint path, as presented in Algorithm 9.

Algorithm 9: Non-deterministic disjoint routing
route(packet, destinationNode) returns Success | Fail
e ← packet.NeighborOfSubAddresses
begin
  node ← s
  do
    if neighbor ≠ null then
      node ← neighbor
      forwardTo(e, node)
  while node ≠ t
  return (node = destinationNode) ? Success : Fail
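The contrast with deterministic routing can be sketched as a per-hop policy function: the next hop is computed on the fly (here supplied by the caller), with a hop budget to guard against loops in changing topologies. The policy interface is an assumption for illustration:

```python
def route_non_deterministic(next_hop, source, destination, max_hops=64):
    """Algorithm 9 in miniature: at every node a fresh decision picks the
    next best hop (next_hop(node) may consult link costs, battery, etc.);
    the disjoint path materializes hop by hop instead of being pre-computed."""
    node = source
    for _ in range(max_hops):          # loop guard for dynamic topologies
        if node == destination:
            return True
        node = next_hop(node)          # on-the-fly routing decision
        if node is None:               # dead end: no usable neighbor
            return False
    return False
```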

6.5 Technical Comparison of Multi-path Routing Approaches

Table 8 compares the time and space complexities of the different modes of multi-path routing.

Table 8. Comparison of some modes of multi-path routing

Finding disjoint paths | Time complexity | Spatial complexity | Reference
Deterministic routing | O(K(|E| + |V| log |V|)) | O(K|V|) | Daouda et al. [31]
Non-deterministic routing | O(K(|E| + |V|(1 + log |V|))) | O(K|V|) | Daouda et al. [31]
Menger's theorem | NP | - | -

Junho et al. [49] proposed a concurrency-based multipath routing model to solve the excessive energy consumption of specific nodes due to overlapping nodes on multiple paths in wireless multimedia sensor networks. As shown in Fig. 13, all nodes know their neighbors' location and traffic information. Using this information, the nodes receiving the packets select the closest node among their neighbors to forward the packets. This process is repeated until the packet arrives at the destination. Algorithm 10 presents the pseudocode of the concurrency-based disjoint multipath configuration (Fig. 14).


Fig. 13. Competition-based disjoint multipath configuration [49]

Algorithm 10: Competition-based disjoint multipath configuration [49]
Input:
  Segment = data packet
  Sneighbors = unsorted neighbor node list
  Ssorted_neighbors = sorted neighbor node list
  destLoc = information about the location of the sink
Sneighbors = [S0, S1, S2, ..., Sn]
Ssorted_neighbors = sort(Sneighbors, destLoc)
foreach s = Ssorted_neighbors[i] do
  sendProbMsg(s)
  if receiveDenyMsg(s) then
    continue
  else
    insertNextNodeToRoutingTable(s)
    break
sendDataPacket(segment)
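The probe/deny competition can be sketched as follows, with the DENY replies modeled by a set of nodes already claimed by another path; function and variable names are illustrative, not from [49]:

```python
def pick_next_hop(neighbors, sink, claimed):
    """Competition-based choice in the style of Algorithm 10: sort neighbors
    by Euclidean distance to the sink, probe them in order, and return the
    first one that does not answer with a DENY (modeled by `claimed`)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    for node, loc in sorted(neighbors.items(), key=lambda kv: dist(kv[1], sink)):
        if node in claimed:            # DENY: node already serves another path
            continue
        return node                    # probe accepted: becomes the next hop
    return None
```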

Fig. 14. Receiving and combining split packages [49]

As shown in Fig. 14, a special node receives and reassembles all subpackets using a timer: the subpackets are combined once the waiting threshold is reached. If the threshold is not reached, the receiver does not combine the subpackets. This mechanism is illustrated by Algorithm 11.


Algorithm 11: Receiving and combining split packages [49]
Input:
  γ = packet (segments) reception rate
  Timer = maximum waiting time
if Timer == End then
  if count(ReceivedSegments) > γ then
    foreach s = slot[i] do
      if s is empty then
        s = NULL
      else
        continue
    combine(thisFrame)
  else
    eliminate(thisFrame)
  Timer.restart()
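The timeout decision can be sketched with the received segments stored in slots; here the threshold γ is interpreted as a minimum fraction of received segments, which is one plausible reading of the pseudocode:

```python
def combine_on_timeout(slots, gamma):
    """On timer expiry (Algorithm 11): combine the frame only if the share of
    received segments reaches gamma, padding missing slots; otherwise the
    whole frame is eliminated (returns None)."""
    received = sum(1 for s in slots if s is not None)
    if received / len(slots) >= gamma:
        return [s if s is not None else b"" for s in slots]   # pad empty slots
    return None                                               # drop the frame
```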

Xu et al. [54] proposed an algorithm using the Suurballe method [102] to find the k node-disjoint paths of minimum weight between source and destination on a weighted graph. The approach is based on dynamic scheduling for relay assignment on each path to reduce energy consumption. The Suurballe method iterates until there are k node-disjoint paths. At each iteration, the algorithm first applies Dijkstra's method [103] to find the shortest path, then modifies the weights of the edges of the graph. The weight modification preserves non-negativity while still allowing the Dijkstra algorithm to find the correct path. The formal description is presented in Algorithm 12.

Algorithm 12: Formal description of the CMPR
Step 1: Node-disjoint multi-path routing construction
foreach l_{uv} ∈ G do
  w_{uv} = b^{-1} δ_{uv}
Adopt Suurballe's algorithm to find k node-disjoint paths on graph G, denoted P1, P2, ..., Pk
Step 2: Relay node assignment
foreach searched path Pj = v_{j0}(s) v_{j1} ... v_{jm−1} v_{jm}(d) do
  foreach node v_{jk} do
    Compute a weight vw_k for node v_{jk}:
      W_k = 0, for k = 0, 1
      W_k = max{W_{k−2} + vw_{k−1}, W_{k−1}}, for 2 ≤ k ≤ m
    if W_k = W_{k−2} + vw_{k−1} then
      state[k] = true
  W = W_m
  foreach node v_{jk} from node v_{jm−1} to node v_{j1} do
    if W_k = W and state[k] = true then
      node v_{jk−1} is selected as a cooperative relay
      W = W − vw_{k−1}
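The weight modification at the heart of the Suurballe step can be sketched: after one Dijkstra pass from the source, every edge weight is replaced by the reduced cost w'(u,v) = w(u,v) + d(u) − d(v), which is non-negative and exactly zero along shortest-path edges, so the next iteration can again run Dijkstra:

```python
import heapq

def dijkstra(adj, s):
    """Textbook Dijkstra over an adjacency dict {u: {v: weight}}."""
    dist, pq = {s: 0}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                           # stale queue entry
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def reduced_weights(adj, s):
    """Suurballe's re-weighting: w'(u,v) = w(u,v) + d(u) - d(v).
    All reduced weights are >= 0, and 0 on shortest-path edges."""
    d = dijkstra(adj, s)
    return {u: {v: w + d[u] - d[v] for v, w in nbrs.items() if v in d}
            for u, nbrs in adj.items() if u in d}
```

Suurballe's full algorithm then removes or reverses the shortest-path edges and runs Dijkstra again on the reduced graph; only the re-weighting step is shown here.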


Meghanathan [104] proposed methods to determine link-disjoint and node-disjoint s−d routes on an arbitrary graph G. The author first uses Dijkstra's approach to determine the minimum-hop s−d path p in the original graph with n nodes, then removes the links belonging to p. As long as at least one s−d path remains, the minimum-hop path p is added to the set P_L. Algorithm 13 presents the pseudocode of this approach.

Algorithm 13: To determine the set of link-disjoint s-d paths in a network graph
Input: graph G(V, E), source s and destination d
Output: set of link-disjoint paths P_L
Auxiliary variables: graph G_L(V, E_L)
Initialization: G_L(V, E_L) ←− G(V, E), P_L ←− ∅
begin
  while (∃ at least one s − d path in G_L) do
    p ←− minimum-hop s − d path in G_L
    P_L ←− P_L ∪ {p}
    G_L(V, E_L) ←− G_L(V, E_L − {e}), ∀ edge e ∈ p
  return P_L

To determine the node-disjoint paths, the author also uses Dijkstra's approach to find the s−d path in the original graph, while removing the intermediate nodes that define the route p. While at least one s−d path exists in G, the minimum-hop s−d path p is added to the set P_N. The procedure is repeated until no s−d path remains in the network; the set P_N then contains the node-disjoint paths of the original graph G. Algorithm 14 presents the pseudocode for determining the set of node-disjoint paths in a graph.

Algorithm 14: Set of node-disjoint s-d paths in a network graph
Input: graph G(V, E), source s and destination d
Output: set of node-disjoint paths P_N
Auxiliary variables: graph G_N(V_N, E_N)
Initialization: G_N(V_N, E_N) ←− G(V, E), P_N ←− ∅
begin
  while (∃ at least one s − d path in G_N) do
    p ←− minimum-hop s − d path in G_N
    P_N ←− P_N ∪ {p}
    G_N(V_N, E_N) ←− G_N(V_N − {v}, E_N − {e}), ∀ vertex v ∈ p, v ≠ s, d, ∀ edge e ∈ Adj-list(v)
  return P_N
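Both constructions can be sketched with BFS standing in for minimum-hop Dijkstra: find a minimum-hop path, record it, then delete either its edges (the link-disjoint variant) or its intermediate nodes (the node-disjoint variant) and repeat while an s−d path survives. This is an illustrative sketch of the iterative idea, with a naive edge-set graph representation:

```python
from collections import deque

def disjoint_path_set(edge_list, s, t, node_disjoint=False):
    """Iterative construction in the style of Algorithms 13/14: repeatedly
    extract a minimum-hop s-t path (BFS), then delete its edges
    (link-disjoint) or its intermediate nodes (node-disjoint)."""
    edges = set(edge_list)
    paths = []
    while True:
        parent, dq = {s: None}, deque([s])     # BFS over remaining edges
        while dq:
            u = dq.popleft()
            for (x, y) in edges:
                if x == u and y not in parent:
                    parent[y] = u
                    dq.append(y)
        if t not in parent:
            return paths                        # no s-t path left
        path, u = [], t
        while u is not None:                    # reconstruct minimum-hop path
            path.append(u)
            u = parent[u]
        path.reverse()
        paths.append(path)
        if node_disjoint:                       # drop intermediate nodes
            inner = set(path[1:-1])
            edges = {(x, y) for (x, y) in edges
                     if x not in inner and y not in inner}
        else:                                   # drop the used links only
            edges -= set(zip(path, path[1:]))
```

As with the original algorithms, this greedy removal yields a maximal, not necessarily maximum, set of disjoint paths.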

6.6 Analysis of Routing Algorithms

Table 9 presents a technical comparison of the complexities of the various disjoint routing algorithms and their routing techniques.

Table 9. Technical comparison of some disjoint routing algorithms

Algorithm | Time complexity | Deterministic routing | Non-deterministic routing
Algorithm 1 | O(n) | × | √
Algorithm 2 | O(n) | × | √
Algorithm 3 | O(n) | × | √
Algorithm 4 | O(n) | √ | ×
Algorithm 6 | O(n) | - | √
Algorithm 7 | O(n) | × | √
Algorithm 5 | O(n³) | × | √
Algorithm 10 | O(n) | × | √
Algorithm 11 | O(n) | × | √
Algorithm 12 | O(n²) | × | √
Algorithm 13 | O(n) | × | √
Algorithm 14 | O(n) | × | √

7 Conclusion

In this paper, we reviewed the existing literature on disjoint routing algorithms across several categories of graphs and application domains. Disjoint routing algorithms prove to be effective solutions in networks: they facilitate network security, increase delivery reliability, ensure continuity of service when the path in use fails, and distribute the load rationally over several paths between the source and the destination. They thereby improve communication performance and aggregate the resources available in the network. Although disjoint routing algorithms improve network performance, they also have several limitations, among them longer paths, control-message overhead, route request storms, duplicate packet processing, and the difficulty of maintaining disjointness in dynamic environments. These limitations may provide directions for future research.

References 1. Suzuki, H., Tobagi, F.A.: Fast bandwidth reservation scheme with multi-link and multi-path routing in ATM networks. In: Proceedings of IEEE INFOCOM 1992: The Conference on Computer Communications, pp. 2233–2240. IEEE (1992)


2. Ishida, K., Kakuda, Y., Kikuno, T.: A routing protocol for finding two nodedisjoint paths in computer networks. In Proceedings of International Conference on Network Protocols, pp. 340–347. IEEE (1995) 3. Bohacek, S., Hespanha, J.P., Obraczka, K., Lee, J., Lim, C.: Enhancing security via stochastic routing. In: Proceedings of Eleventh International Conference on Computer Communications and Networks, pp. 58–62. IEEE (2002) 4. Tang, C., McKinley, P.K.: A distributed multipath computation framework for overlay network applications. Technical report, Technical Report MSU-CSE-0418, Michigan State University (2004) 5. Bohacek, S., Hespanha, J., Lee, J., Lim, C., Obraczka, K.: Game theoretic stochastic routing for fault tolerance and security in computer networks. IEEE Trans. Parallel Distrib. Syst. 18(9), 1227–1240 (2007) 6. Vijay, S., Sharma, S.C., Gupta, V., Kumar, S.: Notice of violation of IEEE publication principles: CZM-DSR: a new cluster/zone-disjoint multi-path routing algorithm for mobile ad-hoc networks. In: 2009 IEEE International Advance Computing Conference, pp. 480–485. IEEE (2009) 7. Lengauer, T.: Combinatorial algorithms for integrated circuit layout. Springer Science & Business Media, Wiesbaden (2012). https://doi.org/10.1007/978-3-32292106-2 8. Qian-Ping, G., Peng, S.: Node-to-set and set-to-set cluster fault tolerant routing in hypercubes. Parallel Comput. 24(8), 1245–1261 (1998) 9. Murthy, S., D’Souza, R.J., Varaprasad, G.: Digital signature-based secure node disjoint multipath routing protocol for wireless sensor networks. IEEE Sensors J. 12(10), 2941–2949 (2012) 10. Ma, C., et al.: Pre-configured multi-dimensional protection (p-MDP) structure against multi-failures in high-degree node based optical networks. In: 2013 8th International Conference on Communications and Networking in China (CHINACOM), pp. 756–760. IEEE (2013) 11. 
Hsu, C.-C.: A genetic algorithm for maximum edge-disjoint paths problem and its extension to routing and wavelength assignment problem. North Carolina State University (2013) 12. Tsai, J., Moors, T.: A review of multipath routing protocols: from wireless ad hoc to mesh networks. In: ACoRN Early Career Researcher Workshop on Wireless Multihop Networking, vol. 30. Citeseer (2006) 13. Selvam, R., Senthilkumar, A.: Cryptography based secure multipath routing protocols in wireless sensor network: a survey. In: 2014 International Conference on Electronics and Communication Systems (ICECS), pp. 1–5. IEEE (2014) 14. Hasan, M.Z., Al-Rizzo, H., Al-Turjman, F.: A survey on multipath routing protocols for QOS assurances in real-time wireless multimedia sensor networks. IEEE Commun. Surv. Tutor. 19(3), 1424–1456 (2017) 15. Fu, M., Wu, F.: Investigation of multipath routing algorithms in software defined networking. In: 2017 International Conference on Green Informatics (ICGI), pp. 269–273. IEEE (2017) 16. Tarique, M., Tepe, K.E., Adibi, S., Erfani, S.: Survey of multipath routing protocols for mobile ad hoc networks. J. Netw. Comput. Appl. 32(6), 1125–1143 (2009) 17. Hodroj, A., Ibrahim, M., Hadjadj-Aoul, Y.: A survey on video streaming in multipath and multihomed overlay networks. IEEE Access 9, 66816–66828 (2021) 18. Afzal, S., Testoni, V., Rothenberg, C.E., Kolan, P., Bouazizi, I.: A holistic survey of wireless multipath video streaming. arXiv preprint arXiv:1906.06184 (2019)


19. Lee, G.M., Choi, J.: A survey of multipath routing for traffic engineering. Information and Communications University, Korea (2002) 20. Satav, P.R., Jawandhiya, P.M.: Review on single-path multi-path routing protocol in manet: a study. In: 2016 International Conference on Recent Advances and Innovations in Engineering (ICRAIE), pp. 1–7. IEEE (2016) 21. Adibi, S., Erfani, S.: A multipath routing survey for mobile ad-hoc networks. In: CCNC 2006. 2006 3rd IEEE Consumer Communications and Networking Conference, 2006, vol. 2, pp. 984–988. IEEE (2006) 22. Majdkhyavi, N., Hassanpour, R.: A survey of existing mechanisms in energy-aware routing in Manets. Int. J. Comput. Appl. Technol. Res. 4(9), 673–679 (2015) 23. Qadir, J., Ali, A., Yau, K.-L.A., Sathiaseelan, A., Crowcroft, J.: Exploiting the power of multiplicity: a holistic survey of network-layer multipath. IEEE Commun. Surv. Tutor. 17(4), 2176–2213 (2015) 24. Gulati, M.K., Kumar, K.: Survey of multipath QOS routing protocols for mobile ad hoc networks. Int. J. Adv. Eng. Technol. 3(2), 809 (2012) 25. Radi, M., Dezfouli, B., Bakar, K.A., Lee, M.: Multipath routing in wireless sensor networks: survey and research challenges. Sensors 12(1), 650–685 (2012) 26. Dua, A., Kumar, N., Bawa, S.: A systematic review on routing protocols for vehicular ad hoc networks. Vehic. Commun. 1(1), 33–52 (2014) 27. Ahmat, D., Hissein, O., Hassan, M.B.: Système anonyme basé sur le routage disjoint des sous-identités prises sur les points d'interpolation de Lagrange. Revue Scientifique du Tchad (2014) 28. Nguyen, T.: On the disjoint paths problem. Oper. Res. Lett. 35(1), 10–16 (2007) 29. Chekuri, C., Khanna, S.: Edge disjoint paths revisited. In: Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 628–637 (2003) 30. Smail, O., Rebbah, M.: Networks lifetime maximization in ad hoc wireless networks with link-disjoint paths routing 31.
Ahmat, D., Choroma, M., Bissyandé, T.F.: Multipath key exchange scheme based on the Diffie-Hellman protocol and the Shamir threshold. Int. J. Netw. Secur. 21(3), 418–427 (2019) 32. Xie, H., Boukerche, A., Loureiro, A.A.F.: A multipath video streaming solution for vehicular networks with link disjoint and node-disjoint. IEEE Trans. Parallel Distrib. Syst. 26(12), 3223–3235 (2014) 33. Kui, W., Harms, J.: Multipath routing for mobile ad hoc networks. J. Commun. Netw. 4(1), 48–58 (2002) 34. Tsirigos, A., Haas, Z.J.: Multipath routing in mobile ad hoc networks or how to route in the presence of frequent topology changes. In: 2001 MILCOM Proceedings Communications for Network-Centric Operations: Creating the Information Force (Cat. No. 01CH37277), vol. 2, pp. 878–883. IEEE (2001) 35. Abbas, A.M., Jain, B.N.: An analytical framework for path reliabilities in mobile ad hoc networks. In: Proceedings of the Eighth IEEE Symposium on Computers and Communications. ISCC 2003, pp. 63–68. IEEE (2003) 36. Abbas, A.M., Khandpur, P., Jain, B.N.: Ndma: a node disjoint multipath ad hoc routing protocol. In Proceedings of 5th World Wireless Congress (WWC), pp. 334–339 (2004) 37. Jie, W.: An extended dynamic source routing scheme in ad hoc wireless networks. Telecommun. Syst. 22(1), 61–75 (2003) 38. Velusamy, B., Karunanithy, K., Sauveron, D., Akram, R.N., Cho, J.: Multiobjective function-based node-disjoint multipath routing for mobile ad hoc networks. Electronics 10(15), 1781 (2021)


39. Robinson, Y.H., et al.: Link-disjoint multipath routing for network traffic overload handling in mobile ad-hoc networks. IEEE Access 7, 143312–143323 (2019) 40. Huang, J.-W., Woungang, I., Chao, H.-C., Obaidat, M.S., Chi, T.-Y., Dhurandher, S.K.: Multi-path trust-based secure Aomdv routing in ad hoc networks. In: 2011 IEEE Global Telecommunications Conference-GLOBECOM 2011, pp. 1–5. IEEE (2011) 41. Abbas, A.M., Istyak, S.: Multiple attempt node-disjoint multipath routing for mobile ad hoc networks. In: 2006 IFIP International Conference on Wireless and Optical Communications Networks, pp. 5-pp. IEEE (2006) 42. Chowdhury, T., Mukta, R.B.M.: A novel approach to find the complete nodedisjoint multipath in AODV. In: 2014 International Conference on Informatics, Electronics & Vision (ICIEV), pp. 1–6. IEEE (2014) 43. Robinson, Y.H., Julie, E.G., Saravanan, K., Kumar, R., Son, L.H.: FD-AOMDV: fault-tolerant disjoint ad-hoc on-demand multipath distance vector routing algorithm in mobile ad-hoc networks. J. Amb. Intell. Human. Comput. 10(11), 4455– 4472 (2019) 44. Neenavath, V., Krishna, B.T.: An energy efficient multipath routing protocol for manet. J. Eng. Res. (2022) 45. Sen, J.: A multi-path certification protocol for mobile ad hoc networks. In: 2009 4th International Conference on Computers and Devices for Communication (CODEC), pp. 1–4. IEEE (2009) 46. Mueller, S., Ghosal, D.: Analysis of a distributed algorithm to determine multiple routes with path diversity in ad hoc networks. In: Third International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt 2005), pp. 277–285. IEEE (2005) 47. Ye, Z., Krishnamurthy, S.V., Tripathi, S.K.: A framework for reliable routing in mobile ad hoc networks. In: IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No. 03CH37428), vol.1, pp. 270–280. IEEE (2003) 48. 
Leung, R., Liu, J., Poon, E., Chan, A.-.L.C., Li, B.: MP-DSR: a QOS-aware multipath dynamic source routing protocol for wireless ad-hoc networks. In Proceedings LCN 2001. 26th Annual IEEE Conference on Local Computer Networks, pp. 132– 141. IEEE (2001) 49. Park, J., Jo, M., Seong, D., Yoo, J.: Disjointed multipath routing for real-time data in wireless multimedia sensor networks. Int. J. Distrib. Sens. Netw. 10(1), 783697 (2014) 50. Sun, G., Qi, J., Zang, Z., Xu, Q.: A reliable multipath routing algorithm with related congestion control scheme in wireless multimedia sensor networks. In: 2011 3rd International Conference on Computer Research and Development, vol. 4, pp. 229–233. IEEE (2011) 51. Huang, D., Medhi, D.: A byzantine resilient multi-path key establishment scheme and its robustness analysis for sensor networks. In: 19th IEEE International Parallel and Distributed Processing Symposium, p. 8. IEEE (2005) 52. Radi, M., Dezfouli, B., Razak, S.A., Bakar, K.A.: Liemro: a low-interference energy-efficient multipath routing protocol for improving QOS in event-based wireless sensor networks. In: 2010 Fourth International Conference on Sensor Technologies and Applications, pp. 551–557. IEEE (2010) 53. Maimour, M.: Maximally radio-disjoint multipath routing for wireless multimedia sensor networks. In: Proceedings of the 4th ACM workshop on Wireless Multimedia Networking and Performance Modeling, pp. 26–31 (2008)

190

A. Youssouf et al.



Construction of a Core Ontology of Endogenous Knowledge on Agricultural Techniques: OntoEndo

Halguieta Trawina1(B), Sadouanouan Malo2, Ibrahima Diop3, and Yaya Traore1

1 Université Joseph KI-ZERBO, Ouagadougou, Burkina Faso

[email protected]

2 Université Nazi Boni, Bobo-Dioulasso, Burkina Faso 3 Université Assane SECK, Ziguinchor, Senegal

[email protected]

Abstract. Several resources on endogenous agro-sylvo-pastoral techniques have been identified, but some of them remain unformalized data. In a context of climate change (CC), it is essential to take this endogenous knowledge on agricultural techniques into account as an adaptation measure. In this paper we propose to build an ontology of this endogenous agricultural knowledge. Our methodology identifies the terms on agricultural techniques that will be formalized and used to build the OntoEndo ontology. OntoEndo will support the construction, co-construction and sharing of agricultural techniques for climate change adaptation.

Keywords: Methodology · Ontology · Endogenous knowledge · Climate change

1 Introduction

In a context of climate change, where rainfall is disrupted from one area to another, the adaptation of populations to climate change and its socio-environmental impacts is essential. The most effective and sustainable adaptation measures are often those taken at the local level and directly involving the populations concerned. As a result, farmers adopt strategies to adapt and reduce their vulnerability to the consequences of the rainfall variability they observe, using different techniques. One of these strategies is the use of endogenous agricultural knowledge: techniques that are very easy to appropriate and have very low costs. It is becoming a priority that this knowledge be formalized and popularized through a platform for sharing it with the populations who need it. In [1] and [2], we proposed an architectural approach for a Social and Semantic Web platform that will formalize this knowledge and facilitate the sharing of these endogenous agricultural techniques. This platform will federate the data already existing on the web and then offer solutions for the discovery of new induced knowledge.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023 Published by Springer Nature Switzerland AG 2023. All Rights Reserved R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 193–208, 2023. https://doi.org/10.1007/978-3-031-34896-9_12


The architecture of the platform is structured in five main layers, of which the persistence layer describes the storage of data and is composed of endogenous (tacit) and scientific (explicit) knowledge bases. It is essential that the set of data on endogenous agricultural knowledge be identified, described and put in a format that is understandable, readable and exploitable by the wiki. In [3], we proposed a state of the art on termino-ontological resources in the agro-sylvo-pastoral domain. This allowed us to make an inventory of the existing terminological and ontological resources in this domain that could be identified. Once these resources had been identified and categorized, it was important to choose a methodology for ontology engineering. An overview was made of the existing ontology engineering methods. Considering the nature of the existing terminological resources on endogenous knowledge in the field of agriculture, available in various formats, our methodological choice was the NeOn method [4], a more complete method that provides all the alternatives for ontology construction, organized in scenarios. Thanks to these use case scenarios, NeOn offers the possibility of building an ontology either from scratch, or through reuse, reengineering, alignment, etc. In this work, we will identify the scenarios of the NeOn method to be used for the construction of the ontology of endogenous techniques. The article is structured as follows. The second section gives a brief description of the NeOn methodological framework chosen for the construction of the ontology; the third section focuses on the ontological engineering process that allows us to obtain the core ontology of endogenous knowledge, which we call OntoEndo. We end with a section devoted to the conclusion and perspectives.

2 Methodological Approach

The state of the art on termino-ontological resources [3] in the field of endogenous agro-sylvo-pastoral techniques has shown the existence and abundance of resources available and exploitable for the construction of the ontology. These resources have been grouped into two classes: Non-Ontological Resources (NOR) and Ontological Resources (OR). The heterogeneous nature of the resources justifies our choice of NeOn [4] for the construction of OntoEndo. It provides a comprehensive set of methods that range from creation and reuse to the management of ontology dynamics. The methodology includes the following (Fig. 1):

• The NeOn Glossary, which identifies and defines the processes and activities potentially involved in building the ontology network;
• A set of nine scenarios to facilitate the construction of ontologies and ontology networks. Each scenario is decomposed into different processes and activities, also described in the NeOn glossary;
• Two lifecycle models, which specify how to organize the processes and activities of the NeOn glossary into several phases;


• A set of prescriptive methodological guidelines for processes and activities.

For the implementation of OntoEndo, we will apply the different steps of scenario 1 in conjunction with the reuse of the termino-ontological resources identified in [3], the ontology design pattern OntoCLUVA [5] and the OWL model [6] based on AGROVOC. Scenario 1 allows the construction of ontologies from scratch by respecting the steps it defines:

• Specification: the construction of the ontology is guided by the objective of supporting a social and semantic wiki for the sharing and co-construction of endogenous knowledge on agro-sylvo-pastoral techniques. The purpose of OntoEndo is to annotate the resources of collaborative work in the community working on endogenous adaptation strategies to the effects of climate change;
• Acquisition: this task is realized through interviews with experts in the respective domains;
• Modeling: this consists in the realization of conceptual models for structuring the knowledge; these models serve as a support for the evaluation conducted with experts in the field;
• Formalization and implementation: the logical level required to reason about the ontology is first-order logic. The use of the Protégé editor, which produces the formal representation and saves the model in a machine-readable ontology representation language, frees us from the implementation stage;
• Evaluation: this is done in three phases. The first phase takes place during conceptualization, by having the content of the ontologies validated by the experts; it is essentially based on the validity and coherence of the knowledge represented. The second phase consists of consistency tests using the reasoners included in the Protégé editor. The third phase consists of testing the operationality of the deployed ontology and measuring its contribution to the study domain.
The NOR reuse process is considered in scenario 2 and will facilitate the construction of the ontology from resources such as texts. The OR reuse process is considered in scenario 3 and will allow the reuse of ontologies such as OntoCLUVA [5], a climate change ontology pattern, the OWL model [6] based on AGROVOC for the creation of ontologies in the field of agriculture, and FOAF1, an ontology used to describe people and social relations on the Web.

3 Ontology Construction by Scenario 1

This step of ontology building refers to the development of ontologies by applying the main activities of scenario 1 and then reusing non-ontological and ontological resources.

3.1 Requirements Specification

The starting point for any ontology development is the definition of the requirements that the ontology must fulfill [4]. The objective of this phase is to start from the needs

1 About: http://fr.dbpedia.org/resource/FOAF.


Fig. 1. Sequencing of the NeOn scenarios [4]

that motivate the creation of the ontology, to define its intended uses, to identify the end users and to describe the set of requirements that the ontology must fulfill in the form of competency questions (CQ). This is summarized in a document called the Ontology Requirements Specification Document (ORSD). Taking inspiration from the algorithms proposed in [7], we summarize the different steps of this specification phase according to NeOn in the following algorithm.

Specification algorithm
Input: Need motivating the creation of OntoEndo
Output: Breakdown into sub-domains; pre-glossary of terms
Begin:
T1. Start from the ontology creation objectives and determine the most appropriate termino-ontological resources
T2. Identify the CQs that describe the ontology requirements from these resources
T3. Break down into sub-domains
T4. Extract high-frequency terms
T5. Create the pre-glossary of terms
End.

This algorithm takes as input the requirements motivating the creation of OntoEndo and produces as output the pre-glossary of terms and the Ontology Requirements Specification Document (ORSD). Task 1 (T1) of the specification algorithm defines the objective, scope and formality level of the ontology to be created. This ontology aims at providing a consensual knowledge model through the construction, co-construction and sharing of endogenous


knowledge for the adaptation of agricultural techniques in a climate change context. The ontology takes into account the good practices of endogenous knowledge in the field of agriculture, considering the hazards of climate change, and it will be defined in the OWL ontology formalization language. Thanks to the different meetings and interviews, it was possible to identify the main users and uses of the ontology. Indeed, referring to the architectural approach proposed in our previous work [2], a layer dedicated to the future users of the Semantic Wiki platform is planned, including the knowledge base that will be built from the ontology. This user layer gathers all the actors (technicians, engineers of the agro-sylvo-pastoral domain or experts of the domain, knowledge experts, researchers, etc.) who can interact with the knowledge base. These same actors will also be able to create new knowledge, make annotations on the knowledge base, etc. This corresponds to task 2 (T2) of our algorithm. For task 3 (T3), based on the guidelines given in the NeOn approach [4], the identification of ontology requirements uses techniques known as "natural language requirements writing techniques in the form of so-called competency questions (CQ)". Different approaches to identify competency questions exist: top-down techniques (starting with complex questions and decomposing them into simpler ones), bottom-up techniques (starting with simple questions and composing them to create more complex ones), and finally middle-out techniques (simply starting to write important questions that can be composed and decomposed later to form abstract and simple questions, respectively).
By adopting the bottom-up approach, we identified about thirty competency questions that correspond to the requirements of the core ontology, an extract of which is shown in Table 1. These questions were put to the domain experts and resource persons during the interviews conducted in the framework of this research work, and will also be used later on for the validation of the ontology. The analysis of this series of questions and of the answers given by the different actors, together with the resources identified in the state of the art carried out in [3], allowed us to propose an organization of the study domain into sub-domains, as shown in Fig. 2. This division takes into account the one proposed by the work of DIOP and his colleagues [5] for the construction of an ontological pattern of the climate change domain, to which we have added endogenous knowledge as a further source of solutions for adaptation to climate change. The goal of task 4 (T4) of our algorithm is to extract high-frequency terms from the CQs. For this, terminology extraction techniques can be applied through dedicated tools such as AntConc [8], UNITEX [9], TermoStat Web2, TERMINAE [10] and Sketch Engine [11]. We chose Sketch Engine because it can integrate any source of information in any format, and is available online. Indeed, Sketch Engine [11] is a leading corpus tool, widely used in lexicography. It is a mature piece of software that not only offers many ready-to-use corpora, but also integrates features that allow users to create, download and install their own corpora. Sketch Engine supports several languages (English, French, German, etc.) and makes it possible to

2 http://termostat.ling.umontreal.ca.


integrate corpora from different sources (from the Internet or added by the user) and in different formats (txt, pdf, xml, docx, etc.). After the term extraction process, Sketch Engine displays the extracted terms in different ways (single words, compound words, etc.) on its interface, simply by changing the option. It also offers the user the possibility to build a corpus and store it in a drive space or to download it. It exists in a downloadable version as well as a version that can be used online. By processing with this tool the whole corpus formed by the NORs identified in [3] together with the CQs and the related answers, we obtain a list of terms with their frequencies and scores. This allowed us to establish a pre-glossary of terms, an extract of which is shown in Table 2.
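The frequency-and-score extraction step above can be sketched in a few lines of Python. The snippet below is a minimal stand-in for a tool like Sketch Engine or TermoStat: it counts term frequencies in a focus corpus and ranks them against a reference corpus with a simple smoothed log-ratio. The toy corpora and the scoring formula are illustrative assumptions, not the proprietary measures used by those tools.

```python
from collections import Counter
import math
import re

def keyness(focus_text: str, reference_counts: Counter, reference_size: int):
    """Score terms of a focus corpus against a reference corpus.

    Uses a simple smoothed log-ratio of relative frequencies -- an
    illustrative stand-in for the scores reported by corpus tools.
    """
    # Crude French-friendly tokenizer (keeps accents and hyphens)
    tokens = re.findall(r"[a-zàâçéèêëîïôûùüÿæœ-]+", focus_text.lower())
    focus_counts = Counter(tokens)
    focus_size = sum(focus_counts.values())
    scores = {}
    for term, f in focus_counts.items():
        r = reference_counts.get(term, 0)
        # add-one smoothing so reference-absent terms (e.g. "zaï") rank high
        scores[term] = math.log2(((f + 1) / focus_size) / ((r + 1) / reference_size))
    return focus_counts, scores

# Toy corpora: the focus text stands in for the NOR/CQ corpus of Sect. 3.1
focus = "zaï demi-lune paillage compost zaï diguettes zaï paillage"
reference = Counter({"compost": 500, "paillage": 40, "sol": 900})
counts, scores = keyness(focus, reference, reference_size=10_000)

top = sorted(scores, key=scores.get, reverse=True)[:3]
print(counts["zaï"], top[0])
```

Domain-specific terms that are rare in the reference corpus (such as "zaï") get the highest scores, mirroring the ranking visible in Table 2.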

Fig. 2. Breakdown into sub-domains

Table 1. Questions and Answers Table (QR)

QC1. What are the characteristics of a good practice?
Answer: common name, local name, category, adapted agro-ecological zone, description of the human environment, type of soil, types of land use, description, objective, type of land degradation problem concerned, level of technical knowledge required for its implementation, cost of implementation, literature reference.

QC2. What are the different categories of good practices?
Answer: sustainable water management, land use planning, agricultural techniques, forestry and agroforestry, pastoral resource management, natural resource management, fisheries improvement.

QC3. What are the good practices for sustainable water management?
Answer: grass strips, ridging, partitioning, stony cordons, filter dikes, earthen dikes, dune fixation, subsoiling, vegetation of earthen dikes, mechanical zaï, scarification, plowing.


Table 2. List of extracted terms

Item              Freq. (focus)  Freq. (reference)  Score
soudanien         106            456                674.628
ligneux           162            6760               553.052
sahélien          127            4163               535.980
zaï               77             598                480.963
inera             64             102                428.467
sorgho            96             4120               406.892
diguettes         61             142                406.091
niébé             53             680                327.729
agro-écologique   62             2531               307.575
burkina           360            48084              304.953
hydrique          120            11874              304.262
faso              306            41336              297.846
anti-érosif       41             117                294.865
poquet            43             657                274.249
ruissellement     102            11532              266.880
demi-lune         58             3978               257.936
fertilité         156            23935              249.306
défens            37             596                235.414
pierreux          48             3311               231.652
pluviométrie      63             6995               219.988
fumure            38             2243               211.726
paillage          58             7670               194.779
npk               30             852                185.896
fourrager         58             8239               181.748
jachère           51             6597               178.884
fertilisation     64             10099              176.569
noms              55             10110              150.934
scarification     22             12                 149.874
compost           105            26234              147.507
bas-fond          38             5211               146.830
sawap             21             2                  143.316
arbollé           21             13                 143.086
infiltration      116            30841              143.020
transhumance      39             6272               138.492
agroforestiers    22             710                136.028
bounou            20             47                 135.647
mil               78             19924              135.471
fourrage          64             15548              132.932
erosion           22             947                131.891
sp-conedd         19             1                  129.781
herbacé           54             13736              122.088
mouhoun           20             934                120.181
ravine            32             5863               117.388
gurunsi           17             20                 115.903
sarclage          18             574                113.503
soil              21             1833               113.079
mooré             17             414                109.613
lobi              17             437                109.267
zougmoré          16             59                 108.527
ravinement        17             651                106.147
urée              26             4649               105.563
dégradation       234            97984              103.653

3.2 Conceptualization

According to NeOn, the conceptualization phase consists in structuring the domain knowledge obtained in the specification phase. We summarize the activities of this phase in the following algorithm, inspired by the one proposed by [7]:


Conceptualization algorithm
Input: Pre-glossary of terms
Output: Informal representation model
Begin:
T1. Group concepts and create a class dictionary
T2. Create an array of binary relations
T3. Create an array of attributes
T4. Create an array of instances
T5. Build informal representation models by sub-domain (with reuse of existing models)
T6. Integrate the different models into a global model
End.
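The class dictionary and the array of binary relations produced by tasks T1 and T2 can be held in simple data structures before any formal encoding. The sketch below models a few dictionary rows (the class and role names are taken from Table 3; the `Source_Target` naming convention for intermediate classes is inferred from that table):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BinaryRelation:
    """One row of the class dictionary: a role linking the central class
    Bonnepratique to a target class, together with its inverse role."""
    source: str
    target: str
    role: str
    inverse: str

    @property
    def intermediate_class(self) -> str:
        # Naming convention observed in Table 3: Source_Target
        return f"{self.source}_{self.target}"

# A few entries transcribed from Table 3 (illustrative subset only)
RELATIONS = [
    BinaryRelation("Bonnepratique", "Secteuractivite", "est_pratique_dans", "se_pratique"),
    BinaryRelation("Bonnepratique", "Typesol", "convient_sur", "est_propice_a"),
    BinaryRelation("Bonnepratique", "Impact", "a_pour_impact", "est_impacter_par"),
]

for rel in RELATIONS:
    print(rel.intermediate_class, rel.role, "<->", rel.inverse)
```

Such a table can then be iterated over to generate the informal UML models of task T5 or, later, the OWL axioms of the formalization phase.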

Starting from the sub-domain breakdown of the proposed field of study (Fig. 2), we note that the sub-domains of climate change, risk and disaster, urban vulnerability and governance are already covered by the work of I. DIOP [12]. Therefore, only the two sub-domains "endogenous knowledge" and "agro-sylvo-pastoral sector" will be modeled in this conceptualization phase. This is followed by an approach that allows the reuse of the models resulting from I. DIOP's work, according to scenario 3 concerning the OR reuse process. Thus, the dictionary of classes, sub-classes, relations and inverse relations will be created to allow the representation of informal models.

• Construction of informal representation models by sub-domain

These representations are realized through task 5 of our algorithm. For this representation, we propose to use UML class diagrams. UML (Unified Modeling Language) is an object-oriented modeling language. The Semantic Web is the future vision of the Web in which information is made explicit in order to allow its automatic processing by machines. Many applications are already modeled in UML; however, there is no specific modeling language for modeling a knowledge base. We can then extend the use of UML, in particular class diagrams, to this purpose. As announced above, we propose diagrams for the sub-domains "endogenous knowledge" and "agro-sylvo-pastoral sector". The diagrams for the other sub-domains, dealing with climate change, will be imported from I. DIOP's work through the OR reuse process. Figure 3 is a representation of the informal conceptual model of the endogenous knowledge sub-domain and Fig. 4 is a representation of the agro-sylvo-pastoral sector sub-domain. Each model has a connection point with another model. In the next section, on integration, we explain how this integration is done between the different models.


Fig. 3. Informal conceptual model of the endogenous knowledge subdomain

Fig. 4. Informal model of the agro-sylvo-pastoral sector sub-domain.

• Reuse of NOR by scenario 2

Let us recall that in the work of I. DIOP [5], the sub-domains involved in the construction process of the ontological pattern on climate change were taken into account in the conceptualization phase. Therefore, we limit ourselves to the two sub-domains "endogenous knowledge" and "agro-sylvo-pastoral sector" in our work. With the non-ontological resources already identified and the set of CQs with the answers obtained from the domain experts, we used terminological extraction tools to generate a list of terms (concepts, relationships, attributes, instances, etc.): the pre-glossary of terms (see Sect. 3.1). Thus, tasks 1, 2, 3 and 4 of our algorithm consist in grouping concepts (or classes), sub-classes and finally instances. This allowed us to obtain the dictionary, of which an extract is given in Table 3, which serves as a basis for the representation of the different models.


Table 3. Dictionary of concepts, sub-concepts, relationships (or roles).

Main classes: Bonnepratique, Secteuractivite, Typeacteur, Conditionenvironnementales, Objectifs, Exploitants, Communautescibles, Typesutilisationterre, Structurederattachement, Typesol, CategorieBP, Problemevise, Echelleintervention, Zoneclimatique, Imageillustrative, Impact, Paysdepratique, Niveauadoption, TypeBP, Effet, Avantage

Sub-classes: Structurephysique, Structuremorale; Zonesahelienne, Zonesoudanosahelienne, Zonesoudanaise; Impactsocioeconomique, Impactenvironnemental, Impactclimatique

| Intermediate classes | Roles | Inverse roles |
|----------------------|-------|---------------|
| Bonnepratique_Secteuractivite | est_pratique_dans | se_pratique |
| Bonnepratique_Typeacteur | concerne | s_interesse |
| Bonnepratique_conditionenv | necessitecondition | sont_requises |
| Bonnepratique_Objectifs | a_pour_objectif | est_atteint_par |
| Bonnepratique_Exploitants | s_adresse | sont_interesses_par |
| Bonnepratique_Communautescibles | cible | vise |
| Bonnepratique_Typeutilisationterre | sapplique_sur | utilise |
| Bonnepratique_Structurederattachement | est_decrite_par | decrit |
| Bonnepratique_Typesol | convient_sur | est_propice_a |
| Bonnepratique_CategorieBP | est_de_categorie | regroupe |
| Bonnepratique_Problemevise | solutionne | est_solutionner_par |
| Bonnepratique_Echelleintervention | est_pratique_dans | se_pratique |
| Bonnepratique_Zoneclimatique | est_localiser | existe |
| Bonnepratique_Imageillustrative | se_presente | represente |
| Bonnepratique_Impact | a_pour_impact | est_impacter_par |
| Bonnepratique_Paysdepratique | existe | se_pratique |
| Bonnepratique_Niveauadoption | a_pour_niveau_adoption | est_atteint |
| Bonnepratique_TypeBP | est_de_type | est_classe_parmi |
| Bonnepratique_Effet | a_pour_effet | est_resultat_de |
| Bonnepratique_Avantage | a_pour_avantage | |
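To make the class–role–inverse structure of this dictionary concrete, here is a minimal sketch. It is illustrative only: the tuples below pair a few roles from Table 3 with plausible domain and range classes, which the table does not state explicitly.

```python
# Illustrative sketch of part of the Table 3 dictionary: each entry pairs a
# role with its inverse role, between an assumed domain and range class.
ROLE_DICTIONARY = [
    # (domain class, role, inverse role, range class)
    ("Bonnepratique", "est_pratique_dans", "se_pratique", "Secteuractivite"),
    ("Bonnepratique", "concerne", "s_interesse", "Typeacteur"),
    ("Bonnepratique", "a_pour_impact", "est_impacter_par", "Impact"),
]


def inverse_of(role, dictionary=ROLE_DICTIONARY):
    """Return the inverse of a role, searching the dictionary in both directions."""
    for _domain, r, inv, _range in dictionary:
        if r == role:
            return inv
        if inv == role:
            return r
    return None
```

For example, `inverse_of("concerne")` yields `"s_interesse"`, and looking up an inverse role such as `"se_pratique"` yields the direct role `"est_pratique_dans"`.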

3.3 Formalization and Implementation

To facilitate the exploitation of the ontology by software agents, it must be expressed in a knowledge representation formalism. This activity consists in formalizing the conceptual models obtained previously according to a knowledge representation paradigm, as summarized in the algorithm below.

Algorithm: Formalization & Implementation
Input: Informal representation model
Output: The OntoEndo ontology
Begin
  T1. Describe the formalization language
  T2. Describe the formalization rules
End

Task 1 of the formalization algorithm is to define the formalization language to be used in the construction of the ontology, which relies on formal languages. An ontology language makes it possible to state that an object belongs to a category, to declare generalization relations between categories, and to type the objects linked by a relation [13]. In this respect, RDFS and OWL are considered the most suitable languages because they derive from W3C consortium recommendations and benefit from



an expressiveness adapted to the needs of each. Both are based on the XML markup language, a fundamental building block of the Semantic Web. OWL is an evolution of the DAML+OIL Web language, which is itself based on RDFS. It was designed “to explicitly represent the meaning of terms, vocabularies, and the relationships between these terms” [14]. OWL surpasses RDFS in its ability to represent an ontology that is automatically interpretable: it enables a machine to reason over the knowledge base, inferring implicit knowledge and detecting possible inconsistencies. Moreover, the vocabulary of OWL is richer than that of RDFS, adding relations between classes, cardinality and equality properties, and class definition by enumeration. OWL also manages different levels of complexity through three sub-languages of increasing expressiveness:

• OWL Lite, a minimal subset intended for the construction of taxonomies;
• OWL DL (Description Logics), much more expressive than OWL Lite while guaranteeing the completeness and decidability of computations;
• OWL Full, with the syntactic freedom of RDFS but without guarantees of computational completeness.

In order to benefit from the full semantic richness of OWL and to anticipate the future use of reasoners, we have chosen to focus on the different ways of modeling the terminological part of an OWL DL RTO. Any UML-to-OWL model transformation process relies on transformation rules. UML diagrams are a generic formalism for modeling applications in most domains, but the semantics of this formalism are not described declaratively, which prevents the inferences that would detect formal properties of UML class diagrams. A set of mapping (conversion) rules exists for transforming the objects of a UML class diagram into an OWL ontology. Building on the works [15, 16], we propose in Table 4 some rules for the transition from UML to OWL.
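As a sketch of what such a rule looks like in practice (this is not the authors' tooling; the function, class and attribute names are illustrative), a UML class with a primitively typed attribute can be mechanically rewritten into OWL/XML:

```python
import xml.etree.ElementTree as ET

# Standard namespace URIs used by OWL/XML documents.
OWL = "http://www.w3.org/2002/07/owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
XSD = "http://www.w3.org/2001/XMLSchema#"


def uml_class_to_owl(name, attributes):
    """Map a UML class and its typed attributes to OWL/XML.

    Sketch of the usual rules: a UML class becomes an owl:Class; an attribute
    with a primitive type becomes an owl:DatatypeProperty whose rdfs:domain is
    the class and whose rdfs:range is the matching XSD datatype.
    """
    root = ET.Element(f"{{{RDF}}}RDF")
    cls = ET.SubElement(root, f"{{{OWL}}}Class")
    cls.set(f"{{{RDF}}}about", f"#{name}")
    for attr, xsd_type in attributes.items():
        prop = ET.SubElement(root, f"{{{OWL}}}DatatypeProperty")
        prop.set(f"{{{RDF}}}about", f"#{attr}")
        dom = ET.SubElement(prop, f"{{{RDFS}}}domain")
        dom.set(f"{{{RDF}}}resource", f"#{name}")
        rng = ET.SubElement(prop, f"{{{RDFS}}}range")
        rng.set(f"{{{RDF}}}resource", XSD + xsd_type)
    return ET.tostring(root, encoding="unicode")


# Hypothetical example: the Bonnepratique class with a string attribute "nom".
owl_xml = uml_class_to_owl("Bonnepratique", {"nom": "string"})
```

The generated fragment declares one `owl:Class` and one `owl:DatatypeProperty`, which is the shape of output that rules such as R1 in Table 4 describe.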
3.4 Implementation

Finally comes the OntoEndo implementation phase. Several ontology editors are currently available, notably Protégé 2000; OntoEdit and OILEd [16] are two other editors that integrate the OWL language. For the implementation of our ontology, we chose Protégé, an open-source solution that is easy to master. This allowed us to implement our ontology while applying the transformation rules from UML to OWL. Some concepts from the AGROVOC thesaurus have also been reused: for example, the terms “forestry” and “agroforestry” also exist in AGROVOC. To link to these terms, we use their URIs already defined in AGROVOC, for example: Forestry: http://aims.fao.org/aos/agrovoc/c_3055. Agroforestry: http://aims.fao.org/aos/agrovoc/c_207.
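Linking local concepts to AGROVOC amounts to emitting alignment triples against the URIs above; a minimal sketch (the local namespace and the choice of skos:exactMatch are assumptions for illustration, not taken from the paper):

```python
# Hypothetical local namespace for OntoEndo concepts.
BASE = "http://example.org/ontoendo#"

# AGROVOC URIs quoted in the text.
AGROVOC = {
    "Forestry": "http://aims.fao.org/aos/agrovoc/c_3055",
    "Agroforestry": "http://aims.fao.org/aos/agrovoc/c_207",
}


def exact_match_triples(mapping, base=BASE):
    """Build (subject, predicate, object) alignment triples to AGROVOC."""
    return [(base + local, "skos:exactMatch", uri) for local, uri in mapping.items()]


triples = exact_match_triples(AGROVOC)
```

Each resulting triple can then be added to the OWL file produced in Protégé so that the local concept dereferences to its AGROVOC counterpart.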


Table 4. UML-OWL transformation rules

| Rule | UML object | OWL code | Description |
|------|-----------|----------|-------------|
| R1 | UML Class | // the type  // closing tag | A class attribute with a UML primitive type is mapped to a datatype property (owl:DatatypeProperty) |
| R4 | Identifiant | | |

.001), χ2 (1, N = 288) = 4.795, p = .029 (see Table 7A). The model with the intercept only has a −2 Log-Likelihood statistic of 318.779 (313.984 + 4.795). The model summary in Table 6A shows a −2 Log-Likelihood statistic of 168.293. Adding the gender variable reduced the −2 Log-Likelihood by 4.795, the χ2 statistic in Table 7A, which implies a better model for predicting a patient's likelihood of using eHealth technologies. After adding the seven variables to the model, the −2 Log-Likelihood statistic dropped to 168.293 (see Table 6B), indicating that the


H. K. Namatovu and M. A. Magumba

expanded model does a better job of predicting the likelihood of eHealth use than the one-predictor model. The R2 statistics also increased from .025 to .585 in Table 6. Hence, a test of the full model versus a model with intercept only, χ2 (1, N = 292) = 4.795, p = .029, showed a significant improvement in the model, χ2 (7, N = 292) = 150.486, p < .001, with the overall success rate in classification improving from 76% to 86% (see Tables 6B, 7A and 7B). The non-significant chi-square in the Hosmer and Lemeshow statistic, χ2 (6, N = 288) = 4.578, p = .801 (see Table 7C), indicates that the data fit the model very well.

Table 6. Model Summary

(A) For Gender
| Step | −2 Log Likelihood | Cox & Snell R Square | Nagelkerke R Square | Success Rate |
|------|-------------------|----------------------|---------------------|--------------|
| 1 | 318.779 | .016 | .025 | 76% |

(B) For all the seven variables
| 1 | 168.293 | .388 | .585 | 86% |

Table 7. Omnibus Test for Model Coefficients

(A) Gender
| | Chi-square | df | Sig. |
|--------|------------|----|------|
| Step 1 | 4.795 | 1 | .029 |
| Block | 4.795 | 1 | .029 |
| Model | 4.795 | 1 | .029 |

(B) For all the variables
| Step 1 | 150.486 | 7 | .000 |
| Block | 150.486 | 7 | .000 |
| Model | 150.486 | 7 | .000 |

(C) Hosmer and Lemeshow Test
| Step 1 | 4.587 | 8 | .801 |

The variables-in-the-equation output in Table 8 shows that the regression equation is:

ln(ODDS) = −1.537 + 0.621 × Gender

This model predicts the odds that a subject of a given gender will use eHealth technologies to access health services. The odds prediction equation is:

ODDS = e^(a+bX)

If the patient is a woman (gender = 0), she is only 0.215 times as likely to use eHealth technologies as not to use them. If the patient is a man (gender = 1), he is 0.4 times as likely to use eHealth technologies as not to.

Barriers and Facilitators of eHealth Adoption


Table 8. Variables in the Equation

| Predictor | B | Wald χ2 | p | Exp(B) |
|-----------|---|---------|---|--------|
| Intercept only | −1.169 | 71.882 | .000 | .311 |
| Gender | .621 | 4.632 | .031 | 1.861 |
| Constant | −1.537 | 44.741 | .000 | .215 |

Converting the odds to probabilities, using the formula below, the model predicts that 17% of women and 29% of men will use eHealth technologies.

Ŷ = ODDS / (1 + ODDS)
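The arithmetic behind these percentages can be checked directly from the fitted coefficients in Table 8 (intercept −1.537, gender coefficient 0.621); a short sketch:

```python
import math


def odds_and_probability(intercept, coefficient, x):
    """Logistic model: ln(odds) = intercept + coefficient * x,
    and probability = odds / (1 + odds)."""
    odds = math.exp(intercept + coefficient * x)
    return odds, odds / (1 + odds)


B0, B1 = -1.537, 0.621  # fitted values from Table 8

odds_w, p_w = odds_and_probability(B0, B1, 0)  # women (gender = 0)
odds_m, p_m = odds_and_probability(B0, B1, 1)  # men (gender = 1)

# odds_w ≈ 0.215 and p_w ≈ 0.177 (the reported 17% for women);
# odds_m ≈ 0.400 and p_m ≈ 0.286 (the reported 29% for men);
# the gender odds ratio is exp(0.621) ≈ 1.861.
```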

The odds ratio predicted by the model in Table 8, computed as e^0.621, implies that the odds of using eHealth technologies are 1.861 times higher for men than for women.

The variables were coded as follows. Use of eHealth systems/devices took the value 1 (have ever used) or 0 (have never used any eHealth). Gender denoted the sex of the patient, with 1 for males and 0 for females. Education was the educational attainment of the patient, with 1 representing a completed master's degree or above, 2 a completed bachelor's degree, 3 a diploma and 4 an ordinary certificate. Location was the district of residence, with 1 representing Kampala, 2 Jinja, 3 Mbale and 4 Mbarara. Type of patient was coded 1 for outpatients and 0 for recovering patients. Employment status was coded 1 for the employed and 0 for the unemployed. For age, 1 represented 18–30, 2 represented 31–40, 3 represented 41–50 and 4 represented 51 years and above.

Table 9 shows the logistic regression coefficient, Wald test and odds ratio for each predictor variable. Applying a .05 criterion of statistical significance, location, gender and education are statistically significant. The odds ratio for gender (OR = 2.662) indicates that, holding other factors constant, the odds of using eHealth technologies are 2.662 times higher for men than for women. The odds ratio for education (OR = 2.297) reveals that patients with a higher level of education (master's and above) are 2.297 times more likely to use eHealth systems than those with lower education. Similarly, the odds ratio for location (OR = 0.012) indicates that the odds of using eHealth systems fall sharply as one moves from Kampala (coded 1) to the other districts.
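The Exp(B) column in Table 9 is simply e raised to the coefficient B, which is easy to verify for the three significant predictors (coefficients copied from Table 9):

```python
import math

# B coefficients for the three significant predictors in Table 9.
COEFFICIENTS = {"Location": -4.433, "Gender": 0.979, "Education": 0.832}

odds_ratios = {name: math.exp(b) for name, b in COEFFICIENTS.items()}
# Matches the reported Exp(B) values up to rounding:
# Location ≈ 0.012, Gender ≈ 2.662, Education ≈ 2.297
```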
Both models (Pearson Chi-Square model and the logistic regression model) indicate that education level, gender and location of the patients are strong determinants of eHealth adoption.


Table 9. Predictors of eHealth adoption

| Predictors | B | Wald χ2 | P | Exp(B) |
|------------|---|---------|---|--------|
| Location | −4.433 | 15.049 | .000 | .012 |
| Gender | .979 | 6.147 | .013 | 2.662 |
| Type of patient | −.015 | .001 | .971 | .985 |
| Education | .832 | 12.250 | .000 | 2.297 |
| Employment status | −.640 | 2.731 | .098 | .528 |
| Age | .554 | 3.144 | .076 | 1.739 |
| Type of health facility | −.133 | 2.354 | .125 | .876 |
| Constant | 1.999 | 2.140 | .144 | 7.385 |

4 Discussion

The study revealed that training patients to use the technology, communicating the benefits of eHealth to users, securing patient data and the ease associated with using eHealth systems facilitate adoption. On the other hand, digital illiteracy, the technophobic nature of users, lack of acceptance of systems among users and systems that do not meet the needs of users negatively affect adoption. Further analysis revealed that education level (master's degree and above), location (residing in Kampala) and gender (being male) significantly influenced eHealth adoption.

This study revealed that training was a facilitator of eHealth adoption. Training has been cited as a key success factor in technology acceptance. Consistent with our findings, receiving the necessary training prior to using a system has been encouraged in several other studies [6, 19, 21, 28, 30, 33, 39, 41, 45]. Training equips users with knowledge of the system [20], gives them a chance to acclimate to the new processes and, in the long run, boosts confidence to use the system. Effective user training ensures that users have an optimal starting point for working with the new information system [46], facilitates optimal IT use and acceptance [47] and ensures that users with differing levels of IT skills become comfortable with the software [48]. With inadequate training, the system operates but does not fulfill its desired expectations, while untrained users resist the change [48]. Although some studies emphasize that usability eliminates the need for training [49], many have underscored the role of training and support in improving acceptance of eHealth systems [50, 51]. In some studies, training boosted peer and management support, which was a catalyst for system learning and use [33]. Some studies, however, stress that training can only be effective if systematic processes are properly followed [47].
Communicating the benefits of eHealth systems to users ranked second in this study. A system is only as good as its users' awareness of its benefits, and training has been found to fulfill this role. Reminding users of the usefulness of a system increases the chances of adoption, as noted in several other studies [30, 41]. In some studies, communicating anticipated benefits was reported to increase user acceptance of the eHealth system [5, 52, 53]. Communication tightens the loose ends between the patients and


the healthcare providers but, most importantly, makes users aware of the system. User resistance and low adoption of eHealth have largely been attributed to a lack of awareness of the potential benefits of these systems. In some studies, naïve optimism resulting from a lack of communication has created pockets of resistance even before implementation [54].

This study also revealed that securing patient data is crucial in accelerating the adoption of eHealth. Securing patient data involves protecting confidential medical information; once security is compromised, it creates a sense of fear and resistance among users. Similar to this study, concerns over privacy and security have been raised in several other studies [6, 10, 39, 40, 45, 55] as barriers to eHealth adoption. In a study conducted by Chang [43], participants expressed concerns about the confidentiality and security of patient data on smartphones, with specific concern about multimedia capabilities that were perceived as open to abuse. When patients fear the system, its use and adoption become far from attainable. The more robust and secure the system, the lower the likelihood of attack, and hence the greater the adoption.

The ease associated with using systems was another factor revealed in this study as critical for successful adoption of eHealth among patients. There has been wide debate on whether ease of use can be ascribed to technology acceptance, but Kassim's study [56] stresses that ease of use is associated with increased user satisfaction and trust in the system. In a study conducted in Ghana [57], ease of use and perceived usefulness had a stronger influence on eHealth adoption than any other factors. Likewise, other studies, such as Riana et al. [58], have emphasized the relative importance of ease of use in influencing eHealth adoption.

This study revealed that digital illiteracy is a barrier to eHealth adoption among patients.
This can largely be attributed to a lack of training and the absence of user involvement at the time of design and implementation. Lacking the digital skills to operate a system can be aggravated by little or no formal education. As reported in other studies [20, 24, 38, 41], the lack of ICT skills to operate digital technologies is a major impediment to adoption. Whereas a study conducted in Finland indicated that digital literacy does not have a direct impact on adoption [59], research conducted in Uganda found that expectant mothers did not use digital health technologies in their routine antenatal care practices because they lacked the technical skills to operate the internet, computers and smartphones [29]. Because of this problem, many users become technophobic – afraid to use technology. This study revealed a strong correlation between digital illiteracy and technophobia. This fear of technology is partly due to little or no exposure to technology or digital tools and the fear of being ridiculed [60]; as a result, many shun eHealth systems, as reported in other similar studies [19, 45]. If not dealt with at an early stage, it may develop into cyberphobia [60], an abnormal fear that is detrimental to users.

Lack of system acceptance among users was another barrier to eHealth adoption cited in this study. Low acceptance can be caused by a lack of user involvement [33], a system that does not address the needs of its users [61], failure to communicate the benefits of the system [41] and, to a smaller extent, attitude towards technology [23]. Like this study, several other studies [13, 24, 26, 62] reported user acceptance of systems to be a major challenge to eHealth adoption. In a


study conducted by Konduri et al. [63], the authors recommended ensuring user acceptance to fully realize the potential of digital health technologies. In another study, conducted in Iganga district hospital and involving nursing mothers, the NeMo system was successful because of its acceptability among mothers [13]. When users do not have a sense of ownership of the system, acceptance is hard to achieve, which increases system rejection.

The study further revealed that once a system does not address the needs of its users, eHealth adoption cannot ensue. Usefulness is the knowledge users have of the system and the benefits that accrue from its use. Many scholars [23, 28, 30, 39, 13, 61, 64] have equally reported on perceived usefulness as facilitating or impeding the successful adoption of systems in general. Once users do not perceive a system as useful, acceptability will be very low. However, some studies have recommended training users [6] and active user participation in the system evaluation process [14, 55] to enhance users' knowledge of the system. Behind a successful eHealth system is the ability to satisfy the needs of its users [58, 65].

This study revealed that education level, location and gender can influence eHealth adoption. Both models indicated that the gender of the patient can influence adoption; the logistic regression model further revealed that male patients were 2.662 times more likely to use eHealth systems than females. These findings can be corroborated with [18, 57], who equally reported gender to be a determinant of eHealth adoption, although in their study females were more likely to adopt than males. Other studies have likewise reported males enjoying higher levels of eHealth adoption than females [18, 66, 67]. Furthermore, the study revealed that a patient with a master's degree or higher was 2.297 times more likely to use eHealth technologies than the rest of the participants.
Education influences eHealth adoption, and many scholars have equally stressed its importance in accelerating eHealth technology acceptance and adoption [19, 31, 42, 68]. In a study conducted in Ghana, participants with higher education used eHealth devices more often than their counterparts [57]. Education shapes attitude and perception, and it has been reported to improve self-efficacy [69, 70].

Lastly, this study revealed that location strongly influences eHealth adoption; specifically, the odds favored participants residing in Kampala over those in the other districts. Unlike Kampala, these locations have poor network coverage, intermittent internet connectivity and poor telecommunication infrastructure, which often disrupts connectivity [10, 16, 22]. There is little literature supporting location as a determinant of eHealth adoption; however, one study, though not necessarily related to eHealth adoption, reported that location affected the adoption of commercial internet [71]. Similarly, a study by Melesse [72] found a correlation between technology adoption and geographical distance.

5 Conclusion and Recommendations

This study showed that gender, education and location have a significant impact on eHealth adoption. The study also revealed that hospital, technological and individual characteristics influenced eHealth use. Specifically, in order of score, training patients, communicating eHealth benefits to users and user involvement in the preliminary implementation phase were the hospital factors that influenced eHealth adoption among patients. Subsequently, technological factors such as the security


of patient data and ease of use had a positive influence on eHealth adoption. Lastly, the study revealed that individual factors such as lack of acceptance among users, technophobia, digital illiteracy and eHealth system designs that do not meet patients' needs had a negative influence on eHealth adoption. Whereas the other factors under hospital, technological and individual barriers/facilitators showed some influence on eHealth, their average score was relatively low. The success of eHealth requires players in the health sector to focus intently on the socio-demographic characteristics of users and on technological and hospital conditions if eHealth adoption is to ensue. Future research should investigate the maturity level of eHealth systems to ascertain digital health penetration in health facilities.

Limitations of the Study

The limitations of this study concerned both the breadth of the study and the accessibility of study sites. At the hospital level, there were many restrictions on access to study participants because of the COVID-19 pandemic; some facilities that had initially approved our study later backed out in a bid to curb the spread of the virus. At the country level, the two nation-wide lockdowns affected both public and private transport: inter- and intra-district movements were limited and, at a certain point, the study had to be halted because it was no longer possible to get travel permits from the relevant government organs. At the participant level, patients were very hesitant to interact with our research assistants because Uganda was then at the peak of the second wave, and many declined to participate. Another limitation was that no data were collected on the level of exposure to and effectiveness of eHealth technologies as a selection criterion, which could have affected the perceptions of the participants. Rather, the objective of the study was to gather empirical data on the barriers and facilitators, which will inform a more rigorous study.
Acknowledgments. We thank the management of the health facilities for the support rendered, especially during the COVID-19 pandemic. Similarly, we appreciate the participants, whose involvement in this study was pivotal. Lastly, we extend sincere gratitude to the Government of Uganda, through the Makerere University Research and Innovations Fund, for funding this research.

Author Contributions. The authors' contributions towards this study were as follows: conceptualization, Hasifah K. Namatovu; methodology, both authors; data collection, both authors; validation, both authors; formal analysis, Hasifah K. Namatovu; investigation, both authors; resources, Hasifah K. Namatovu; data curation, both authors; writing – original draft preparation, Hasifah K. Namatovu; writing – review and editing, both authors; supervision, both authors; project administration, both authors; funding acquisition, Hasifah K. Namatovu. All authors have read and agreed to the published version of the manuscript.

Funding. The Government of Uganda, through the Makerere Research and Innovations Fund, funded this research. The grant number is MAK-RIF/IND-RD2/1205, and the same grant funded the APC.

Conflicts of Interest. The authors declare no conflict of interest.


References

1. Silva, B.M., Rodrigues, J.J., de la Torre Díez, I., López-Coronado, M., Saleem, K.: Mobile-health: a review of current state in 2015. J. Biomed. Inf. 56, 265–272 (2015). https://doi.org/10.1016/j.jbi.2015.06.003
2. World Health Organization: WHO guideline (2019)
3. Barbabella, F., Melchiorre, M.G., Papa, R., Lamura, G.: How can eHealth improve care for people with multimorbidity in Europe? Health Systems and Policy Analysis, p. 31 (2016)
4. Natrielli, D., Enokibara, M.: The use of telemedicine with patients in clinical practice: the view of medical psychology. Sao Paulo Med. J. 131(1), 62–63 (2013)
5. Gagnon, M.P., et al.: Systematic review of factors influencing the adoption of information and communication technologies by healthcare professionals. J. Med. Syst. 36(1), 241–277 (2012). https://doi.org/10.1007/s10916-010-9473-4
6. Uganda Ministry of Health: Uganda National eHealth Policy. Ministry of Health, p. 35 (2016)
7. United Nations Development Program: Sustainable Development Goals (2020)
8. Hoefman, B., Apunyu, B.: Using SMS for HIV/AIDS education and to expand the use of HIV testing and counselling services at the AIDS Information Centre (AIC) Uganda (2009). http://kau.diva-portal.org/smash/get/diva2:357565/FULLTEXT01#page=43
9. Medicines Transparency Alliance: Client satisfaction with services in Uganda's public health facilities. Essential medicines and health products information portal, World Health Organization (2014)
10. Angues, R.V., et al.: A real-time medical cartography of epidemic disease (Nodding syndrome) using village-based lay mHealth reporters. PLoS Neglected Trop. Dis. 12(6), 1–20 (2018)
11. Management Science for Health: Systems for Improved Access to Pharmaceuticals and Services (SIAPS) Program. RxSolution Technical Brief, Arlington (2014). http://siapsprogram.org/wp-content/uploads/2016/03/TechBrief-Tools-RxSolution.pdf
12. USAID: Systems for Improved Access to Pharmaceuticals and Services Program (Global) (2017)
13. Martin, S.B., et al.: Feasibility of a mobile health tool for mothers to identify neonatal illness in rural Uganda: acceptability study. JMIR Mhealth Uhealth 8(2), e16426 (2020). https://doi.org/10.2196/16426
14. Namatovu, H.K.: Enhancing antenatal care decisions among expectant mothers in Uganda. University of Groningen, Groningen (2018)
15. Zanden, V.A.: WinSenga: Mobile Smartphone-based Electronic Foetal Heart Rate Monitor (2014). https://winsenga.wordpress.com
16. Meyer, A.J., et al.: Implementing mHealth interventions in a resource-constrained setting: case study from Uganda. JMIR mHealth uHealth 8(7), 1 (2020). https://doi.org/10.2196/19552
17. Hendrixk, H.C.A., Pippel, S., Wetering, R., Batenburg, R.: Expectations and attitudes in eHealth: a survey among clients of Dutch private healthcare organizations. Int. J. Healthc. Manag. 6(4), 263–268 (2013)
18. Hoque, M.R.: An empirical study of mHealth adoption in a developing country: the moderating effect of gender concern. BMC Med. Inf. Decis. Making 16(1), 1–10 (2016). https://doi.org/10.1186/s12911-016-0289-0
19. Kakaire, S., Mwagale, F.: Mobile Health Projects in Uganda – Narrative Report. Report completed for the inSCALE project, October 2010 (2010)
20. Chaaya, M., Campbell, O.M.R., El Kak, F., Shaar, D., Harb, H., Kaddour, A.: Postpartum depression: prevalence and determinants in Lebanon. Arch. Womens Ment. Health 5(2), 65–72 (2002). https://doi.org/10.1007/s00737-002-0140-8


21. Wandera, S.O., et al.: Facilitators, best practices and barriers to integrating family planning data in Uganda's health management information system. BMC Health Serv. Res. 19(1), 1–13 (2019). https://doi.org/10.1186/s12913-019-4151-9
22. Isabalija, S.R., Mayoka, K.G., Rwashana, A.S., Mbarika, V.W.: Factors affecting adoption, implementation and sustainability of telemedicine information systems in Uganda. J. Health Inf. Dev. Countries 5(2), 299–316 (2011)
23. Muhaise, H., Kareyo, M., Muwanga-Zake, J.W.F.: Factors influencing the adoption of electronic health record systems in developing countries: a case of Uganda. Am. Sci. Res. J. Eng. Technol. Sci. (ASRJETS) 61(1), 1–12 (2019)
24. Olok, G.T., Yagos, W.O., Ovuga, E.: Knowledge and attitudes of doctors towards e-health use in healthcare delivery in government and private hospitals in Northern Uganda: a cross-sectional study. BMC Med. Inform. Decis. Mak. 15(1), 1 (2015). https://doi.org/10.1186/s12911-015-0209-8
25. Larocca, A., Moro Visconti, R., Marconi, M.: Malaria diagnosis and mapping with m-Health and geographic information systems (GIS): evidence from Uganda. Malar. J. 15(1), 1–12 (2016). https://doi.org/10.1186/s12936-016-1546-5
26. Ggita, J.M., et al.: Related text messages and voice calls in Uganda. 22(5), 530–536 (2019). https://doi.org/10.5588/ijtld.17.0521
27. Yagos, W.O., Tabo Olok, G., Ovuga, E.: Use of information and communication technology and retention of health workers in rural post-war conflict Northern Uganda: findings from a qualitative study. BMC Med. Inform. Decis. Mak. 17, 1–8 (2017). https://doi.org/10.1186/s12911-016-0403-3
28. Kiberu, V.M., Scott, R.E., Mars, M.: Assessing core, e-learning, clinical and technology readiness to integrate telemedicine at public health facilities in Uganda: a health facility–based survey. BMC Health Serv. Res. 19, 1–11 (2019)
29. Namatovu, H.K., Oyana, T.J.: ICT uptake as a determinant of antenatal care utilization in Uganda. Int. J. ICT Res. Afr. Middle East 10(1), 11–32 (2021). https://doi.org/10.4018/ijictrame.2021010102
30. Huang, F., Blaschke, S., Lucas, H.: Beyond pilotitis: taking digital health interventions to the national level in China and Uganda. Global. Health 13(1), 1–11 (2017). https://doi.org/10.1186/s12992-017-0275-z
31. Destigter, K.: A successful obstetric care model in Uganda, pp. 41–44 (2012)
32. Cargo, K., Merry, M., Viljoen, P.: Mobile for Development (2015)
33. Baryashaba, A., Musimenta, A., Mugisha, S., Binamungu, L.: In: Information and Communication Technologies for Development. Strengthening Southern-Driven Cooperation as a Catalyst for ICT4D (2009)
34. Namatovu, H.K., Semwanga, A.R., Kiberu, V.M., Ndigezza, L., Magumba, M.A., Kyanda, S.K.: Barriers and facilitators of eHealth adoption among healthcare providers in Uganda – a quantitative study. In: Sheikh, Y.H., Rai, I.A., Bakar, A.D. (eds.) e-Infrastructure and e-Services for Developing Countries, AFRICOMM 2021. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 443, pp. 234–251. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-06374-9_15
35. Uganda Bureau of Statistics: 2017 Statistical Abstract, pp. 1–341 (2019)
36. Uganda Bureau of Statistics: National Population and Housing Census 2014 (2014)
37. Ministry of Health: National Health Facility Master List 2018, pp. 1–164 (2018). http://library.health.go.ug/sites/default/files/resources/National Health Facility Master List 2018_0.pdf
38. Kiberu, V.M., Mars, M., Scott, R.E.: Development of an evidence-based e-health readiness assessment framework for Uganda. Health Inf. Manage. J. 50(3), 140–148 (2021). https://doi.org/10.1177/1833358319839253


39. Kiberu, V.M., Scott, R.E., Mars, M.: Assessment of health provider readiness for telemedicine services in Uganda. Heal. Inf. Manag. J. 48(1), 33–41 (2019). https://doi.org/10.1177/1833358317749369
40. Kiberu, V.M., Mars, M., Scott, R.E.: Barriers and opportunities to implementation of sustainable e-Health programmes in Uganda: a literature review. Afr. J. Primary Health Care Fam. Med. 9(1), 1–10 (2017). https://doi.org/10.4102/phcfm.v9i1.1277
41. Kabukye, J.K., de Keizer, N., Cornet, R.: Assessment of organizational readiness to implement an electronic health record system in a low-resource settings cancer hospital: a cross-sectional survey. PLoS ONE 15(6), 1–17 (2020). https://doi.org/10.1371/journal.pone.0234711
42. Mangwi Ayiasi, R., Atuyambe, L.M., Kiguli, J., Orach, C.G., Kolsteren, P., Criel, B.: Use of mobile phone consultations during home visits by community health workers for maternal and newborn care: community experiences from Masindi and Kiryandongo districts, Uganda. BMC Public Health 15(1), 1–13 (2015). https://doi.org/10.1186/s12889-015-1939-3
43. Chang, L.W., Njie-Carr, V., Kalenge, S., Kelly, J.F., Bollinger, R.C., Alamo-Talisuna, S.: Perceptions and acceptability of mHealth interventions for improving patient care at a community-based HIV/AIDS clinic in Uganda: a mixed methods study. AIDS Care 25(7), 874–880 (2013). https://doi.org/10.1080/09540121.2013.774315
44. Adam, A.M.: Sample size determination in survey research. J. Sci. Res. Rep. 26(5), 90–97 (2020). https://doi.org/10.9734/jsrr/2020/v26i530263
45. Muhaise, H., Muwanga-Zake, J.W.F., Kareeyo, M.: Assessment model for electronic health management information systems success in a developing country context: a case of the greater Bushenyi districts in Uganda. Am. Sci. Res. J. Eng. Technol. Sci. 61(1), 167–185 (2019)
46. Kushniruk, A.W., Myers, K., Borycki, E.M., Kannry, J.: Exploring the relationship between training and usability: a study of the impact of usability testing on improving training and system deployment. Stud. Health Technol. Inform. 143, 277–283 (2009). https://doi.org/10.3233/978-1-58603-979-0-277
47. He, Z.: Design, Implementation, and Evaluation of a User Training Program for Integrating Health Information Technology into Clinical Processes. ScholarWorks@UMass Amherst (2016)
48. Ajami, S., Mohammadi-Bertiani, Z.: Training and its impact on hospital information system (HIS) success. J. Inf. Technol. Softw. Eng. 2(05) (2012). https://doi.org/10.4172/2165-7866.1000112
49. Ross, J.: It's not a training issue. Practical Usability: Moving Towards a More Usable World (2010). https://www.uxmatters.com/mt/archives/2010/12/its-not-a-training-issue.php. Accessed 29 June 2021
50. Ross, J., Stevenson, F., Lau, R., Murray, E.: Factors that influence the implementation of e-health: a systematic review of systematic reviews (an update). Implementation Sci. 11(1), 1–12 (2016)
51. Patel, V., et al.: The Lancet commission on global mental health and sustainable development. Lancet 392(10157), 1553–1598 (2018). https://doi.org/10.1016/S0140-6736(18)31612-X
52. Archer, N., Fevrier-Thomas, U., Lokker, C., McKibbon, K.A., Straus, S.E.: Personal health records: a scoping review. J. Am. Med. Inf. Assoc. 18(4), 515–522 (2011). https://doi.org/10.1136/amiajnl-2011-000105
53. Mcginn, C.A., et al.: Comparison of user groups' perspectives of barriers and facilitators to implementing electronic health records: a systematic review. BMC Med. 9(1), 1–10 (2011)
54. Mapesa, N.M.: Health Information Technology Implementation Strategies in Zimbabwe (2016)
55. Okunade, K., et al.: Understanding data and information needs for palliative cancer care to inform digital health intervention development in Nigeria, Uganda and Zimbabwe: protocol

Barriers and Facilitators of eHealth Adoption

56.

57.

58. 59. 60. 61. 62.

63.

64. 65.

66.

67.

68.

69.

70. 71. 72.

267

for a multicountry qualitative study. BMJ Open 9(10), 1–9 (2019). https://doi.org/10.1136/ bmjopen-2019-032166 Kassim, E.S., Jailani, S.F.A.K., Hairuddin, H., Zamzuri, N.H.: Information system acceptance and user satisfaction: the mediating role of trust. Procedia-Soc. Behav. Sci. 57, 412–418 (2012). https://doi.org/10.1016/j.sbspro.2012.09.1205 Kesse-Tachi, A., Asmah, A.E., Agbozo, E.: Factors influencing adoption of eHealth technologies in Ghana. Digit. Heal. 5, 1–13 (2019). https://doi.org/10.1177/205520761987 1425 Riana, D., Hidayanto, A.N., Hadianti, S.: Integrative Factors of E-Health Laboratory Adoption : A Case of Indonesia, pp. 1–27 (2021) Nikou, S., Aavakare, M.: An assessment of the interplay between literacy and digital technology in higher education. Educ. Inf. Technol. 26(4), 3893–3915 (2021) Naher, A.: Fear of Computers among People and How to Overcome it (2020). https://www. linkedin.com/pulse/fear-computers-among-people-how-overcome-assrafun-naher/ U. Bureau of Statistics and I. International, “Demographic Health Survey,” Uganda Bur. Stat. Kampala, Uganda, p. 461 (2011). https://dhsprogram.com/pubs/pdf/FR264/FR264.pdf Roberts, S., Birgisson, N., Julia Chang, D., Koopman, C.: A pilot study on mobile phones as a means to access maternal health education in eastern rural Uganda. J. Telemed. Telec. 21(1), 14–17 (2015). https://doi.org/10.1177/1357633X14545433 Konduri, N., et al.: Digital health technologies to support access to medicines and pharmaceutical services in the achievement of sustainable development goals. Digit. Health 4, 1–26 (2018). https://doi.org/10.1177/2055207618771407 Pfeiffer, E.: How maternal depression and behaviours impact child health and development Dahleez, K.A., Bader, I., Aboramadan, M.: E-health system characteristics, medical performance and healthcare quality at UNRWA-Palestine health centers. J. Enterp. Inf. Manage. 34(4), 1004–1036 (2021). 
https://doi.org/10.1108/JEIM-01-2019-0023 Zhang, X., Guo, X., Lai, K.H., Guo, F., Li, C.: Understanding gender differences in m-Health adoption: a modified theory of reasoned action model. Telemed. e-Health 20(1), 39–46 (2014). https://doi.org/10.1089/tmj.2013.0092 Khan, I., Xitong, G., Ahmad, Z., Shahzad, F.: Investigating factors impelling the adoption of e-health: a perspective of African expats in China. SAGE Open 9(3), 2158244019865803 (2019). https://doi.org/10.1177/2158244019865803 Huang, K., et al.: Use of technology to promote child behavioral health in the context of pediatric care: a scoping review and applications to low-and middle-income countries. Front. Psychiatry 10, 806 (2019). https://doi.org/10.3389/fpsyt.2019.00806 Viola, S.B., Coleman, S.L., Glennon, S., Pastorek, M.E.: Use of parent education to improve self-efficacy in parents of students with emotional and behavioral disorders. Eval. Program Plann. 82, 101830 (2020). https://doi.org/10.1016/j.evalprogplan.2020.101830 Margolis H., Mccabe, P.P.: Self-Efficacy a key to improving the motivation of struggling learners. 77(6), 241–249 (2004) Forman, C., Goldfarb, A., Greenstein, S.: How did location affect adoption of the commercial Internet? Global village vs. urban leadership. J. Urban Econ. 58(3), 389–420 (2005) Melesse, B.: A review on factors affecting adoption of agricultural new technologies in Ethiopia. J. Agric. Sci. Food Res. 9(3), 1–4 (2018)

Survey of Detection and Identification of Black Skin Diseases Based on Machine Learning

K. Merveille Santi Zinsou(B), Idy Diop, Cheikh Talibouya Diop, Alassane Bah, Maodo Ndiaye, and Doudou Sow

Ummisco-Sénégal, Institut de Recherche pour le Développement (IRD-Hann), École Supérieure Polytechnique (ESP/UCAD), École Doctorale des Sciences et des Technologies, University Gaston Berger of Saint Louis (UGB), BP: 234 Saint-Louis, Senegal
{zinsou.kpetchehoue-merveille-santi,cheikh-talibouya.diop}@ugb.edu.sn, [email protected]
https://www.ugb.sn/

Abstract. Due to their physical and psychological effects on patients, skin diseases are a major and worrying problem in societies. Early detection of skin diseases plays an important role in treatment, and the diagnosis and treatment of skin lesions depend on the skills and experience of the medical specialist. The diagnostic procedure must be precise and timely. Recently, artificial intelligence has been applied to the diagnosis of skin diseases through learning algorithms that exploit the vast amount of data available in health centers and hospitals. However, although many solutions have been proposed for white skin diseases, they are not suitable for black skin: these algorithms fail to identify the range of skin conditions in black skin effectively. The objective of this study is to show that few researchers are interested in developing algorithms for the diagnosis of skin disease in black patients, whereas for dermatology on white skin there is a multitude of automatic detection solutions.

Keywords: Black skin diseases · CNN · Transfer learning · Deep learning · Machine learning

1 Introduction

Dermatological disorders are among the most common diseases in the world. Although frequent, their diagnosis is sometimes complicated by skin tone, skin color, the presence of hair, etc. The usual diagnostic process is quite restrictive for patients because it requires significant time and financial resources. A thorough skin examination is also time-consuming for the doctor, who must examine every lesion on the patient's entire body and use modern, adequate equipment. In Africa, in addition to the profession's difficulties, the challenges in this health field are enormous. Indeed, it is estimated that 30% of the Sub-Saharan population suffers from skin diseases [1]. With so many potential patients, there is a severe shortage of qualified personnel to provide better patient care: there is one dermatologist for a population ranging from 350,000 to one million people [1]. In light of this observation, deploying AI in the branch appears to be a viable and long-term solution. Deep learning algorithms have recently demonstrated exceptional performance on a variety of tasks, particularly the diagnosis of skin diseases. However, the results from Africa are not as conclusive as those from developed countries. This research critically examines current dermatological algorithms based on artificial intelligence. Following that, we identify the limitations of existing solutions and potential challenges that will allow researchers to address dermatological issues related to black skin.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 268–284, 2023. https://doi.org/10.1007/978-3-031-34896-9_16

2 Skin Diseases

It is vital to have a minimal understanding of the clinical management of skin diseases because many processes such as feature extraction, image preprocessing, image resizing, and image segmentation are based on them. This section discusses the anatomy of the skin, current skin diseases, and the most recurrent diseases in Senegal.

2.1 Anatomy of Skin

The skin is the human body's largest organ, representing 16% of its total weight [2]. It provides many functions such as barrier protection, immune protection, body temperature regulation, UV protection, detection, storage, and vitamin D synthesis. It comprises three main layers: the epidermis, the dermis, and the hypodermis.
The epidermis forms a semi-permeable barrier and creates our complexion. Its main cell types are keratinocytes (which make the protein keratin) and melanocytes (which make the melanin responsible for skin pigmentation). It is divided into five layers:
– The basal layer: the lower layer of the epidermis. This layer forms keratinocytes and melanocytes (responsible for protecting the skin from the sun's rays).
– The spinous layer: located above the basal layer. It gives a spiny appearance and contains the keratinocytes that have come up from the basal layer.
– The granular layer: found above the spinous layer; it contains keratinocytes that produce fats, which form a barrier against dehydration by retaining water inside the skin.
– The lucent layer: located above the granular layer, present only on the palms of the hands and soles of the feet; it contains keratinocytes brought up from the granular layer.
– The stratum corneum: the most superficial layer of the epidermis. Here, the keratinocytes die, flatten out, and are then called corneocytes.

The dermis is the connective tissue that supports the epidermis and protects the vascular network and the nerve fibres. It is composed of two layers:
– The papillary layer: a thin layer containing wavy projections that curve up and down at the border between the dermis and the epidermis.
– The reticular layer: a dense connective tissue composed of a network of elastic fibers.
The deeper subcutaneous tissue (hypodermis) consists of connective tissue and fat. The fat protects the body through its cushioning effect, stores energy, and provides insulation for the body (Fig. 1).

Fig. 1. Human skin anatomy

Although the anatomy of the skin is universal for every human being, there are some differences in skin color. Differences in skin color between individuals are due to variation in pigmentation, which results from genetics (inherited from biological parents), sun exposure, or both. Below are some of the typological differences [47] (Table 1): Table 1. Difference between black and white skin [3] Black skin

White Skin

Definitions

Black skin refers to the dark coloration of skin due to the production of eumelanin in humans

White skin refers to the light coloration of skin due to the production of pheomelanin in humans

Type of melanin produced

Eumelanin - dark brown Pheomelanin - red to to black color yellow color

Cell size

Larger cell size with increased diameter

Smaller cell size with decreased diameter

Number of melanocytes

High melanocyte count

Low melanocyte count

Amount of Melanin produced High

Low

pH

Less acidic

More acidic


– The thickness of the epidermis of black skin is identical to that of white skin, but the stratum corneum has 20 cell layers instead of 16; it is therefore more compact and more resistant.
– The skin of black people is relatively less hydrated than that of white people.
– The skin's pH is slightly more acidic (4.8 to 5.2) than that of white skin.
– In white skin, melanosomes are small, clustered within the keratinocytes, and degraded in the upper layers of the epidermis. In black skin, melanosomes are twice as large and are dispersed in the cytoplasm of the keratin cells; they are not degraded and arrive intact in the stratum corneum.

2.2 Common Skin Diseases in Senegal

In the context of our study, we work with dermatologists from national hospital centers of Senegal:
– Albert Royer Children's Hospital in Dakar, a hospital center whose mission is to provide medical and surgical care to children from 0 to 15 years old;
– Aristide Le Dantec Hospital.
With the help of these dermatologists, we have identified the most recurrent skin diseases in Senegal, as summarised in Table 2. Skin diseases manifest differently depending on skin color, age, sex, etc. For example, eczema usually appears as itchy, dry, darker, or red areas of skin. In people of color, eczema often appears "ashen," grayish, or brown. Considering age, eczema in infants is usually seen on the cheeks and forehead; children see it on the wrists, ankles, hands, feet, and elbow and knee creases; in adults, it is seen on the neck, face, feet, back of the hands, upper arms and back, elbow and knee creases, fingers, and toes. Based on the above, it is essential to consider the unique characteristics of black skin when developing an intelligent algorithm for identifying black skin diseases [18].

3 State of the Art of Detection and Classification Algorithms for Uncolored Skin Diseases

Deep learning models (architectures) and classical machine learning (ML) methods have produced remarkable results in skin disorder recognition and classification.

3.1 Machine Learning Algorithms

The system developed by Nawal Soliman Abdalaziz Alkolifi Alenezi [19] reports a 100% accuracy rate over four classes (eczema, melanoma, psoriasis, and healthy skin). A pre-trained AlexNet model is used to extract features, and a multi-class support vector machine (SVM) classifies the features.
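As a hedged illustration of this two-stage pipeline, the sketch below trains a multi-class SVM on stand-in feature vectors. In the actual system of [19] these vectors would be activations from a pre-trained AlexNet (e.g. its 4096-dimensional penultimate layer); the synthetic clusters here only mimic such features.

```python
# Sketch of the transfer-learning pipeline in [19]: a pre-trained CNN yields a
# fixed-length feature vector per image, and a multi-class SVM classifies the
# vectors. The AlexNet features are simulated with class-separated random
# vectors; this is illustrative only, not the authors' implementation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_PER_CLASS, DIM = 30, 64          # stand-in for 4096-dim CNN activations
classes = ["eczema", "melanoma", "psoriasis", "healthy"]

# Simulate one well-separated feature cluster per class.
X = np.vstack([rng.normal(loc=3.0 * i, scale=0.5, size=(N_PER_CLASS, DIM))
               for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), N_PER_CLASS)

clf = SVC(kernel="rbf", decision_function_shape="ovr")  # multi-class SVM
clf.fit(X, y)
train_acc = clf.score(X, y)
print(f"training accuracy: {train_acc:.2f}")
```

Note that 100% accuracy on such separable data is expected; on real images it would be a warning sign of overfitting, as the table below points out for [19].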

Table 2. Common skin diseases in Senegal

– Atopic dermatitis (eczema) causes the skin to become red and itchy. Most common in children, it can occur at any age. It is a chronic condition that flares up from time to time; asthma or hay fever may accompany it [4].
– Contact dermatitis is a skin response that happens when something touches the skin and causes a rash. The rash may begin quickly or develop over time. Contact dermatitis is fairly prevalent; practically everyone gets it at some point [5].
– Psoriasis is a chronic inflammatory disease that manifests differently in people with varied skin tones. Plaque psoriasis can be red, purple, or grayish on darker skin [27].
– Dermatophytoses are mycotic infections of the skin and nails caused by various fungi and classified based on their location on the body. They are also referred to as "ringworm" or superficial fungal infections of the skin.
– Scabies is a parasitic skin infection caused by a mite, Sarcoptes scabiei. The mites live and lay their eggs under the skin, which results in severe itching, particularly at night. There are two types of scabies: crusted (the severe form) and non-crusted [9].
– Dermatomyositis is a rare and heterogeneous autoimmune pathology characterized by a noninfectious inflammatory disease of the muscles and skin, with vasculopathy as the predominant physiopathological element. It can be very severe, and its complications are numerous.
– Necrotizing fasciitis is a severe bacterial infection that destroys the tissue under the skin. This "flesh-eating" disease occurs when bacteria enter the body through a skin lesion. Possible symptoms include blisters, fever, fatigue, and pain much worse than the appearance of the wound would suggest.
– Pyoderma gangrenosum is a rare form of neutrophilic dermatosis that causes rapidly progressing skin ulceration. The lower extremities are where these uncomfortable ulcers most frequently manifest [38].
– Acne is characterized by oily skin and pimples on the face, upper back, and chest. More embarrassing than serious, acne (sometimes called "acne vulgaris") is typically a teenage skin condition that can have significant psychological effects.
– Erysipelas is caused by bacteria, most often streptococcus. It appears on the legs in 85% of cases and on the face in 10%. The bacteria usually enter the skin through a wound and provoke a strong reaction in the skin.
– Lupus comprises purely cutaneous lupus (erythematous lupus) and systemic lupus, an autoimmune disease affecting primarily young women, with a prolonged course that can potentially affect all organs or tissues [11]. It threatens the life of patients in 5 to 10% of cases [32].
– Scleroderma is characterized by chronic hardening and tightening of the skin and connective tissue, joint pain, and heartburn. A rare disease, it primarily affects women and usually occurs between 30 and 50 years of age. Treatments include medication, physical therapy, and surgery.
– Impetigo is a bacterial skin infection caused by staphylococcus or streptococcus. It can be crusty (the most frequent form) or bullous. The bacteria are spread through direct contact with the lesions, which causes small epidemics in children's communities.
– Mycetoma is a chronic, progressive infection caused by fungi or bacteria, which affects the feet, upper limbs, back, head, etc. Late detection of this disease leads to the amputation of the infected limb. It evolves slowly over months or years (10 to 15 years), with progressive extension and destruction of muscles, tendons, fascia, and bones.

Thanh-Ngan Luu et al. [20] provide a framework to categorize human melanoma and non-melanoma skin cancer samples using the Stokes decomposition approach and several artificial intelligence (AI) models. With proper pre-processing of the input parameters, all models exhibited a classification accuracy (F1 score) greater than 90%. Overall, the suggested framework presents a potential method for quickly and precisely categorizing tissue samples of human skin cancer.
A novel automated skin melanoma detection system (ASMD) including a melanoma index was proposed by Kang Hao Cheong et al. in [21]. The DermQuest, DermIS, and ISIC2016 databases were used to compile 600 benign and 600 malignant dermoscopic pictures for the proposed ASMD approach. An accuracy over 97.50% was reached when an RBF (radial basis function) kernel and an SVM (support vector machine) are used together.
V. R. Balaji et al. [22] proposed a system based on a dynamic graph cut algorithm and Naive Bayes (a probabilistic classifier) for skin lesion segmentation and skin disease classification, respectively. This system is fed by the ISIC (International Skin Imaging Collaboration) 2017 dataset. They attained accuracy rates of 91.2% for melanoma, 94.3% for benign cases, and 92.9% for keratosis.
The system developed by Mustafa Qays Hatem and colleagues in [23] achieved 98% accuracy in classifying skin lesions. The K-nearest neighbor (KNN) method is used in the classification process to distinguish between benign lesions that do not indicate pathology and malignant lesions that do. The database consists of 40 images of melanoma and 40 images of normal skin.
Joshua D. B. Mendoza et al. [24] proposed a system based on a field-programmable gate array (FPGA) that uses an algorithm to recognise seven skin disorders (acne, warts, tinea, psoriasis, eczema, rashes, and hives). The system had a 90% accuracy rate. A total of 40 diseased skin images were identified, including five images for each skin disease.
Rahat Yasir et al. [25] proposed a system that successfully identifies nine skin disorders (eczema, acne, leprosy, psoriasis, scabies, foot ulcer, vitiligo, tinea corporis, pityriasis rosea) with 90% accuracy. A total of 775 dermoscopic images were collected and validated by a specialized physician. The system works in two phases: image preprocessing and disease classification.
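To make the KNN approach of [23] concrete, the minimal sketch below uses the same style of global statistics (mean, standard deviation, histogram-based features) and a majority-vote nearest-neighbour classifier. The "images" are synthetic grey-level patches, not dermoscopic data, and the feature set is a simplification of the one the authors describe.

```python
# Illustrative KNN classifier over simple per-image summary statistics, in the
# spirit of [23]. All data here is synthetic.
import numpy as np

def extract_features(img):
    """Global statistics: mean, std, and an 8-bin normalized histogram."""
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0), density=True)
    return np.concatenate(([img.mean(), img.std()], hist))

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training features."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(1)
# Class 0: bright, low-contrast patches; class 1: dark, high-contrast patches.
imgs = [np.clip(rng.normal(0.8, 0.05, (16, 16)), 0, 1) for _ in range(20)] + \
       [np.clip(rng.normal(0.3, 0.20, (16, 16)), 0, 1) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)
X = np.array([extract_features(im) for im in imgs])

probe = np.clip(rng.normal(0.3, 0.20, (16, 16)), 0, 1)  # dark, high-contrast
pred = knn_predict(X, labels, extract_features(probe))
print("predicted class:", pred)
```

As [23] itself notes, such distance-based voting degrades on huge or noisy datasets, since every prediction scans the whole training set.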

Table 3. Comparison of methods based on classical machine learning algorithms

– [19] (2019). Diseases: eczema, melanoma, psoriasis, healthy skin. Method: AlexNet + SVM. Accuracy: 100%. Features extracted: unknown (pre-trained AlexNet model used for feature extraction). Advantage: works on digital images of the disease. Limitation: 100% accuracy suggests overfitting.
– [20] (2022). Diseases: melanoma and non-melanoma skin cancer. Method: Stokes-Mueller matrix decomposition + (Random Forest / Decision Tree / SVM / XGBoost / Radius Neighbors / Ridge), 90% accuracy; Stokes-Mueller matrix decomposition + ExtraTree / k-nearest neighbors (k-NN) / multilayer perceptron (MLP), 100% accuracy. Features extracted: efficient optical parameters (refractive index, dispersion, transmittance, transmission coefficient). Advantage: hybrid framework proposed to classify diseases. Limitation: small number of images; models with 100% accuracy hide an overfitting problem.
– [21] (2021). Disease: melanoma. Method: BEMD (bidimensional empirical mode decomposition of images) + SVM + RBF. Accuracy: 97.50%. Features extracted: texture and entropy features. Advantage: high accuracy. Limitation: dataset images are not captured under the same conditions.
– [22] (2020). Diseases: benign cases, melanoma, keratosis. Method: dynamic graph cut algorithm + Naive Bayes. Accuracy: 94.3%, 91.2%, and 92.9%, respectively. Features extracted: colour and texture features. Advantage: use of digital images; dynamic graph cut framework for segmentation. Limitation: small, unbalanced dataset.
– [23] (2022). Disease: melanoma. Method: k-nearest neighbour (KNN), fast Fourier transform. Accuracy: 98%. Features extracted: mean (using the fast Fourier transform), standard deviation, histogram-based mean and standard deviation, edge-based pixel count of area and hole, and edge-based logarithmic pixel count of area and hole. Advantage: works on dermoscopic images. Limitation: system is ineffective with a huge dataset and sensitive to the dataset's noise.
– [25] (2015). Diseases: eczema, acne, leprosy, psoriasis, scabies, foot ulcer, vitiligo, tinea corporis, pityriasis rosea. Method: eight pre-processing algorithms (grey image, sharpening filter, median filter, smooth filter, binary mask, histogram, YCbCr, and Sobel operator) + forward-backward propagation artificial neural network. Accuracy: 90%. Features extracted: average colour code of the infected area, infected area size (in pixels) and shape, edge detection of the infected area, and user input (gender, age, duration, liquid, type, liquid color, elevation, feeling). Advantage: new approach with acceptable accuracy. Limitation: the pre-processing stage is a little complex.

3.2 Deep Learning Algorithms

A. Muhaba et al. [26] proposed an automated deep learning system using the MobileNet-v2 pre-trained model, capable of diagnosing five skin disorders (lichen planus, acne vulgaris, atopic dermatitis, tinea capitis, onychomycosis). The learning system is powered by clinical pictures and patient information. Using various smartphone cameras, 1,088 skin images of the five main conditions were gathered in southwest Ethiopia, eastern Amhara, and the Afar area.
An Eff2Net convolutional neural network (CNN) model was proposed by Karthik R. et al. [27]. This model is built on EfficientNetV2 and the Efficient Channel Attention (ECA) block. In comparison to other deep learning methods described in the literature, the suggested CNN learned roughly 16M parameters to categorize the disease. The model has an overall test accuracy of 84.70% when applied to the four classes of actinic keratosis (AK), melanoma, psoriasis, and acne.


In [28], the proposed system is fed with 3,406 images and can identify seven skin diseases (psoriasis, acne, chickenpox, vitiligo, eczema, tinea corporis, and pityriasis rosea). The dataset is considered unbalanced due to its uneven number of images per class. An accuracy of 94.4% was reached by applying oversampling and data augmentation techniques to pre-process the input data.
Moolchand Sharma et al. [29] developed a system that can identify and classify five major skin conditions (eczema, viral infections, psoriasis and lichen planus, benign tumors, and fungal infections) with 95% accuracy. They did this by choosing a residual neural network (ResNet) with 50 layers, trained on a collection of 1,900–2,500 photos.
A one-to-many approach and convolutional neural networks were combined to develop the system suggested by Kemal Polat et al. [30] for classifying skin diseases from dermoscopic images, which they claim achieves 92.90% accuracy. To build the image dataset, they extracted skin disease images from the HAM10000 dataset.
In [31], 3,400 clinical pictures from the ISIC (International Skin Imaging Collaboration) database make up the dataset. The optimal probability-based deep neural network (OP-DNN) is applied to the pre-processed images during the training phase. The resulting prediction model achieved an accuracy of 95%.
A technique that uses the HAM10000 dataset to segment and classify skin lesions for automatic skin cancer diagnosis was proposed by Adekanmi A. Adegun et al. [32]. Using a fully convolutional network (FCN) encoder-decoder, the method first learns the intricate and irregular features of skin lesions. The second component of the strategy uses a DenseNet made up of dense blocks combined and connected using the concatenation strategy and the transition layer. The precision, recall, and AUC score of the suggested model are 98%, 98.5%, and 100%, respectively.
An intelligent method was created by Saad Albawi et al. [33] that can categorize three different sorts of skin conditions: melanoma, neuroma, and atypical conditions. The suggested solution uses an adaptive filtering pre-processing step to remove noisy regions in the skin image. The International Skin Imaging Collaboration (ISIC) database feeds the developed model, which has a 96.768% accuracy rate.
The system developed in [34] can identify 17 skin diseases based on a multiclass classifier, the Deep Generative Adversarial Network (DGAN). A total of 13,650 pictures were gathered for the model from four different datasets: PH2, SD-198, Interactive Dermoscopy Atlas, and DermNet. Half of the photos for each category of skin diseases were labeled, and the other half were left unlabeled.
A system based on the pre-trained AlexNet model is proposed in [35]. It is fed by a total of eighteen hundred images collected from the Internet. All classes achieved 100% accuracy except for pityriasis versicolor, tinea, and seborrheic dermatitis, with 93.3%, 96.7%, and 90%, respectively.
R. Bhavani et al. [36] developed a method based on ensembling three models: Inception v3, MobileNet, and ResNet. The three neural network models are used to forecast and classify the diseases when an input is supplied to the system. The model is 100% accurate in identifying three skin conditions: atopic dermatitis, actinic keratosis basal cell carcinoma, and acne.
Halil Murat Ünver et al. [37] developed an intelligent system capable of identifying melanomas, using the YOLOv3 (You Only Look Once) algorithm for image classification and GrabCut for image segmentation. The method has been tested on two publicly accessible datasets, ISBI 2017 and PH2, which contain a combined 2,150 images. The model achieved an accuracy of 93.39%, and no data augmentation was used.

Table 4. Comparison of deep learning approaches

Dataset

Skin diseases

Architecture and future extracted

[26] 2021 1880 image collected in southwestern Ethiopia

acne vulgaris, atopic dermatitis, lichen planus, onychomycosis, tineacapitis

- mobilenet-v2 - 41 features from 97.5% patiens information (age, gender, anatomical sites (abdomen, anterior torso, armpit, chin, ear, forehead, lateralface, lower back, lower extremity, nail, neck, periorbital region, posterior torso, scalp and upper extremity), symptoms of the diseases, outputs of 1280 images feature maps

A: Hight level accuracy L: The images collected were not taken under the same conditions. The dataset is not homogeneous.

[27] 2022 undefined

acne, actinic keratosis, melanoma and psoriasis

EfficientNetV2 + Efficient Channel Attention (ECA)

84.70%

A: New detection approach proposed L: Precision under 90%

[28] 2019 3406 images

Acne, eczema, Chickenpox, Pityriasis rosea, Psoriasis, Tinea Corporis, Vitiligo

- MobileNet - colors and shape features

94.4%

A: using oversampling techniques to transform an imbalanced dataset into a balanced dataset

[29] 2021 900–2500 images of each disease type from DERMNET

eczema, psoriasis and benign tumors, lichen planus, fungal infections, and viral infections)

- ResNet

95%

A: High accuracy L: Ten epochs for training are not enough to qualify the model as accurate

[30] 2020 Extract from HAM10000

actinic keratoses and - CNN + one-to-many approach in-traepithelial carcinoma, Without feature extraction benign keratosis, basal cell carcinoma, dermatofibroma, melanoma, melanocytic type and vascular lesions

92.90%

A: New detection approach proposed L: Accuracy to be improved

[32] 2020 HAM10000

Actinic keratoses and - fully convolutional network intraepithelial carcinoma, (FNC) + Conditional Random basal cell carcinoma, benign Field (CRF) + DenseNet keratosis-like lesions , dermatofi- broma, melanoma, melanocytic nevi and vascular lesions

98%

A: The system uses approaches for hyper-parameter optimization to lessen network complexity and boost computing performance

[33] 2019 ISIC database

melanoma, neuroma and atypical diseases

96.768%

A: High accuracy

[34] 2022 13,650 images from various dataset (PH2, SD-198, Interactive Dermoscopy Atlas and DermNet)

acne, vulgaris, angioma, -Deep Generative Adversarial carcinoma, keratosis, nevus, Network (DGAN) - 256 Milk coffee macule, differents features maps dermatofibroma, eczema, keloid, psoriasis, dermatitis ulcer, steroid acne, versicolor, heat rash, and vulgaris

91.1%, and 92.3% for unlabelled and labeled datasets, respectively

A: the algorithm works with labeled and unlabeled images L: Accuracy to be improved

[35] 2021 2070 images collected on the internet

Acne, Atopic Dermatitis, - AlexNet Contact Dermatitis, Human Papilloma Virus Infection, Pityriasis Versicolor, Seborrheic Dermatitis, Tinea, Urticaria, Vitiligo

97.8%

A: High accuracy L:AI System for Nigerian dermatologists but images are not for black skin disorders

- adaptive region growing technique + two-dimensional discrete wavelet transform (2D-DWT) + CNN - texture and geometric features: (contrast, correlation, energy, homogeneity, gradient, color)

Accuracy

Advantage(A) and Limitation(L)

(continued)

Survey of Detection and Identification of Black Skin Diseases

277

Table 4. (continued) Topic

Dataset

Skin diseases

Architecture and future extracted

Accuracy

Advantage(A) and Limitation(L)

[36] 2019 13000 images from Dermnet

Acne, Actinic Keratosis Basal Cell Carcinoma, Atopic Dermatitis

- Logistic regression + Ensembling 100% three machine learning algorithms: Inception v3, MobileNet, and Resnet - high-level features (considered entire image), middle-level features (extracted over the region), low-level feature (extracted pixel by pixel features)

A: New approach, High accuracy L: The combined architecture is complex and does not respect the size of the input images for each model
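The stacked-ensemble idea attributed to [36] — class probabilities from several base CNNs combined by a logistic-regression meta-learner — can be sketched without the CNNs themselves. Everything below (array shapes, the tiny gradient-descent trainer) is illustrative and not taken from the paper:

```python
import numpy as np

def train_meta_lr(base_probs, y, lr=1.0, steps=2000):
    """Fit a logistic-regression meta-learner on stacked base-model outputs.
    base_probs: (n_samples, n_models) positive-class probabilities produced
    by the base classifiers; y: binary labels (0/1)."""
    X = np.hstack([base_probs, np.ones((len(y), 1))])  # append a bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))               # sigmoid
        w -= lr * X.T @ (p - y) / len(y)               # average gradient step
    return w

def meta_predict(w, base_probs):
    """Meta-learner decision: True where the stacked probability exceeds 0.5."""
    X = np.hstack([base_probs, np.ones((len(base_probs), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w))) > 0.5
```

In a real stacking setup the base models would be trained on one fold and their out-of-fold predictions used to fit the meta-learner, which avoids leaking training labels into the combination stage.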

[37] 2019 2150 images from PH2 and ISBI 2017

melanomas

- YOLOv3 (You Only Look Once) + GrabCut

93.39%

A: New approach, acceptable accuracy L: The proposed method does not give an accurate segmentation of the lesion because it includes the surrounding border.

Tables 4 and 3 show that there are multiple approaches to identifying skin diseases from images. Although these systems have achieved high accuracy, they could not correctly identify skin diseases on black skin. This is because certain key specificities (parameters) unique to black skin are not considered. These specificities include, among others: the complexion, the number of melanocytes (the cells responsible for the production of melanin), the melanin type, the shape and color taken by the disease on black skin, the pH, and the degree of hydration.

4 State of the Art of Black Skin Disease Detection Algorithms

278

K. M. S. Zinsou et al.

Scientific research on the automatic detection of skin diseases has accelerated in recent years, and some recent works have proposed techniques for automatically identifying conditions on black skin. For example, "FIRST DERM" used neural networks to build Skin Image Search, an AI tool capable of identifying 33 skin diseases with 80% accuracy. They worked on a total of 300,000 photos, of which black skin images are only about 5–10% [38]. Ugandan researchers tested this application on 123 other photos to diagnose six forms of black skin diseases; it was only 17% accurate [39]. This AI's effectiveness in diagnosing black skin diseases is therefore low. Google has introduced a dermatology application called "Derm Assist" that recognizes 288 distinct skin disorders from photos. The system was initially trained on a dataset of 64,837 photos of 2,399 patients in two states, with an accuracy of up to 97%. People with fair skin, deeper white skin, or light brown skin made up 90% of the database. Dermatologists warn that, due to biased sampling, the app may overdiagnose or underdiagnose persons who are not white. In short, this AI was not designed for people with darker skin [40]. In [41], an autonomous AI system achieved 68% accuracy in determining the most likely skin lesion morphology. The accuracy increased to 80% when the highest forecast made by the AI system was enlarged to include its three most likely predictions. In contrast, primary care doctors had a diagnostic accuracy of 68% with a visual aid and 36% without. An extra batch of 222 heterogeneous photos of various Fitzpatrick skin types (I-III or IV-VI) [46] was used to test the AI. Aggarwal [42] demonstrated a considerable difference in the accuracy of AI for detecting melanoma and basal cell carcinoma (BCC) between individuals with fair skin and patients of color. Two image recognition models were each trained on 150 images, validated on 38 images, and tested on 30 photographs; the ratio of melanoma- and BCC-displaying pictures was constant throughout each phase. One model was trained on light-skin images and the other on skin-of-color (SOC) images. The two models' performance was evaluated by measuring the area under the receiver operating characteristic curve. The sensitivity was 0.60 for fair skin and 0.53 for SOC; the specificity was, respectively, 0.53 and 0.47. The positive predictive values were 0.56 and 0.50, and the negative predictive values 0.57 and 0.50. F1 values were 0.58 for fair skin and 0.52 for SOC. According to the author, the SOC model nonetheless produced subpar results compared to the model trained on lighter skin, even though the same numbers of photos were used for training, validation, and testing. Laila Hayes et al. [43] developed an intelligent system capable of identifying three types of skin diseases (Dermatosis Papulosa Nigra (DPN), Vitiligo, Hyperpigmentation) on black and brown skin via a convolutional neural network (EfficientNet) with transfer learning. The system was trained on 385 images (175 DPN, 160 Vitiligo, 50 Hyperpigmentation) and achieved 94% accuracy.
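The sensitivity, specificity, predictive values, and F1 scores quoted from [42] are standard confusion-matrix statistics; the small helper below (illustrative, not code from any cited study) shows how they are related:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic metrics from confusion-matrix counts:
    true/false positives (tp/fp) and true/false negatives (tn/fn)."""
    sensitivity = tp / (tp + fn)          # recall on the diseased class
    specificity = tn / (tn + fp)          # recall on the healthy class
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f1": f1}
```

Comparing these metrics across the fair-skin and skin-of-color test sets is exactly the kind of disaggregated evaluation that exposes the performance gaps discussed above.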
An analysis of the current literature shows that dermatological algorithms based on artificial intelligence (AI) and able to identify a range of dermal conditions do not work effectively on black skin. The mixed results obtained in previous studies stem from the fact that scientists do not have suitable datasets on diseases of black skin and do not take into account certain features that are unique to black skin conditions.

5 Discussion and Potential Challenges

This study reviewed various machine learning techniques and several deep learning architectures. These approaches show significant changes in accuracy, time, and complexity over the years, and the intelligent systems built on them address a wide range of skin diseases. The proposed methodologies share common steps: image acquisition, image preparation, image segmentation to extract the skin lesion, feature extraction, classification using the extracted features to predict the disease, and model evaluation. However, as shown in Fig. 2, there are some differences: a machine learning classifier takes a feature vector as input and outputs the object class, whereas a deep learning classifier determines the object class directly from the image. Another difference lies in the feature extraction stage, one of the most critical phases. Feature extraction with machine learning algorithms requires deep knowledge of the image processing domain, since the extraction is done manually; designers have a free hand to choose the features they find meaningful, although this is tedious. With deep learning algorithms (e.g., CNNs), on the other hand, the extraction is done automatically: the algorithm identifies by itself the most suitable intrinsic and discriminating features in the images, which constitute the feature vector for the classification stage. As the comparative Tables 4 and 3 show, the main features extracted by the various approaches are:
– color characteristics (grayscale, red, green, blue)
– texture characteristics (contrast, regularity, asymmetry, uniformity, entropy, gradient, homogeneity, kurtosis)
– geometric characteristics (shape)
– effective optical parameters: refractive index, dispersion, transmission, and transmission coefficient.
In addition, some researchers have used dermoscopic images and others digital images as input to the model to be trained. Dermoscopic images come from a dermatoscope, a diagnostic instrument used by a general practitioner or dermatologist during luminescent microscopy examinations of skin lesions; it lets them see the deeper layers of the skin and identify abnormalities more precisely. Digital images come from photo cameras or telephone cameras.
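Several of the texture characteristics above (contrast, uniformity/energy, homogeneity) are conventionally derived from a gray-level co-occurrence matrix (GLCM). A minimal NumPy sketch, assuming a single horizontal pixel offset and a small number of quantized gray levels (the surveyed papers do not publish their exact implementations):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast, energy and homogeneity from a horizontal-neighbour
    gray-level co-occurrence matrix (GLCM)."""
    # Quantize the image into `levels` gray bins.
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of horizontally adjacent pixel pairs.
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                      # normalize to probabilities
    i, j = np.indices(p.shape)
    return {
        "contrast":    np.sum(p * (i - j) ** 2),
        "energy":      np.sum(p ** 2),
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }
```

A flat skin patch yields zero contrast and maximal energy/homogeneity, while a textured lesion raises contrast; these scalars are the kind of hand-crafted entries that go into the feature vector of a classical classifier.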
Dermatological images make it difficult to identify skin lesions. This is due to the presence of unwanted or defective elements (artifacts) in the images, the most common of which are reflections, hairs, oil bubbles, lighting changes, and pixelation, combined with the variety of skin types. Some disorders are challenging to recognize because there is little color difference between the background and the foreground (the lesion), owing to the diversity and characteristics of the many skin types. Improper image segmentation will therefore prevent the machine learning algorithms from correctly identifying the disease. Additionally, for accurate skin disease diagnosis, machine learning algorithms must be designed to deal with both clinical and camera image data.
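A concrete illustration of why segmentation is fragile: a common baseline is global thresholding with Otsu's method, which fails precisely when lesion and background intensities overlap. A simplified sketch (not any surveyed paper's pipeline; it assumes the lesion is darker than the surrounding skin):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # all mass on one side: no valid split
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, levels) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def segment_lesion(img):
    """Binary mask: True where the pixel is darker than the Otsu threshold."""
    return img < otsu_threshold(img)
```

On images where the lesion-background contrast is low (as on many darker skin tones), the two histogram modes merge and the threshold lands arbitrarily, which is exactly the failure mode described above.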


Fig. 2. Deep learning and machine learning methods process

Despite the achievements of these intelligent algorithms, they are ineffective in identifying and diagnosing skin diseases in dark-skinned people, especially hyperpigmented lesions. One explanation for this ineffectiveness on black skin is that in Africa, due to difficulties in accessing care, patients often come to the hospital only when the disease is at an advanced stage, which is not the case in Europe, America, and Asia; the disease therefore presents a different, more advanced appearance in the images. Consequently, an algorithm trained on images of white skin cannot react the same way when presented with images of black skin, since it bases most of its knowledge on how skin lesions appear on fair skin. Moreover, there are significant differences between the manifestations of skin diseases in blacks and whites [44,45], which means that characteristics such as the shape and colour taken by the disease on black skin are not extracted. Added to this is the virtual absence of a representative set of images of dark-skinned people: existing algorithms focus mainly on images of light-skinned people. The anatomical difference at the cellular level (Subsect. 2.1), which causes skin color, also explains the proposed algorithms' inefficiency in identifying black skin diseases. Skin color (tone) is mainly determined by a pigment called melanin, a substance produced by melanocyte cells; this indicates that skin color (complexion) and melanin type were not considered during feature extraction. Finally, the analysis shows that to develop an intelligent algorithm capable of identifying skin disorders on black skin, scientists need to take into account key parameters unique and specific to black skin, such as skin color (complexion and tone), melanin type, and the shape and color taken by diseases on black skin. From all of the above, the potential challenges to address are:


– collect, annotate, and label images to build a dataset of black skin images;
– develop a reliable, intelligent algorithm that can identify and classify skin diseases and can extract characteristics unique and specific to black skin diseases;
– set up a generic framework that allows the easy addition of new diseases.
Subsequently, we will first work on the study and proposal of an intelligent system to assist in the diagnosis of mycetoma. We choose mycetoma because it is a disease that predominantly affects populations in remote areas (farmers, for example), and its early detection is vital to avoid severe cases leading to the amputation of the infected limb.

6 Conclusion

Based on the foregoing, we may conclude that AI is well suited to identifying skin diseases and that several methodological approaches for reliable identification exist. However, we must admit that the numerous techniques proposed for identifying skin diseases are nearly ineffectual when it comes to detecting black skin diseases. This is because specific black skin parameters, such as skin color (complexion), melanin type, pH, degree of hydration, and the shape and color taken by the disorder on black skin, are not considered at the feature extraction stage, preventing the models from classifying skin diseases correctly. Indeed, decades of clinical and scientific research have mostly concentrated on light skin problems, excluding underprivileged people whose symptoms may manifest differently. It is also worth noting that a shortage of medical workers skilled in "black skin dermatology" is a source of daily difficulty for patients in Sub-Saharan Africa, where dermatological illnesses are common: few patients are properly diagnosed and treated, and inadequate patient management can result in serious complications. At the conclusion of this analysis, it is urgent and critical to address the issue of "black skin" dermatology in order to design an effective dermatological algorithm for reliable identification and detection, fed with highly representative black skin data.

Acknowledgments. The authors are thankful for the support provided by the Regional Scholarship and Innovation Fund (RSIF), and the Partnerships for Skills in Applied Sciences, Engineering, and Technologies (PASET).

Funding. This work was supported by the Regional Scholarship and Innovation Fund (RSIF), and the Partnerships for Skills in Applied Sciences, Engineering, and Technologies (PASET).


References

1. Fondation Pierre Fabre - Rapport Annuel 2017: Dermatologie En Milieu Tropical (PDF). https://www.fondationpierrefabre.org/fr/axes-dintervention/autre/sortie-du-rapport-annuel-de-la-fondation-pierre-fabre/. Accessed 02 May 2019
2. Human Skin: Basic Anatomy and Functions. https://www.acne.org/human-skin-basic-anatomy-and-functions.html. Accessed 03 Aug 2022
3. Dr Samanthi: Difference Between Black and White Skin (2019). https://www.differencebetween.com/difference-between-black-and-white-skin/#Black%20vs%20White%20Skin%20in%20Tabular%20FormLocation
4. Atopic dermatitis (eczema). https://www.mayoclinic.org/diseases-conditions/atopic-dermatitis-eczema/symptoms-causes/syc-20353273
5. What is contact dermatitis and how is it treated? https://www.today.com/health/contact-dermatitis-skin-disorder-information-t188139
6. Dermatology: Microneedling to Treat Acne Scars in Patients With Dark Skin Color. https://www.dermatologyadvisor.com/home/topics/acne/microneedling-to-treat-acne-scars-in-patients-with-dark-skin-color/
7. Goldstein, A.O., et al.: Dermatophyte (tinea) infections - UpToDate
8. Public Health Image Library (PHIL). https://phil.cdc.gov/Details.aspx?pid=2875
9. Disease Outbreak Control Division. https://health.hawaii.gov/docd/diseaselisting/scabies/
10. Journal of Cancer Research and Therapeutics. https://www.cancerjournal.net
11. Le lupus: symptômes, traitement, espérance de vie. https://www.doctissimo.fr/html/dossiers/peau_boutons/15090-lupus.htm
12. ICI-induced dermatomyositis 1 - UpToDate
13. Scleroderma of the Hands. http://www.sclerodermajourney.com/hands.htm
14. Demao, O., et al.: Cervical and facial necrotizing fasciitis of dental origin: about 10 cases
15. Impetigo Skin Infection. Fine Art America. https://fineartamerica.com/featured/impetigo-skin-infection-dr-ma-ansaryscience-photo-library.html
16. Pyoderma Gangrenosum in an African American Male Initially Presenting as Sepsis (2022)
17. Develoux, M., et al.: Biological diagnosis of mycetoma (2011). https://doi.org/10.1016/S1773-035X(11)70826-7
18. Patient Dermatology Education: Eczema. https://skinofcolorsociety.org/patient-dermatology-education/eczema/
19. ALEnezi, N.S.A.: A method of skin disease detection using image processing and machine learning (2019). https://doi.org/10.1016/j.procs.2019.12.090
20. Luu, T.-N., et al.: Classification of human skin cancer using Stokes-Mueller decomposition method and artificial intelligence models (2022). https://doi.org/10.1016/j.ijleo.2021.168239
21. Cheong, K.H., et al.: An automated skin melanoma detection system with melanoma-index based on entropy features (2021). https://doi.org/10.1016/j.bbe.2021.05.010
22. Balaji, V.R., et al.: Skin disease detection and segmentation using dynamic graph cut algorithm and classification through Naive Bayes classifier (2020). https://doi.org/10.1016/j.measurement.2020.107922
23. Hatem, M.Q.: Skin lesion classification system using a K-nearest neighbor algorithm. Vis. Comput. Ind. Biomed. Art 5(1), 1–10 (2022). https://doi.org/10.1186/s42492-022-00103-6


24. Mendoza, J.D.B., et al.: FPGA based skin disease identification system using SIFT algorithm and K-NN (2020). https://doi.org/10.1117/12.2572951
25. Yasir, R., et al.: Dermatological disease detection using image processing and artificial neural network (2015). https://doi.org/10.1109/ICECE.2014.7026918
26. Muhaba, K.A., et al.: Automatic skin disease diagnosis using deep learning from clinical image and patient information (2021). https://doi.org/10.1002/ski2.81
27. Karthik, R., et al.: Eff2Net: an efficient channel attention-based convolutional neural network for skin disease classification (2022). https://doi.org/10.1016/j.bspc.2021.103406
28. Velasco, J., et al.: A smartphone-based skin disease classification using MobileNet CNN (2019). https://doi.org/10.30534/ijatcse/2019/116852019
29. Sharma, M., et al.: Detection and diagnosis of skin diseases using residual neural networks (RESNET) (2021). https://doi.org/10.1142/S0219467821400027
30. Polat, K., et al.: Detection of skin diseases from dermoscopy image using the combination of convolutional neural network and one-versus-all (2020). https://doi.org/10.33969/AIS.2020.21006
31. Jain, A., Rao, A.C.S., Jain, P.K., Abraham, A.: Multi-type skin diseases classification using OP-DNN based feature extraction approach (2022). https://doi.org/10.1007/s11042-021-11823-x
32. Adegun, A.A., Viriri, S.: FCN-based DenseNet framework for automated detection and classification of skin lesions in dermoscopy images (2020). https://doi.org/10.1109/ACCESS.2020.3016651
33. Albawi, S., et al.: Robust skin diseases detection and classification using deep neural networks (2019)
34. Heenaye-Mamode Khan, M., et al.: Multi-class skin problem classification using deep generative adversarial network (DGAN) (2022). https://doi.org/10.1155/2022/1797471
35. Adegoke, B.O., et al.: An automated skin disease diagnostic system based on deep learning model. Ann. Faculty Eng. Hunedoara 19(3), 135–140 (2021)
36. Bhavani, R., et al.: Vision-based skin disease identification using deep learning. Int. J. Eng. Adv. Technol. 8(6), 3784–3788 (2019). ISSN 2249-8958
37. Unver, H.M., Ayan, E.: Skin lesion segmentation in dermoscopic images with combination of YOLO and GrabCut algorithm (2019). https://doi.org/10.3390/diagnostics9030072
38. Michael, A.: AI dermatology tool needs more diverse skin types in its training datasets. Physics World (2019)
39. Kamulegeya, L.H., et al.: Using artificial intelligence on dermatology conditions in Uganda: a case for diversity in training data sets for machine learning (2019)
40. Feathers, T.: Google's New Dermatology App Wasn't Designed for People With Darker Skin. Vice. https://www.vice.com/en/article/m7evmy/googles-new-dermatology-app-wasnt-designed-for-people-with-darker-skin. Accessed 27 June 2022
41. Dulmage, B., et al.: A point-of-care, real-time artificial intelligence system to support clinician diagnosis of a wide range of skin diseases. J. Invest. Dermatol. (2021). https://doi.org/10.1016/j.jid.2020.08.027
42. Scoviak, M.: AI Diagnostics Fall Short in Skin of Color. Dermatology Times (2021). https://www.dermatologytimes.com/view/ai-diagnostics-fall-short-in-skin-of-color
43. Hayes, L., Kara, E.: Recognition of skin diseases on black and brown skin via neural networks (2022). https://symposium.foragerone.com/spelmanresearchday/presentations/41990


44. Basset, A., et al.: Dermatology of black skin. https://doi.org/10.1001/archderm.1988.01670070100035
45. Mukwende, M., et al.: Mind the Gap
46. Skin Renewal: Fitzpatrick Skin Type V. https://www.skinrenewal.co.za/fitzpatrick-skin-type-v. Accessed 27 June 2022
47. Chatterjee, S., et al.: Dermatological expert system implementing the ABCD rule of dermoscopy for skin disease identification (2021). https://doi.org/10.1016/j.eswa.2020.114204
48. What Black Patients Need To Know About The Effects of Psoriasis. https://www.everydayhealth.com/psoriasis/what-black-patients-need-to-know-about-the-effects-of-psoriasis/

Rebuilding Kenya's Rural Internet Access from the COVID-19 Pandemic

Leonard Mabele1(B), Kennedy Ronoh1, Joseph Sevilla1, Edward Wasige2, Gilbert Mugeni3, and Dennis Sonoiya3

1 Strathmore University, Ole Sangale Link, Nairobi, Kenya
[email protected]
2 University of Glasgow, Rankine Bldg, Oakfield Ave, Glasgow G12 8LT, UK
3 Communications Authority of Kenya, CA Centre, Waiyaki Way, Nairobi, Kenya

Abstract. In this paper, we share our findings on Research Paper 4, one of the 15 research studies supported by the International Telecommunication Union (ITU) under the Connect2Recover Research Competition. The main objective of Research Paper 4 was to assess the level of digital resiliency of rural areas of Kenya in a two-pronged approach (before and during the COVID-19 pandemic), and the mechanisms that can be adopted as opportunities to "build back" rural connectivity. Hence, through desk research, the state of broadband access in Kakamega and Turkana counties, as representatives of rural Kenya, was evaluated. The evaluation focused on connectivity for the education and healthcare sectors. Further, field surveys were conducted in Kakamega and Machakos counties to obtain primary data and develop a demonstration mapping tool that can serve as a benchmark to effectively support connectivity initiatives for rural Kenya. Spectrum measurements were also incorporated in this evaluation to determine the extent of the opportunity for rebuilding rural broadband through spectrum sharing. Alongside these findings, this paper closes with recommendations, in terms of both policy and technology, to reinforce digital inclusion for rural Kenya as part of the pandemic recovery strategies and safeguards against future hazards.

Keywords: Digital Resiliency · Last-mile Internet access · Spectrum Sharing

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 285–298, 2023. https://doi.org/10.1007/978-3-031-34896-9_17

1 Introduction

The Connect2Recover initiative was launched by the International Telecommunication Union (ITU) in conjunction with the Government of Japan and the Government of Saudi Arabia in September 2020, in line with the United Nations Secretary-General's Roadmap for Digital Cooperation and the global goal of universal connectivity. The overall objective of the initiative is to reinforce the digital infrastructure and digital ecosystems of beneficiary countries and provide means of utilizing digital technologies such as telework, e-commerce, remote learning and telemedicine to support the COVID-19 recovery efforts [1]. In July 2021, the initiative launched the "Connect2Recover Research Competition", which sought to identify promising research proposals that would accelerate digital inclusion efforts for COVID-19 recovery. The competition encompassed the following aims: improvement of research focus on digital resiliency and digital inclusion for pandemic recovery; development of a global research community of think tanks and academic institutions around digital inclusion; and promotion of knowledge sharing that informs targeted practices to build back better with broadband. Out of the 307 proposals submitted to the competition from 80 countries, 15 were selected; the 15 Research Papers hailed from 43 institutions and individual researchers in 22 countries. The work we present here is based on Research Paper 4, entitled Rebuilding Digital Inclusion for Rural Counties of Kenya.

1.1 Overview of the State of Connectivity in Kenya and Our Research Work

Prior to the COVID-19 pandemic (by the end of 2019), the International Telecommunication Union (ITU) reported that 3.7 billion people were still unconnected to the online world [2]. Although this number has reduced to 2.9 billion according to the 2022 Global Connectivity Report [3], it is still high, as the figure represents one-third of the population on the planet [4]. Further, a significant fraction of the connected population is yet to be described as 'meaningfully connected': meaningful connection refers to a reliable, affordable and dependable connection based on user needs, rather than a simple connection [2]. Geographically, however, Sub-Saharan Africa (SSA) lags behind the rest of the world in terms of Internet access, with only 29% of its entire population using the Internet [5]. During the pandemic, the region accounted for almost half of the 450 million people living in areas not sufficiently covered by 3G or 4G mobile networks [6].
Therefore, the drastic shift in data traffic at the height of the pandemic imposed considerable stress on healthcare, education and business systems that lacked a solid connectivity ecosystem with which to comfortably migrate to online platforms within the SSA region. This was true for both fixed and mobile networks [7]. In Kenya, the state of Internet access similarly reflected COVID-19's global scenario of the 'haves' and 'have-nots': regions that had better connectivity prior to the pandemic remained better connected compared to regions that lacked affordable and reliable access at the onset of the pandemic [8]. While this was partly attributed to the physical distancing restrictions implemented to curb the spread of the coronavirus, it is also evident that initiatives to connect the 'have-nots' were still far (and still are) from being ready [9]. This inequality was further exacerbated by the state of electricity supply across the rural areas of the country, limitations of the available physical and Internet infrastructure, levels of income, and the limited availability of contextual e-learning platforms. Moreover, challenges such as a lack of digital skills, the cost of personal devices, and a perceived lack of relevancy were conspicuous in the rural counties of Turkana, Tana River, West Pokot, Lamu and Marsabit [8].

To enhance Internet access for these rural areas of Kenya beyond the pandemic, the Communications Authority of Kenya (CA), Kenya's Information and Communication Technology (ICT) regulator, endeavored to enact two regulatory frameworks premised on the opportunity of enabling affordable Internet access through spectrum sharing. The first framework is the Authorization of the Use of Television White Spaces (TVWS) and the second is the Licensing and Shared Spectrum Framework for Community Networks. The former, while emphasizing Kenya's vision of a digitally transformed nation, joined the countries that had already published rules for secondary access to the TVWS in the 470–694 MHz spectrum as the pioneer step of spectrum sharing [10]. The opportunity of spectrum sharing is meant to increase capacity by allowing fallow spectrum to be exploited opportunistically to connect the underserved without displacing the incumbents. The latter, noting the limited number of Community Networks (CNs) in Kenya, laid a foundation through a bottom-up contextual model to address broadband needs at both sub-county and 'village' levels. In essence, it was established as a complementary strategy to meet the need for affordable communication infrastructure among Kenyans living in remote and sparsely populated low-income areas.

Our research work, hence, as our proposal to ITU's Connect2Recover Competition, sought to investigate the value these two frameworks would provide in rebuilding Kenya's rural connectivity. It also investigated the level of connectivity available to the rural healthcare and education sectors, taking Kakamega and Turkana counties as case studies. Additionally, it assessed the window of scalability of spectrum sharing beyond the presently allowed radio frequency (RF) bands; this determination was based on spectrum measurements conducted in Kakamega County. The presentation of our research in this paper is segmented as follows: Sect. 2 presents the research methodology used, Sect. 3 shares the findings of all the stages of work carried out, Sect. 4 provides our analysis of the findings, and Sect. 5 shares our recommendations. Section 6 concludes the paper and provides insights into future works.
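In ITU Region 1, the 470–694 MHz band is channelized into 8 MHz digital terrestrial television channels 21–48 (centre frequency 306 + 8·ch MHz), so a basic TVWS availability check reduces to excluding the channels the incumbent broadcasters occupy. A toy sketch: the channel raster is the standard Region 1 plan, while the occupancy set used below is purely hypothetical (a real deployment would query a geolocation white-space database):

```python
# ITU Region 1 UHF DTT raster: 8 MHz channels; channels 21-48 span 470-694 MHz.
def channel_centre_mhz(ch):
    """Centre frequency in MHz of UHF channel `ch` (21 <= ch <= 48)."""
    return 306 + 8 * ch

def free_channels(occupied):
    """Channels available for opportunistic (TVWS) use, given the set of
    channels occupied by incumbent broadcasters at this location."""
    return [ch for ch in range(21, 49) if ch not in occupied]
```

For example, `channel_centre_mhz(21)` gives 474 MHz (the 470–478 MHz channel), and with a hypothetical occupancy of {21, 23} the remaining 26 channels would be candidates for secondary use, subject to the database's protection rules.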
The work presented in this paper also provides the following contributions: mapping of all the sites visited during field surveys in schools and healthcare facilities in Kakamega and Machakos counties and the RF findings on spectrum measurements conducted in Kakamega in potential bands that can be considered for spectrum sharing.

2 Research Methodology

Predominantly, the research made use of desk research to study and analyze literature and secondary data published by global institutions such as the International Telecommunication Union (ITU), the Dynamic Spectrum Alliance (DSA), the Global Alliance of Mobile Network Operators (GSMA), and the Alliance for Affordable Internet (A4AI), among others. Notable literature studied as part of the desk research includes The Last-mile Internet Connectivity Solutions Guide by the ITU and the ROAM-X publication Assessing Internet Development in Kenya. ROAM-X is an acronym for Rights, Openness, Access to All, Multi-stakeholder participation and Cross-Cutting Issues, crafted by the United Nations Educational, Scientific and Cultural Organization (UNESCO) as a benchmark of Internet universality indicators. In-country blueprints on connectivity, such as the National Broadband Strategy and the Digital Economy Blueprint, also contributed significantly to the literature. Publications by researchers and corporate bodies on connectivity for Turkana and Kakamega, as well as those focused on rural connectivity for Kenya, were equally considered as secondary sources. While conducting the desk research, we focused on the pillars or segments of the various sources that related to rural Internet access and the opportunities and mechanisms recommended or suggested to spur affordable Internet access for rural communities. Moreover, we dwelt on the technology and regulatory developments being evaluated at the global stage, as well as within Kenya, to enhance meaningful access for the underserved. The two frameworks mentioned in the introduction were central to our scrutiny.

Besides the desk research, field surveys were carried out in Kakamega and Machakos counties between 19th April 2022 and 25th April 2022. The surveys made use of both a paper questionnaire and a mobile App-based questionnaire to obtain information on the experience of Internet access of academic and healthcare institutions in both counties. The App used is known as ODK, from getodk.org; it was adopted due to its open-source nature and its ability to collect data in both online and offline modes. The paper questionnaire was used in developing the App questionnaire. Three sections were designed for the paper questionnaire: the first requested general information, such as the institution, whether public or private, and the name and details of the respondent. The second asked for location information in terms of the coordinates of the institution and accessibility to a nearby Internet Service Provider (ISP); the coordinates were obtained through the mobile App as well as through a search on Google Maps. The third section gathered information about the connectivity of the institution to the Internet: access technology used, fiber point of presence (PoP), how often the Internet is used, average access speed, state of available devices and computer labs, number of staff and students, experience of access before and during the pandemic, monthly cost of the Internet connection, challenges faced in terms of connectivity, the present connectivity state, and institutional initiatives to enhance access.
The exercise adhered to the country's ethical standards of data collection by seeking guidance through Strathmore University's Institutional, Scientific and Ethical Review Committee (SU-ISERC). During the survey, a total of 24 institutions were visited: 12 academic and 12 healthcare facilities. Representatives from the visited institutions were also interviewed to share information on their experience of connectivity before and during the pandemic, as well as at the time the field surveys were conducted. The representatives included general staff and ICT staff for the healthcare facilities; for the academic institutions, they included students, teachers, and general staff in various departments, as well as ICT staff. In addition to the field visits to the academic and healthcare facilities, another field exercise was conducted to measure the usage of the RF spectrum that can potentially be exploited for secondary access. The motivation was to develop spectrum awareness to inform which spectrum bands could be used opportunistically to enhance rural broadband access. This exercise involved the use of the CA's RF monitoring mobile equipment, which is often used across the country for spectrum planning, technical analysis and cross-border coordination. The RF bands of focus during the exercise were 470–694 MHz, 700 MHz, 1700 MHz and 3300–3500 MHz. The 470–694 MHz ultra-high frequency (UHF) band, which had been authorized for access through TVWS, was evaluated in this case to determine whether the intensity of incumbent usage had risen enough to warrant more protection from interference by opportunistic TVWS deployments.

Rebuilding Kenya’s Rural Internet Access


3 Research Findings

In this section, we share our research findings in accordance with the segments of the work carried out. These segments are provided in Sects. 3.1, 3.2 and 3.3.

3.1 Desk Assessment of the State of Connectivity in Kakamega and Turkana Counties

While both Turkana and Kakamega counties are classified as rural, Kakamega is more developed and has fertile lands for more agriculturally driven economic activities [11]. Kakamega County also has more schools and healthcare facilities than Turkana due to its favorable geography and a more educated population [12]. Prior to the pandemic, the National Optic Fiber Backbone Infrastructure (NOFBI), the fiber optic connectivity initiative led by the national government of Kenya, had only covered areas along the main tarmac road leading into Kakamega town [13]. This means that only government offices and a few institutions and organizations were able to access the Internet through it. In Turkana, however, NOFBI coverage had not been laid until the last quarter of 2020 [14]. Notably, the NOFBI project was proposed back in 2007 to establish a national public broadband network with access points in every county, in order to attract and stimulate private sector participation in the provision of rural telecommunication services [15]. In terms of cellular coverage, Kakamega was reported to enjoy 85% coverage, with healthcare and education shown to be improving; however, the county cited challenges with the quality of coverage even as it strategized to enhance Internet access for all its citizens in its County Integrated Development Plan [16]. For Turkana, of the three mobile network operators (MNOs) in Kenya, Safaricom was the most heavily relied upon. There was little data on the extent of coverage for the other two MNOs, Airtel and Telkom.
The Turkana County government, prior to the pandemic, had noted significant areas within the county that lacked access to a cellular signal, hampering both voice and Internet communication. The National Broadband Strategy covering 2018 to 2023 also alludes to the cellular connectivity challenges that Turkana citizens faced [9]. For instance, citizens of Turkana had to walk more than 2 km to access a mobile cellular signal, with Internet access being non-existent. Unfortunately, both Turkana and Kakamega lacked sufficient data and published information on alternative connectivity technologies, such as satellite or fixed Internet services leveraging Wi-Fi or microwave links, prior to the pandemic. The same applied to the state of connectivity of the schools and healthcare institutions in both counties, which warranted a field visit to ascertain the extent of Internet usage in both sectors. Unfortunately, Turkana could not be physically accessed for the field visit due to budgetary challenges and issues relating to security. Therefore, the field surveys replaced Turkana with Machakos.


L. Mabele et al.

3.2 Field Surveys in Kakamega and Machakos Counties

The field surveys showed that most of the institutions had a fiber point of presence (PoP) nearby. In addition, five institutions across Kakamega and Machakos were said to be connected to the NOFBI network, although their Wi-Fi speeds were as low as 2 megabits per second (Mbps) or below. At Masinde Muliro University of Science and Technology (MMUST), the only university assessed during the study, one could not load a web page on a phone on the main library floor, although speeds were better (10 Mbps) at the computer lab, which had fifteen computers. The MMUST ICT staff mentioned reliable speeds that could average 100 Mbps, but there was no evidence to back this up. Most students, however, preferred outdoor Internet access, which made sense given that physical distancing served as a better control against the spread of the coronavirus. A list of some of the sites visited in Kakamega is shown in Table 1, with the ISPs providing the services as well as the average Internet speeds. For institutions with no formal Internet access, the table uses 'NONE' to show the absence of an established ISP and 'N/A' for the lack of information on average Internet speed. In Machakos County, 85% of the institutions surveyed had Internet access, and microwave seemed a better alternative in the absence of fiber, with 5 GHz Wi-Fi links in various facilities. The most popular service provider was Safaricom; however, most institutions had more than one provider, with the others serving as backup. Most of the healthcare centers cited unreliability in their current Internet connections while mentioning near-future plans to automate most of their healthcare services. Similar to Kakamega, the Kenya Medical Training College (KMTC) seemed better equipped with digital facilities and better Internet coverage compared to all the other sites.
KMTC is a state corporation established through an Act of Parliament under the Ministry of Health, entrusted with the role of training various disciplines in the health sector to serve Kenya, East Africa and the entire African region. The College has 72 campuses spread across Kenya [17]. A table similar to the one for the sites visited in Kakamega is shown in Table 2 for the sites visited in Machakos County. It was noted that, at the height of the pandemic, some institutions, both healthcare and academic, provided mobile data to their staff and students. Hence, under the work-from-home approach, it was hard to tell how reliable their Internet was, as it depended on where the students and staff stayed during the pandemic. However, a number of students surveyed reported serious challenges attending online classes due to poor 4G coverage where they came from (or attended classes from), particularly those from rural counties. All the institutions nonetheless noted the critical need for Internet access, even as some of them reported the Internet becoming unreliable as they bounce back from the COVID-19 pandemic.



Table 1. Visited Sites in Kakamega County with their ISP, Access Technologies and the Internet speed.

| Visited Institution | Sub-county | Access Technology Available | Name of ISP | Average Internet Speed |
| Sheywe Community Hospital | Lurambi | Fiber | Liquid Telecom/Safaricom | 15 Mbps |
| KMTC Kakamega | Lurambi | Fiber & Satellite | TelkomUHC/Safaricom | 250 Mbps |
| St. Mary's Mission Hospital Mumias | Mumias West | Fiber | Liquid Telecom/Safaricom | 20 Mbps |
| Kakamega Orthopaedic Hospital | Lurambi | Fiber | Safaricom | 10 Mbps |
| MMUST | Lurambi | Fiber | KENET/Safaricom | 100 Mbps/300 Mbps |
| Ekambuli Primary School | Khwisero | NONE | N/A | N/A |
| Mundoli Primary School | Khwisero | NONE | N/A | N/A |
| St. Martha's Mwitoti Secondary School | Mumias East | Cellular Network | Safaricom/Airtel/Telkom | 10 Mbps |
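Records like those tabulated above can also be summarised programmatically to support the analysis. The sketch below transcribes the Kakamega rows into plain Python dictionaries; the field names ("site", "tech", "mbps") are our own choice, and MMUST's two reported speeds are simplified to the lower figure.

```python
# Sketch: summarising site-survey records like those in Table 1.
# Field names and the simplification of multi-value speeds are our own.
from statistics import median

sites = [
    {"site": "Sheywe Community Hospital", "tech": "Fiber", "mbps": 15},
    {"site": "KMTC Kakamega", "tech": "Fiber & Satellite", "mbps": 250},
    {"site": "St. Mary's Mission Hospital Mumias", "tech": "Fiber", "mbps": 20},
    {"site": "Kakamega Orthopaedic Hospital", "tech": "Fiber", "mbps": 10},
    {"site": "MMUST", "tech": "Fiber", "mbps": 100},
    {"site": "Ekambuli Primary School", "tech": None, "mbps": None},
    {"site": "Mundoli Primary School", "tech": None, "mbps": None},
    {"site": "St. Martha's Mwitoti Secondary School", "tech": "Cellular", "mbps": 10},
]

# Sites with an established ISP (tech is not None) count as connected.
connected = [s for s in sites if s["tech"] is not None]
share = 100 * len(connected) / len(sites)
speeds = [s["mbps"] for s in connected]
print(f"Connected: {share:.0f}% of sites, median speed {median(speeds)} Mbps")
```

For the Kakamega rows this reports a 75% connected share, mirroring the kind of per-county aggregate quoted in the text for Machakos.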

3.2.1 Mapping

The data collected in both Kakamega and Machakos counties has been used to develop a demonstration mapping tool accessible through this link. The tool has markers for all the sites visited and provides information on the location of the site (institution surveyed), the present access technology used, the speeds provided by the technology, and whether the connection is reliable or unreliable as per the feedback provided during the site survey. Figures 1 and 2 show an example of one academic and one healthcare institution as presented by the tool in Kakamega and Machakos counties respectively. The mapping aims to contribute to ITU's goal of a systemic platform that shows connectivity challenges for the determination of infrastructure needs [2].
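One common way to feed such a mapping tool is to export the survey records as GeoJSON, which most web mapping libraries consume directly. The sketch below is illustrative only: the coordinates, field names and reliability flags are placeholder assumptions, not the project's actual data model.

```python
# Sketch: exporting surveyed sites as a GeoJSON FeatureCollection.
# Coordinates and property names below are illustrative placeholders.
import json

def site_feature(name, lon, lat, tech, mbps, reliable):
    """One map marker per surveyed institution (a GeoJSON Feature)."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "institution": name,
            "access_technology": tech,
            "avg_speed_mbps": mbps,
            "reliable": reliable,
        },
    }

collection = {
    "type": "FeatureCollection",
    "features": [
        # Approximate, made-up coordinates for illustration only.
        site_feature("Masii Medical Centre", 37.62, -1.44, "Microwave", 10, False),
        site_feature("KMTC Kakamega", 34.75, 0.28, "Fiber & Satellite", 250, True),
    ],
}
geojson = json.dumps(collection, indent=2)
print(geojson.splitlines()[0])  # first line of the serialized document
```

A file produced this way can be dropped into common web map viewers, with the marker popups showing the access technology, speed and reliability fields described above.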


Table 2. Visited Sites in Machakos County with their ISP, Access Technologies and the Internet speed.

| Visited Institution | Sub-county | Access Technology Available | Name of ISP | Average Internet Speed |
| St. Teresa Mwala Girls | Mwala | Microwave | Safaricom | 5 Mbps |
| Kaani Level 2 Hospital | Kathiani | NONE | N/A | N/A |
| Masii Medical Centre | Masii | Microwave | Safaricom | 10 Mbps |
| KMTC Kangundo | Kangundo | Satellite | KENET | 100 Mbps |
| Machakos Teachers College | Machakos | Fiber | Safaricom | 15 Mbps |
| Maisha Mazuri School | Matungulu | Microwave | Safariland | 10 Mbps |
| Kivaani Health Centre | Kangundo | NONE | N/A | N/A |
| Machakos Girls Academy | Machakos | Fiber | Jamii/Safaricom | 30 Mbps / 15 Mbps |

3.3 Spectrum Measurements in Kakamega County

Spectrum sharing has gradually been growing in Kenya to supplement the efforts towards affordable rural connectivity while addressing spectrum scarcity challenges [18]. Our spectrum measurements in Kakamega County in the aforementioned bands showed an availability of spectrum holes (white spaces), as shown in Figs. 3 and 4, in the 600–700 MHz and 1700–1800 MHz bands. The figures are based on Fast Fourier Transform (FFT) measurements from the spectrum analyzer used by CA. Both figures show that a significant fraction of the measured spectrum (the regions without raised peaks) can be exploited opportunistically to enhance Internet access [19]. Both of the bands shown are predominantly licensed to International Mobile Telecommunications (IMT) services, commonly available as mobile and fixed networks [20].
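The idea of identifying spectrum holes from such measurements can be sketched as a simple threshold test over the scanned power values: any contiguous run of frequency bins whose measured power stays below a decision threshold is a candidate white space. This is a simplified illustration only; the threshold, 1 MHz bin spacing and toy scan values are our own assumptions, not CA's detection method.

```python
# Sketch: picking candidate white spaces out of a power scan.
THRESHOLD_DBM = -90.0  # assumed noise-floor-based decision threshold

def find_white_spaces(scan, threshold_dbm=THRESHOLD_DBM):
    """scan: list of (freq_mhz, power_dbm) pairs, in frequency order.
    Returns (start_mhz, stop_mhz) spans where power stays below threshold."""
    holes, start = [], None
    for freq, power in scan:
        if power < threshold_dbm:
            start = freq if start is None else start
            stop = freq
        elif start is not None:
            holes.append((start, stop))
            start = None
    if start is not None:  # hole running to the end of the scan
        holes.append((start, stop))
    return holes

# Toy scan of part of the 600-700 MHz band: two occupied TV carriers
# (raised peaks at -60 dBm) separated by unused channels (-100 dBm).
scan = [(600 + i, -100.0) for i in range(101)]
for i in list(range(10, 18)) + list(range(60, 68)):
    scan[i] = (600 + i, -60.0)  # occupied channels

print(find_white_spaces(scan))  # → [(600, 609), (618, 659), (668, 700)]
```

In practice, a TVWS deployment would consult a geolocation database rather than raw threshold detection alone, but the sketch captures what "signals without raised peaks" means operationally.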


Fig. 1. Details of Connectivity for St. Mary’s Institution in Kakamega County

Fig. 2. Details of Connectivity for Masii Medical Centre in Machakos County



Fig. 3. Spectrum Analyzer Fast Fourier Transform (FFT) Measurements in the 600–700 MHz Band. Source: Communications Authority of Kenya

4 Analysis of the Findings

Traditional issues of lack of electricity, coverage and affordability were laid bare for the disproportionately underserved groups in Kenya at the height of the COVID-19 pandemic. This became more conspicuous when the Government announced the closure of schools in the week of 16th to 24th March 2020 to combat the geographical spread of the coronavirus [13]. The learning of rural students in both Kakamega and Turkana counties was hence heavily hampered, given the immature state of fiber coverage in both counties at the time as well as the patchy cellular coverage that students alluded to during the field surveys. This equally affected students who had to travel back from urban institutions to their rural homes. Such students, especially tertiary-level students, struggled to join virtual classes through platforms such as Zoom, Google Meet or Microsoft Teams. To address this challenge, the government proposed delivering content through FM radio and television for primary and secondary students. However, given the gap in electricity access, these efforts were not fruitful. Besides, students also lacked access to recorded sessions as they would have in a Zoom or Google Meet class. The lack of electricity, prior to and into the pandemic, also limited the deployment of sufficient backhaul networks to support better rural connectivity for both schools and healthcare facilities. While CA's sector statistics reports predominantly focus on cellular coverage [22], the field surveys found some rural areas lacking meaningful cellular connection by the standards of the ITU


Fig. 4. Spectrum Analyzer Fast Fourier Transform (FFT) Measurements in the 1700–1800 MHz Band. Source: Communications Authority of Kenya

[23]. On the other hand, the cellular focus seems to inhibit understanding of other access alternatives, such as Community Networks and Low Earth Orbit (LEO) satellites, that could significantly contribute to rural connectivity. Moreover, despite the little fiber coverage identified in both Kakamega and Turkana during the desk studies, the field studies unearthed an existing state of dark fiber, which refers to optical fiber that has been laid but remains unused [24]. In spite of these challenges, the enactment of the regulatory frameworks for Community Networks (CNs) and TV White Spaces (TVWS) presents an opportunity that can be tapped to properly reconstruct connectivity initiatives beyond the two case studies shared in this paper and extend them to other rural areas of Kenya. This is because of the spectrum-sharing approach they both adopt. Spectrum sharing means making more spectrum available for services whose growth is in the national interest, without upsetting the existing users of the spectrum [25]. Under the aegis of the country's National Broadband Strategy [9], this approach can be explored to allow coexistence of different radio access technologies (RATs) in the same radio frequency (RF) bands to deliver affordable and meaningful rural connectivity. This can be extended to other bands, such as the ones used for IMT, as demonstrated in this paper. While addressing the contextual connectivity needs, spectrum sharing would also help the regulator manage spectrum efficiently without resorting to spectrum clearing, and circumvent challenges of spectrum re-allocation like those experienced during the digital migration process [26].

5 Recommendations

Based on the desk studies conducted during the work on Research Paper 4, the field surveys, stakeholder engagements and spectrum measurements, we propose the following recommendations. The recommendations consider both policy and technology to rebuild Kenya's rural Internet access after the COVID-19 pandemic and to enable stronger resilience in the event of future pandemics:

1. While the Government has made great strides in delivering rural electrification, more effort is required to expand access to affordable and reliable grid electricity across Kenya to reduce the inequality that exists between urban and rural areas. This would make it easier for service providers to deploy Internet to the last mile and to power end-user devices. As an alternative to grid power, more initiatives on off-grid solar power need to be supported and funded to enable last-mile deployments that can support connectivity efforts as well as the end devices of students and healthcare institutions.

2. Initiatives to increase Internet access options in marginalised areas, such as through TVWS, should be classed in the same category as Community Networks and be sufficiently subsidised or incentivised to enable entrepreneurs or service providers to deliver on the public good of bringing hard-to-reach areas online. Further, in "less congested areas", we propose a consideration to manually deploy TVWS radios to support recovery efforts that can rebuild digital inclusion for such areas. Considerations of new developments in cellular networks need to first establish the existing usage gaps and explore ways to manage the quality of service (QoS) provided through such networks. A contextual study of LEO satellites also needs to be carried out from a technological, economic and sustainability point of view, especially as a rural Internet access alternative.

3. An assessment needs to be conducted of the dark fiber in the country to determine the extent of fiber-connected PoPs that can be leveraged, from a more informed perspective, to extend Internet access to both Kakamega and Turkana Counties. This should also be done for the other rural counties of the country.

4. Mapping of the connectivity of schools and healthcare centres in the country also needs to be properly conducted to enable efficiency and effectiveness in responding to the connectivity challenges facing rural Kenya. This would help strengthen the available connectivity options even as new methods such as spectrum sharing are identified.

5. More technology studies, covering software-defined radios, cognitive radios, opportunistic spectrum access, geolocation databases, automated frequency coordination as well as coexistence studies, need to be conducted to validate the implemented policies on Dynamic Spectrum Access (DSA). This will help to properly inform the future enactment of policies that can sustainably and contextually fit the connectivity needs of Turkana, Kakamega and the other counties based on spectrum sharing.



6 Conclusions and Future Work

The reality of the digital gap (the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard to access to ICT technologies and the Internet) was immensely felt across Kenya at the height of the COVID-19 pandemic. COVID-19 can hence be said to have completely reshaped the rural view of Internet access in Kenya, with both schools and healthcare centers changing their perception of its relevance. As the efforts to rebuild rural counties such as Kakamega and Turkana pick up momentum, all stakeholders ought to consider Internet access as a driver of the "new normal" of economic development. Hence, part of the future work on this research is to explore in detail implementation efforts that can bridge the dark fiber challenges and the scalability of spectrum sharing, taking a more practical approach of exploiting unused RF spectrum opportunistically.

References

1. International Telecommunication Union Development Sector (ITU-D): Connect2Recover Initiative, September 2020. https://www.itu.int/en/ITU-D/Pages/connect-2recover.aspx. Accessed 26 Sept 2022
2. International Telecommunication Union - Development Sector: The Last-mile Internet Connectivity Solutions Guide. ITU, Geneva (2020)
3. International Telecommunication Union - Development Sector: Global Connectivity Report 2022. ITU, Geneva (2022)
4. United Nations Department of Economic and Social Affairs: World Population Prospects 2022: Summary of Results. UN DESA, New York (2022)
5. SBA Communications Corporation: Southern African Wireless Communications, August 2022. https://www.africanwirelesscomms.com/Media/Default/archive/sawc/SAWC2208.pdf. Accessed 26 Sept 2022
6. The World Bank: Using Geospatial Analysis to Overhaul Connectivity Policies. International Bank for Reconstruction and Development, Washington DC (2022)
7. Ericsson: How networks are adapting to the new normal, 14 April 2020. https://www.ericsson.com/en/blog/2020/4/networks-%20adaptingdata-traffic-new-normal. Accessed 8 June 2022
8. Mutegi, R.: ICT inequalities and e-learning in the wake of COVID-19 in Kenya. Int. Interdisc. J. Educ. 9(4), 220–227 (2020)
9. Ministry of ICT, Innovation and Youth Affairs: National Broadband Strategy 2023 Final, May 2019. https://www.ict.go.ke/wp-content/uploads/2019/05/National-Broadband-Strategy-2023-FINAL.pdf. Accessed 4 Sept 2022
10. Oh, S.W., Ma, Y., Tao, M.-H.: TV White Space - The First Step Towards Better Utilization of Frequency Spectrum. John Wiley and Sons Inc., New Jersey (2016)
11. Kathula, D.N.: Effect of COVID-19 pandemic on the education system in Kenya. J. Educ. 3(6), 31–52 (2020)
12. KNOEMA: Kakamega. https://knoema.com/atlas/Kenya/Kakamega. Accessed 20 Sept 2022
13. iLabAfrica, Strathmore University: Desk Assessment of the State of Connectivity in Kakamega and Turkana Counties, 28 April 2022. http://www.ilabafrica.ac.ke/wp-content/uploads/2022/04/Desk-Assessement-of-%20theState-of-Connectivity-in-Kakamega-and-Turkana-%2028_04_2022_to_be_uploaded.pdf. Accessed 20 Sept 2022


14. The Star: State Launches Installation of Sh.3 Billion Fibre Optic Cable in Rift Valley, The Star Newspaper, 26 October 2020. https://www.thestar.co.ke/counties/rift-valley/2020-10-26-state-launches-%20installation-of-sh3bn-fibreoptic-cable-in-rift-valley/. Accessed 26 Sept 2022
15. Office of the Auditor General: Implementation of National Optic Fibre Backbone Infrastructure Project, December 2014. https://www.oagkenya.go.ke/wp-content/uploads/2022/08/Implementation-ofNational-%20Optic-Fibre-Backbone-Infrastructure-Project.pdf. Accessed 26 Sept 2022
16. County Government of Kakamega: Kakamega County Integrated Development Plan 2018–2022. County Government of Kakamega, Kakamega (2018)
17. Kenya Medical Training College: About KMTC. https://kmtc.ac.ke/about-kmtc/. Accessed 26 Sept 2022
18. Ronoh, K., Mabele, L., Sonoiya, D.: TV white spaces regulatory framework for Kenya: an overview and comparison with other regulations in Africa. In: Zitouni, R., Phokeer, A., Chavula, J., Elmokashfi, A., Gueye, A., Benamar, N. (eds.) AFRICOMM 2020. LNICSSITE, vol. 361, pp. 3–22. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-70572-5_1
19. Donat, W.: Explore Software Defined Radio: Use SDR to Receive Satellite Images and Space Signals. PragProg, North Carolina (2021)
20. Communications Authority of Kenya: Table of Radio Frequency Allocations, March 2021. https://www.ca.go.ke/wp-content/uploads/2021/03/National-Table-of-Frequency-Allocations-2020.pdf. Accessed 20 Sept 2022
21. Communications Authority of Kenya: Sector Statistics Report Q1 2021–2022, December 2021. https://www.ca.go.ke/wp-content/uploads/2021/12/Sector-Statististics-Report-Q1-2021-2022.pdf. Accessed 23 Sept 2022
22. International Telecommunication Union: Global Connectivity Report 2022. ITU, Geneva (2022)
23. Field Engineer: What is Dark Fiber? 13 November 2017. https://www.fieldengineer.com/blogs/what-is-dark-fiber. Accessed 12 Sept 2022
24. Matyjas, J.D., Kumar, S., Hu, F.: Spectrum Sharing in Wireless Networks. Taylor and Francis Group, Florida (2017)
25. GSMA: Digital Migration Process in Kenya, January 2017. https://www.itu.int/en/ITU-R/seminars/rrs/2017-Africa/Forum/GSMA%20Digital%20Migration%20Process%20in%20Kenya.pdf. Accessed 29 Sept 2022
26. Song, S., Rey-Moreno, C., Esterhuysen, A., Jensen, M., Navarro, L.: The rise and fall and rise of community networks. In: Global Information Society Watch, pp. 7–10. International Development Research Centre, Ottawa (2018)
27. Turkana County Government: About Turkana (2022). https://turkana.go.ke/about-overview/. Accessed 26 Sept 2022
28. County Government of Kakamega: About Kakamega County, 2 December 2015. https://kakamega.go.ke/vision-mission/. Accessed 26 Sept 2022
29. Jiang, T., Wang, Z., Cao, Y.: Cognitive Radio Networks: Efficient Resource Allocation in Cooperative Sensing, Cellular Communications, High-Speed Vehicles and Smart Grid. Taylor and Francis Group LLC, Florida (2015)

E-Services (Social)

Community Networks in Kenya: Characteristics and Challenges

Kennedy Ronoh1(B), Thomas Olwal2, and Njeri Ngaruiya3

1 Strathmore University, Nairobi, Kenya

[email protected]

2 Tshwane University of Technology, Pretoria, South Africa

[email protected]

3 Technical University of Kenya, Nairobi, Kenya

[email protected]

Abstract. Community networks are set up by communities through the pooling of resources. Such networks may also be initiated by third parties, such as non-profit organizations (NPOs) or volunteers, with the involvement of the target community in a certain area. Municipalities can also roll out a community network by setting up free or inexpensive Internet access hotspots. Community networks are set up in order to bring affordable Internet connectivity to an area, or to bring Internet connectivity to an area where there is no connectivity at all. Community networks in Kenya have been growing every year since 2015, when the first community network was founded; there are currently four community networks in Kenya. Despite this growth, no critical analysis of community networks in Kenya currently exists. This paper, hence, presents a study of the community networks currently operating in Kenya. It presents the challenges hampering their growth, provides possible solutions to those challenges, and makes some practical recommendations that community networks in Kenya can adopt. The paper further discusses the characteristics of community networks in Kenya and the critical success factors for a community network, based on the experience of two community networks in Kenya. The applied methodologies are qualitative exploratory research and desktop reviews.

Keywords: Community networks · Digital divide · Internet penetration · TunapandaNet · Lanet Umoja · Aheri · Dunia Moja

1 Introduction

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 301–316, 2023. https://doi.org/10.1007/978-3-031-34896-9_18

Universal Internet access is among the top priorities in many countries and a core pillar of the UN Sustainable Development Agenda. This is because Internet connectivity has the potential to bring development. Access to affordable Internet stimulates economic growth, enabling startups to expand and bring new businesses to an area [1]. Access to the Internet is seen as a stepping stone to joining the knowledge economy. People


can access e-health, e-commerce, e-learning, e-government and e-banking through the Internet. Despite the critical importance of Internet connectivity, over four billion people worldwide still remain unconnected, despite the progress that has been made in rolling out mobile telephony and data networks in many countries [2]. It is often assumed that the unconnected citizens will somehow get connected through the mobile broadband provided by the national telecommunication operators in a top-down manner. However, the reality is that not all national telecommunication operators find it economically viable to connect certain areas because of the low revenue that arises from low population density [3]. Affordability is also still a problem for some of those who are connected. In areas where the only connectivity available is via 2G/3G/4G from commercial national telecommunication operators, the low-income population in Africa that lives on less than one US dollar a day cannot afford it. In order to connect the next two billion low-income people, especially in Africa where some people live on less than US$1 per day, the cost of broadband needs to be less than USD 4.50 per month [4]. In Kenya, there is quality and reliable international connectivity through submarine optical fiber cables. In addition, the national government has rolled out the National Optical Fiber Backbone Infrastructure (NOFBI), which connects major towns and strategic public, social and learning institutions. Internet Service Providers (ISPs) have also rolled out fiber to connect major towns as well as individual homes and businesses. Three mobile network providers (Safaricom, Airtel and Telkom) have also rolled out their 2G, 3G and 4G networks across most parts of the country. However, despite all the efforts towards making broadband available to all Kenyans, there still exist areas with no Internet connectivity.
In Kenya, Internet penetration stands at 30% according to the Kenya Integrated Household Budget Survey of 2016 [5]. This is below the global average of 40% [6, 7]. This arises because, in rural areas with low population, ISPs are not interested in rolling out a last-mile access network, since the cost of rolling out such infrastructure is high compared to the revenue they would get. The same problem applies to urban informal settlements because of the low-income population. Despite the fact that there are many telecommunication operators and ISPs in some areas, some of the population in Kenya cannot afford the current broadband offerings. Other than the cellular network operators, traditional ISPs target the urban and wealthy citizens who can afford their services. In some rural areas in Kenya the only connectivity available is through cellular networks, but it is not affordable for the low-income population. Kenya ranks number 37 out of 61 countries in Africa in terms of the Internet affordability index [8]. The top-down approach by national telecommunication operators also results in unaffordable connectivity, because they are profit motivated and operate in a monopolistic or oligopolistic manner [9]. Oligopoly or monopoly arises because not many companies have the finances to roll out a network infrastructure across the country and also pay high initial license fees as well as spectrum fees. Alternative connectivity models are being considered in order to connect the unconnected or provide affordable connectivity, due to increased awareness of the shortcomings, in availability and affordability, of the broadband offerings of the national mobile telecommunication operator model [6]. Community networks are currently being
embraced as one of the solutions to bridge the digital divide that arises from the shortcomings of the national telecommunication operator model [4]. Community networks are networks built by citizens themselves through the pooling of their own efforts and resources, such as finances, equipment and infrastructure. Community networks can also be initiated by third parties, such as volunteers or non-profit organizations, who want to bring new or affordable connectivity to an area. Municipalities can also set up a community network, especially in towns or cities, by providing Internet hotspots. People can also offer their Internet connection to others by setting their network connection up as a hotspot; if people do this, there will be more democratization of the telecommunication market, which can result in cheaper broadband [10]. Community networks are currently emerging in Kenya in order to address the existing digital divide. There currently exist four community networks in Kenya that address the connectivity gaps left by national telecommunication operators and other ISPs. Despite the growth of community networks in Kenya, to the best of the authors' knowledge there is no existing comprehensive study of them. The aim of this paper, therefore, is to present the existing community networks in Kenya, the challenges hampering their growth as a means of bridging the digital divide, and possible solutions to those challenges. This paper also presents the characteristics of community networks in Kenya and the critical success factors for community networks in Kenya.

2 Literature Review

2.1 Internet Penetration and Affordability in Kenya

Kenya is one of the leading countries in Africa in terms of telecommunications and the Internet, and has been referred to as the "Silicon Savanna". Kenya pioneered the M-Pesa mobile money transfer service, and currently almost anyone in Kenya has a mobile phone that makes use of M-Pesa. As at 2016, the percentage of households with a computer was 9.5% and mobile phone penetration stood at 69% [7]; 27 million people out of a total population of 45 million own a mobile phone in Kenya. According to the Kenya Integrated Household Budget Survey (2015–2016), there are more people with mobile phones and computers in urban areas (18.5%) compared to rural areas (4.5%). The same survey also shows that household Internet penetration in Kenya stood at 30% in 2016. In terms of urban and rural connectivity, the study showed that the percentage of households without an Internet connection was 70% in rural areas and 53% in urban areas. Although urban areas fare much better in terms of Internet penetration than rural areas, a digital divide also exists within urban areas. The low-income population in informal settlements lacks computers and Internet access in Kenya. A study in one of the urban informal settlements shows that only 8% of the households surveyed have access to the Internet [13]. The same study also showed that only 4% had access to a computer, while 80% of the respondents had mobile phones; of those with a mobile phone, only 20% could access the Internet using it. These statistics show that there are still some areas in Kenya with no Internet connectivity. The COVID-19 pandemic showed the consequences of the digital divide: during the pandemic, most education institutions in Kenya

304

K. Ronoh et al.

could not conduct online learning because of a lack of connectivity, or a lack of affordable connectivity.

2.2 Community Networks as a Solution to the Digital Divide

Community networks continue to emerge because of the failures of the traditional way of providing connectivity through national telecommunication operators and other ISPs. Community networks are "do it yourself" networks built by people within a particular community, or with assistance from third parties. They are set up to provide connectivity for community members in areas where there is no existing connectivity at all, or to provide a cheaper alternative to the connectivity provided by ISPs [14–16]. Connectivity provided by community networks is cheaper because the prices are determined by the community, since a community network operates on the concept of self-determination [17]. In Africa, externally initiated networks concerned with providing affordable connectivity in underserved areas or new connectivity in unserved areas are more common [14] than those initiated from within the community. A community network can also be set up by a municipality [18]. In such a network, the municipality rolls out access points (APs) in public spaces; Internet access may be free, or users may be required to pay a small subsidized fee. The municipality usually enters into an agreement with a private company to roll out the network and provide Internet access, and any fee charged has to be approved by the municipality. The set-up of Guifi.net forced telecommunication companies to lower their prices [19]. Other examples of successful wireless community networks initiated by citizens are Athens Wireless and Sarantaporo.gr in Greece, Ninux in Italy, Wireless Leiden in the Netherlands, and the B4RN fiber network and Consume in the UK [10, 20].
Examples of externally initiated networks are Zenzeleni in South Africa [21] and Wireless for Communities in India [15]. There are community networks initiated by municipalities in London and Philadelphia [18].

2.3 Studies on Community Networks in Africa

A study of community networks in Africa was done by [14, 22]. The study looked at the reasons behind the establishment of community networks and the barriers facing them in Africa. It further provides a map of all the then-existing community networks in Africa and makes some recommendations. However, that study covers all community networks in Africa; the articles were written in 2017 and hence lack the recent developments on community networks in Kenya. There is also another brief article on community networks in Africa [23], which, however, lacks a comprehensive study of community networks in Kenya. There is an existing article on the TunapandaNET community network [24]; however, it focuses on only one community network in Kenya and lacks details about the challenges and critical success factors for community networks.

Community Networks in Kenya: Characteristics and Challenges


3 Methodology

The methodologies applied are qualitative exploratory research and desktop reviews. This qualitative exploratory research incorporated one-on-one interviews with two of the four community networks in Kenya, coded as centre1 and centre2, with their representatives coded as participant1 and participant2. The two community networks were chosen based on the availability of their management to furnish this research with rich insights that would lead to an understanding of the underlying issues and eventually provide sustainable and frugal solutions. Furthermore, the choice of methodology enables the research to produce quality, credible and reliable findings. It is more case-oriented than variable-oriented; the richness of an in-depth description of a phenomenon is therefore anchored on a real-life scenario [11]. The participants consented to recording of the sessions, which the researchers later transcribed and analysed using Atlas.ti (a qualitative CAQDAS) following the thematic analysis steps of Braun & Clarke [12]. Desktop reviews were done on relevant documents and websites on community networks in Kenya and Africa in general.

4 Results and Discussion

This section presents the results of the qualitative exploratory research, which combined one-on-one interviews with two of the four community networks and desktop reviews.

4.1 Case Studies: Existing Community Networks in Kenya

TunapandaNET Community Network
TunapandaNET is an urban community wireless network in Kibera, Nairobi [24]. The network was initiated by Tunapanda Institute in 2015. Tunapanda Institute is a non-profit organization that conducts training on technology, multimedia design and business in very low-income areas within East Africa. The institute trains youth in order to equip them with digital literacy and other skills to solve local and global issues. The network targets low-income urban youth and women who reside in the Kibera slum. The slum residents cannot afford the Internet services provided by ISPs in the area because of their low income, which hinders them from using the Internet for socio-economic empowerment. Four nodes serving two schools with 1,500 students and a youth center serving 300 youth were rolled out in 2017 through a partnership with the Internet Society (ISOC), the International Center for Theoretical Physics (Italy) and Rhinotivity (Denmark). In 2018, through funding from ISOC, the network was scaled up to provide connectivity to seven more schools, two more youth centers and one women's center. The network covers four out of 10 villages in Kibera, over a range of 20 km. The technology used to roll out the network is Wi-Fi (IEEE 802.11), using both the 2.4 GHz and 5.8 GHz spectrum. TunapandaNET currently relies on grants to pay for network sustenance costs, including Internet bandwidth. Tunapanda Institute receives over 300 applications but can only train 30 youth per cohort. The network was created so that youth who are not successful in their application


can take courses on Tunapanda's e-learning platform. The network provides trainees with offline digital education content that can be accessed through schools, youth centers and women's centers in Kibera.

Lanet Umoja Community Network
This community network is located in Lanet Umoja, Nakuru North Sub County, Nakuru County [25]. It is a rural-urban community network that was initiated by a third party. Initial funding came from USAID, but recently it also received a grant from ISOC. The community network targets 14,200 households with a total population of about 50,000. Implementation of the network started in 2019; at its end, there will be Internet connectivity to five public schools and hospitals. Connectivity already exists in the area through 2G/3G/4G, but it is expensive for the low-income population, schools and other institutions such as hospitals. The community network aims to meet the need for affordable connectivity by providing connectivity for as little as $2 a month instead of an approximate cost of $2 a day [26]. The payments are used to meet network running costs, including payment for bandwidth. The network uses Wi-Fi point-to-point links as backhaul to connect the various nodes, and the access networks also use Wi-Fi. IEEE 802.11 was chosen because of its license-exempt spectrum bands. There are plans to make use of TV white spaces because of their good propagation characteristics.

Aheri
Aheri stands for Africa Higher Education Research Institute. It was started as a project under an NGO known as Community Initiative Support Services (CISS). The Aheri CN was started in the year 2020 with the aim of strengthening the higher education ecosystem. The CN provides connectivity to technical and vocational training institutions and also community-based organizations.
Currently, the network has four nodes around Kisumu City, in the areas of Nyalenda, Dunga Beach, Akala, Nginya and Omuga where CISS community partners are based. Nyalenda and Ogunga are urban informal settlements found in Kisumu. The node in Kisumu connects 100 businesses, home users, schools and various community organizations. In Omuga, Homa Bay County, Aheri has partnered with a local polytechnic called Omuga Technical and Professional Institute, where the CN provides connectivity to about 500 students. The Aheri CN provides connectivity that is cheaper than that provided by other ISPs and telecommunications companies in the area; its packages start from $15, which is almost half the price that ISPs in the area charge.

Dunia Moja
Dunia Moja is a CN in Kilifi County, in the coastal area of Kenya. The CN was initiated by a social enterprise known as Lamuka Hub [27]. The aim of Lamuka Hub and the Dunia Moja CN is to reduce the digital divide through digital literacy training and also through the provision of connectivity. In the year 2020, the CN provided connectivity to three schools through a pilot project. The CN is in partnership with vocational training centres.
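The affordability gap reported in the case studies above can be sketched with a short calculation. This is a rough illustration only: the 30-day month and the assumption that a mobile-data user pays the quoted ~$2 every day are simplifications, not figures from the interviews.

```python
# Rough affordability comparison based on figures reported in the case studies.
# Assumptions: a 30-day month, and that a mobile-data user pays the quoted
# ~$2/day every day; actual usage patterns will vary.

MOBILE_DATA_PER_DAY = 2.0   # approximate quoted cost of 2G/3G/4G access
CN_FLAT_PER_MONTH = 2.0     # Lanet Umoja community-network monthly fee
DAYS_PER_MONTH = 30

mobile_monthly = MOBILE_DATA_PER_DAY * DAYS_PER_MONTH
saving = mobile_monthly - CN_FLAT_PER_MONTH
saving_pct = 100 * saving / mobile_monthly

print(f"Mobile data: ${mobile_monthly:.0f}/month")
print(f"Community network: ${CN_FLAT_PER_MONTH:.0f}/month")
print(f"Saving: ${saving:.0f}/month ({saving_pct:.0f}%)")
```

Under these assumptions the community network is roughly thirty times cheaper, which is consistent with the paper's framing of CNs as the affordable alternative for low-income households.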


4.2 Characteristics of Community Networks in Kenya

Various characteristics were identified through thematic analysis following discussions with two of the four community networks. They were coded as community needs, digital divide and digital illiteracy, and partners. This section expounds on these themes.

Digital Divide and Digital Illiteracy Among the Population Served
Given the relatively low cost of connectivity provided by CNs compared to the national ISPs, community dwellers get to access different services through the Internet, including entertainment, learning (online and research) and work. Although the connectivity provided by CNs has drastically reduced the digital divide, digital illiteracy is still high in the communities served by CNs. The community networks in Kenya have therefore noticed a need for more training on Internet usage, so that residents are able to make use of the services and opportunities that the Internet provides.

Partnerships
Community networks cannot exist in a silo and therefore need to collaborate, primarily for sustainability purposes. Elaborating on the international and local partnerships sheds light on how CNs operate and on the areas that CN management needs to be constantly thinking about for continual growth, especially financial sustainability and policy improvements for closing the digital divide gap. For community networks to operate in a community, they have to establish local partners, which can be the government, parent companies (in the case of Aheri), institutions (universities and Technical and Vocational Education and Training institutions), community centers, community-based organizations, health centers, indigenous communities, religious centers, KENET, etc. All these local partners have a role in successfully setting up a community network.
In the Aheri CN, various people living in the community, and at times from beyond it, gather in community centers to learn skills that enlighten them on how to carry out their day-to-day businesses, including table banking. In addition, these groups can learn from digital platforms how to expand their skills, and even acquire new ones, to better their businesses and the so-called chamas (Swahili for co-operatives).

Focus on Community Needs
Community needs in this context are the desires expressed by the different social groups in a community to ensure needful usage of the services provided by the community networks. In the case of TunapandaNET, two brothers from the United States saw that there was a need for digital literacy, and this prompted them to work with an organization in Kibera to provide digital content to the community. The community network provides unified content, especially for local schools, through a Learning Management System (LMS). This allows the distribution of the same computer-learning material to all the local schools, enabling slow learners to learn from fast learners even if they are in different schools.

Women's Involvement
One characteristic of community networks is the participation of women in the founding


and running of community networks. Two of the four community networks in Kenya (Lanet Umoja and Tunapanda) were founded by women.

4.3 Covid-19, Connectivity and Community Networks in Kenya

In the Covid-19 era, much had to be re-adjusted due to lockdowns in various countries to curb the virus's spread. Local and international travel restrictions, workplace hazard controls and closure of facilities such as churches, schools, restaurants and parks were implemented. These adjustments were followed by an increased uptake of technologies for continuity: people worked from home more, and uptake of online classes increased as more students followed online tutorials to keep abreast of subjects they needed to improve on. The community networks also felt a positive growth impact during this period because there were more requests for connection. Following the closure of schools, communities within regions served by community networks started to appreciate the services offered by the various CNs near them, because children were able to continue with their education while at home. This shows reduced community resistance: unlike the pre-Covid period, when CNs pleaded with partners and commonly issued incentives, the communities now made requests for connectivity instead. In the Covid-19 era, new partners volunteered to be custodians of the devices as the CNs increased their hotspot areas. The unprecedented times of Covid-19 have given good leverage to technology usage, and more so to Internet penetration, as it ensures that activities such as school and work continue outside a brick-and-mortar environment.

4.4 Challenges Hampering Community Networks in Kenya and Solutions

The milestones that the CNs have made represent significant progress towards their tagline, "Connecting the unconnected". This, though, has not been achieved without challenges.
The challenges range from partnerships and characteristics of the society served to business models for sustainability. In this section, the paper discusses these pain points and proposes solutions adaptable for generalization purposes.

Congested Spectrum Band
The four CNs in Kenya make use of the unlicensed Wi-Fi bands (2.4 GHz and 5 GHz) both for backhaul and for hotspots. These bands are prone to interference because, being unlicensed, they are extensively used, especially in urban areas, and this affects network quality of service and, as a consequence, data rates. CNs could make use of licensed spectrum, but its high cost makes this currently impossible: the existing CNs in Kenya have challenges with sustainable business models and hence do not have enough funds to pay for licensed spectrum. We sought to find out whether they have considered the use of TV white spaces. Aheri said that they are considering it, but mentioned equipment cost and human resources as some of the challenges.

Lack of Skilled Human Resources
Areas in Africa, whether rural or urban, that can benefit most from community networks often lack trained personnel with the specific skills to start and sustain a community network


in Africa [2]. Lanet Umoja had to train recent graduates on wireless networking using Wi-Fi in order to help roll out and maintain the network [26]. TunapandaNET also had to conduct training. The two community networks currently rely on interns, volunteer staff or part-time staff, who may lack motivation and will move to other organizations that offer better compensation for their skills; this has also been the case for the TunapandaNET community network [24]. Reliance on part-time or volunteer staff arises because the two community networks are currently not for profit. A sustainability model is needed in order to attract professionals who will work full time for community networks for pay.

Lack of Affordable Backhaul
The availability and affordability of backhaul also affect the growth of community networks in Kenya. The Communication Authority of Kenya (CAK) charges US $100 for every radio device used for point-to-point wireless links in the unlicensed spectrum (Communication Authority of Kenya 2018b). This includes Wi-Fi point-to-point link radios in the 5 GHz spectrum that is useful for backhaul within a community network, and is an extra cost to a community network. Backhaul to the Internet can be achieved through fiber or satellite. However, in some rural areas of Kenya, fiber, which is a relatively cheaper backhaul option, is not available, as can be seen from Fig. 1. Satellite, on the other hand, is available in remote rural areas but is a very expensive option.

Regulatory and Policy Barriers
At the time of conducting the interviews, there was no regulatory framework for CNs in Kenya. The existing policies on telecommunications did not adequately address the unique needs of community networks. CNs in Kenya have been set up with the noble intention of connecting the unconnected and hence are not profit motivated.
This is also a problem faced by many other community networks in Africa [14] and in other countries worldwide [10, 15]. In the year 2021, a regulatory framework for community networks in Kenya was developed [27]. The previously existing regulatory framework for telecommunications in Kenya, like that of most other countries, focused on broadband provision by large-scale, profit-motivated operators. The following were the requirements to get a telecommunication license from CAK [28]:

• The entity should be registered in Kenya as a company, sole proprietor or partnership.
• Have a duly registered office and permanent premises in Kenya.
• Provide details of shareholders and directors.
• Issue at least 20% of its shares to Kenyans on or before the end of three years after receiving a license.
• Provide evidence of compliance with tax requirements.
• Pay license fees according to tier of operation.

A tier 1 license is a license for nationwide operation, a tier 2 license is for regional operation, and a tier 3 license is for operation within a county. The initial operational license fees are US $150,000 for tier 1 and tier 2 licenses, and US $2,000 for a tier 3 license. There are also annual operation license fees of at least US $1,600 for a tier 3 license. As can be seen from the above list, the licensing requirements under the previous regulatory framework did not cater for the needs of community networks. The previously


existing licensing framework for telecommunication service providers in Kenya, like those of many other countries, was intended for large-scale companies [29, 30]. The assumption was that every potential broadband connectivity provider has the resources and time of the legal department of a large telecommunication company to fill in detailed application forms and meet reporting requirements. This may not be the case for community networks, especially in Africa, and community networks may also not be able to afford the huge initial license fees. A special license is necessary because community networks operate differently from the conventional networks under the control of ISPs: community networks may not always be for profit, their services may be for the community only, and they may not operate as a sole proprietorship, partnership or limited company. There was, therefore, a need for a simplified license, different from that issued to conventional ISPs, in order to cater for the needs of community networks. A simplified and more suitable licensing framework for community networks in Kenya was developed in the year 2021 [27]; under it, the license fees for CNs have been reduced and the licensing process is now easier.

Lack of Awareness
In Africa generally, there is a lack of awareness among government entities, citizens, non-profit and other community-based organizations, and policy makers of the potential of community networks (Rey-Moreno 2017a). There are only four community networks in Kenya because there is a lack of awareness in Kenya about the role that community networks can play in bridging the digital divide. Government policy documents make no mention of community networks. Although there is a mention of community networks in the latest broadband strategy for the country [31], it only focuses on community access networks provided by county governments and not citizen-initiated networks.
Citizens are also not aware that they can pool their own resources (finances, effort, time, infrastructure and equipment) to create their own network infrastructure that will provide them with cheaper connectivity, or bring new connectivity where there is none at all. Some citizens are satisfied as long as they receive some form of connectivity, even if it is not affordable. Community-based organizations are also not aware that they can work alongside community members to bring new or affordable connectivity to an area.

Community Resistance
Pre-Covid-19, the CNs' initial entry into the communities (the first pilot projects) was marred by resistance. This was primarily because of a lack of knowledge of the importance of such infrastructure and the services offered. In addition, impoverished residents would pounce on any opportunity to get money, since most of them are unemployed and are always in the community during the day (a time reference that relates to the high rates of crime conducted at night). This resistance, for example, saw the CNs in such settlements incurring more than they had budgeted for, as the youths in the area demanded "unworked-for money" (which can be translated as a bribe) to allow the digging process for fiber installation. However, the resistance has since reduced, a relief owed to the unprecedented times of Covid-19: in Covid-19 times, the majority saw, and still see, the


impact of the Internet on society, especially in the education sector. Unlike earlier times, when the CNs had to plead to be heard by society, they now receive installation requests. However, they are careful to ensure that the community is well trained and given a sense of ownership so as to reduce resistance.

Business Model and Sustainability
The sustainability aspect can be classified into financial sustainability, partner sustainability and infrastructure sustainability. Of these three, the major hurdle for the CNs is financial sustainability. The CNs in Kenya are having challenges establishing a business model that enables sustainability. They agree that they need to cut down on dependency on donors, as this might not suffice in the near future considering the growing demand and the differing expectations of those donors. Donor dependence arises because the CNs operate in low-income areas. In order to address the challenge of financial sustainability, Aheri charges a fee for its Internet connection services of between Ksh 1,500 ($15) and Ksh 2,000 ($20); the fee charged is almost half that charged by the ISPs in the area. In terms of partner sustainability, the CNs source different partners, such as schools, churches, youth groups and women's groups, to enable them to sustain their business models.

Operational Challenges
The CNs' operational challenges stem largely from reliance on unsustainable sources of support such as donor dependency. Each of the operational challenges is discussed below.

Partner Shift
CNs in Kenya are marred by issues of partner shifts, which leave them with significant losses of infrastructure devices to theft, as there is no ownership uptake. To avoid such losses, the community networks opt for the permanent residency of established religious centers and land-titled indigenous communities within the informal settlements.
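The pricing gap quoted above can be sketched numerically. This is a rough illustration only: the exchange rate of about Ksh 100 per US$ and the reading of "almost half" as a factor of two are assumptions, not figures from the interviews.

```python
# Rough sketch of Aheri's pricing relative to local ISPs, using the
# figures quoted in the text. The exchange rate (~Ksh 100 per US$) and
# the "roughly double" ISP price are simplifying assumptions.

KSH_PER_USD = 100  # assumed approximate rate at the time of the interviews

aheri_fees_ksh = [1_500, 2_000]                             # monthly packages
aheri_fees_usd = [k / KSH_PER_USD for k in aheri_fees_ksh]  # dollar equivalents
isp_fees_usd = [2 * f for f in aheri_fees_usd]              # ~twice the CN fee

for cn, isp in zip(aheri_fees_usd, isp_fees_usd):
    print(f"CN: ${cn:.0f}/month vs ISP: ~${isp:.0f}/month "
          f"(CN is {100 * cn / isp:.0f}% of the ISP price)")
```

Even under these coarse assumptions, the CN packages sit at roughly half the local ISP price, which is the affordability margin the interviewees described.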
Capacity Building and Digital Literacy Training Costs
Capacity building for users, especially for infrastructure responsibilities, can sometimes be disheartening considering the non-permanency of partners. This has prompted the transfer of costs to the end-users for sustainability purposes; the cost is kept low so that it remains affordable. Digital literacy is another challenge, as intensive training has to be conducted, considering that most users are first-time users of computers and the Internet. This means human resources and time compensation for the trainers, and the eventual training of trainers of trainers (ToT), who are helpful because they are part of the society; this facilitates continual growth without the CNs' involvement in training. Capacity building and digital literacy training are paramount for a society that is intentional about building and learning through evolving technologies. This therefore demands a sustainable solution that will see continual training of new users as the burden of training shifts from the CNs to the society's champions.


4.5 Critical Success Factors for Community Networks

This section presents some critical success factors for community networks that emerged from the interviews.

Community Ownership, Involvement and Partnership
One very important critical success factor for a community network is involvement of, and partnership with, the community, so that the community being served feels that it co-owns the community network.

Understanding of the Community
An understanding of the community is also a critical success factor. Aheri made an effort to understand the disposable income of the community it serves before deciding on the charges for use of its CN connectivity.

Business Model and Plan
Another critical success factor mentioned was having a business model and plan. Costs such as bandwidth, personnel and power all have to be captured in a business plan. CNs also have to come up with income-generating activities in order to ensure the sustainability of the CN.

Technical Knowledge
Technical knowledge and participation in research were also mentioned as critical success factors for a CN.

5 Recommendations

The CNs themselves have proposed solutions to some of the challenges. This section presents other possible solutions.

Recommendations on Spectrum

TV White Spaces and Dynamic Spectrum Access
In order to address the challenge of a congested spectrum band, the CNs should consider the use of TV white spaces, now that a regulatory framework has been developed for the use of the technology in Kenya. Dynamic spectrum access will alleviate the artificial spectrum shortage that arises from fixed spectrum assignment requiring payment of huge spectrum fees, and will also significantly bring down spectrum fees. This suits community networks because, unlike commercial ISPs, they may not have sufficient funds at their disposal. TV white spaces are more suitable than the Wi-Fi frequencies (2.4 GHz and 5 GHz) for community networks set up in rural areas, because the TV frequency bands have good propagation characteristics [32]: TV white spaces can cover a longer range and can penetrate obstacles such as vegetation. A network set up using TV white spaces will require fewer base stations than Wi-Fi. TV white spaces can also be used by CNs as an alternative, cheaper option for backhaul, because they can operate in both line of sight and non-line of sight.


Spectrum Secondary Markets
Primary spectrum licensees that may not be able to provide broadband in certain unserved or underserved rural areas can lease spectrum to community network operators at a fee affordable to those operators [29]. Primary licensees may lack the economic incentive to roll out a network in such areas. A spectrum secondary market means that a primary licensee can lease out (through a sub-license) the spectrum for which it holds a license to another entity. Sub-licensing has been applied in Rwanda [33]: Vanu Rwanda was assigned spectrum but is partnering with Airtel Rwanda to reach unserved and underserved areas. Vanu Rwanda plans to roll out 376 sites in order to reach 1 million people. Airtel Rwanda, as the service-provider partner, brings the customers to the network. In addition to rolling out the infrastructure, Vanu also covers some of the operational expenditure; Airtel gets a share of the revenue for providing customers and being the service provider on the infrastructure rolled out by Vanu. Another alternative is for community network operators to partner with primary licensees to roll out a network in unserved or underserved rural areas at a reasonable profit to the primary licensee, while ensuring that the network is affordable to the community [29]. Open Cellular (owned by Facebook) has partnered with some operators in Pakistan, Indonesia, Iraq and the Philippines to develop community-based cellular networks in unserved and underserved areas in order to bridge the digital divide.

Exemption from Tax and Other Levies
Exemption from tax and other levies is necessary in order to reduce capital and running expenditure for community network operators, especially those that are not for profit. This will also address the challenge of lack of funds, and such exemptions will make the connectivity provided by community networks affordable.
In order to reduce the initial cost of network roll-out, equipment used to set up a community network should be made tax exempt. Tax incentives can also be given to organizations that offer affordable connectivity in rural areas, or new connectivity in unserved areas, in order to encourage them to roll out community networks there; this can include exemption from payment of revenue taxes. Community network operators can also be exempted from levies such as fees per mast and device installed and contributions to universal service funds, among others.

Availing the Universal Service Fund and Other Funds for Community Networks
One of the biggest barriers to rolling out community networks is lack of funds. In order to spur the roll-out of community networks, funds should be made available for them. A potential source of funds is the Universal Service Fund (USF). In the current framework on the use of the USF in Kenya [34], community broadband networks are recognized as potential beneficiaries of the fund; however, the framework considers contracting operators to roll out a community network. The framework should also consider non-profit organizations and other grassroots-based organizations, such as self-help groups or other community groups, as entities that can be awarded funds from the USF in order to roll out a community network. Other possible sources of funds for community networks are low-interest loans, grant programs and public-private partnerships. For example, the United States


Connect Program, through grants, helps fund community networks in rural areas that are not served by ISPs [29].

6 Conclusion
In this paper, community networks as a solution to the digital divide in Kenya, existing community networks in Kenya, the characteristics of CNs in Kenya, the challenges hampering the growth of CNs, the solutions to these challenges devised by the CNs themselves, and recommended solutions have been presented. Future work will include a study on the impact of developing a regulatory framework for community networks in Kenya and the possible shortcomings of that framework. A study on the development of low-cost hardware, such as routers, for community networks is another possible future study.
Acknowledgments. We would like to thank Barrack Otieno and Alphonse Odhiambo of Aheri Net and Tunapanda Community Network, respectively, for accepting our interview request. Their feedback was very useful and contributed a lot to this paper.

References
1. Farkas, K., Szabó, C., Horváth, Z.: Planning of wireless community networks. In: Handbook of Research on Telecommunications Planning and Management for Business, p. 18 (2009). https://doi.org/10.4018/978-1-60566-194-0.ch041
2. Rey-Moreno, C.: Supporting the creation and scalability of affordable access solutions, May 2017. https://www.internetsociety.org/resources/doc/2017/supporting-the-creation-and-scalability-of-affordable-access-solutions-understanding-community-networks-in-africa/
3. Jain, S., Agrawal, D.P.: Wireless community networks. Computer 36(8), 90–92 (2003). https://doi.org/10.1109/MC.2003.1220588
4. Rey-Moreno, C., Esterhuysen, A., Jensen, M., Song, S.: Can the unconnected connect themselves? Towards an action research agenda for local access networks. In: Belli, L. (ed.) Community Networks: the Internet by the People for the People. FGV Direito Rio, Rio de Janeiro, Brazil (2017)
5. Kenya National Bureau of Statistics: Kenya Integrated Household Budget Survey (2016). https://sun-connect-news.org/fileadmin/DATEIEN/Dateien/New/KNBS_-_Basic_Report.pdf
6. Srivastava, R.: A network by the community and for the community. In: Belli, L. (ed.) Community Connectivity: Building the Internet from Scratch, pp. 125–146. FGV Direito Rio, Rio de Janeiro, Brazil (2016)
7. Odongo, A.O., Rono, G.C.: Kenya digital and cultural divide. In: Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance (ICEGOV), Montevideo, Uruguay, pp. 85–94 (2016). https://doi.org/10.1145/2910019.2910077
8. Ndung'u, M., Lewis, C., Mothobi, O.: The state of ICT in Kenya. Research ICT Africa (2019). https://researchictafrica.net/after-access_the-state-of-ict-in-kenya/
9. Navarro et al.: A commons oriented framework for community networks. In: Belli, L. (ed.) Community Connectivity: Building the Internet from Scratch, pp. 55–92. Rio de Janeiro, Brazil (2016)

Community Networks in Kenya: Characteristics and Challenges


10. Micholia, P., et al.: Community networks and sustainability: a survey of perceptions, practices, and proposed solutions. arXiv:1707.06898 [cs], July 2017. http://arxiv.org/abs/1707.06898. Accessed 06 Jun 2020
11. Njie, B., Asimiran, S.: Case study as a choice in qualitative methodology. IOSR J. Res. Method Educ. 4, 25–30 (2014)
12. Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3(2), 77–101 (2006). https://doi.org/10.1191/1478088706qp063oa
13. Wamuyu, P.K.: Bridging the digital divide among low income urban communities. Leveraging use of community technology centers. Telematics Inform. 34(8), 1709–1720 (2017). https://doi.org/10.1016/j.tele.2017.08.004
14. Rey-Moreno, C.: Supporting the creation and scalability of affordable access solutions: understanding community networks in Africa, May 2017. https://www.internetsociety.org/resources/doc/2017/supporting-the-creation-and-scalability-of-affordable-access-solutions-understanding-community-networks-in-africa/
15. Srivastava, R.: Regulatory issues and gaps: an experience from India (2017). https://www.internetsociety.org/resources/doc/2017/community-networks-regulatory-issues-gaps-experiences-india/
16. Filippi, P.D., Tréguer, F.: Expanding the internet commons: the subversive potential of wireless community networks. J. Peer Prod. 40 (2015)
17. Belli, L.: Network self-determination and the positive externalities of community networks. In: Belli, L. (ed.) Community Networks: the Internet by the People for the People. FGV Direito Rio, Rio de Janeiro, Brazil (2017)
18. Frangoudis, P., Polyzos, G., Kemerlis, V.: Wireless community networks: an alternative approach for nomadic broadband network access. IEEE Commun. Mag. 49(5), 206–213 (2011). https://doi.org/10.1109/MCOM.2011.5762819
19. Baig, R., Roca, R., Freitag, F., Navarro, L.: guifi.net, a crowdsourced network infrastructure held in common. Comput. Netw. 90, 150–165 (2015). https://doi.org/10.1016/j.comnet.2015.07.009
20. van Drunen, R., Koolhaas, J., Schuurmans, H., Vijn, M.: Building a wireless community network in the Netherlands. In: Proceedings of the FREENIX Track: 2003 USENIX Annual Technical Conference, San Antonio, Texas, USA, p. 7 (2003)
21. Rey-Moreno, C., Sabiescu, A.G., Siya, M.J.: Towards self-sustaining community networks in rural areas of developing countries: understanding local ownership. South Africa, p. 16 (2014)
22. Rey-Moreno, C., Graaf, M.: Map of the community network initiatives in Africa, p. 21 (2016)
23. Rey-Moreno, C.: Barriers for development and scale of community networks in Africa. In: Belli, L. (ed.) Community Networks: the Internet by the People for the People. FGV Direito Rio, Rio de Janeiro, Brazil (2017)
24. Miliza, J.: TunapandaNET community network (2018). https://giswatch.org/sites/default/files/gw2018_kenya.pdf
25. Misoi, I.: Lanet Umoja community network (2019). https://techweek.co.ke/wp-content/uploads/2019/11/05_IreneMisoi-Lanet_CommunityNetwork.pdf
26. Mozilla Foundation: In Lanet Umoja, using schools as an internet hub (2019). https://foundation.mozilla.org/en/blog/lanet-umoja-using-schools-internet-hub/
27. Communications Authority of Kenya: Licensing and shared spectrum framework for community networks in Kenya (2021)
28. Communications Authority of Kenya: Communications Authority of Kenya – licensing procedure (2017)
29. Internet Society: Unleashing community networks: innovative licensing approaches (2018). https://www.internetsociety.org/resources/2018/unleashing-community-networks-innovative-licensing-approaches/


30. Song, S.: Community networks and telecommunication regulation. Association for Progressive Communications (2018). https://www.giswatch.org/en/infrastructure/community-networks-and-telecommunications-regulation
31. Ministry of ICT: National broadband policy 2018–2023 (2018). https://www.ict.go.ke/wp-content/uploads/2019/05/National-Broadband-Strategy-2023-FINAL.pdf
32. Kennedy, R., George, K., Vitalice, O., Okello-Odongo, W.: TV white spaces in Africa: trials and role in improving broadband access in Africa. In: AFRICON 2015, pp. 1–5 (2015). http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7331920. Accessed 25 Nov 2016
33. ITWeb Africa: Connectivity as a service rollout in Rwanda, February 2017. http://www.itwebafrica.com/enterprise-solutions/503rwanda/237465-connectivity-as-aservice-rollout-in-rwanda
34. Communications Authority of Kenya: Kenya universal service fund framework (2018). https://ca.go.ke/wp-content/uploads/2018/02/Universal-Service-Fund-Framework.pdf

Determinants of Cybercrime Victimization: Experiences and Multi-stage Recommendations from a Survey in Cameroon

Jean Emmanuel Ntsama1,2, Franklin Tchakounte1,2,6(B), Dimitri Tchakounte Tchuimi3, Ahmadou Faissal1,2, Franck Arnaud Fotso Kuate1,2, Joseph Yves Effa4, Kalum Priyanath Udagepola5, and Marcellin Atemkeng6

1 Department of Mathematics and Computer Science, Faculty of Science, University of Ngaoundéré, Ngaoundéré, Cameroon
{j.ntsama,f.fotso}@cycomai.com
2 Cybersecurity with Computational and Artificial Intelligence Group (CyComAI), Yaoundé, Cameroon
[email protected]
3 Department of Human Resource Economics, Faculty of Economics and Management, University of Yaoundé II, SOA, Yaoundé, Cameroon
4 Department of Physics, Faculty of Science, University of Ngaoundéré, Ngaoundéré, Cameroon
5 Department of Information and Computing Sciences, Scientific Research Development Institute of Technology, Loganlea, Australia
[email protected]
6 Department of Mathematics, Rhodes University, Makhanda, South Africa
[email protected]

Abstract. Cybercrimes are multiplying and spreading at an elusive speed commensurate with the emerging technologies of the fourth revolution. Their sophistication and users' vulnerability to attacks catalyze their success. Several surveys have been conducted to determine the factors favoring victimization. However, they can only be applied within a contextual framework, since each ecosystem has its particularities. Attempts in this direction are unavailable in Cameroon, where cybercrimes cost 12.2 billion CFA francs in 2021. This work consists of a semi-direct survey conducted in Cameroon in 2021 to provide the determinants explaining the most frequent cybercriminal techniques, the vulnerabilities left by users, the most targeted population segments, and the socio-demographic and economic factors justifying this security fragility. The results relate to 379 questionnaires collected throughout the territory. According to descriptive statistics and the chi-square test, the explanatory variables of cybercrime victimization are gender, age, intellectual level, level of digital knowledge, level of Information and Communication Technology (ICT) proficiency, type of equipment used, the mobile and desktop operating system used, possession of an anti-virus/anti-spam, and marital status. Within this work, we have identified threats and their drivers, and a theoretical framework with several stages that could be followed to contain cybercrime has been provided.
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023
Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 317–337, 2023. https://doi.org/10.1007/978-3-031-34896-9_19

Keywords: Cybercrimes · Victimization · Survey · Cameroon · Attacks · Factors

1 Introduction
The fourth revolution, while beneficial for technological innovation, comes with complexities that leave individuals with technological arrhythmia. Indeed, the rapid evolution and the digital transformation it carries into all sectors widen a gap in the appropriation of these technologies during the consumption of services. Catalysts of global economic growth [1], technologies such as the Internet of Things (IoT) and artificial intelligence (AI), to name a few, encourage cybercriminals to take advantage of user vulnerabilities such as lack of awareness or risks related to user behavior [2]. Cybercrime activities, in this work, include stealing identities, luring users through social engineering techniques, infiltrating malicious applications such as ransomware into systems and networks, stealing confidential codes, and rendering services unavailable [3]. ANTIC recently reported that cybercriminal actions cost the Cameroonian economy approximately 12.2 billion CFA francs in 2021, double the losses of 2019, probably because the COVID-19 pandemic promoted contactless activities [4]. These alarming facts make it urgent to revise agile government strategies and policies against cybercrime in order to preserve the integrity of national digital economies. To that end, it is important to survey consumers in various sectors in order to understand and explain the reasons for victimization [5]. Many works have been proposed with a view to determining the factors that may explain cybercrime victimization [6–10]. We note, however, that the results are contextual and specific to the study areas. Obviously, the levels of appropriation, digital governance strategies, infrastructural availability, and socio-cultural and professional habits differ from one country to another.
Consequently, this complementary study, with the above-mentioned objective related to victimization, is the first complete one carried out in the Cameroonian context. The study of 379 questionnaires covered the entire national territory across the 10 regions. Its targets were pupils and students, teachers, people from the economic fabric, and public and private administration executives. The research hypothesis is — H1: Are there specific socio-demographic and technical factors capable of reducing victimization to cybercrime? At the end of this study, the objective is to know whether a user meeting certain conditions in the use of certain digital technologies is more easily a victim of cybercrime. In other words, would certain factors predispose a cyberspace user to cybercrimes? An empirical methodological approach was adopted, including the following steps: data collection through a semi-directive questionnaire, analysis, descriptive statistical processing and chi-square tests, and interpretation of the results to better understand this phenomenon. The factors retained are common in the literature. Their choice was also guided by reality and current events on cybercrime in Cameroon and elsewhere. These are determinants related to the socio-demographic, economic, and technical context of the victim, such as education, social situation, digital proficiency, level of security, gender, marital status, type of equipment used for access to the target, and the


most targeted types of services and sectors of activity. This work contributes as follows:
• The collection and processing, despite social instability, of the sample for such a study, carried out for the first time in all regions of Cameroon. It can be harnessed by researchers to advance the field and improve governance strategies.
• The determination, through statistical tests, of the explanatory variables of cybercrime victimization, followed by interpretations and recommendations to guide governance. A multi-stage framework to mitigate cybercrime has been proposed.
The rest of the document is divided into two sections. Section 2 presents the methodology used. Section 3 presents the results, discussions, and recommendations. The document ends with a conclusion and perspectives.

2 Methodology
The survey was conducted in all ten regions of Cameroon for the sake of representativeness. The simple random sampling consisted of taking one sample per region, in each of the ten regions, including pupils and students, teachers, people from the economic fabric, and public and private administration. The earlier young people are made aware, the better prepared they are to avoid scams. In addition, the long-term goal is to provide an indispensable tool for reducing victimization to cybercrimes in today's and tomorrow's society. Teachers and other parents make it possible to grasp the reality of the problem, and their awareness as educators gives them the tools to better supervise young people. We also considered the gender approach, because both women and men are potential victims. Some citizens of neighboring countries were included because of Cameroon's geographical situation with respect to the other countries of the sub-region. Thus, 430 people were questioned; the distribution by region is illustrated in Table 1. The questionnaire was structured in four sections: identification of the questionnaire (Sect. 0), characteristics of the respondent (Sect. 1), ICT and possession of a digital device (Sect. 2), and victimization to the scam (Sect. 3). The answers to the questions in each of these sections were obtained by direct administration of the questionnaire by the interviewer to the respondent, in order to facilitate the latter's understanding and so that the questionnaires obtained at the end of the survey would have a low chance of being rejected. The profile and location of the targets ruled out online questionnaires: the scarce and even unavailable electric power in landlocked areas as well as the lack of aptitude for Web technologies justify this choice. The study therefore took place face to face, during the COVID-19 period and amid the social instabilities linked to the war.
There was reluctance on both sides because contact had to be avoided as much as possible. In addition, the survey, tailored to the Cameroonian population, posed a problem for expatriates on Cameroonian soil who, despite being victims of cybercrimes, could not provide information on the region of origin. These facts resulted in an overall 11.86% of invalid questionnaires, leaving 379 questionnaires, which satisfies the conditions of size and representativeness [11]. During the exploitation of the questionnaires and the purification of the collected database, some questionnaires were rejected for non-compliance due to a high rate of non-responses, inappropriate responses, and empty responses.
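As a quick sanity check on these figures, the rejection and compliance rates follow directly from the questionnaire counts (430 administered, 51 invalid); a minimal Python sketch:

```python
# Questionnaire counts reported in the survey (Table 1 totals).
administered = 430
invalid = 51
valid = administered - invalid  # 379 compliant questionnaires

rejection_rate = invalid / administered * 100
compliance_rate = valid / administered * 100

print(valid)                      # 379
print(round(rejection_rate, 2))   # 11.86
print(round(compliance_rate, 2))  # 88.14
```

The same arithmetic reproduces the per-region rates in Table 1 when applied row by row.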

Table 1. Distribution of questionnaires

Region        Administered questionnaires   Compliant questionnaires   Invalid questionnaires   Rejection rate   Compliance rate
Adamaoua      30                            22                         8                        26.67%           73.33%
Center        50                            46                         4                        8.00%            92.00%
East          20                            14                         6                        30.00%           70.00%
Far North     45                            39                         6                        13.33%           86.67%
Littoral      30                            23                         7                        23.33%           76.67%
North         40                            32                         8                        20.00%           80.00%
North West    60                            54                         6                        10.00%           90.00%
West          110                           108                        2                        1.82%            98.18%
South         30                            30                         0                        0.00%            100.00%
South West    15                            11                         4                        26.67%           73.33%
Total         430                           379                        51                       11.86%           88.14%

This study relied on documented research methods: data collection techniques, simple random sampling, and empirical analysis methods chosen to achieve the expected objectives. The data used are primary data, insofar as no database on cybercrime victimization in Cameroon is available. Empirically, descriptive analysis was performed through univariate and bivariate statistics. Univariate statistics were produced using graphics and central tendency indicators (mean, proportion) to explore the distributions of all variables (cybercrime victimization variables, socio-demographic and economic factors, and technical factors). Bivariate statistics were produced using cross-tables and chi-square tests between the cybercrime victimization variables and the explanatory factors.

2.1 Approach
The study followed these steps to learn more about victimization in cybercrime attacks:
• Literature review to build on the findings of other authors;
• Collection of existing attack techniques to capture existing experience;
• Collection of data through a semi-directive survey;
• Processing of the collected data with descriptive statistics tools;
• Interpretation of results and recommendations.
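The bivariate step described above (cross-tables plus chi-square tests between victimization and each candidate factor) can be sketched with `scipy.stats.chi2_contingency`; the 2×2 counts below are illustrative only, not the survey's actual cell values:

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 cross-table of victimization (rows: victim, not victim)
# against gender (columns: female, male). These counts are made up for the
# sketch; the paper's actual cell values are not reproduced here.
observed = [[67, 48],
            [103, 161]]

chi2, p_value, dof, expected = chi2_contingency(observed)

# dof = (rows - 1) * (cols - 1) = 1 for a 2x2 table.
print(dof)             # 1
# A p-value below the usual 0.05 threshold flags the factor as associated
# with victimization.
print(p_value < 0.05)  # True
```

In the study's workflow, the same test would be repeated for each explanatory variable in turn, and the resulting p-values compared against a significance threshold.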


2.2 Study Variables

2.2.1 Dependent Variable
The dependent variable is the phenomenon studied: victimization to cybercrimes, captured by question S3Q3 in Section 3 of the questionnaire. Question S3Q3_1 identifies the forms of scams, while questions S3Q3_1_1 to S3Q3_3_1 capture the mechanisms by which each form of scam occurred. Questions S3Q3_4_1 to S3Q3_4_3 relate to whistleblowing and preventive measures against cybercrime.

2.2.2 Independent Variables
The independent variables are the potential determinants of cybercrime victimization; the aim is to identify which of these factors are associated with the studied phenomenon. They are divided into two categories: technical factors (related to ICT) and socio-demographic and economic factors specific to the respondent. The variables belonging to each category are presented in Table 2.

Table 2. Questionnaire skeleton

Variables                                        N° of question   Codification
I) Variables of cybercrime victimization
Has ever been a victim of cybercrime             S3Q3             1 = Yes; 0 = No
Form of cybercrime                               S3Q3_1           1. Financial diversion  2. Ransom  3. Information hacking
Financial diversion mechanisms                   S3Q3_1_1         1. Mobile Money  2. Sending/withdrawing money  3. Internet banking transaction
Mechanisms of hacking confidential information   S3Q3_2_1         1. Call and SMS  2. Mail  3. Social media  4. Spy website  5. Other
Ransom mechanisms                                S3Q3_3_1         1. Via phone  2. Via computer  3. Via smartphone  5. Other
II) Independent variables
(i) Socio-demographic and economic factors
Sex                                              S1Q1             1 = Female; 0 = Male
Age                                              S1Q2             1. [18–26[  2. [27–34]  3. [35–42]  4. [43–50]  5. [51–58]  6. 59+
Education level                                  S1Q4             1. None  2. Primary  3. Secondary  4. Higher
Marital status                                   S1Q3             1. Married  2. Single  3. Other
Carry out an economic activity                   S1Q5             1 = Yes; 0 = No
Income level (in CFA)                            S1Q6             1. 0–25000  2. 25000–50000  3. 50000–100000  4. 100000–200000  5. 200000–300000  6. 300000+
(ii) Technical factors
Completed digital training                       S2Q2_1           1 = Yes; 0 = No
Type of phone                                    S2Q4_1           1. Cell phone  2. Smartphone Android  3. Smartphone iPhone
Tablet operating system                          S2Q6_1           1. Windows
Possession of security software (anti-virus)     S2Q7_2           1 = Yes; 0 = No

3 Results
Table 4 in the appendix reveals the results of these methods. It shows the descriptive statistics of the study variables, the cross-tables between cybercrime victimization and the explanatory/determinant variables, as well as the p-values of the chi-square tests carried out between the dependent variable (victimization to cybercrime) and each explanatory variable in turn.

3.1 Distribution of the Studied Population

3.1.1 Questionnaires Validated Per Region
The exploitation of the questionnaires and the purification of the collected database revealed questionnaires rejected for non-compliance, owing to a high rate of non-responses or of inappropriate responses in the places indicated. In the end, the empirical investigation covered 379 individuals, with a rejection rate of 11.86%. Moreover, the impossibility of balancing the samples by region is explained by:
– the COVID-19 context in which the survey was conducted;
– the security crisis in the North-West and South-West regions;
– the ambient insecurity in the north (Adamaoua, North and Far North) due to Boko Haram;
– the existence of areas that are difficult to access in the East, South and Littoral in the rainy season (Fig. 1).


Fig. 1. Distribution of questionnaires in regions

3.1.2 Distribution by Gender and Age
In the sample obtained, 55.15% of the individuals questioned are male and 44.85% female; men were more willing to respond to the questionnaires than women. In addition, the youngest age group ([18–26[) is the most represented, followed successively by the groups under 34, under 42, under 50 and under 58, with the over-58s least represented. This indicates a predominantly young population (Fig. 2).
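The gender split quoted above follows from the 379 valid questionnaires. Assuming head-counts of 209 men and 170 women (counts that reproduce the reported shares; the paper gives only the percentages), a one-line check:

```python
# Assumed head-counts consistent with the reported shares
# (209 + 170 = 379 valid questionnaires).
men, women = 209, 170
total = men + women

male_share = round(men / total * 100, 2)      # 55.15
female_share = round(women / total * 100, 2)  # 44.85
print(male_share, female_share)
```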

Fig. 2. Distribution of the studied population by gender or age

3.1.3 Distribution by Marital Status or ICT Mastery
The population of single women is the most represented, compared with the more reserved married women. It is mainly made up of those who have an average level of mastery of ICT. Cameroon's option was to integrate computer science education at all levels of education (Fig. 3).


Fig. 3. Distribution of the studied population by marital status or ICT mastery

3.1.4 Distribution According to the Level of Education
The sample mostly contains people with secondary or higher education levels (Fig. 4).

Fig. 4. Distribution of the studied population according to the level of education

3.2 Some Significant Results

3.2.1 Cybercrime Attacks and Financial Embezzlement Attack Channels in Cameroon
Most of the cyberattacks are linked to financial embezzlement. Among them, those involving mobile money are the most frequently encountered (Fig. 5).

Fig. 5. Frequent types of cybercrimes or financial cybercrime attacks channels in Cameroon


3.2.2 Cybercrime Attack Channels for Ransom or Information Hacking in Cameroon
The phone is the channel most used in Cameroon for ransom cases (98.38%); the cases recorded on a computer or tablet are fewer. Of all the channels used for information piracy, 62.2% of cases occur by phone call or SMS, followed by social networks (25.17%), e-mails (9.45%) and finally spy sites (3.15%) (Fig. 6).

Fig. 6. Ransomware attacks or confidential information hacking attacks channels in Cameroon

3.2.3 Cybercrime Victimization in Cameroon
Cybercrimes in Cameroon are mostly related to financial embezzlement attacks.
i- Victims by gender or by age
Women are more vulnerable to cybercrimes (58.26%) than men (41.74%). Women seem more gullible, more easily manipulated and more withdrawn than men. In addition, elders are exposed due to their low resilience to digital transformation: victimization rises from 2.61% among young victims under 34 to 25.65% among adult victims over 58 (Fig. 7).

Fig. 7. Cybercrime victimization by gender or age


ii- Victimization and the level of education
The level of victimization is inversely proportional to the level of education (Fig. 8).

Fig. 8. Cybercrime victimization and level of education

iii- Victimization and digital knowledge or training in ICT to increase awareness
Having digital knowledge does not significantly spare users from cybercriminals. The victimization function is inversely proportional to the level of training in ICT or awareness (Fig. 9).

Fig. 9. Victimization and digital knowledge or training in ICT to increase awareness

iv- Victimization and type of phone used or smartphone OS
Most victims are reached through Android phones, followed by simple phones and finally iPhones. The ease of access to cyberspace via Android telephones and their boom in


the landscape of digital terminal use further exposes the holders of these Data Terminal Equipment (DTE). Among smartphone operating systems, Windows is most at risk: the Windows OS is the most affected (53.52%), followed by the others (29.58%), with iPads the least affected (16.9%) (Fig. 10).

Fig. 10. Victimization and type of phone (OS) used

v- Victimization and computer OS
Obviously, the Windows operating system (OS) is the most exposed; its widespread use makes it possible to attack many targets (Fig. 11).

Fig. 11. Victimization and type of computer OS

vi- Victimization and use of anti-virus/anti-spam
The absence of anti-virus software exposes systems to spam and other malicious software, which opens corridors of vulnerability (Fig. 12).


Fig. 12. Victimization and use of anti-virus/anti-spam

vii- Victimization and economic activity or marital status
Married respondents are the segment of the population most affected by cybercrime, with 54.35% of cases compared to 45.65% for single respondents. The prospect of easy gain without the spouse's knowledge and repressed fantasies may explain these figures (Fig. 13).

Fig. 13. Victimization and economic activity or marital status

3.2.4 Key Results
Table 3 synthesizes all the statistics previously presented. It is a kind of map that decision-makers can exploit to develop policies.


Table 3. Statistics

70.51%  of cybercrimes are related to financial embezzlement attacks
85.57%  of cyber-scams target Mobile Money or electronic wallets
98.38%  of cybercriminal processes exploit social engineering mechanisms
62.2%   of confidential information hacking activities are done through phone contact
58.26%  of those vulnerable to cybercrime are women; 41.74% are men
25.4%   the population segment most targeted by cybercrime is that aged above 51. This generation, which represents 25% of the sample, has great difficulty adapting to digital transformation and is therefore very sensitive to digital deviations
2.48%   of people with a higher level of education are victims of scams, even though the level of victimization is inversely proportional to the level of education. The intellectual level does not automatically reinforce resistance to cybercrime, although it does sharpen certain reflexes
54.78%  of information hacking victims had a very good knowledge of digital technology and its aspects
82.06%  of attacks occur on Android mobile systems
89.69%  of users continue to run cracked versions of Microsoft Windows in public and private administrations
76.16%  of users do not have an antivirus or do not care
54.35%  more than half of the population targeted by cyber-scammers has an economic activity
67.83%  of manipulation cybercrimes are directed towards people carrying out an economic activity

3.3 Observations and Understandings
Several facts are observed. The segment most vulnerable to cybercrime is that of women (58.26%), whereas men represent 41.74% of the vulnerable. Most (70.51%) of the cybercrimes are related to financial embezzlement attacks such as cyber-scams, and 85.57% of them essentially target Mobile Money or electronic wallets. Most (98.38%) cybercriminal processes exploit social engineering mechanisms to manipulate and lure users. More than half (62.2%) of confidential information hacking activities are done through telephone contact. The population segment targeted by cybercrime is those aged above 51 (25.4%). This generation, which represents 25% of the sample, has great difficulty adapting to digital transformation and is therefore very sensitive to digital deviations. Even if the level of victimization is inversely proportional to the level of education, 2.48% of people with a higher level of education are victims of scams. The intellectual level does not automatically reinforce resistance to cybercrime, although it does sharpen certain reflexes. The proportion (54.78%) of information


hacking victims had a very good knowledge of digital technology and its aspects. The proportion of 82.06% concerns attacks that occur on Android mobile systems within the sample. Most of the victims (89.69%) have Microsoft Windows versions installed on their computers. These victims, in general (76.16%), do not have an antivirus or do not care. More than half (54.35%) of the population targeted by cybercriminals has an economic activity, and 67.83% of manipulation cybercrimes are directed towards people carrying out an economic activity. Some facts are derived from these observations and from the respondents' oral answers or comments. Users, tools, and technologies are vulnerable points allowing hackers to express themselves easily. Attackers play on manipulation and weak points to infiltrate (email, social media, SMS, telephone). Since many people are unbanked and attached to Mobile Money systems for transactions, cybercriminals learned the various flaws of these systems and leveraged user ignorance and unawareness. Because users continue exploiting Microsoft's cracked versions in public and private organizations, advanced persistent threat (APT) attacks [12] mainly infiltrate ransomware or cyber-espionage exploits. Some testimonies revealed that users have seen their files encrypted and been asked to pay a ransom. Answers also show that users are not aware of fake emails and links while using social media, SMS apps, and emails. This study shows that these could be some drivers of cybercrime. A respondent told us how her mobile money PIN code was stolen. She received a call from a very polite person who, nearly crying, argued that a deposit had been made into her account by mistake. The person kindly asked her to return that money, offering 10% for the disturbance. She innocently checked her account and then saw that a considerable amount had suddenly disappeared. She then tried to call back the caller, but it was too late.
This narrative feedback revealed that cybercriminals use social engineering techniques [13] to obtain users' PIN codes. In this example, the cybercriminal had initiated a transaction with the victim's number as if he were the account holder, but the operation could only be completed with the code held by the legitimate owner. The call was made to obtain the code quickly, before the session timeout. The cybercriminal tricked the victim into entering this code, and the initially launched operation was carried through to the end. This study shows that the victims are exposed to social engineering attacks.

We were also curious to get feedback on how victims engage with social media posts advertising opportunities (jobs, gains, ...). One respondent told an agent that, due to hard daily life, the posts that interest him are those showing opportunities. His only check is to look at the name of the institution concerned: if it is a famous one, he applies. In the past he called the number that appeared in a post to pursue the process, and at one stage sent money. Remarkably, he only learned that the story was fake many months later, when the number of the supposed contact no longer rang. Another testimony revealed that a victim clicked on a link (purportedly for submitting files to obtain entrepreneurial funds) and, some days later, could no longer access some confidential accounts. These situations clearly show how many people behave unknowingly.

Based on the previous arguments, and with regard to the cross-tables and chi-square tests, the following factors explain cybercrime victimization: completed digital training (with a p-value of 0.093), level of ICT proficiency (0.01), type of phone (0.03), computer operating system (0.032), possession of an anti-virus (0.039), age (0.01), level of education (0.02), marital status (0.04), and work status (0.024).

Determinants of Cybercrime Victimization

331

Indeed, different generations understand and consume technologies differently. Younger people are more in contact with these technologies than older ones and are thus more exposed; however, they can more easily recognize traces of threats, having faced them often. Older people are more vulnerable because they did not encounter these issues in their early days. Vulnerable people can become more aware if assisted with adequate digital training. People with prior ICT expertise are less likely to be lured by cybercrime because of knowledge gained through education. The degree of risk is also related to the type of smartphone. For instance, Android is more exposed than iOS because it is multi-manufacturer: the makers of devices, operating systems, components, and app stores all differ, whereas everything concerning Apple products is centralized by Apple. This does not mean that Apple's OS is attack-free; rather, Android requires more vigilance from users to reduce risks [14]. The way users obtain and install software also determines their degree of exposure to attacks: unaware users act in weak ways that let weaknesses be exploited, for instance software from unofficial sites, use of unclean USB sticks, or unsafe validation of forms. The installation of defense utilities constitutes the first line of protection. Finally, status in society may influence behavior when receiving a phishing call or message. For instance, poor people are unlikely to be able to produce the large amounts of money required by scammers, whereas comfortable people in society are more likely to take risks in their actions.

3.4 Critical Considerations

This study supports several clear interpretations.
First, people's behavior facilitates the activities of cybercriminals. People are unaware of many risks, yet are easily attracted and trapped because of their precarious social situations. In such an ecosystem, ransomware will proliferate rapidly. People live with scams like flatmates, but without even minimal awareness requirements or training. Social conditions play a critical role in victimization, since people are focused on satisfying basic needs. Unfortunately, a segment of the population will still be left behind by advances in digital transformation. Last but not least, Mobile Money remains the prime target, since the population is largely unbanked. Users, tools, and technologies are the points of vulnerability that allow hackers to operate easily, and our national systems and platforms are targeted by advanced persistent threats such as cyber-espionage or e-crime.


3.5 Framework of Recommendations

The framework in Figure 14 depicts the different stages at which governance should be improved to mitigate cybercrime.

3.5.1 Investigation

In this stage, policymakers and company leaders should implement programs to learn from habits and behaviors towards digital transformation and related technologies. This stage should be continuous, so as to observe attitudes and flaws to contain. During these investigations, users should be free to comment on and exchange their experiences. The most reliable way would be to put in place a centralized but well-oriented social platform for exchanging experiences; its architecture can follow the publish/subscribe pattern. This does not preclude surveys like the one in this paper, but their limitation is the difficulty of merging data, values, and knowledge over time. Everyone would share experiences concerning cybercrime, and reactions could be provided as well. A background artificial-intelligence robot would be exploited to extract insights and digest the information presented to people. Decision makers could then consult reports at any time to readjust their strategies.

3.5.2 Point of Vulnerability (PoV) and Point of Infiltration (PoI) Identification

This stage concerns Chief Information Officers (CIOs) and Chief Digital Officers (CDOs). They should perform risk management dedicated to the users of their institutions. To that end, they should put in place programs that simulate cybercrime campaigns against users in order to assess their attitudes. They should also design filtering rules to track the activities of specific users over a period. The objective is to evaluate the likelihood of malware such as ransomware or spyware infiltrating, and to plan reinforcement sessions accordingly.

3.5.3 Distributed Policy Settings

For the sake of robustness, the agencies in charge of national ICT should work collaboratively with related professionals to set policies concerning the exploitation of cracked software.
This phenomenon is present in many countries and in critical sectors such as administration, private companies, and public institutions. The policies should insist on ISO standards and guidelines. Moreover, control mechanisms or motivational techniques (awards, etc.) should be put in place to get people engaged. Any legal and declared institution that uses these technologies must agree to these practices; in case of non-compliance, regulatory measures should be taken.


3.5.4 Sanitization of the Hardware Ecosystem

The manufacturers of the devices used to manipulate technology data are possibly unknown and untrusted. If these manufacturers cannot be verified, a problem arises, because the inside of a device may already be compromised. Prior quality auditing by the institutions in charge is required to ensure the protection of consumers. The government should sustain this initiative with infrastructure spread across the territory to really guarantee that a minimum standard is met.

3.5.5 Stimuli Educational Approaches

Cybercrime is ruthless. Educational programs must therefore be adapted to illustrate vivid and touching situations rather than getting learners to recite. This movement implies an inevitable reinforcement of teachers and other training actors, and ongoing programs should be facilitated, for example by registering selected teachers on open online platforms or in national or local sessions. The program must be gradual, with specific, measurable, achievable, relevant, and time-bound (SMART) objectives. Since the objective is also to reach the general public, sectors must be targeted according to their proximity to the categories of victims and to the most-targeted services identified during the studies. Regarding proximity, governmental representations such as district municipalities should be involved, as well as parents. Additionally, sensitization programs should be deployed permanently at points of service where people flock, such as bill-payment and financial-transaction venues. A frequent obstacle is the language of the targets; information must therefore be provided in several formats (audio, video, text, etc.) and stored in a warehouse, which requires people who are well trained in the problems of cybersecurity. The remaining constraint at this stage is to fit learning materials to different ages.
For example, a 7-year-old child cannot learn from texts, but can from interactive materials.

3.5.6 Marketing

This stage consists of disseminating the activities of the other stages through media such as TV, radio, social media, newspapers, and ads.


Fig. 14. Cybercrime mitigation

4 Conclusion and Perspectives

The primary objective of this work was to identify factors that explain cybercrime victimization in Cameroon. With the help of respondents' experiences and the results of the descriptive analysis, cybercriminal techniques and threats, as well as behavioral weaknesses, have been identified. We found technical and socio-demographic factors that are strongly related to victims' vulnerabilities, and justified them with proofs and illustrations. Critical considerations were derived as consequences of this understanding. The study reflects a critical situation in which people of any age suffer from cybercrime and need assistance. In this regard, we have proposed a multi-stage framework with specific activities for policy-makers to sanitize the technology ecosystem. Further research will concentrate on deeper investigation of attacks such as those targeting mobile money, and on the elements needed to design educational platforms.


Appendix

Table 4. Results

I. Descriptive statistics of variables related to cybercrime victimization (% of respondents)

Victimisation to cybercrime: Yes 60.69; No 39.31
Form of cybercrime: financial diversion 70.51; ransom 13.82; information hacking 15.67
Financial diversion mechanisms: mobile money 85.57; sending/withdrawing money 8.25; Internet banking transaction 6.19
Ransom mechanisms: phone 90.38; computer 5.77; smartphone 3.85; other 0
Mechanisms of hacking confidential information: call/SMS 62.20; e-mail 9.45; social media 25.17; spy website 3.15

II. Descriptive statistics of explanatory variables, cross-tables and chi-square tests (percentages given as Total / never a victim / victim; p-value from the chi-square test)

Completed digital training (p = 0.093): No 54.62 / 54.36 / 54.78; Yes 45.38 / 45.64 / 45.22
Level of ICT proficiency (p = 0.01): none 8.44 / 8.72 / 55.22; weak 21.90 / 11.75 / 27.83; average 59.37 / 12.75 / 11.30; high 10.29 / 65.77 / 5.65
Type of phone (p = 0.03): cell phone 13.50 / 14.29 / 9.87; Android smartphone 76.03 / 19.29 / 82.06; iPhone 10.47 / 66.43 / 8.07
Desktop operating system (p = 0.032): Windows 91.76 / 10.00 / 89.68; MacOS 6.67 / 65.05 / 8.39; Linux 1.57 / 25.95 / 1.94
Tablet operating system (p = 0.234): Windows 55.66 / 22.86 / 53.52; iOS 18.87 / 60.00 / 16.90; other 25.47 / 17.14 / 29.58
Possession of a security software (p = 0.039): No 38.23 / 25.24 / 76.17; Yes 61.77 / 74.76 / 23.83
Sex (p = 0.130): female 44.85 / 44.66 / 58.26; male 55.15 / 55.34 / 41.74
Age (p = 0.01): 18-26 32.98 / 44.97 / 10.00; 27-34 22.96 / 23.49 / 2.61; 35-42 22.16 / 16.78 / 13.91; 43-50 9.23 / 8.05 / 22.61; 51-58 10.82 / 6.04 / 25.22; >58 1.85 / 0.67 / 25.65
Education level (p = 0.002): none 3.43 / 0.00 / 58.26; primary 3.69 / 4.03 / 32.61; secondary 38.26 / 46.98 / 5.65; higher 54.62 / 48.99 / 3.48
Marital status (p = 0.004): single 60.16 / 69.13 / 45.65; married 39.84 / 30.87 / 54.35
Carry out an economic activity (p = 0.024): No 36.68 / 56.38 / 32.17; Yes 63.32 / 43.62 / 67.83
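The chi-square tests of independence reported in Table 4 can be reproduced with a few lines of code. The sketch below uses hypothetical counts, not the survey's raw data, for a 2x2 cross-table of antivirus possession against victimization.

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts only. Rows: no antivirus / has antivirus;
# columns: victim / not a victim.
observed = [[88, 27],
            [60, 55]]
stat = chi_square_2x2(observed)
print(f"chi-square = {stat:.3f}")
# With 1 degree of freedom, values above 3.841 are significant at the 5% level.
```

A low p-value (statistic above the critical value) indicates that the two variables are unlikely to be independent, which is how the explanatory factors above were retained.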

References

1. Arakpogun, E.O., Elsahn, Z., Olan, F., Elsahn, F.: Artificial intelligence in Africa: challenges and opportunities. In: Hamdan, A., Hassanien, A.E., Razzaque, A., Alareeni, B. (eds.) The Fourth Industrial Revolution: Implementation of Artificial Intelligence for Growing Business Success. Studies in Computational Intelligence, vol. 935. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-62796-6_22
2. Datta, P.: The promise and challenges of the fourth industrial revolution (4IR). J. Inf. Technol. Teach. Cases 6 (2022). https://doi.org/10.1177/20438869211056938
3. Cascavilla, G., Tamburri, D.A., Van Den Heuvel, W.J.: Cybercrime threat intelligence: a systematic multi-vocal literature review. Comput. Secur. 105, 102258 (2021)
4. Garg, S., Baliyan, N.: Comparative analysis of Android and iOS from security viewpoint. Comput. Sci. Rev. 40, 100372 (2021)
5. Delos Santos, V., et al.: Risk analysis of home user's vulnerability to illegal video streaming platform. In: 2022 4th International Conference on Management Science and Industrial Engineering (MSIE), pp. 365–372 (2022)
6. Andzongo, S.: Au Cameroun, la cybercriminalité fait perdre 12,2 milliards de FCFA à l'économie en 2021 (Antic). https://www.investiraucameroun.com/gestion-publique/0703-17600au-cameroun-la-cybercriminalite-fait-perdre-12-2-milliards-de-fcfa-a-l-economie-en-2021antic. Accessed 19 Aug 2022
7. Ho, H.T.N., Luong, H.T.: Research trends in cybercrime victimization during 2010–2020: a bibliometric analysis. SN Soc. Sci. 2(1), 1–32 (2022). https://doi.org/10.1007/s43545-021-00305-4
8. Borwell, J., Jansen, J., Stol, W.: Comparing the victimization impact of cybercrime and traditional crime: literature review and future research directions. J. Digit. Soc. Res. 3(3), 85–110 (2021)


9. Brands, J., Van Doorn, J.: The measurement, intensity and determinants of fear of cybercrime: a systematic review. Comput. Hum. Behav. 127, 107082 (2022)
10. Milani, R., Caneppele, S., Burkhardt, C.: Exposure to cyber victimization: results from a Swiss survey. Deviant Behav. 43(2), 228–240 (2022)
11. Näsi, M., Danielsson, P., Kaakinen, M.: Cybercrime victimisation and polyvictimisation in Finland: prevalence and risk factors. Eur. J. Crim. Policy Res. 29, 283–301 (2021)
12. Buil-Gil, D., Miró-Llinares, F., Moneva, A., Kemp, S., Díaz-Castaño, N.: Cybercrime and shifts in opportunities during COVID-19: a preliminary analysis in the UK. Eur. Soc. 23(sup1), S47–S59 (2021)
13. Breen, C., Herley, C., Redmiles, E.M.: A large-scale measurement of cybercrime against individuals. In: CHI Conference on Human Factors in Computing Systems, pp. 1–41 (2022). https://wp.stolaf.edu/iea/sample-size/. Accessed 19 Aug 2022
14. Tatam, M., Shanmugam, B., Azam, S., Kannoorpatti, K.: A review of threat modelling approaches for APT-style attacks. Heliyon 7(1), e05969 (2021)
15. Venkatesha, S., Reddy, K.R., Chandavarkar, B.R.: Social engineering attacks during the COVID-19 pandemic. SN Comput. Sci. 2(2), 1–9 (2021)

E-Services (Education)

A Review of Federated Learning: Algorithms, Frameworks and Applications Lutho Ntantiso(B) , Antoine Bagula , Olasupo Ajayi , and Ferdinand Kahenga-Ngongo Department of Computer Science, University of the Western Cape, Cape Town, South Africa [email protected], [email protected]

Abstract. In today's world, artificial intelligence (AI) and machine learning (ML) are being adopted at an exponential rate. A key requirement of AI and ML models is data, which often must be in proximity to these models. However, it is not always possible to "bring data to the model", for several reasons including legal jurisdiction or ethics; hence, "taking the model to the data" can be a viable alternative. This process is called Federated Learning (FL), an ML technique that allows devices or clients to collaboratively learn a shared model from a central server while keeping the training data local and isolated. This ensures privacy and bandwidth preservation, especially in resource-constrained environments. In this paper, a review of FL is carried out with a view to presenting the aggregation models, frameworks, and application areas, as well as identifying open challenges and gaps for potential research works. Keywords: Federated Learning · Machine Learning · Federated Averaging

1 Introduction

In recent years, data has proven to be one of the most important resources in business operations, decision making, analysis, and research. Data is generated at a fast rate, in different formats, and in large quantities from different sources, including the Internet of Things (IoT) and social and classic media. Owing to this diversity of sources and data types, data analysis has become more challenging and intensive for several reasons, including policies on data privacy and security, networking and communication constraints, and, most importantly, the fact that data is often dispersed and exists in the form of isolated islands [1]. Traditional data analysis tools, both statistical and machine learning, primarily work with aggregated data stored in centralized locations. In these classic systems, "data is brought to the analysis models"; however, this might not be possible in recent contexts for several reasons. Legislative policies hinder certain types of data (such as sensitive military or governmental information) from leaving the boundaries of the country to which they belong. Similarly, medical data and personal information that might compromise a person's safety also need to be well guarded. These, among other reasons, are challenges of traditional data analysis methodologies. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023 Published by Springer Nature Switzerland AG 2023. All Rights Reserved R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 341–357, 2023. https://doi.org/10.1007/978-3-031-34896-9_20

342

L. Ntantiso et al.

Federated learning (FL), a recent distributed and decentralized machine learning scheme, has attracted significant attention as a means of mitigating these challenges [2]. In FL, clients keep their data locally while they "download" a model from the central server for training. In essence, rather than taking the data to the model, the model is taken to the data. Once the model is trained on a client's device, updates to the model are sent back to the server. The server in turn aggregates these model updates from each of the clients and updates the global model. This process is repeated over multiple training rounds. Figure 1 gives a visual illustration of an FL architecture with a central FL server to which multiple client devices are connected.

Fig. 1. A Generic Federated Learning Architecture
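The round-based process described above can be sketched in plain Python. The sketch below is illustrative only, not any framework's implementation: it uses a toy one-parameter model y = w * x, made-up client datasets, and a plain (unweighted) average as the aggregation rule.

```python
import random

random.seed(0)

def local_train(w, data, lr=0.1, epochs=5):
    """One client: a few epochs of gradient descent on y = w * x (squared loss)."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w * x - y)^2
            w -= lr * grad
    return w

# Made-up client datasets, each roughly following y = 3 * x.
clients = [[(x, 3 * x + random.uniform(-0.1, 0.1)) for x in (0.1, 0.5, 1.0)]
           for _ in range(4)]

w_global = 0.0
for _ in range(10):                                         # training rounds
    updates = [local_train(w_global, d) for d in clients]   # clients download and train
    w_global = sum(updates) / len(updates)                  # server aggregates updates

print(f"global weight after training: {w_global:.2f}")      # close to the true slope 3
```

Real deployments differ mainly in scale: models are parameter vectors, only a sample of clients participates in each round, and the aggregation is usually weighted by dataset size.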

There are two main factors that make this setting different from distributed training on Graphics Processing Units (GPUs): communication and heterogeneity. Distributed training on GPUs refers to multi-node machine learning algorithms designed to improve performance, increase accuracy, and scale to larger input data sizes [3]; worker nodes work in parallel to speed up the training of ML models [4]. In GPU clusters, all clients are on the same local network, which makes communication relatively fast, whereas in FL devices communicate over the Internet, which is comparatively slower. Heterogeneity appears in two forms: the clients and the data. Devices and data vary: some devices are faster than others, and some devices hold more data than others (e.g., the number of images on a mobile phone). Data with such varying distributions is called non-IID (not independent and identically distributed). Work on data distributions and heterogeneity has led researchers to introduce different categories of FL: horizontal federated learning, vertical federated learning, and federated transfer learning. These variations of FL can be applied using different ML techniques on distributed data across different industries such as healthcare, mobile applications, and autonomous vehicles. FL is an emerging aspect of AI and has shown significant benefits over traditional ML approaches, including data privacy and security, hardware efficiency, data diversity (heterogeneous data), and real-time learning. Unlike other literature that focuses only on FL and its enabling technologies, this paper aims at presenting a technical overview of FL, specifically the federated averaging algorithms, in addition to the processes, challenges, and advances of FL in general.
It views FL from three perspectives, which are required for building sustainable digital infrastructure for cities and rural dwellings: FL aggregation, FL frameworks, and FL application areas.
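The non-IID setting described above is often simulated through label-skew partitioning, in which each client holds samples from only a few classes. The sketch below is a minimal illustration with a made-up labeled dataset of five classes; the partitioning scheme is generic, not tied to any particular paper.

```python
import random
from collections import defaultdict

random.seed(42)

# Made-up dataset: sample ids 0..99, label = id modulo 5 (five classes).
dataset = [(i, i % 5) for i in range(100)]

def label_skew_partition(dataset, n_clients, classes_per_client=2):
    """Give each client samples drawn from only a subset of the classes."""
    by_label = defaultdict(list)
    for sample, label in dataset:
        by_label[label].append(sample)
    labels = sorted(by_label)
    shards = {}
    for c in range(n_clients):
        chosen = random.sample(labels, classes_per_client)   # this client's classes
        shards[c] = [s for lbl in chosen for s in by_label[lbl]]
    return shards

shards = label_skew_partition(dataset, n_clients=4)
for c, samples in shards.items():
    print(f"client {c}: {len(samples)} samples, classes {sorted({s % 5 for s in samples})}")
```

Each client here ends up with the same amount of data but a very different label distribution, which is exactly the situation that breaks the IID assumption of classic distributed training.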

A Review of Federated Learning

343

This paper is structured as follows: Sect. 2 provides a survey of FL in general, while Sect. 3 discusses the various FL aggregation models. Section 4 presents the various state-of-the-art FL frameworks, while various application domains of FL are discussed in Sect. 5. Section 6 presents some open challenges, while Sect. 7 concludes the paper and gives insights into future works.

2 Federated Learning

2.1 Federated Learning

The concept of federated learning has been well researched by numerous authors; for completeness, this subsection provides a brief review of some of these works. In [5], the authors discussed federated learning through several lenses, including the basic FL fundamentals, FL enabling technologies, the various FL architectures, open challenges including privacy and security, application domains of FL, and possible future advancements in FL. Specifically, the authors considered the applications of FL in artificial intelligence, sensor networks, natural language processing, and other contemporary technological domains. In [6], the authors reviewed federated learning and the open problems that exist in federated learning settings; this study also discussed the applications of FL, mainly in industrial engineering, and six research fronts to address for future advancements and optimization. The study in [7] addressed the characteristics and challenges of federated learning, as well as giving an overview of current FL approaches.

2.2 Categorization of Federated Learning

Due to different data distribution patterns, FL can be classified into three categories: horizontal federated learning, vertical federated learning, and federated transfer learning. In this section we briefly describe these categories.

Horizontal Federated Learning. This FL technique is applied in situations where the datasets share the same feature space, i.e., the datasets have the same sets of features [1]. In 2016, Google proposed a horizontal federated learning framework [8] in which a single user with an Android mobile device updates the model parameters locally and uploads these parameters to the cloud, thus collaboratively training the centralized model together with other devices. It is possible that during data transfer the user's private information may be leaked.
Therefore, this framework makes use of secure aggregation [55] and differential privacy measures to ensure privacy during the aggregation of user updates. Figure 1 illustrates the horizontal FL architecture: in step 1, the client devices download the ML model from the FL server; in step 2, the clients train the downloaded model using localized data; in step 3, they upload the updated model to the FL server; and in step 4, the FL server aggregates the updates from the client devices.

Vertical Federated Learning. Vertical federated learning, or heterogeneous FL, can be used in cases where datasets have certain features in common (i.e., overlap) [8]. In essence, this framework uses different datasets with different feature spaces to jointly train a global model [10]; vertical FL thus increases the feature dimensions of the training data. For example, using the illustration in Fig. 2, assume company A is an e-commerce webstore with data about a customer's book purchase history, and company B is a review website, such as Bookish, with data about the customer's book reviews. Though the two platforms hold different datasets, the datasets have features that overlap to some degree (book title and author). When data about customer 1 is pulled by the central server, more features are available (book title, author, genre, purchase record, read count, and ratings), enabling the central server to better serve the customer, for instance by providing better book recommendations while the customer browses the e-commerce store. Data leakages are bound to occur during data transfer in this setting; hence, differential privacy, secure aggregation, and homomorphic encryption are often employed [8].

Fig. 2. Illustration of Vertical Federated Learning
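The entity-alignment step behind Fig. 2 can be sketched as a key intersection between two parties' records. The records below are made up, and a production system would use private set intersection (PSI) rather than exchanging raw customer IDs.

```python
# Party A (e-commerce store): purchase records. Party B (review site): review records.
# All customer ids and fields are made up for illustration.
purchases = {"cust1": {"title": "Dune", "bought": 2},
             "cust2": {"title": "Emma", "bought": 1}}
reviews = {"cust1": {"title": "Dune", "rating": 5},
           "cust3": {"title": "Ivanhoe", "rating": 3}}

# Only samples present at both parties can be used for joint training.
shared_ids = purchases.keys() & reviews.keys()
joint = {cid: {**purchases[cid], **reviews[cid]} for cid in shared_ids}
print(joint)  # {'cust1': {'title': 'Dune', 'bought': 2, 'rating': 5}}
```

In actual vertical FL, the aligned feature values never leave their owners; only intermediate results, protected by encryption or differential privacy, are exchanged during training.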

Federated Transfer Learning. This applies in cases where two datasets differ not only in samples but in feature space as well [8]. In essence, this technique is useful in settings where both the users and the user features rarely overlap; transfer learning is then used to compensate for the lack of data or tags. For example, consider two different companies, one an e-commerce company in the United States and the other a South African bank. Due to geographical restrictions, the customers of these institutions have minimal intersection, and, being institutions in different domains, only a small portion (if any) of the data features from both parties would overlap. In such cases, transfer learning can be used to carry out effective FL by optimizing the performance of a task for which there is not enough training data. This technique not only preserves privacy but also addresses the issue of small datasets.


3 Federated Aggregation Models

As described in the previous section, most FL architectures consist of a central server which manages the global model and combines the updates from the various clients. This section briefly describes the common aggregation models used in the FL environment.

3.1 Aggregation Models

Federated Averaging (FedAvg). FedAvg is the very first federated learning algorithm, created by Google in 2016 [10, 11] to solve FL problems. FedAvg tries to train a shared model across different clients by minimizing the overall global loss, which is the weighted average of each client's loss. This weighted average is described in (1):

f(w) = Σ_{k=1}^{K} (n_k / n) F_k(w)    (1)

where K is the number of clients, F_k(w) is client k's loss function, and n_k is the size of client k's dataset. Each loss is weighted by the size of client k's dataset, so devices with larger datasets have larger weighted losses. FedAvg is not the most efficient FL algorithm, due to the simplifying assumptions that the model makes:

• All sampled devices complete E epochs of local stochastic gradient descent. Some devices take longer than others, which can negatively impact the speed of convergence.
• Convergence is not guaranteed if the data is highly heterogeneous.
• Devices are weighted by the proportion of the data that they hold, which can mean that the algorithm favors clients with more data.

Other variants of FedAvg have been developed to address some of the limitations of the original version.

FedProx. The authors in [12] proposed FedProx, a technique that allows devices to do variable amounts of work. This approach introduces a regularization, or proximal, term that penalizes large changes in weights and helps with convergence on heterogeneous data. The authors reported that FedProx can achieve an average of 22% absolute accuracy improvement over the base FedAvg method. FedProx is defined in (2):

h_k(w; w^t) = F_k(w) + (μ/2) ||w − w^t||²    (2)

where (μ/2) ||w − w^t||² is the proximal term and μ/2 is the hyper-parameter that controls the penalized amount.
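Equations (1) and (2) can be illustrated numerically. The sketch below, in plain Python with made-up values, shows the n_k/n weighting of FedAvg applied to client parameter vectors, and the pull of the FedProx proximal term on a scalar client objective:

```python
def fed_avg(client_weights, client_sizes):
    """Equation (1) applied to parameters: average weighted by n_k / n."""
    n = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * nk / n for w, nk in zip(client_weights, client_sizes))
            for i in range(dim)]

def fedprox_objective(w, w_global, mu, local_loss):
    """Equation (2) for a scalar weight: F_k(w) + (mu / 2) * (w - w^t)^2."""
    return local_loss(w) + 0.5 * mu * (w - w_global) ** 2

# Three made-up clients with dataset sizes 10, 30 and 60: the largest dominates.
w_avg = fed_avg([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]], [10, 30, 60])
print(w_avg)  # approximately [2.5, 5.0]

# Proximal term: with mu = 0 the client drifts to its own optimum (w = 5);
# with mu = 2 the local minimizer is pulled back toward the global w^t = 1.
local_loss = lambda w: (w - 5.0) ** 2          # made-up client loss
grid = [i / 100 for i in range(801)]
w_free = min(grid, key=lambda w: fedprox_objective(w, 1.0, 0.0, local_loss))
w_prox = min(grid, key=lambda w: fedprox_objective(w, 1.0, 2.0, local_loss))
print(w_free, w_prox)  # 5.0 3.0
```

The grid search stands in for the client's local optimizer; the point is only that the proximal term keeps local solutions near the global weights, which is what aids convergence on heterogeneous data.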

Q-FedAvg. With heterogeneous data, model performance varies, since different distributions may require different sets of features. Recent research has introduced ways of addressing this. Ref. [13] proposed an algorithm called Q-FedAvg, which encourages a fairer, more uniform accuracy distribution across devices in federated networks, meaning the model performs similarly across all devices. Q-FedAvg is a lightweight and scalable distributed method used to solve Q-FFL (Fair Federated Learning) in massive federated networks. Q-FFL introduces fairness into federated settings, and Q-FedAvg improves the efficiency of solving Q-FFL. Rather than weighting devices by the proportion of the data they hold, Q-FedAvg penalizes the worst-performing devices, incentivizing the model to improve performance on them [13]:

f_q(w) = Σ_{k=1}^{m} (p_k / (q + 1)) F_k^{q+1}(w)    (3)
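The role of q in (3) can be seen numerically: raising each client loss to the power q + 1 shifts the objective's weight toward the worst-off clients. A small sketch with made-up per-client losses:

```python
def q_ffl_objective(losses, p, q):
    """Equation (3): f_q(w) = sum_k p_k * F_k(w)^(q+1) / (q+1)."""
    return sum(pk * lk ** (q + 1) / (q + 1) for pk, lk in zip(p, losses))

# Made-up per-client losses; the third client is the worst off.
losses = [0.2, 0.4, 1.6]
p = [1 / 3, 1 / 3, 1 / 3]

shares = []
for q in (0, 1, 5):
    total = q_ffl_objective(losses, p, q)
    worst = p[2] * losses[2] ** (q + 1) / (q + 1)   # worst client's contribution
    shares.append(worst / total)
    print(f"q={q}: worst client contributes {worst / total:.1%} of the objective")
```

With q = 0 the objective reduces to a FedAvg-style weighted average; as q grows, minimizing it increasingly forces the model to improve on the clients with the highest losses, which is what produces the more uniform accuracy distribution.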

Per-FedAvg. The base FedAvg model does not cater for different data distributions, such as heterogeneous (non-IID) settings; hence the global model obtained by minimizing the average loss can perform poorly when applied to each user's local dataset. The authors in [14] therefore focused on heterogeneous settings, where the probability distributions of devices are not identical (i.e., non-IID), and overcame the challenge by incorporating personalization. Their technique builds on the Model-Agnostic Meta-Learning approach introduced in [15], formulating FL as a multi-task problem where each client's distribution is a separate task. The goal is to find an initial shared model that current or new clients can easily adapt to their local dataset by running one or a few steps of gradient descent. The standard objective is

f(w) = Σ_{k=1}^{m} p_k F_k(w)    (4)

Per-FedAvg changes the loss from the loss on the current-round weights to the loss on the weights after one step of gradient descent, as shown in (5):

f(w) = Σ_{k=1}^{m} p_k F_k(w − α ∇F_k(w))    (5)

where w − α∇F_k(w) is the weight vector after one step of gradient descent. Table 1 summarizes the various averaging models described thus far.
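The personalization idea inside (5), one gradient step away from the shared initialization, can be sketched for scalar models. The client objectives below are made up: each F_k(w) = (w − t_k)² has its own target t_k.

```python
def personalize(w_shared, grad_fk, alpha):
    """The inner step of equation (5): w - alpha * grad F_k(w)."""
    return w_shared - alpha * grad_fk(w_shared)

# Made-up client objectives F_k(w) = (w - t_k)^2 with distinct targets.
targets = [1.0, 4.0, 7.0]
grads = [lambda w, t=t: 2 * (w - t) for t in targets]   # gradients of F_k

w_shared = 4.0   # a shared initialization, e.g. the outcome of Per-FedAvg training
personalized = [personalize(w_shared, g, alpha=0.25) for g in grads]
print(personalized)  # [2.5, 4.0, 5.5]: each client moves halfway toward its target
```

A good shared initialization is one from which such single steps already land close to each client's own optimum, which is exactly what the meta-learning formulation optimizes for.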

3.2 Evaluation Datasets

This subsection presents some of the datasets used for testing and evaluating the federated aggregation models discussed in the previous section. Table 2 gives a summary of the models and the corresponding datasets utilized. MNIST [18] and Shakespeare [20] were the most utilized datasets, while all models except Per-FedAvg curated some form of synthetic dataset for their testing.


Table 1. FL Algorithm Summary

[10, 11] FedAvg. Description: first federated learning algorithm that tries to train a shared model across different clients. Remark: does not take heterogeneity of devices into consideration; does not guarantee convergence.

[12] FedProx. Description: an improvement over FedAvg to support differences in client workloads; introduces a regularization term that penalizes large changes in weights. Remark: helps with convergence on heterogeneous data.

[13] Q-FedAvg. Description: a lightweight, scalable, fair, and distributed FL model. Remark: focuses on fairness in FL.

[14] Per-FedAvg. Description: formulates FL as a multi-task problem where each client's distribution is a separate task. Remark: builds on the Model-Agnostic Meta-Learning approach.

Table 2. Evaluation datasets

Model        CIFAR-10 [15]  EMNIST [16]  Fashion MNIST [17]  MNIST [18]  Sent140 [19]  Shakespeare [20]  Synthetic
FedAvg       No             No           No                  Yes         No            Yes               Yes
FedProx      No             Yes          No                  Yes         Yes           Yes               Yes
Q-FedAvg     No             No           Yes                 No          Yes           Yes               Yes
Per-FedAvg   Yes            No           No                  Yes         No            No                No

4 Federated Learning Frameworks

FL is a relatively new concept in distributed ML; hence the development of efficient FL models requires the selection of appropriate frameworks for their implementation. This section presents some of the common frameworks used for FL.

FATE [21]. FATE (Federated AI Technology Enabler) was developed in 2019 by WeBank. The objective of FATE is to support a collaborative and distributed AI ecosystem with cross-silo data applications while meeting compliance and security requirements. It has several components, including FATEFlow – an FL management pipeline,


L. Ntantiso et al.

FederatedML – an ML library, FATEBoard – a visualization tool, FATE Serving – a high-performance platform for serving FL models, Federated Network – communication, and KubeFATE – a cloud-based FL workload manager.

Flower [22]. Flower is an open-source FL framework under the Apache 2.0 License, adopted by major research organizations in both academia and industry. It provides a unified approach to federated learning, analytics, and evaluation, with support for multiple ML frameworks (PyTorch, Keras, TensorFlow), different operating systems (Android, iOS, Windows), and platforms (mobile, desktop, and cloud).

Substra [23]. Substra is an FL framework that places emphasis on privacy and traceability in distributed ML. It employs a form of distributed ledger technology for data privacy and traceability. Like Flower, Substra supports multiple programming languages and algorithms.

OpenFL [24]. Open Federated Learning (OpenFL) is a Python 3 library designed for secure and collaborative FL. It is made up of collaborators – which train ML models on localized datasets, aggregators – which combine updates from collaborators into a global model, a director – which coordinates the federation process, and envoys – which manage the execution of workloads on the collaborators.

TensorFlow Federated [25]. TensorFlow Federated (TFF) is an open-source framework for FL on decentralized data, developed by Google. The main motivation behind TFF was Google's need to implement mobile keyboard predictions and on-device search. TFF has two major components: the Federated Core API – which combines TensorFlow with distributed communication for implementing distributed computations – and the Federated Learning API – a high-level API that enables existing ML models to be plugged into TFF.

Other FL frameworks include IBM Federated Learning [26] and NVIDIA Clara [27].
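The role split these frameworks share – local trainers reporting updates to an aggregator, made explicit in OpenFL's collaborator/aggregator design – can be sketched framework-agnostically. The class names and the stand-in "training" rule below are illustrative, not any framework's actual API:

```python
class Collaborator:
    """Trains on a local dataset and reports a (weights, num_samples) update."""
    def __init__(self, data):
        self.data = data

    def fit(self, global_weights):
        # Stand-in for local training: move weights halfway toward the local mean.
        local_mean = sum(self.data) / len(self.data)
        return [0.5 * (w + local_mean) for w in global_weights], len(self.data)


class Aggregator:
    """Combines collaborator updates into a new global model (FedAvg-style)."""
    def run_round(self, global_weights, collaborators):
        updates = [c.fit(global_weights) for c in collaborators]
        total = sum(n for _, n in updates)
        return [sum(w[i] * n for w, n in updates) / total
                for i in range(len(global_weights))]


clients = [Collaborator([1.0, 1.0]), Collaborator([3.0, 3.0, 3.0, 3.0])]
agg = Aggregator()
weights = [0.0]
for _ in range(5):  # five federation rounds
    weights = agg.run_round(weights, clients)
print(weights)  # drifts toward the sample-weighted mean of the client data
```

Real frameworks add what this sketch omits: serialization, transport, scheduling of stragglers, and security between the two roles.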

5 Applications of Federated Learning

The Internet of Things (IoT), healthcare, NLP, and transportation are some of the key domains underpinning sustainable city and rural dwellings. Globally, FL has been applied to integrate existing solutions/systems in these areas. This section presents some documented applications of FL in the aforementioned domains.

5.1 Federated Learning in the Internet of Things (IoT)

The Internet of Things has penetrated a broad range of aspects of modern life, leading to the emergence of diverse intelligent systems, smart devices, and applications [28]. With the plethora of vastly distributed and connected smart devices, insights can be acquired from massive data. These insights can be leveraged to train advanced AI models that serve in the integration and development of smart devices to help society in all aspects [29]. To benefit from this data, one approach is to aggregate the distributed data onto centralized server(s) for analysis and modeling. However, this approach can be ineffective, as data transmission may impose privacy risks due to data leakage. An alternative


approach is FL, a novel concept that can address the privacy and security concerns of data collection and aggregation [29]. Nevertheless, in complex IoT environments there are still challenges that FL models must face. These include device heterogeneity (different storage, computational power, and communication range), data heterogeneity (non-IID data), model heterogeneity, expensive communication, and privacy concerns. To tackle some of these challenges, [28] proposed personalization on devices, data, and models. Personalized FL has attracted significant attention, as it aims to mitigate heterogeneity and achieve a quality personalized model on each device in the network. Below we briefly discuss two of the most significant challenges facing FL models in complex IoT environments, namely device and data heterogeneity.

Device Heterogeneity. Refers to the large number of IoT devices that differ in hardware, network protocols, communication capacity, computing, and storage. In complex IoT settings the main constraint on FL is communication, because devices in the network are relatively slow or frequently offline. In some scenarios, devices with limited computing capacity may become stragglers, as they take long to report model updates, while others may drop out of the network due to poor connectivity, causing a negative effect on the learning process [28].

Data Heterogeneity. Refers to the non-IID data distributions of devices in an FL environment. Statistically, user data can be non-IID in various forms, such as feature distribution skew, label distribution skew, and concept shift [28]. In most FL settings (e.g., medical data across hospitals, financial data across financial institutions) devices generate and collect data in a non-IID manner, leading to statistical shift among them [30]. Current research is exploring different ways of mitigating the above-mentioned problems and other arising issues of FL in IoT environments.
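Label-distribution skew of the kind described above is easy to reproduce in simulation by giving each client samples from only a few labels. A minimal partitioner sketch (the shard size and label counts are illustrative choices, not from any cited benchmark):

```python
import random

def label_skew_partition(samples, num_clients, labels_per_client, seed=0):
    """Partition (x, label) pairs so each client sees only a few labels (non-IID)."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append((x, y))
    labels = sorted(by_label)
    clients = [[] for _ in range(num_clients)]
    for shard in clients:
        for y in rng.sample(labels, labels_per_client):
            pool = by_label[y]
            take = min(len(pool), 5)  # samples drawn per chosen label (illustrative)
            for _ in range(take):
                shard.append(pool.pop())
    return clients

# 100 samples over 10 labels; each of 5 clients sees at most 2 labels.
data = [(i, i % 10) for i in range(100)]
parts = label_skew_partition(data, num_clients=5, labels_per_client=2)
print([sorted({y for _, y in p}) for p in parts])
```

Training FedAvg on partitions like these, versus a uniform split, is the standard way to expose how non-IID data slows convergence.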
As stated previously, [28] proposed an edge computing framework for personalized FL, which helps mitigate heterogeneity in IoT applications. Edge computing brings technologies that allow computation and data storage at the network edge, so that computing can be carried out closer to devices or data sources. Edge computing can address concerns such as latency, limited battery life in mobile devices, bandwidth costs, security, and privacy [31]. By bringing computing closer to the edge, the issue of device heterogeneity (varied computation and communication) can be addressed, as each device on the network can offload its intensive learning computation to the edge, which provides fast processing and low latency. Furthermore, statistical and model heterogeneity of data can also be addressed with edge computing, as the edge nodes can participate in the collaborative training of global models under the coordination of a central cloud server [28].

5.2 Healthcare

FL has been applied in various sub-domains of healthcare, including the diagnosis of diseases, cardiac-related cases, and drug-related complications. Three such use cases are discussed below.

Diagnosis of COVID-19. The healthcare industry is known to be one of the industries that carry sensitive information. Therefore, due to several challenges associated with


traditional ML and cloud-based infrastructures, such as security and privacy during data transmission, edge computing has proven to be an efficient computing resource for storing and processing large volumes of medical data, as it brings high-quality computing resources closer to the clients (hospitals). In [32] the authors leveraged edge-computing capabilities in the healthcare industry by analyzing and evaluating the capabilities of AI processing on isolated clinical data related to COVID-19. The work proposed a model for automatic diagnosis of COVID-19 using clustered federated learning (CFL). Two datasets were utilized, the first containing chest X-ray images and the other chest ultrasound images, upon which binary classification was carried out to differentiate between COVID-19 chest images and normal chest images. A VGG16 convolutional neural network (CNN) model was used for training, while focal loss was used to address data imbalance. The proposed CFL model was benchmarked against a specialized FL model and a multi-modal CFL. The obtained results showed comparable performance between the proposed framework and the specialized FL models. The multi-modal variant was susceptible to overfitting, hence training had to be stopped at a certain point.

Hospitalizations Due to Cardiac Events. In [33] the authors proposed an FL method to predict hospitalizations due to cardiac events. An iterative cluster primal-dual splitting (cPDS) algorithm was built to solve a sparse Support Vector Machine (SVM) problem in a decentralized manner. The work sought to address three challenges tied to healthcare data: distributed data residing in isolated locations (hospitals, smart devices); data aggregation in a single database being infeasible due to data size and privacy concerns; and the development of a scalable framework to leverage the growing data.
Data for the study was extracted from an EHR system and contained demographic data, medical history, and drug prescription history from 2001 to 2012, for patients with at least one heart diagnosis between 2005 and 2010. The proposed framework was tested on its ability to predict hospitalization due to cardiac conditions within a calendar year, using data from previous years. Results from the experiment showed that the cPDS algorithm converged faster with less communication overhead than some of the other distributed algorithms in the experiment, and the accuracies of all compared algorithms were relatively equal.

Predicting Adverse Drug Reaction (ADR) and Mortality Rate. Ref. [34] proposed a model for predicting adverse drug reaction (ADR) and mortality rate using FL, as well as demonstrating the effectiveness of FL when privacy-preserving measures are in place. Two use cases were considered – prediction of ADR and modeling of in-hospital patient mortality – for which two datasets containing over a million patient records were used. For the first, the Limited MarketScan Explorys Claims-EMR Data (LCED) dataset was used, while the Medical Information Mart for Intensive Care (MIMIC) dataset was used for the second use case. Three ML algorithms (perceptron, logistic regression, and SVM) were compared in an FL setup across 10 sites. The proposed distributed FL model was compared to a centralized learning setting, and the obtained results revealed that the FL model achieved comparable performance to centralized learning for both tasks. It was also inferred that, although privacy in the FL setting was guaranteed, the adopted security model affected the predictive capability of the FL model.


5.3 Natural Language Processing (NLP)

NLP helps machines understand human language [35]. However, this field of study requires massive amounts of data to accurately train models. As with the previous application areas, the data collection process comes with issues related to hardware, security, privacy, and communication. In this section we go through a few use cases of NLP in FL.

Institutional Federated Learning for NLP. Ref. [36] applied FL to the TextCNN NLP model to classify the intents within sentences. They also applied differential privacy to protect the clients during the training process. The experiment made use of the TREC dataset, a publicly available dataset for NLP text applications, which contains about 5952 data points. The FL process was carried out with the coMind ML framework [37], which supports distributed training on GPUs with a federated averaging optimizer [36]. The experiment was conducted on a simulated FL setup within a local area network, with one central server and 4 client devices. RSA encryption was used to secure the communication between the devices and the central server, while TensorFlow was used to develop the TextCNN model. Results of the experimental simulations revealed the following: i) an accuracy of 91.2% for the centralized model; ii) a 4% drop in accuracy when the FL model with distributed data was used; iii) the FL model was sensitive to the data load among clients, hence imbalance in data across the clients resulted in up to a 10% dip in performance; iv) differential privacy improved the security of the model through the introduction of noise into the dataset.

Federated Learning for Mobile Keyboard Prediction. There are multiple constraints in the development of mobile keyboard models. For instance, to run on both low- and high-end devices, these models need to be small with low inference time [38].
User experience is key in such settings, as users expect a visible keyboard response within a very small amount of time. With the current frequency with which mobile keyboards are used, device power could quickly deplete if CPU usage were not constrained [38]. As a result, these models are sometimes limited to a certain size. To perform next-word prediction in the virtual keyboards of smart mobile phones (and tablets), the authors trained a recurrent neural network language model using FL. The work showed the benefit of training models on clients' devices without the transmission of sensitive user data to servers. For the experiment, the authors compared two forms of training – server-based training using stochastic gradient descent (SGD) and on-device training using the FedAvg algorithm [11]. For the server-based training, data containing about 7.5 billion English-language sentences collected from Google keyboard (Gboard) in the United States was used. For the FL model, data from the local caches of the clients' Google keyboards was used. FedAvg was then used to aggregate the client SGD updates. Recall was used as the performance metric, and the obtained results showed that the FL model gives better recall values than the server-trained model.

Secure and Efficient FL Framework for NLP. FL has provided various ways of mitigating some of the challenges in centralized ML settings; however, some of these solutions require expensive communication, a trusted aggregator, or heavyweight


cryptographic protocols. In [39] a secure and efficient FL framework (SEFL) was introduced that disposes of the need for trusted entities, obtains comparable or better accuracy than existing FL frameworks, and is resilient to client dropouts. SEFL makes use of two non-colluding servers: the Aggregation Server (AS), which collects the encrypted client updates and securely aggregates them, and a Cryptographic Service Provider (CSP), which manages the decryption key. In this design the CSP generates a key pair (secret key and public key), stores the secret key locally, and publishes the public key to all clients. The AS performs aggregation on the encrypted local updates. Therefore, to decrypt the aggregated updates, the two entities need to collaborate, since the CSP is the only entity that has control over the decryption key. The additional encryption and decryption operations in FL settings could lead to high computation and communication overhead. To minimize the number of required cryptographic operations during training, the NLP model is trained with reduced volumes of local updates and weight storage. The experiment also made use of crypto-friendly Block-Hankel-matrix-based pruning, which aims to achieve symmetric and balanced download and upload communication. The proposed framework was evaluated by conducting experiments using a Long Short-Term Memory (LSTM) model [40] and a Transformer model [41] on the WikiText-2 dataset [42]. Results show that the SEFL framework can produce accurate models even when there are several client dropouts in the network.

5.4 Transportation

With the increase of IoT sensors in vehicular networks, it is important to collect data and train ML models that can be used for vehicle and traffic management. Since the data in such scenarios are distributed across the various vehicles, FL can be applied. One particular use case of interest is predicting the energy demand of electric vehicles (EVs).
EVs have become one of the most sustainable solutions in transportation, as they help reduce oil consumption, high energy usage, and gas emissions. Ref. [43] proposed ML approaches that improve the efficiency and accuracy of energy demand prediction and reduce communication overhead in EV networks. First, the researchers introduced a centralized approach in which a charging power station (server) collected data from all nearby charging stations (clients) and performed energy demand learning to predict energy demand using the data from the clients. This approach was deemed inefficient because it required data sharing between the server and the distributed clients, which resulted in communication overhead and privacy concerns for the clients and the EVs that utilized them. Therefore, federated energy demand learning (EDL) was introduced to address this issue. To further improve accuracy, the research introduced a clustering-based EDL, which also helps reduce the dimensionality of the dataset [44], minimizing biased prediction [45]. The authors compared their proposed methods to conventional machine learning models – namely decision trees, random forests, support vector regressors, k-neighbors regressors, stochastic gradient descent, and multilayer perceptrons – using a dataset obtained from charging stations across Dundee city between 2017 and 2018. This dataset has 65,601 transactions, which include the charging station IDs of 58 charging stations, a transaction ID for each charging station, electric vehicle charging times, and the consumed energy for each transaction. Experimental results obtained from comparing the conventional ML models and the proposed


frameworks on various training and testing set ratios revealed that the proposed models improved the accuracy of energy demand prediction (EDP) by 24.63% and decreased communication overhead by 83.4% compared to the other models.

6 Open Challenges in Federated Learning

There are several challenges in FL settings, including systems heterogeneity, statistical data heterogeneity, expensive communication, and privacy concerns, most of which have been discussed earlier. During information sharing in such setups, these problems may result in devices dropping out of the network. Since many of these problems are inherently interdisciplinary, solving them requires techniques from distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, information theory, and more [46]. The aim of this section is to highlight FL challenges, specifically privacy, security, and fairness.

There are two main techniques to preserve privacy, introduced by [47, 48]: differential privacy and secure aggregation. Differential privacy is defined in terms of the application-specific concept of adjacent databases, as is the case with end-to-end databases [47]. It has many properties that make it convenient or functional in certain settings, such as composability, group privacy, and robustness to auxiliary information [47]. In federated settings, this technique improves privacy by judiciously adding noise to the datasets or outputs. This can be done at the client level, the server level, or both (a hybrid approach). However, in practice adding noise at the client level is not practical, since each device/client may only have little data. Secure aggregation is a cryptographic technique wherein a group of clients each hold a private value and collaborate to compute an aggregate value, without revealing to one another any information regarding the private values except what is learnable from the aggregate value [48]. In essence, this technique ensures that the privacy of individual model updates is preserved.
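The masking idea behind secure aggregation [48] can be illustrated in a few lines: each pair of clients shares a random mask that one adds and the other subtracts, so individual submissions look random while the masks cancel exactly in the sum. This is a toy integer version; practical protocols derive the pairwise masks via key agreement, work modulo a large number, and handle dropouts:

```python
import random

def masked_updates(values, seed=42):
    """Each client's submission: private value plus pairwise masks that cancel in the sum."""
    rng = random.Random(seed)
    masked = list(values)
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            m = rng.randint(-10**6, 10**6)  # mask shared by clients i and j
            masked[i] += m                  # client i adds the mask
            masked[j] -= m                  # client j subtracts the same mask
    return masked

private = [3, 7, 5]                    # each client's private update (toy scalars)
submitted = masked_updates(private)
print(submitted)                       # individually scrambled values
print(sum(submitted) == sum(private))  # True: the masks cancel exactly
```

The server thus learns only the aggregate, which is exactly the guarantee the paragraph above describes.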
Another technique for preserving privacy is to add randomness to the process of device check-in [49], or to shuffle the model updates sent by devices to the central server [50]. Privacy is thus an important dimension of FL: whenever we build an FL model, we need to ensure that it aligns well with differential privacy and secure aggregation. However, this introduces another challenge, that of ensuring fairness in the FL process; balancing privacy/security concerns with fairness is an open challenge in FL.

Fairness is one of the most important aspects of FL. Several research efforts have been made to address this challenge, such as [51], where the authors tuned the overall loss function so that the model performs equally well on all device types. The concept of fairness can be viewed from multiple dimensions. For instance, in [2] it was argued that devices that contribute more to an FL network should be better rewarded. Similarly, [52] proposed a collaborative fair FL framework, which aimed to achieve collaborative fairness by evaluating the contributions of clients and iteratively updating their respective reputations. However, most FL frameworks simply ignore the aspect of device contribution. From the research landscape, the concept of fairness seems relative: some researchers consider it more a policy question than a technological one, while others consider it technological. Ultimately, it narrows down to how devices are viewed and weighed in a system or network.
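One concrete fairness mechanism, explored by q-FedAvg [13], is to reweight the objective so that high-loss clients count for more: each client's loss is raised to the power q+1, penalizing uneven performance across devices. A toy sketch (the client loss values are illustrative):

```python
def qfair_objective(client_losses, weights, q):
    """q-fair aggregate sum_k p_k * F_k^(q+1) / (q+1); q = 0 recovers the plain average."""
    return sum(p * f ** (q + 1) / (q + 1)
               for p, f in zip(weights, client_losses))

p_uniform = [0.25] * 4
even_losses   = [1.0, 1.0, 1.0, 1.0]   # all clients do equally well
skewed_losses = [0.1, 0.1, 0.1, 3.7]   # same average, one client far worse

# q = 0: both settings look identical to the objective (just the mean loss).
print(qfair_objective(even_losses, p_uniform, q=0))
print(qfair_objective(skewed_losses, p_uniform, q=0))
# q = 2: the skewed setting is penalized, pushing training toward uniform performance.
print(qfair_objective(even_losses, p_uniform, q=2))
print(qfair_objective(skewed_losses, p_uniform, q=2))
```

Larger q trades some average accuracy for a more uniform accuracy distribution across devices, which is one way to operationalize the fairness notions discussed above.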


The techniques used to build and interpret models also need to adapt to heterogeneous data. In FL settings, data cannot be assumed to be independently and identically distributed, as clients can have different data distributions, which can affect the speed of convergence. The authors in [53] analyzed the asymptotic convergence of these algorithms under different data distributions. Some researchers argue that evening out the distributions of data across clients can help tackle the issue of heterogeneity by ensuring that clients have relatively equal distributions of data. However, this introduces two new challenges: i) How can data on globally distributed clients be controlled? This is akin to wanting to ensure that all data on people's smartphones are of equal sizes globally. ii) Ensuring equality of data across client nodes might involve redistributing data by moving data from clients with more data to those with less. Doing this introduces security concerns and circles back to the privacy and security challenges discussed above. In [54] a technique called federated augmentation was proposed, where each device collectively trains a generative model and thereby augments its local data towards yielding an IID dataset. Table 3 summarizes the various aspects of FL discussed, presenting them as a form of FL taxonomy.

Table 3. Federated learning taxonomy

| Categorization | Method | Description | Application |
|---|---|---|---|
| Data structure | Horizontal Federated Learning | Data with overlapping feature sets | Regression techniques |
| Data structure | Vertical Federated Learning | Data with different feature spaces, hence increased feature dimension | Neural networks and deep neural networks |
| Data structure | Federated Transfer Learning | Increased feature size and sample count | Transfer learning |
| Privacy techniques | Differential Privacy | Ensures privacy by judiciously adding noise to the datasets or outputs | Traditional ML, neural networks |
| Privacy techniques | Secure Aggregation | Clients hold a private value and collaborate to compute an aggregate value, without revealing information to each other | Federated learning, regression |
| Heterogeneity techniques | Edge Computing | Mitigates heterogeneity by bringing computation and data storage near the network edge | Device, statistical data, and model heterogeneity |
| Heterogeneity techniques | Asynchronous Computing | Addresses high-cost communication and latency | Device heterogeneity |


7 Conclusion

The growing demand for federated learning (FL) technology has led to the development of tools and frameworks that can handle vastly distributed data. The articles reviewed in this work show that FL cuts across various domains, including machine learning, information theory, statistics, fairness, privacy, and security, and has been successfully applied in various fields, including healthcare, transportation, and the Internet of Things. However, despite the advances, several open challenges remain. This paper gave insights into how techniques applied in this field can be used to improve efficiency and security and ensure fairness in FL networks. Some of the research gaps identified relate to fairness and privacy in complex FL networks; exploring these areas might be an avenue for future research.

References

1. Yang, Q., Yang, L., Chen, T., Tong, Y.: Federated machine learning: concept and applications. ACM Trans. Intell. Syst. Technol. 10(2), 1–19 (2019)
2. Zhang, J., Li, C., Robles-Kelly, A., Kankanhalli, M.: Hierarchically fair federated learning, pp. 1–16 (2020)
3. Galakatos, A., Crotty, A., Kraska, T.: Distributed machine learning. In: Liu, L., Özsu, M.T. (eds.) Encyclopedia of Database Systems. Springer, New York (2018). https://doi.org/10.1007/978-1-4614-8265-9_80647
4. Baccam, N., Gilley, S., Coulter, D., Martens, J.: Distributed training with Azure Machine Learning. Microsoft [Online]. https://docs.microsoft.com/en-us/azure/machine-learning/concept-distributed-training. Accessed 20 Aug 2021
5. Banabilah, S., Aloqaily, M., Alsayed, E., Malik, N., Jararweh, Y.: Federated learning review: fundamentals, enabling technologies, and future applications. Inf. Process. Manage. 59(6), 103061 (2022)
6. Li, L., Fan, Y., Tse, M., Lin, K.: A review of applications in federated learning. Comput. Ind. Eng. 149, 106854 (2020)
7. Li, T., Sahu, A.K., Talwalkar, A., Smith, V.: Federated learning: challenges, methods, and future directions. IEEE Signal Process. Mag. 37(3), 50–60 (2020). https://doi.org/10.1109/msp.2020.2975749
8. Gao, Y., Li, W., Yu, B., Bai, H., Xie, Y., Zhang, C.: A survey on federated learning. Knowl. Based Syst. 216 (2021)
9. Gooday, A.: Federated learning types: understanding the types of federated learning. OpenMined [Online]. https://blog.openmined.org/federated-learning-types/
10. Kelvin: Introduction to federated learning and challenges. Towards Data Science (2020) [Online]. https://towardsdatascience.com/introduction-to-federated-learning-and-challenges-ea7e02f260ca
11. Brendan McMahan, H., Moore, E., Ramage, D., Hampson, S., Aguera, B.: Communication-efficient learning of deep networks from decentralized data. In: International Conference on Artificial Intelligence and Statistics (AISTATS), Florida (2017)
12. Li, T., Sahu, A., Zaheer, M., Sanjabi, M., Talwalkar, A., Smith, V.: Federated optimization in heterogeneous networks. In: MLSys Conference 2020 (2020)
13. Li, T., Sanjabi, M., Beirami, A., Smith, V.: Fair resource allocation in federated learning. arXiv preprint arXiv:1905.10497 (2019)
14. Fallah, A., Mokhtari, A., Ozdaglar, A.: Personalized federated learning with theoretical guarantees: a model-agnostic meta-learning approach. In: Conference on Neural Information Processing Systems (NeurIPS 2020) (2020)
15. Smith, V., Chiang, C., Sanjabi, M., Talwalkar, A.: Federated multi-task learning. In: Conference on Neural Information Processing Systems, California (2017)
16. [EMNIST] Cohen, G., Afshar, S., Tapson, J., van Schaik, A.: EMNIST: an extension of MNIST to handwritten letters. arXiv preprint arXiv:1702.05373 (2017)
17. [Fashion-MNIST] Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017)
18. [MNIST] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. In: Proceedings of the IEEE (1998)
19. [Sent140] Go, A., Bhayani, R., Huang, L.: Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford (2009)
20. [Shakespeare] McMahan, H.B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.Y.: Communication-efficient learning of deep networks from decentralized data. In: International Conference on Artificial Intelligence and Statistics (2017)
21. Liu, Y., Fan, T., Chen, T., Xu, Q., Yang, Q.: FATE: an industrial grade platform for collaborative learning with data protection. J. Mach. Learn. Res. 22(226), 1–6 (2021)
22. Beutel, D., Topal, T., Mathur, A., Qiu, X., et al.: Flower: a friendly federated learning research framework. arXiv preprint arXiv:2007.14390 (2020)
23. Galtier, M., Marini, C.: Substra: a framework for privacy-preserving, traceable and collaborative machine learning. arXiv preprint arXiv:1910.11567 (2019)
24. Reina, G., Gruzdev, A., Foley, P., Perepelkina, O., et al.: OpenFL: an open-source framework for federated learning. arXiv preprint arXiv:2105.06413 (2021)
25. TensorFlow: TensorFlow Federated: machine learning on decentralized data [Online]. https://www.tensorflow.org/federated. Accessed 1 Aug 2022
26. Ludwig, H., Baracaldo, N., Thomas, G., Zhou, Y., et al.: IBM federated learning: an enterprise framework white paper v0.1. arXiv preprint arXiv:2007.10987 (2020)
27. Wen, Y., Li, W., Roth, H., Dogra, P.: Federated learning powered by NVIDIA Clara [Online]. https://developer.nvidia.com/blog/federated-learning-clara/. Accessed 1 Aug 2022
28. Wu, Q., He, K., Chen, X.: Personalized federated learning for intelligent IoT applications: a cloud-edge based framework. In: IEEE Computer Graphics and Applications (2020)
29. Jiang, J., Kantarci, B., Oktug, S., Soyata, T.: Federated learning in smart city sensing: challenges and opportunities. Sensors 20 (2020)
30. Li, Y., Zhou, W., Wang, H., Mi, H., Hospedales, T.: FedH2L: federated learning with model and statistical heterogeneity (2021)
31. Shi, W., Dustdar, S.: The promise of edge computing. Computer 49(5), 78–81 (2016)
32. Qayyum, A., Ahmad, K., Ahsan, M., Al-Fuqaha, A.: Collaborative federated learning for healthcare: multi-modal COVID-19 diagnosis at the edge. J. Open Comput. Soc. 3, 172–184 (2021)
33. Brisimi, S., Chen, R., Mela, T., Olshevsky, A., Paschalidis, C.: Federated learning of predictive models from federated electronic health records. Int. J. Med. Inform. 112, 59–67 (2018)
34. Choudhury, O., Gkoulalas-Divanis, A., Salonidis, T., Sylla, I., et al.: Differential privacy-enabled federated learning for sensitive health data. In: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver (2019)
35. Mammen, P.: Federated learning: opportunities and challenges. Association for Computing Machinery, Washington (2021)
36. Zhu, X., Wang, J., Hong, Z., Xiao, J.: Empirical studies of institutional federated learning for natural language processing. In: Association for Computational Linguistics, pp. 625–634 (2020)
37. Roman, A.: coMind collaborative machine learning framework (2019)
38. Hard, A., Rao, K., Mathews, R., Ramaswamy, S., et al.: Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604 (2018)
39. Wang, C., Deng, J., Meng, X., Wang, Y., et al.: A secure and efficient federated learning framework for NLP. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Punta Cana (2021)
40. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
41. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., et al.: Attention is all you need. In: 31st Conference on Neural Information Processing Systems (NIPS 2017), California (2017)
42. Merity, S., Xiong, C., Bradbury, J., Socher, R.: Pointer sentinel mixture models. In: ICLR, California (2017)
43. Saputra, Y., Hoang, D., Nguyen, D., Dutkiewicz, E., et al.: Energy demand prediction with federated learning for electric vehicle networks. In: IEEE Global Communications Conference (GLOBECOM 2019), Waikoloa, HI, USA (2019)
44. He, Y., Kockelman, K., Perrine, K.: Optimal locations of U.S. fast charging stations for long-distance trip completion by battery electric vehicles. J. Clean. Prod. 214, 452–461 (2019)
45. Li, W., Logenthiran, T., Phan, V., Woo, W.: Implemented IoT-based self-learning home management system (SHMS) for Singapore. IEEE Internet Things J. 5(3), 2212–2219 (2018)
46. Kairouz, P., Brendan McMahan, H., Avent, B., Bellet, A., et al.: Advances and open problems in federated learning. Found. Trends Mach. Learn. 4(1) (2021)
47. Abadi, M., Chu, A., Goodfellow, I., Brendan McMahan, H., et al.: Deep learning with differential privacy. In: ACM Conference on Computer and Communications Security, Vienna (2016)
48. Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., et al.: Practical secure aggregation for federated learning on user-held data. In: International Conference on Neural Information Processing Systems (NIPS), Barcelona (2016)
49. Balle, B., Kairouz, P., Brendan McMahan, H., Thakkar, O., Thakurta, A.: Privacy amplification via random check-ins. Neural Inf. Process. Syst. 33, 4623–4634 (2020)
50. Erlingsson, U., Mironov, I., Raghunathan, A., Talwar, K., Thakurta, A.: Amplification by shuffling: from local to central differential privacy via anonymity. In: ACM-SIAM Symposium on Discrete Algorithms (SODA) (2020)
51. Mohri, M., Sivek, G., Suresh, A.T.: Agnostic federated learning. In: International Conference on Machine Learning, PMLR 2019, pp. 4615–4625 (2019)
52. Lyu, L., Xu, X., Wang, Q., Yu, H.: Collaborative fairness in federated learning. In: Yang, Q., Fan, L., Yu, H. (eds.) Federated Learning. LNCS, vol. 12500, pp. 189–204. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-63076-8_14
53. Li, X., Huang, K., Yang, W., Wang, S., Zhang, Z.: On the convergence of FedAvg on non-IID data. In: ICLR (2020)
54. Jeong, E., Oh, S., Kim, H., Park, J., et al.: Communication-efficient on-device machine learning: federated distillation and augmentation under non-IID private data. In: Neural Information Processing Systems (NIPS), Montreal (2018)
55. Krizhevsky, A., et al.: Learning multiple layers of features from tiny images (2009)

Intelligent Tutoring System to Learn the Transcription of Polysemous Words in Mooré

Pengwendé Zongo(B) and Tounwendyam Frédéric Ouedraogo

Laboratoire Mathématiques, Informatique et Applications, Université Norbert ZONGO, Koudougou, Burkina Faso
{pengwende.zongo,frederic.ouedraogo}@unz.bf

Abstract. Our research is in the domain of Computing Environments for Human Learning. It aims to build an intelligent tutoring system to learn the transcription of polysemous words in Mooré, a majority tone language spoken in Burkina Faso. In tone languages, polysemous words take their meaning from the pitch used. In this article, we present the results of our work on modeling this intelligent tutoring system by Petri net and on experimenting with the system. The Petri net model allowed us to simulate the operation of our system and to show that it is consistent; from this study, we show how the approach can be used to model an intelligent tutoring system and to fix its possible blockages. The experimentation of the system showed that the contents of its knowledge base are consistent with the contents of the corpus of the Mooré language. The analysis of users' impressions shows that the system could allow a learner to learn transcription in Mooré without the assistance of a human tutor. This analysis also shows that the system would be a great contribution to local language learning and that it could be used to ensure the continuity of local language learning during periods of pandemic such as COVID-19.

Keywords: Intelligent Tutoring System · Petri Net · Tone Language · Mooré

1 Introduction

Intelligent Tutoring Systems (ITS) are interactive computer environments that provide personalized learning assisted by artificial tutoring [6,15]. An ITS consists of a domain module, a learner module, a tutoring module and a communication module [2]. The domain module models the content of the learning domain, the learner module represents the learner's knowledge, the tutoring (or pedagogical) module defines the pedagogical strategy of the system, and the communication module handles the interactions between the system and the learner. Research in this area presents an effort to model,
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023
Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 358–366, 2023.
https://doi.org/10.1007/978-3-031-34896-9_21


implement and evaluate systems that integrate artificial intelligence techniques and cognitive theories to solve learning tasks [21]. In our study, we set ourselves the objective of building an intelligent tutoring system to learn the transcription of polysemous words in Mooré. The application we propose is a response to the local language learning needs in Burkina Faso. In tone languages, a polysemous word is a word whose meaning changes with the tones placed on it. To achieve our objective, we specified, in a previous study, the knowledge and processes suitable for the design of the different modules of our system [18,19]. In the present study, we present the modeling of the system by Petri net and the experimentation of the prototype we have developed. For modeling discrete event systems such as ITS, the Petri net is an excellent tool to simulate the operation of a system and to verify that it is deadlock-free [9]. We used this approach to model our tutoring system, simulate its operation, and fix its possible blockages; this modeling allowed us to develop a deadlock-free system. As for the experimentation, the goal is to evaluate the pedagogical impact that this application could bring to the learning of Mooré in particular and of local languages in general. The experiment we carried out showed that our system would be a great contribution to the learning of Mooré. Our contribution is thus a coherent system able to provide resolution tasks to learners and to bring them artificial tutoring. This system will also help ensure the continuity of local language learning during and after periods of pandemic such as COVID-19, which led to the closure of some training centers. This paper is structured as follows: Sect. 2 defines the basic concepts used in this article; Sect. 3 presents the architecture of our ITS and describes its modules; Sect. 4 shows the modeling of the system by Petri net and the verification of its consistency; Sect. 5 shows the results of the experimentation; and Sect. 6 summarizes the study and presents perspectives.

2 Background

In this section of our study, we introduce the concepts of tone language, intelligent tutoring system and Petri net. A tone language is a language in which the pitch of tones is used to distinguish the lexical meaning or grammatical forms of certain words [4]. This adds a level of complexity to lexical meaning. Tone languages are characterized by two types of tones: punctual tones and melodic tones [5]. Melodic or modulated tones are characterized by melodic movement (ascending, descending, descending-ascending). Punctual tones, by contrast, are characterized by their pitch (high, medium or low) and not by any melodic movement. Mooré belongs to the family of languages with punctual tones. Three types of punctual tones are distinguished in the Mooré language:


– the high tone, represented by an acute accent,
– the low tone, represented by a grave accent,
– and the middle tone, represented by the tilde.

For example, in Mooré, Fig. 1 is transcribed by the word sáagá and Fig. 2 by the word sáagà. The word saaga in Mooré, written without the pitch marks, does not allow one to know whether it alludes to Fig. 1 or to Fig. 2.

Fig. 1. Rain (sáagá in Mooré).

Fig. 2. Broom (sáagà in Mooré).
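The saaga example can be made concrete with a short Python sketch: stripping the combining tone marks from a transcription collapses the two words into the same ambiguous string, and comparing base letters separately from accents suggests how a spelling error can be told apart from a tone error. The mini-lexicon and the `classify_error` helper are illustrative assumptions, not components of the system described in this paper.

```python
import unicodedata

def base_letters(word):
    """Remove combining marks (tone accents), keeping only base letters."""
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

# Hypothetical mini-lexicon illustrating the saaga example from the text.
lexicon = {"sáagá": "rain", "sáagà": "broom"}

# Without tone marks, the two words collapse to the same string,
# so the lexical meaning can no longer be recovered.
assert base_letters("sáagá") == base_letters("sáagà") == "saaga"

def classify_error(learner, reference):
    """Rough error classification in the spirit of the system's feedback:
    a spelling error if the base letters differ, a tone error if only
    the accents differ, otherwise a correct transcription."""
    if learner == reference:
        return "correct"
    if base_letters(learner) != base_letters(reference):
        return "spelling error"
    return "tone error"
```

Here `unicodedata.normalize("NFD", ...)` separates each accented vowel into a base letter plus a combining accent, which is what makes the two comparisons possible.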

The definition of these different pitches is very important in our study insofar as they constitute our pedagogical objective. Likewise, they are the basis of the transcription tasks of our intelligent tutoring system to learn the transcription of polysemous words in Mooré. In the domain of language learning, the Intelligent Tutoring Systems (ITS) designed so far have focused on the following languages: French, English, Spanish, German, Arabic, Japanese, and Chinese [6,15–17]. An ITS is a Computing Environment for Human Learning that integrates Artificial Intelligence (AI) techniques and cognitive theories into its design to provide guided and personalized learning [1,3]. The ITS architecture consists of four components: the domain module, the student module, the tutoring module, and the communication module [2,14]. The specification of knowledge and processes, an approach derived from CommonKADS, makes it possible to design the domain module and the pedagogical module of an ITS for tone language learning [18,19]. After presenting the basic concepts related to tone languages and intelligent tutoring systems, we introduce the Petri net approach.


The Petri Net (PN) is a mathematical tool proposed by Carl Adam Petri in 1962 to represent discrete distributed systems [10]. It is a modeling language, represented as a directed bipartite graph, and an approach used to diagnose modeling errors in an application [8,9,11]. In our study, we used this tool to model our system and to verify that it is deadlock-free.

3 Architecture of the ITS to Learn the Transcription of Polysemous Words in Mooré

The architecture of the intelligent tutoring system to learn the transcription of polysemous words in Mooré is based on the four-component architecture of an ITS. Figure 3 below shows this architecture.

Fig. 3. Architecture of the system. The components inside each module of the ITS represent the sub-modules that compose it, the two-way arrows connecting the modules indicate the communication relationship between them.

The domain module contains the knowledge of our system: the learning tasks and sounds in Mooré, and the tonal transcriptions in Mooré corresponding to the learning tasks. The tutoring module implements the pedagogical strategy of our system; it consists of the resolution module, the assessment module and the remediation module. The domain module and the tutoring module are the main modules of our system. The student module stores the learner's profile information about solved tasks, along with the learner's login information. The communication module is the interaction interface between the system and the user. In the next section, we show that the system developed from the above architecture is consistent.

4 Verification of the System by Petri Net

Petri Net (PN) is an efficient method for the design and verification of discrete event systems [9,20], and intelligent tutoring systems are discrete event systems. We therefore used this method to design and verify our system for learning the transcription of polysemous words in Mooré. Figure 4 presents the PN model of the system.

Fig. 4. Petri net of the system. It is a bipartite graph where the circles represent the places and the vertical bars the transitions.

Fig. 5. Graph of accessible markings of the Petri net of the system. It presents the state of the system after each firing ti.

In Fig. 4, the places (pi) represent the input or output data of the different actions of the system, and the transitions (ti) represent the actions performed by the system. Table 1 below details these places and transitions. For the verification, we designed a marking graph presenting the accessible markings of the Petri net of the system (see Fig. 5). Figure 5 shows all the markings of our PN at a given time. The analysis of the figure shows that after each firing, the input place of the transition changes from 1 to 0 while the output place changes from 0 to 1. These results show that our PN is binary (1-bounded), which indicates that the system we developed is consistent.


Table 1. Description of the different places and transitions.

Places (pi):
p1: app icon
p2: login screen
p3: home screen (main)
p4: tasks presented
p5: task selected, image and audio loaded
p6: transcribed word read
p7: transcribed word assessed
p8: success feedback generated
p9: task solved marked
p10: learner's profile updated
p11: spelling error detected
p12: feedback spelling error displayed
p13: tones error detected
p14: feedback amalgamated tones displayed
p15: feedback tones error displayed
p16: sound emitted
p17: learner score displayed
p18: system ended

Transitions (ti):
t1: display login screen
t2: check login and password
t3: load tasks
t4: load image and audio
t5: read transcription
t7: produce success feedback
t8: mark task solved
t9: update learner's profile
t10: return to tasks presented
t11: detect spelling error
t12: display spelling error
t13: transcribe again
t14: detect tones error
t15: detect amalgamated tones and produce feedback
t16: transcribe again
t17: produce tones error feedback
t18: transcribe again
t19: emit sound
t20: end sound emitted
t21: display score
t22: return to the main menu
t23: stop the system
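The verification procedure can be illustrated with a small Python sketch. The three-transition chain below is a deliberately simplified, assumed fragment standing in for the full 18-place, 23-transition net of Fig. 4; enumerating its reachability graph checks both properties discussed above (each place holds at most one token, and every non-final marking enables some transition, i.e. no deadlock).

```python
from collections import deque

# Simplified fragment of the system's Petri net (assumed abstraction of
# Fig. 4): each transition maps a set of input places to output places.
transitions = {
    "t1_display_login": ({"p1_app_icon"}, {"p2_login_screen"}),
    "t2_check_login":   ({"p2_login_screen"}, {"p3_home_screen"}),
    "t3_load_tasks":    ({"p3_home_screen"}, {"p4_tasks_presented"}),
}

def reachable_markings(initial):
    """Breadth-first enumeration of the reachability (marking) graph."""
    seen, queue = {initial}, deque([initial])
    while queue:
        marking = queue.popleft()
        for inputs, outputs in transitions.values():
            if inputs <= marking:                    # transition enabled
                nxt = frozenset((marking - inputs) | outputs)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

markings = reachable_markings(frozenset({"p1_app_icon"}))

# Deadlock-freedom: every non-final marking enables at least one transition.
final = frozenset({"p4_tasks_presented"})
for m in markings:
    assert m == final or any(i <= m for i, _ in transitions.values())
```

Because markings are represented as sets of marked places, the "binary" (1-bounded) property is built into the representation; for a general PN one would track token counts per place instead.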

5 Experimentation of the System

The experimentation of the system consisted in making the application available to users and in collecting their impressions after they used the tool. The goal of this experimentation is to verify whether:

– the content of the Knowledge Base (KB) of the system, developed from the specification of knowledge and tasks, is consistent with the content of the corpus of the Mooré language, namely the transcriptions and sounds;
– the content of the KB can allow a user to learn the transcription of polysemous words in Mooré and to distinguish the lexical meanings of these words;
– the pedagogical strategy defined can allow a user to learn without the assistance of a human tutor.

To collect users' impressions, we developed questionnaires via Google Forms. The experiment involved a total of seventeen learners and four trainers. Figure 6 and Fig. 7 below show some of the users' impressions after the experimentation of the system.


Fig. 6. Learners' impressions.

Fig. 7. Trainers' impressions.

Analysis of Fig. 6 shows that fourteen learners found the learning tasks at least sufficiently relevant, and ten found the steps to solve the tasks at least sufficiently clear. Analysis of Fig. 7 shows that two trainers found a total correspondence between the tasks presented by the system and the ideal transcriptions, while two found a partial correspondence; three trainers found a total correspondence between the tasks and the integrated ideal sounds, and one trainer found them partially corresponding. Overall, for each question, more than seventy-five percent of users gave a satisfactory answer (for example, fourteen of seventeen learners is about 82%). The same observation holds for the other results, not presented here for brevity. From these observations, we can say that the ITS to learn the transcription of polysemous words in Mooré could allow a learner to learn transcription in Mooré. Therefore, this application would be a great contribution to education for local language learning in Burkina Faso.

6 Conclusion and Perspectives

In this article, we presented the modeling of the system by Petri net and its experimentation. The modeling showed that our ITS to learn the transcription of polysemous words in Mooré is consistent. To achieve this result, we built a graphical Petri net model, which allowed us to simulate the operation of the system, and a marking graph, which allowed us to analyze the reachability of the transitions and to show that our Petri net is binary. The experimentation of the system allowed us to collect and analyze the users' impressions. This analysis showed, among other things, that the system could allow a learner to learn transcription in Mooré without the assistance of a human tutor, and that the system could be used to ensure the continuity of local language learning during periods of pandemic such as COVID-19. Our future work will cover the development and


integration of a speech recognition activity in our system and the development of a WordNet ontology for the Mooré language. Such an ontology would constitute an online knowledge base interoperable with our system and even with other systems.

References
1. Graesser, A.C., Conley, M.W., Olney, A.: Intelligent tutoring systems. In: APA Educational Psychology Handbook, Vol. 3: Application to Learning and Teaching, pp. 451–473 (2012)
2. Nkambou, R., Bourdeau, J., Mizoguchi, R. (eds.): Advances in Intelligent Tutoring Systems (2010)
3. Padayachee, I.: Intelligent tutoring systems: architecture and characteristics. In: Proceedings of the 32nd Annual SACLA Conference, pp. 1–8. Citeseer (2002)
4. Caldwell-Harris, C.L., Lancaster, A., Ladd, D.R., Dediu, D., Christiansen, M.H.: Factors influencing sensitivity to lexical tone in an artificial language: implications for second language learning, pp. 335–357 (2015)
5. Compaoré, L.: Mooré prosody analysis essay: tone and intonation. Doctoral thesis in Linguistics, Université de Sorbonne Paris, p. 19 (2017)
6. Paladines, J., Ramirez, J.: A systematic literature review of intelligent tutoring systems with dialogue in natural language. IEEE Access 8, 164246–164267 (2020). https://doi.org/10.1109/ACCESS.2020.3021383
7. Fournier-Viger, P., Nkambou, R., Nguifo, E.M.: Building intelligent tutoring systems for ill-defined domains. In: Nkambou, R., Bourdeau, J., Mizoguchi, R. (eds.) Advances in Intelligent Tutoring Systems, pp. 81–101. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14363-2_5
8. Luo, J., Zhang, Q., Chen, X., Zhou, M.C.: Modeling and race detection of ladder diagrams via ordinary Petri nets. IEEE Trans. Syst. Man Cybern. Syst. 48(7), 1166–1176 (2017)
9. Wang, Y.-Y., Lai, A.-F., Shen, R.-K., Yang, C.-Y., Shen, V.R.L., Chu, Y.-H.: Modeling and verification of an intelligent tutoring system based on Petri net theory. Math. Biosci. Eng. 16(5), 4947–4975 (2019)
10. Petri, C.A.: Kommunikation mit Automaten (1962)
11. Chu, F.: Conception des systèmes de production à l'aide des réseaux de Petri: vérification incrémentale des propriétés qualitatives. Ph.D. thesis, Université Paul Verlaine-Metz (2015)
12. Mirchi, N., Ledwos, N., Del Maestro, R.F.: Intelligent tutoring systems: re-envisioning surgical education in response to COVID-19. Can. J. Neurol. Sci. 48(2), 198–200 (2021)
13. Lynch, C., Ashley, K., Aleven, V., Pinkwart, N.: Defining ill-defined domains. In: Proceedings of the Workshop on Intelligent Tutoring Systems for Ill-Defined Domains at ITS 2006, pp. 1–10 (2006)
14. Almasri, A., et al.: Intelligent tutoring systems survey for the period 2000–2018 (2019)
15. Almasri, A., et al.: Intelligent tutoring systems survey for the period 2000–2018. Int. J. Acad. Eng. Res. (IJAER) 3(5), 21–37 (2019). ISSN 2000-003X
16. Ahuja, N.J., Sille, R.: A critical review of development of intelligent tutoring systems: retrospect, present and prospect. Int. J. Comput. Sci. Issues (IJCSI) 10(4), 39–48 (2013)


17. Slavuj, V., Kovačić, B., Jugo, I.: Intelligent tutoring systems for language learning. In: 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 814–819. IEEE (2015)
18. Zongo, P., Ouedraogo, T.F.: Toward an intelligent tutoring system for tone languages: learning of tone levels in Mooré. In: 22nd IEEE International Conference on Advanced Learning Technologies (ICALT 2022) (2022)
19. Zongo, P., Ouedraogo, T.F., Capus, L.: A transcription-based learning environment for a tone language. In: Proceedings of EDULEARN22 Conference, 4th–6th July 2022, Palma, Mallorca, Spain, pp. 9042–9049 (2022). ISBN 978-84-09-42484-9
20. Uzam, M., Jones, A.H.: Discrete event control system design using automation Petri nets and their ladder diagram implementation. Int. J. Adv. Manuf. Technol. 14(10), 716–728 (1998)
21. Bourdeau, J., Grandbastien, M.: La modélisation du tutorat dans les systèmes tutoriels intelligents. STICEF (Sciences et Technologies de l'Information et de la Communication pour l'Éducation et la Formation) 18, 14 (2011)

Advanced ICT

Assessing the Quality of Acquired Images to Improve Ear Recognition for Children

Sthembile Ntshangase, Lungisani Ndlovu(B), and Akhona Stofile

The Council for Scientific and Industrial Research, Pretoria 0001, South Africa
{smlambo,lndlovu3,astofile}@csir.co.za

Abstract. The use of biometrics to secure the identity of children is an ongoing research topic worldwide. In the recent past, it has been realized that one of the most promising biometrics, especially for children, is the shape of the ear, because most of their other biometrics change as they grow. However, there are shortcomings in using ear recognition for children, usually caused by the surrounding environment and by the fact that children can at times be uncooperative, for example by moving during image acquisition. Consequently, the quality of acquired images might be affected by issues such as partial occlusions, blurriness, sharpness, and illumination. Therefore, in this paper, a method of image quality assessment is proposed. This method detects whether images are affected by partial occlusions, blurriness, sharpness, or illumination, and assesses the quality of the image to improve ear recognition for children. Four different test experiments were performed using the AIM database, the IIT Delhi ear database, and ear images collected by Council for Scientific and Industrial Research (CSIR) researchers. The Gabor filter and Scale Invariant Feature Transform (SIFT) feature comparison methods were used to assess image quality. The experimental results showed that a partially occluded ear yields fewer than 16 key points, resulting in low identification accuracy. Blurriness and sharpness were measured using the sharpness value of the image: a sharpness value below 13 means that the image is blurry, while a sharpness value greater than 110 affects the extracted features and reduces identification accuracy. Furthermore, it was discovered that the level of illumination in the image varies, and a high illumination value (above 100) affects the features and reduces the identification rate.
The overall experimental evaluations demonstrated that image quality assessment is critical in improving ear recognition accuracy.

Keywords: Ear · Recognition · Image Quality · Biometrics · Security · Children

1 Introduction

The problem of identity theft can be defined as the illegal use of someone's identity, which can be an ID number, identity details, a birth certificate, a social security number in the case of a child, and more [1]. This problem has been and is still affecting all age
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023
Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 369–380, 2023.
https://doi.org/10.1007/978-3-031-34896-9_22


groups, from newborns to the elderly [2]. Therefore, it is important to design and implement solutions to this problem and to close any identifiable gaps. Over the past few years, there has been a huge gap in protecting the identity of children, because not much has been done in the field of biometrics for children. According to the research in [1], approximately 230 million children between the ages of 1 and 5 years do not have identification, causing those affected to be denied education, healthcare, and political and economic rights. Biometrics can be defined as the use of a person's unique physical and behavioral characteristics to identify or verify them; it includes fingerprint, hand geometry, retinal, iris, facial, and voice recognition systems. The applications of these systems include airport security, law enforcement, mobile access and authentication, banking, home assistants, building access, schools, public transport, blood banks, border control, voting systems, and more. However, most existing biometric recognition systems have been developed for use by adults, and minimal research has been conducted on biometric recognition systems for children. The major challenge is that children undergo mental and physical development as they grow. The research in [3, 4] agreed that most existing systems designed for adults do not work well for identifying children. It is through meticulous investigation and development of biometric systems for children that we can solve current identification complications, which include the wrong identification of newborns in hospitals and the inability to identify missing and illegally adopted children. The literature has recently proposed different biometric methods for recognizing children, such as face recognition, iris recognition, fingerprint recognition, and footprint recognition [3].
There is, however, one major problem: young children, such as infants, are often uncooperative and do not comprehend or follow instructions, making these solutions unsuitable. For example, using the iris as a biometric for children has its challenges [5], because iris capture requires the child to look directly at the acquisition device. Newborns, especially premature children, are unable to direct their eyes into a scanning device because they rarely open their eyes, and stretching their eyelids to collect an image could be harmful to their eyes. The research in [6], however, pointed out that the iris can be used for children over the age of 2 years. With respect to fingerprints and footprints, a child must touch or hold the acquisition device to capture them; for a newborn, this can be a labor-intensive and unhygienic act. Furthermore, children's faces change as they grow, making it difficult for face recognition systems to recognize them [7]. For the most accurate identification of children, ear images are the best biometric method. Compared to other forms of biometrics, obtaining an ear image is quicker, more hygienic, more convenient, and less expensive. Ear recognition enables ear images to be captured even while a child is sleeping, eating, or playing, using a stand-alone camera or smartphone. Moreover, the ear is larger than other biometric traits such as the iris, fingerprint, and footprint, which makes it possible to take images at a reasonable distance from the subject. However, the challenge of using ear recognition for infant identification is that the quality of the ear image is affected by a variety of factors. Blurriness, sharpness, illumination, and occlusion are the major factors to consider. If these factors are not


normalized, the accuracy of the system will decrease during enrollment, verification, and identification, which may lead to high false-rejection rates. Thus, this paper proposes a method to assess the quality of the captured ear image by detecting the factors mentioned above and then normalizing them based on predefined set values. The main objective is to enhance ear recognition for children by assessing the quality of the image. The remainder of this paper is structured as follows. Section 2 presents a review of the literature to briefly summarize what others have done to improve ear recognition for children. Section 3 presents the methodology for how image quality assessment is performed. Section 4 presents and discusses the results of the proposed quality assessment technique. Section 5 concludes the paper and provides future work.

2 Related Works

The identification of children using ear recognition has been explored in numerous studies in recent years, including patents, conference proceedings, and journal articles. Nevertheless, none of these studies has attempted to assess the quality of the ear images before they are further processed. The research in [8] developed a contact-based device called Donut and the amHealth app to identify children based on the shape of their ears. It was developed to optimize the performance of the capture process through image stabilization; the application standardizes the distance, angle, rotation, and lighting of the image. The experiment was carried out by capturing images of 194 participants to assess identification rates. The authors captured images of the left ear of all participants with and without the Donut and applied the amHealth app algorithm to process the images, measuring the top-one and top-ten most likely matches. With the Donut, the top-one and top-ten identification rates were both 99.5%, compared to 38.4% and 24.1%, respectively, without it. Nevertheless, their work is not suitable for use on children under COVID-19 protocols, and using the device when a child is not asleep is difficult because the image could be affected by partial occlusions and blurriness. The research in [9] developed a neural network model to analyze ear images collected under unconstrained conditions, where sharpness, blurring, and illumination all affected the images. The model was constructed by analyzing annotated ear images taken from a database containing ear landmarks. The database contained 2058 unconstrained labeled ear images from 231 subjects, which were used for ear verification and recognition. Following extensive comparisons and experiments, the study established that both holistic and patch-based AAMs were successful in aligning ear images uploaded to the in-the-wild database. Although ear verification and recognition improve regularly with alignment, the proposed database proved exceedingly difficult to use. The fact that these images were collected from adults offers some encouragement about the alignment issue; however, it does not address the issue of partial occlusions or deformed ears of infants and minors under the age of 18. To resolve blurriness and illumination changes, the research in [10] used the Random Sample Consensus (RANSAC) normalization technique. Using this technique, the


transformation estimation is determined, and average ears are calculated for ear templates and segmentation masks. The method was implemented using the Annotated Web Ears (AWE) dataset. The average ear is obtained by summing and averaging all pixels of each image annotated as perfectly aligned, so that the roll, pitch, and yaw axes are close to zero degrees and the ear is not blocked or covered by accessories. The experiments showed that the proposed method, combined with the masking of ear areas, significantly improves recognition under head pitch variations in ear images. However, this method does not solve the problem of partial ear occlusion: if more than one ear image is affected by quality factors, recognition is affected, resulting in a higher rate of ear recognition errors. Having realized that authentication and identification are major challenges in various organizations, researchers designed an efficient fingerprint recognition system [12]. In the proposed system, image enhancement is performed on the fingerprints of infants and toddlers. To determine fingerprint codes, it uses an improved Gabor filter as a preprocessor on the enhanced images; the Euclidean distance is then used to match the finger codes to test fingerprints for authentication and verification. The proposed system was evaluated for efficiency and performance using the existing CMBD and NITG fingerprint datasets. Nevertheless, we argue that ear recognition is the most effective approach for infants and toddlers, since the main problem in this case is that children are often uncooperative. The research in [13] presented a survey on biometric recognition systems for infants and toddlers. Through extensive research, the authors realized that such systems face many challenges, including database collection, changes over time in biometrics such as the face, and parents' unwillingness to provide details about their child. They presented a detailed review of different biometrics and their challenges, especially for infants and toddlers, and claim that to this day the efficiency of biometric algorithms for infants and toddlers is not up to the mark and still has a long way to go. In-depth research is required to evaluate the efficiency of biometric recognition systems for infants and toddlers. Meanwhile, the research in [14] conducted a comprehensive survey on ear recognition databases, performance evaluation parameters, and existing ear recognition techniques. The authors also developed a new database called NITJEW, whose images were captured in an unconstrained environment. The developed database was compared with six existing ear detection and recognition databases. To measure the performance of ear detection and recognition, the authors modified the deep learning models Faster-RCNN and VGG-19. However, their main concern was to evaluate existing biometric systems, and thus not much was done in terms of experimental evaluations.

3 System Design and Architecture

As shown in Fig. 1, ear recognition involves capturing an image, detecting the ear, extracting and storing features, and performing comparisons. The proposed approach adds an image quality assessment stage, as illustrated in Fig. 1. To assess the quality of an image, two types of approaches can be used: subjective and objective. In subjective approaches, judging image quality is performed manually

Assessing the Quality of Acquired Images


Fig. 1. Traditional ear recognition approaches.

by a human, while in objective approaches, perceived quality is identified and predicted via computational methods. In this work, the objective approach is used to evaluate four aspects of image quality that affect ear recognition accuracy, namely: partial ear region, blurriness, sharpness, and illumination. A partial ear region refers to a captured image in which only part of the ear is visible, the rest being hidden because of occlusion or incomplete capture. In most situations, partial images are obtained when the ear is obscured by clothing or hair, or when the ear detection software detects only a small area of the ear. Image blurriness is a common problem in image processing that results in reduced edge content and smooth transitions; during acquisition, it typically arises when the child or the acquisition device moves. The sharpness of an image, on the other hand, measures the amount of detail that emerges during the acquisition process. Finally, image illumination can be defined as the effect of the environment, in this case light, on the captured object such as the ear.

Fig. 2. Proposed ear recognition approach.

This proposed model determines the partial region of the ear first, and then determines the other factors in parallel using the results of the previous evaluation, as illustrated in Fig. 2 and Fig. 3.

3.1 Partial Ear Region

The Scale-Invariant Feature Transform (SIFT) is used to determine whether the acquired image is partial or not. SIFT is a method commonly used in image processing to detect and extract features. There are two stages involved in the SIFT method, Difference of Gaussians (DoG) and key-point detection, as explained in [15]. The SIFT function used in this study returns the number of key points detected in the image. If the number of key points is less than or equal to 16, the image is considered partial, and the user must recapture the ear image. Then


Fig. 3. The proposed ear recognition flow.

the proposed image quality assessment method will automatically take the user back to the image acquisition stage. The threshold value of 16 was determined experimentally. Esther Gonzalez created the AMI Ear Database as part of her Ph.D. in Computer Science. Ear images from the AMI Ear Database were cropped into smaller portions and tested to see how many SIFT key points could be detected from partial images; on average, 16 or fewer points were detected regardless of the size of the image. Full and partial ear images with their detected SIFT key points are illustrated in Fig. 4.

Fig. 4. Illustration of partial ear images, a) full image, b) to e) partial images with 22, 9, 8, and 3 key points, respectively.
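The partial-image decision reduces to a key-point count check. A minimal Python sketch follows; the key-point count would come from a SIFT detector (e.g., OpenCV's `cv2.SIFT_create()`), and the function name here is an assumption for illustration:

```python
# Threshold determined experimentally on the AMI Ear Database (see text).
PARTIAL_KEYPOINT_THRESHOLD = 16

def is_partial_ear(keypoint_count, threshold=PARTIAL_KEYPOINT_THRESHOLD):
    """An ear image with `threshold` or fewer SIFT key points is treated
    as partial, and the user is sent back to the acquisition stage."""
    return keypoint_count <= threshold
```

With OpenCV, `keypoint_count` could be obtained as `len(sift.detect(gray_image, None))` on a grayscale crop of the detected ear region.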


3.2 Image Blurriness and Sharpness

Blurriness and sharpness of the captured images were measured by calculating the level of sharpness in the image with a Laplacian filter, following the algorithm proposed and thoroughly discussed in [16] and [17]. To determine the sharpness values the system accepts, images from the AMI database were used to generate blurred images, as shown in Fig. 5, and the level of sharpness was then estimated. The higher the sharpness value, the lower the blurriness of the image.

Fig. 5. Image blurriness and sharpness, a) normal image from AMI with a sharpness value of 13.66, b) to d) blurred images with sharpness values of 1.56, 1.27, and 1.23, respectively.
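A simplified NumPy sketch of a Laplacian-based sharpness measure: the variance of the Laplacian response is a common stand-in for the algorithms of [16] and [17], though the exact kernel and score used by the authors may differ:

```python
import numpy as np

# 4-neighbour Laplacian kernel
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpness(img):
    """Variance of the Laplacian response over the image interior.
    Blurred images have weak edges and therefore a low score."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):          # valid 2-D correlation with the kernel
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())
```

Blurring an image lowers this score, which matches the behaviour described above: a higher sharpness value means less blurriness.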

3.3 Image Illumination

To detect the level of illumination in an image, a method proposed in [18] was used to determine the average illuminance of the image by calculating the intensity of each pixel and its neighbours. The higher the illumination effect, the higher the illuminance value. Illumination matters for ear images because it determines how clearly the features appear; consequently, an illuminance value that is too high or too low can cause feature extraction to fail. Two thresholds were determined experimentally: the Low Illuminance Threshold (LIT) and the High Illuminance Threshold (HIT). Ear images with different illumination effects from the Newborns database and the IIT Delhi ear database were used to test the developed function. Figure 6 illustrates images with different illuminance values according to the evaluation.


Fig. 6. Detected illuminance value from ear images, a) 24, b) 64, c) 94, d) 104, e) 114, f) 131.
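A hedged sketch of the illuminance check: here illuminance is approximated by the mean pixel intensity, whereas the method of [18] also weights neighbouring pixels. The LIT and HIT values are assumptions that follow the acceptable range of roughly 50 to 100 reported in the experiments:

```python
import numpy as np

LIT, HIT = 50, 100   # Low/High Illuminance Thresholds (assumed values)

def illuminance(img):
    """Average pixel intensity as a simple illuminance estimate."""
    return float(np.asarray(img, dtype=float).mean())

def illumination_ok(img, low=LIT, high=HIT):
    """Accept images whose illuminance falls inside [low, high]."""
    return low <= illuminance(img) <= high
```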

4 Results Analysis and Discussions

To test the proposed image assessment model, four different tests were performed, covering partial ears, blurriness, sharpness, and illumination. The data used was collected by CSIR researchers [19].

4.1 Partial-Ear Images Testing

A partial-image test was performed on 50 images from the AMI database. To determine whether the given images are partial or not, each image was cropped into 10 different ear regions and then passed through the partial detection method. The partial images were grouped into 11 groups by number of key points: 6, 12, 16, 20, 23, 25, 30, 35, 40, 45, and 50. To verify the matching accuracy, partial images were compared to the original image using the Gabor and SIFT ear comparison methods.

4.2 Blur and Sharpen Ear Images Testing

From the AMI database, 25 images were selected to test the blurriness and sharpness methods. Three blurred images were generated from each image by applying the OpenCV Gaussian and median blurring methods; the first, second, and third sets of images were blurred using kernel sizes of 21, 31, and 51, respectively. In addition, ten images that looked sharper were selected from the database that the CSIR researchers collected from clinics and schools. All the images were passed through the sharpness function, and the images were then grouped by sharpness level: 1, 4, 6, 8, 10, 14, 18, 40, 80, 100, 120, 150, 200, and 300. The generated and selected images were compared with the original image using the Gabor and SIFT ear comparison techniques.


4.3 Images with Illumination Testing

To test the illumination detection method, data was collected from various sources: the AMI database, the IIT Delhi ear database, and the CSIR child database. A total of 110 images with different illumination effects were selected, and three further images of the same ear were generated for each, giving 330 images in total. These images were passed to the illuminance-calculation function and then grouped by the determined illumination value: 10, 20, 40, 60, 70, 80, 90, 110, 120, 130, and 140. To determine the matching accuracy, the images were compared with the original images using the Gabor and SIFT methods.

4.4 Results and Discussion

To measure the effectiveness of image quality assessment, an ear comparison was performed, as illustrated in Fig. 3. The original images were compared with images affected by partialness, blurriness, sharpness, and illumination. Ear comparison used the Gabor feature comparison and SIFT feature comparison methods, which are publicly available in the OpenCV toolbox. The proposed model was implemented in C++ with the OpenCV library. The results are presented in Fig. 7, Fig. 8, and Fig. 9.

Fig. 7. Accuracy vs. number of SIFT key points.

Figure 7 shows that the accuracy of ear recognition depends on the number of key points detected in the images. The fewer the key points, the lower the recognition accuracy; the more key points, the greater the detail in the images, and consequently the higher the recognition accuracy.


Fig. 8. Accuracy vs. sharpness value.

Figure 8 presents two effects: blurriness and sharpness. If the sharpness value is less than 10, the image is blurry and recognition is inaccurate, because the edges that represent the ear's shape are not visible. Conversely, accuracy decreases as the sharpness value increases above 120, because highly sharpened images present too many details, creating false features.

Fig. 9. Accuracy vs. detected image illuminance.

Figure 9 illustrates the effect of high illumination, in that the higher the illuminance value detected, the lower the recognition accuracy. Similarly, an illuminance of less than 50 affects the quality of the image, as some elements of the image cannot be detected accurately. Consequently, an ear image with an illuminance value between 50 and 100 improves recognition accuracy.
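The three acceptance ranges reported above can be combined into a single quality gate. This is an illustrative sketch, not the authors' code; the sharpness bounds vary slightly between the results (roughly 10 to 120) and the conclusion (13 to 110), and the conclusion's values are used here:

```python
def image_quality_ok(keypoint_count, sharpness_value, illuminance_value):
    """Accept an ear image only if all three empirical checks pass."""
    return (keypoint_count > 16                   # not a partial ear region
            and 13 <= sharpness_value <= 110      # neither blurry nor over-sharpened
            and 50 <= illuminance_value <= 100)   # acceptable illumination
```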


5 Conclusion

This paper presented a method for assessing image quality by examining four factors: partial occlusion, blurriness, sharpness, and illumination. Experimental evaluations showed that performing image quality assessment on ear images can improve the accuracy of ear recognition for children. The novelty of this work lies in normalizing the presence of these factors in captured ear images to reduce recognition errors. Experiments showed that ear recognition for identifying and verifying children can be improved using the proposed image quality assessment method. Empirical tests were conducted to determine the acceptable number of key points, sharpness value, and illuminance value for captured ear images that produce higher ear recognition accuracy, using data from the AMI and IIT Delhi ear databases as well as data collected by CSIR researchers. The evaluations showed that partially occluded ear images have 16 or fewer key points, resulting in low identification accuracy. Blurriness and sharpness were measured through the sharpness value: below 13, the image is blurry, while above 110, over-sharpening degrades the extracted features and reduces identification accuracy. Likewise, a high illumination level, such as a value above 100, affects the features and reduces the identification rate. Overall, the experimental evaluations demonstrated that image quality assessment is critical to improving ear recognition accuracy. In the future, the proposed method will be applied in a real-time environment, such as hospitals, to perform usability testing.

References

1. E-C International Law Office: Pulse 65 Special Back to School (2019)
2. Jain, A.K., Arora, S.S., Best-Rowden, L., Cao, K., Sudhish, P.S., Bhatnagar, A., Koda, Y.: Giving infants an identity: fingerprint sensing and recognition. In: Proceedings of the Eighth International Conference on Information and Communication Technologies and Development, pp. 1–4 (2016)
3. De Souza, P.T., Querido, D.L., Regina, G.: The importance of newborn identification to the delivery of safe patient care. Cogitare Enferm. 22(3), e49501 (2017)
4. Jain, K., Arora, S.S., Best-Rowden, L., Cao, K., Sudhish, P.S., Bhatnagar, A.: Biometrics for child vaccination and welfare: persistence of fingerprint recognition for infants and toddlers (2015)
5. Moolla, Y., de Kock, A., Mabuza-Hocquet, G., Ntshangase, C.S., Nelufule, N., Khanyile, P.: Biometric recognition of infants using fingerprint, iris and ear biometrics. IEEE Access (2021). https://doi.org/10.1109/ACCESS.2021.3062282
6. Nelufule, N., De Kock, A., Mabuza-Hocquet, G., Moolla, Y.: Image quality assessment for iris biometrics for minors (2019)
7. Wang, Z., Yang, J., Zhu, Y.: Review of ear biometrics. Arch. Comput. Methods Eng. 28, 149–180 (2021). https://doi.org/10.1007/s11831-019-09376-2
8. Etter, L.P., Ragan, E.J., Campion, R., Martinez, D., Gill, C.J.: Ear biometrics for patient identification in global health: a field study to test the effectiveness of an image stabilization device in improving identification accuracy. BMC Med. Inform. Decis. Mak. 19(1), 114 (2019). https://doi.org/10.1186/s12911-019-0833-9


9. Zhou, Y., Zaferiou, S.: Deformable models of ears in-the-wild for alignment and recognition. In: Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017, pp. 626–633 (2017). https://doi.org/10.1109/FG.2017.79
10. Ribic, M., Emeršic, Z., Štruc, V., Peer, P.: Influence of alignment on ear recognition: case study on AWE dataset (2016)
11. Ganapathi, I., Prakash, S., Dave, I.R., Joshi, P., Ali, S.S., Shrivastava, A.M.: Ear recognition in 3D using 2D curvilinear features. IET Biomet. 7(6), 519–529 (2018). https://doi.org/10.1049/iet-bmt.2018.5064
12. Patil, A., Rahulkar, A.D., Modi, C.N.: Designing an efficient fingerprint recognition system for infants and toddlers. In: 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), pp. 1–7 (2019)
13. Kamble, V.: Infants and toddlers biometric recognition: a review. Asian J. Convergence Technol. (AJCT), ISSN 2350-1146 (2018)
14. Kamboj, A., Rani, R., Nigam, A.: A comprehensive survey and deep learning-based approach for human recognition using ear biometric. Vis. Comput. (2021). https://doi.org/10.1007/s00371-021-02119-0
15. Otero, R., Delbracio, M.: Anatomy of the SIFT method. Image Process. On Line 4, 370–396 (2014). https://doi.org/10.5201/ipol.2014.82
16. De, K., Masilamani, V.: Image sharpness measure for blurred images in frequency domain. Procedia Eng. 64, 149–158 (2013). https://doi.org/10.1016/j.proeng.2013.09.086
17. Ellappan, V., Chopra, V.: Reconstruction of noisy and blurred images using blur kernel. IOP Conf. Ser. Mater. Sci. Eng. 263(4), 2–11 (2017). https://doi.org/10.1088/1757-899X/263/4/042024
18. Finlayson, G., Fredembach, C., Drew, M.S.: Detecting illumination in images. In: Proc. IEEE Int. Conf. Comput. Vis. (2007). https://doi.org/10.1109/ICCV.2007.4409089
19. Moolla, Y., Ntshangase, S., Nelufule, N., de Kock, A., Mabuza-Hocquet, G.: Biometric recognition of infants using fingerprint, iris and ear biometrics. IEEE Trans. Image Process. 18(5), 1–18 (2021)

Autonomous Electromagnetic Signal Analysis and Measurement System

Mohammed Bakkali1(B) and Abdi T. Abdalla2

1 Departamento de Teoría de la Señal y Comunicaciones y Sistemas Telemáticos y Computación,

Universidad Rey Juan Carlos (URJC), Madrid, Spain
[email protected]
2 Department of Electronics and Telecommunications Engineering, University of Dar es Salaam, Dar es Salaam, Tanzania
[email protected]

Abstract. Lately, there has been a notable increase in device connectivity and wireless telecommunication applications, which generate a huge amount of electromagnetic emissions; these may adversely affect human health and nearby devices through electromagnetic interference, making electromagnetic emissions an area that attracts the attention of many researchers. In this work, we implemented a monitoring system, using a sound card and a heterodyne receiver, that scans frequencies of interest and generates an event when emission above a threshold is detected in these specific bands. The system triggers an alarm and saves the detected events in a database with relevant information, including time, signal level, and frequency. The spectrum of the detected signal is stored for further analysis. Experimental validation of the proposed system shows promising results with increased efficacy and reduced cost. The proposed system can serve as an essential building block towards the adoption of artificial intelligence algorithms in electromagnetic data analysis and measurement.

Keywords: Electromagnetic Interference · Electromagnetic signals · Fourier Transform · Digital Signal Processing · Measurements

1 Introduction

The increased demand for wireless telecommunication systems and the current trend of smart things and connectivity between devices fill the space around us with a huge amount of electromagnetic emissions [1–4]. Unfortunately, monitoring electromagnetic (EM) emissions is challenging because of its relatively high cost, making effective real-time EM monitoring impractical [5, 6]. In such circumstances, a reasonably priced system that allows monitoring of the EM spectrum to detect possible disturbing emissions is urgently needed. In addition, the possibility of analyzing this data automatically, storing it, and making decisions in real time could add considerable value to this field.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 381–390, 2023. https://doi.org/10.1007/978-3-031-34896-9_23


Recently, sound cards have been used in several applications and have demonstrated promising results. In most cases, the sound card was used with high-level programming, relying on pre-existing tools and libraries available in software such as LabVIEW and MATLAB. The sound card has very good capability, but low-level programming is necessary to manage the parameters that affect the capture process [7–13]. In this work, we implemented a monitoring system, using a sound card and heterodyne receiver, for scanning frequencies of interest. When an emission above a threshold level is detected in these specific bands, the system generates an event, triggers an alarm, and saves the detected event in a database with relevant information including time, signal level, and frequency. The spectrum of the detected signal is stored for further analysis. Our objective was to collect data of interest, through prior programming, feeding a database with information about emissions in specific bands, and subsequently to process the collected data using artificial intelligence (AI) based algorithms.

2 Proposed System

The proposed system is a computer-based measurement system that takes a continuous signal at its input and samples it to obtain a sequence of discrete data x(n). This periodic sampling is performed by the PC's sound card. The discrete temporal signal is then converted from the time domain to the frequency domain by calculating the Discrete Fourier Transform (DFT) to estimate the spectrum; we use the Fast Fourier Transform (FFT) algorithm to implement the DFT [15, 16]. To reduce the leakage effect, we apply a windowing technique. Because background noise makes it very difficult to detect the signal of interest, averaging techniques have also been implemented to reduce the background noise and extract the signal of interest. The main parts of the proposed system are depicted in Fig. 1; it is composed of the following elements:

– Receiver control IC-PCR1000
– Signal capture
– Signal analysis
– Visualization (GUI)
– Database
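The core DSP chain (windowed FFT followed by exponential averaging of the magnitude spectrum) can be sketched in Python with NumPy. The function name, the Hanning window, and the averaging factor are illustrative assumptions; the validation experiments also use a Blackman-Harris window:

```python
import numpy as np

def averaged_spectrum(frames, fs, alpha=0.3):
    """Estimate a one-sided magnitude spectrum from successive sample
    blocks, using a Hanning window (leakage reduction) and exponential
    averaging (background-noise reduction)."""
    avg, window = None, None
    for x in frames:
        if window is None:
            window = np.hanning(len(x))
        mag = np.abs(np.fft.rfft(np.asarray(x, dtype=float) * window))
        # exponential averaging: blend the new estimate into the running one
        avg = mag if avg is None else alpha * mag + (1.0 - alpha) * avg
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs, avg
```

For example, feeding this function 4096-sample blocks captured at 44.1 kHz would yield a spectrum with roughly 10.8 Hz bin spacing.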

The challenge at this stage is to capture the signal received by the IC-PCR1000 and have the sound card perform the sampling so as to achieve a real-time measurement system. The coordination between the two devices and a technique using several buffers with real-time synchronization management make the system efficient, performing its tasks optimally without losing relevant information [17]. Signal analysis is performed with the help of a user-friendly graphical interface.


Fig. 1. Simplified scheme of the proposed measurement system

Figure 2 represents the connection between the IC-PCR1000 receiver and the sound card that performs the task of periodic sampling of the input signal x(t). For this, a low-level programming algorithm was implemented in order to access the sound card, control the capture of the signal and manage the buffers that will be used to store the signal in the optimal way without losing data.

Fig. 2. Connection between IC-PCR1000 receiver and sound card

To implement the process of capturing the signal from the receiver using the PC's sound card, Microsoft's "waveform-audio" structures for audio data were used to perform the following tasks:

– Capture the audio signal coming from the microphone input of the sound card, which in turn comes from the output of the receiver;
– Manage the storage of samples in buffers;
– Save the captured signal in WAV format.


Figure 4 shows a simplified representation of the implemented algorithm, which performs an automatic scan of the receiver to monitor frequencies within a band of interest. The developed application allows setting a scan start frequency (F_start), a scan end frequency (F_end), the frequency step (F_step), and the dwell time per step (Ti). Once the process has started, the receiver is tuned to F_start; after each interval Ti, the frequency is increased by F_step until F_end is reached, and the automatic frequency sweep repeats until this function is deactivated. During automatic scanning, a function can be activated to monitor the level marked by the cursor; each time this signal level is exceeded, a BMP image is saved with the file name "SPECTRUM_DAY_MONTH_YEAR_HOUR_MINUTE_SECONDS.BMP" and the corresponding data is stored in a database with the relevant information. This system allows us to monitor a certain area of the spectrum, or the entire area of interest, and to detect all interference that has occurred: the times of its repetition, its frequency, and its amplitude and bandwidth characteristics.
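The sweep loop described above can be sketched in Python. The `tune` and `measure` callbacks and the dictionary event record are hypothetical stand-ins for the receiver control and level measurement, not the authors' implementation; only the file-name pattern follows the text:

```python
import time
from datetime import datetime

def scan(tune, measure, f_start, f_end, f_step, t_i, threshold, on_event=None):
    """Sweep from f_start to f_end (Hz) in steps of f_step, dwelling
    t_i seconds per step; log an event whenever the measured level
    exceeds the threshold.  Returns the list of logged events."""
    events = []
    f = f_start
    while f <= f_end:
        tune(f)                  # retune the receiver
        time.sleep(t_i)          # dwell time Ti at this frequency
        level = measure()
        if level > threshold:    # emission above the marked level
            event = {
                "frequency": f,
                "level": level,
                "file": datetime.now().strftime(
                    "SPECTRUM_%d_%m_%Y_%H_%M_%S.BMP"),
            }
            events.append(event)
            if on_event:
                on_event(event)  # e.g. trigger the alarm, insert a DB row
        f += f_step
    return events
```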

3 Graphical User Interface

In the first version, we implemented a system that performs a sequential scan of up to 10 different user-defined channels. A different frequency can be defined for each channel, as well as a different mode and bandwidth filter. The scan time is also user defined, and an interval of interest with a stepped frequency increase can be defined as well. Parameters are set via the graphical user interface shown in Fig. 3.

Fig. 3. Display of the user interface


Fig. 4. Simplified algorithm of the automatic receiver scan to monitor frequencies within a band of interest


4 System Validation

4.1 Equipment Used for the Measurement Validation

ICOM IC-PCR1000 Receiver. We used the IC-PCR1000 receiver, which covers frequencies between 10 kHz and 1300 MHz in various modes (WFM, FM, AM, CW, LSB, and USB). This receiver is our main interface with the radio frequencies. We used its audio output as an intermediate frequency for the spectral analysis, so at most we can have a bandwidth of 20 kHz. Another receiver or even other control devices could be used, as in [18].

Baofeng UV-5R Portable Radiotelephone. This radiotelephone transmits and receives in the VHF and UHF bands from 136 MHz to 480 MHz in the 12K5F3EJN and 16K0F3EJN emission classes, corresponding to the narrowband and wideband designations in its user manual. The Baofeng UV-5R proved very effective in controlled tests in these bands, with test broadcasts carried out using natural human voice and analog (CTCSS) or digital (DCS) subtones.

Signal Generator PCE-SDG1010. Generates signals up to 10 MHz. The function generator can produce five output waveforms and can also produce user-defined (arbitrary) waveforms.

4.2 Detection of AM Signal Emission

Several measurements and tests were carried out to validate the developed system. A known signal was first generated with a signal generator, and a VHF/UHF transmitter was used to provide other test signals. The experimental setup is shown in Fig. 5. The following parameters were used during the validation process. Signal generator: sine, 530 kHz, amplitude 2.30 Vpp, phase 0.0º, AM modulation 10 kHz. Receiving antenna: AN200 E1–124819 Etón, frequency range 520–1710 kHz. ICOM: 530 kHz, AM, 15 kHz. We used a Blackman-Harris window and set the sound card sampling rate to 44.1 kHz. The detected signal in the frequency domain and its corresponding averaged signal are shown in Fig. 6 (a) and (b), respectively.


Fig. 5. Equipment used to do the measurement validation

Fig. 6. Signal detected by our system (a) signal detected in frequency domain (b) signal detected in frequency domain applying exponential averaging

4.3 Detection of VHF/UHF Emission

In this measurement, a Baofeng VHF/UHF radiotelephone located 10 m from the IC-PCR1000 was used to generate a known FM signal at 144.010 MHz in NB mode (Fig. 7), and the signal was then propagated through space. A monitoring frequency interval was set in the band from 140 MHz to 150 MHz.

Fig. 7. VHF emission detection test


The signal was detected by our proposed method at 144.007 MHz (Fig. 8), using Lower Side Band (LSB) mode with a bandwidth of 15 kHz. The detection was improved by adjusting the Digital Signal Processing (DSP) parameters of our measurement system: increasing the FFT size from 512 to 8192 points and applying a Hanning window (Fig. 9).

Fig. 8. Signal detected using the proposed system with NFFT = 512 and no window

Fig. 9. Signal detected using the proposed system with NFFT = 8192 and a Hanning window

4.4 Detection of EMI Emission

Since any electrical wiring can carry interfering electromagnetic signals and act as an electromagnetic interference (henceforth EMI) antenna [1, 2], our proposed signal analysis and measurement system was able to detect whether a device containing an AC motor was running or stopped, for example an air-conditioning unit containing an AC compressor. Figure 10 shows the signal detected by our proposed system using a 1 kHz signal generated by the PCE-SDG1010 in two scenarios: air-conditioning off and air-conditioning in operation. In Fig. 10b we can see the spectrum of the air-conditioning interference detected at the frequencies of 7 kHz and 14 kHz with an


amplitude level of approximately 300 µV when the compressor of the air-conditioning unit is in operation.

Fig. 10. Signal detected using the proposed system: (a) measurement of a 1 kHz signal with the air-conditioning off; (b) air-conditioning in operation, with interference at 7 kHz and 14 kHz

5 Conclusion

Electromagnetic emissions have been an area of interest for many researchers due to their adverse effects, particularly on human health. In this work, the possibility of obtaining reliable data on electromagnetic signals of interest, at different spectral coverages, using an automated system is proposed and validated. Experiments show promising results towards the realization of a fully smart, relatively cost-effective EM measuring device. Our ongoing work is to enable this system to perform the whole smart processing chain in real time and make informed decisions by incorporating effective artificial intelligence based algorithms [14].

References

1. Mascareñas, C., Bakkali, M., Martín, C., Sánchez de la Campa, F., Abad, F.J., Barea, M., et al.: Sistemas de Comunicaciones a través de la Red Eléctrica. Efecto de Interferencias PLC en los unifamiliares. In: International Science and Technology Conference, 21–22 March 2007, Malaga (Spain) – 23 March 2007, Tangiers (Morocco)
2. Bakkali, M., Mascarenas, C., de la Campa, F.S., Martin, C., Abad, F.J., Barea, M., et al.: Feasibility study of advancing and sitting up Power Line Communication (PLC) system under Environment of Electromagnetic Compatibility (EMC) into the ships. In: 2007 9th International Conference on Electrical Power Quality and Utilization, EPQU 2007, 9–11 Oct 2007, Barcelona, Spain, pp. 1–5 (2007)
3. Ott, H.W.: Electromagnetic Compatibility Engineering, September 2009. ISBN 978-0-470-18930-6
4. Balcello, J., Daura, F., Esparza, R., Pallás, R.: Interferencias Electromagnéticas en sistemas electrónicos. ISBN 84-267-0841-2


5. Witte, R.A.: Electronic Test Instruments: Theory and Applications. Hewlett Packard, New Jersey (1993)
6. Emisiones Radioeléctricas: Normativa, Técnicas de Medida y Protocolos de Certificación. Colegio Oficial de Ingenieros de Telecomunicación, Cátedra COIT (ETSIT-UPM)
7. Quan, X., Zhou, N., Wu, H.: Design of sound card electrocardiosignal acquisition system based on LabView multimedia technology. In: 2011 International Conference on Multimedia Technology (ICMT), pp. 282–285, 26–28 July 2011
8. Zhao, Z., Guo, S.: Development of an acoustic communication system for multiuser based on sound card. In: 2011 IEEE/ICME International Conference on Complex Medical Engineering (CME), pp. 264–267, 22–25 May 2011
9. Gunawan, T.S., Khalifa, O.O.: PC sound card based instrumentation and control. In: 2010 International Conference on Computer and Communication Engineering (ICCCE), pp. 1–4, 11–12 May 2010
10. Xian-ling, Z.: The virtual instrument based on LabVIEW and sound card. In: 2010 International Conference on Computational Aspects of Social Networks (CASoN), pp. 743–745, 26–28 September 2010
11. Xin-sheng, X., et al.: Study on precise frequency measurements based on sound card. In: 2009 9th International Conference on Electronic Measurement & Instruments (ICEMI), pp. 2-455–2-458, 16–19 August 2009
12. Chen, A., Liu, J.: A kind of virtual oscilloscope used in experiment teaching based on sound card and LabVIEW. In: 2009 Second International Conference on Education Technology and Training (ETT), pp. 118–121, 13–14 December 2009
13. Neitzert, H.C., Rainone, N.G.: Photocurrent and electroluminescence mapping system for optoelectronic device characterization using a PC sound card for data acquisition. In: Instrumentation and Measurement Technology Conference Proceedings, IMTC 2007, pp. 1–6. IEEE, 1–3 May 2007
14. Lu, Y.: Artificial intelligence: a survey on evolution, models, applications and future trends. J. Manag. Anal. 6(1), 1–29 (2019)
15. Robinson, E.A.: A historical perspective of spectrum estimation. Proc. IEEE 70(9), 885–907 (1982)
16. Smith, S.W.: The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing (1997). ISBN 0966017633
17. Aerospace & Defense Symposium 2012, Agilent Technologies, 31 May 2012, Madrid, Spain
18. Bakkali, M., Mascareñas Perez-Iñigo, C., Carmona Galán, R.: IC-PCR1000 control using a wireless sensor network (WSN). Int. J. Comput. Commun. Eng. 1(3), 290–292 (2012). ISSN 2010-3743

Digital Transformation of the Textile and Fashion Design Industry in the Global South: A Scoping Review

A. A. Ogunyemi1, I. J. Diyaolu2, K. O. Bakare2, I. O. Awoyelu3, and A. O. Oluwatope3(B)

1 School of Digital Technologies, Tallinn University, Narva Mnt 29, Tallinn, Estonia
[email protected]
2 Department of Family Nutrition and Consumer Sciences, Obafemi Awolowo University, Ile-Ife, Nigeria
{diyaolu,bissibakare}@oauife.edu.ng
3 Department of Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
{iawoyelu,aoluwato}@oauife.edu.ng

Abstract. This paper focuses on the digital transformation of the textile and fashion design industry of developing countries in the Global South. Its goal is to describe the state of the art and to determine the topical trends, challenges, and opportunities associated with the digital transformation of the textile industries in the Global South. We conducted a scoping review of 16 studies, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines. We composed a search string and ran searches on selected digital libraries and databases. We summarised each study and analysed it based on emerging commonalities, performing both quantitative and qualitative analyses of the included studies. The results reveal that the transition to sustainable and smart production is an ongoing but slow process in the textile and fashion design industry in the Global South. Textile production is embracing Industry 4.0 in practice based on intelligent systems, and sustainability dimensions can be incorporated into value chains with digital technologies. The study implies that emerging firms can leverage recent developments in textile production to achieve more sustainable production practices. It can also serve as a reference for inspiring the digital transformation of the textile and fashion industry in developing countries, Nigeria in particular.

Keywords: Digital Transformation · Textile and Fashion Industry · Global South · Environmental Sustainability · Scoping Review

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 391–413, 2023. https://doi.org/10.1007/978-3-031-34896-9_24

1 Introduction

Textiles are indispensable for human existence (clothing, household items, furnishings). The textile industry is saddled with manufacturing these articles for human convenience. Textile and fashion design is also a significant contributor to the economic growth of


many countries [1–3]. However, with technological advancement, an increase in demand for textile products and services, and the drive for compliance with the industrial revolutions happening in the sector, the textile and fashion design industry has over the years faced different challenges in coping with these trends. Digital transformation is one of the most significant of them. The definition of digital transformation varies according to what is peculiar to different authors; there may therefore be no single definition. A recent systematic review by Morakanyane et al. [9] collected and synthesised definitions offered by eleven authors. Across those eleven definitions, the issues central to digital transformation are digital technologies, processes, and change. We therefore define digital transformation as the necessitated changes in processes, whether in business organisations or societies, usually driven by technological advancements, primarily digital technologies. Digital transformation is an ongoing trend in industrial sectors such as manufacturing (food, wood, metal), construction, marketing, and creative industries such as textile and fashion design, especially as compelled by industrial revolution frameworks.

Although some existing reviews focus on the textile and fashion industry, none of them focuses specifically on the state of affairs in the Global South. In their review, Islam, Perry and Gill [7] mapped environmentally sustainable practices in the textile, apparel and fashion industries and found diversity and complexity in such practices across the industry. Moreover, their study reveals a paucity of studies from developing countries, making it timely to examine the trends, challenges and opportunities for the Global South.

Similarly, Conlon [3] conducted a systematic review to understand Product Lifecycle Management (PLM) - 'an enterprise-wide strategy gaining prominence across manufacturing' - practice in the textile and fashion industry. Conlon [3] found a limited holistic and theoretical view of PLM and a shortage of relevant industry skills to be the primary hindrances to PLM adoption and optimisation in the sector. It is therefore necessary to investigate the digital transformation activities of the textile and fashion industry, especially in the Global South, since Conlon [3] focused on the Global North. In another related review, Rahman [11] examined the application of digital technologies in the textile and fashion industry and found that implementing digital technologies such as 3D printing can help reduce production costs and increase profitability. Although the results are promising, the focus is global, and it is not clear what the situation is in the Global South.

A scoping review makes it possible to quickly map the significant concepts underlying a research area and the types and sources of available evidence, providing a foundation for conducting a comprehensive review later (Mays et al. [8], cited in Arksey & O'Malley [2]). Our goal is to examine the scholarly literature, describe the state of the art, and determine the trends, challenges and opportunities associated with the digital transformation of the textile and fashion industry in the Global South. Table 1 gives an overview of our research objectives and questions. The rest of the paper is organised as follows: Sect. 2 describes the study methodology, Sect. 3 presents the results, Sect. 4 discusses the findings, and Sect. 5 concludes.


Table 1. Research objectives and questions

1. Objective: Determine the digital transformation trends of the textile and fashion design industry in the Global South in the last three decades.
   RQ 1: What topics characterise the textile and fashion design industry research in the Global South in the last three decades, and in which countries are these conducted?
2. Objective: Investigate the challenges the industries faced within the study period.
   RQ 2: What documented challenges in the extant literature confront the textile and fashion design industry in the last three decades, and how are these classified?
3. Objective: Identify the methods used by researchers to conduct studies.
   RQ 3: What research methods have been used by the current scholarly works?
4. Objective: Identify the opportunities for digital transformation in the textile and fashion industry.
   RQ 4: What opportunities are there for the textile and fashion industry in the Global South to digitally transform, based on documented evidence from the existing literature?

2 Methods

We conducted a scoping review and followed the PRISMA-ScR guidelines. Our review goal aligns with Arksey and O'Malley's [2] four fundamental reasons for conducting a scoping review: investigate the range, extent, and nature of research activity; determine the need to undertake a comprehensive systematic review; summarise and disseminate the results; and determine the gaps in the extant literature.

2.1 Protocols and Registration

We drafted our protocol employing the PRISMA-ScR. The research team revised the protocol, and the final protocol was completed on 18 July 2022. We decided that our focus would be on the digital transformation of the textile industries of developing countries in the Global South. Our goal is to describe the state of the art of these industries and determine the topical trends, challenges, and opportunities associated with their digital transformation. As a result, we required that all included papers align with this goal.

2.2 Inclusion and Exclusion Criteria

Based on our broad study goal, we selected papers where the research was conducted in the context of developing countries in the Global South. We wanted to determine the roles of digital technologies, people, and processes in the digital transformation of the textile industries in this region. Further, we wanted to determine the methods employed by researchers to carry out the studies.


The rest of the eligibility criteria are as follows:

Inclusion Criteria
1. Written in English
2. Conference or journal articles
3. Published between 1992–2022
4. At least four pages
5. Peer-reviewed
6. Accessible online
7. Conducted in a developing country in the Global South

Exclusion Criteria
1. Book chapters, magazines, opinion papers, theses, blog posts, workshop and panel papers

2.3 Information Sources

Table 2 details the bibliographic databases and digital libraries in which we conducted our search. The search was conducted on 20 July 2022. One of the authors, who is well experienced in conducting systematic reviews, drafted the search strategies. We refined the search strategies further through team discussion. We exported the final search results to Mendeley, where one of the authors processed the bibliographic details of the studies.

Table 2. Details of the search results.

Database or source | Screening by titles and abstracts | Duplicates | Studies sought for retrieval | Included based on inclusion criteria | Final inclusion
ACM DL | 124 | 0 | 12 | 2 | 2
Elsevier | 7 | 0 | 5 | 4 | 4
IEEE Xplore | 13 | 0 | 10 | 5 | 5
Taylor & Francis | 302 | 0 | 7 | 1 | 1
Wiley Online | 212 | 0 | 8 | 1 | 1
African Journal Online | 11 | 0 | 2 | 1 | 1
Springer | 199 | 0 | 10 | 2 | 2
Total | 868 | 0 | 54 | 16 | 16
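The column totals in Table 2 can be cross-checked mechanically. The snippet below is a small sketch: the database names and per-database counts are transcribed from the table, and each column is summed to reproduce the Total row.

```python
# Per-database screening counts from Table 2, in column order:
# (titles/abstracts screened, duplicates, sought for retrieval,
#  met inclusion criteria, final inclusion)
screening = {
    "ACM DL":                 (124, 0, 12, 2, 2),
    "Elsevier":               (7,   0,  5, 4, 4),
    "IEEE Xplore":            (13,  0, 10, 5, 5),
    "Taylor & Francis":       (302, 0,  7, 1, 1),
    "Wiley Online":           (212, 0,  8, 1, 1),
    "African Journal Online": (11,  0,  2, 1, 1),
    "Springer":               (199, 0, 10, 2, 2),
}

# Sum each column across databases to reproduce the table's Total row.
totals = tuple(sum(col) for col in zip(*screening.values()))
print(totals)  # (868, 0, 54, 16, 16)
```

The recomputed totals agree with the table: 868 records screened, 54 sought for retrieval, and 16 studies in the final inclusion.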

Digital Transformation of the Textile and Fashion Design Industry

395

We used a classification1 to verify the regional setting of countries, that is, to separate those that belong to the Global North from those in the Global South. As a result, we ensured that only studies from developing countries in the Global South were included.

2.4 Search Strategy

All the researchers decided on the composition of the search strings together to increase consistency. We composed the search string: ('textile' OR 'fashion' design) AND ('digital transformation' OR digit* transform* OR digital technolog* OR industry 4.0 OR industry 5.0 OR Smart industry OR Zero Waste) AND (Global South OR Developing countr*) AND [Publication Date]: 01/01/1992 TO 31/07/2022.

2.5 Selection of Sources of Evidence

Each researcher was assigned digital libraries and databases to search, and the researchers screened the results together. Since each digital library and database has its peculiarities, we amended the screening where appropriate. Five researchers, working together in a workshop setting, agreed on the point at which the screening iteration should stop, and we documented the search process results. Figure 1 shows the search process.

2.6 Data Charting Process

A data-charting workshop was held in which all five researchers decided what data to extract from the included studies. We created a table in a Google Doc where we collated the variables to extract from the studies. We iterated the process a couple of times until all the researchers were satisfied with the data charting.
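For readers adapting the search strategy of Sect. 2.4 to other databases, the Boolean string can be assembled programmatically. The sketch below is illustrative only: the term lists mirror the string given in Sect. 2.4, but the helper name `or_group` is our own and not part of any database API.

```python
def or_group(terms):
    """Join search terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

domain = or_group(["'textile'", "'fashion' design"])
concepts = or_group([
    "'digital transformation'", "digit* transform*", "digital technolog*",
    "industry 4.0", "industry 5.0", "Smart industry", "Zero Waste",
])
region = or_group(["Global South", "Developing countr*"])

# Combine the three concept groups with AND, as in Sect. 2.4.
query = " AND ".join([domain, concepts, region])
print(query)
```

Keeping each concept in its own OR group makes it straightforward to add or drop synonyms per database while preserving the AND structure of the overall query.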

Fig. 1. The search processes.

1 The classification was found at https://meta.wikimedia.org/wiki/List_of_countries_by_regional_classification.


2.7 Data Items

We created a Google spreadsheet and extracted data on the characteristics of the studies (e.g., article/paper title, author keywords, year of publication, source of publication, study type, and country of study). We also extracted the contextual issues reported in each study (e.g., issues relating to digital transformation, technologies and roles, processes, challenges, and opportunities), the study methods, the sample size, and the significant findings.

2.8 Synthesis of Results

We collated the studies into behavioural and non-behavioural-based studies. We summarised each study design and analysed it based on emerging commonalities, for example, evidence of intervention, impact, challenges, and trends. As a result, we performed quantitative and qualitative analyses of the included studies. We compiled tables, figures and charts for the visual representation of quantitative data and added narratives regarding qualitative data. We mapped the emerging evidence to create a trajectory for future research.

2.9 Demographic Data

The final inclusion of studies comprised eight conference papers and eight journal articles. The publication count by year is shown in Fig. 2. The figure shows one published article in 2006, 2014, 2015, 2017 and 2020, respectively; two publications in 2011, 2018 and 2020, respectively; and five in 2021. The studies were conducted from 2006 to 2022, with five conducted in China, two in Taiwan, and one each in Pakistan, Egypt, Malaysia, Kazakhstan, South Africa, Saudi Arabia, Brazil, Colombia and Thailand. Figure 3 shows the results of nesting the keywords of the included papers; themes such as Industry 4.0, dyeing, decision, textile, and design dominate the authors' keywords. We also concatenated and mined the abstract texts of the 16 papers in the Voyant tool2 to determine what trends can be inferred from the studies.

Fig. 2. Publications count by year

Fig. 3. Keywords mapping

The results in Fig. 4 show that the six most frequently used words in the corpus are fashion (n = 32), textile (n = 24), industry (n = 22), study (n = 19), design (n = 14), and technology (n = 14).

2 https://voyant-tools.org/.
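The word-frequency analysis performed in Voyant amounts to tokenising the concatenated abstracts, dropping stop words, and counting. A minimal stand-alone equivalent looks like this; the two-sentence corpus below is a toy stand-in for the 16 real abstracts, and the stop-word list is illustrative.

```python
import re
from collections import Counter

# Toy stand-in for the concatenated abstracts of the 16 included papers.
corpus = ("The fashion and textile industry in the Global South is "
          "digitally transforming. This study examines how the textile "
          "and fashion design industry adopts technology.")

STOPWORDS = {"the", "and", "in", "is", "this", "how", "a", "of"}

# Lower-case, split into alphabetic tokens, and drop stop words.
tokens = [t for t in re.findall(r"[a-z]+", corpus.lower())
          if t not in STOPWORDS]
freq = Counter(tokens)
print(freq.most_common(3))  # [('fashion', 2), ('textile', 2), ('industry', 2)]
```

Run over the real abstracts, the same counting yields the frequencies reported for Fig. 4 (fashion, textile, industry, study, design, technology).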


Fig. 4. Abstracts trends corpus

3 Results

RQ 1. What topics characterise the textile and fashion design industry research in the Global South in the last three decades, and in which countries are these conducted?

First, we created a timeline of the events and topics dominating the research for the period of the selected studies. As seen in Fig. 5, the results indicate that many of the studies investigated Industry 4.0 compliance. Seven studies examined readiness, environmental sustainability, smart production, and IoT platform interoperability. One study, in particular, proposed an in-between framework called Industry 3.5, which its authors believe is more compatible with the Global South. As can be seen in Fig. 5, Industry 4.0 framework compliance has been consistently investigated from 2011 to date. Figure 6 aggregates the various firms and technologies used in the textile and fashion industry. The result can be classified into raw materials (for example, chemical fibre), processing (for example, embroidery, cutting, weaving, dyeing and finishing, and printing), and the final product (for example, clothing, children's garments). Next, we distributed the sixteen included studies across the countries on which the research focused. Table 3 shows that most of the studies came from China (n = 5), followed by Taiwan (n = 2). Other countries with fewer textile studies include Brazil, Pakistan and South Africa. Overall, the results in Table 3 reveal a paucity of research and suggest China is leading research on the textile and fashion industry. Data obtained from the studies signify a systematic shift in production driven by the industrial revolutions: textile production followed this trend, moving from the technology of the first industrial revolution to that of the fourth.

RQ 2. What documented challenges in the extant literature confront the textile and fashion design industry in the last three decades, and how are these classified?
Figure 7 shows the challenges delineated from the sixteen studies reviewed. Technologies have brought tremendous changes to textiles and fashion in the last three decades; nevertheless, industry-specific challenges arising from technology


Fig. 5. Timeline of topics dominating the study period

Fig. 6. Firm types and technologies in use

Table 3. Distribution of studies by countries

Countries | Frq | References
China | 5 | STD 3, STD 4, STD 7, STD 8, STD 11
Taiwan | 2 | STD 10, STD 13
Pakistan | 1 | STD 1
Egypt | 1 | STD 5
Malaysia | 1 | STD 6
Kazakhstan | 1 | STD 9
South Africa | 1 | STD 12
Saudi Arabia | 1 | STD 14
Brazil | 1 | STD 15
Colombia | 1 | STD 16
Thailand | 1 | STD 2

deployment are apparent. We classified the challenges into technology, raw materials, process, and Industry 4.0 transition. Three of the sixteen studies reviewed under the specified inclusion criteria reveal that the technological challenges hinge on dyeing and finishing and on 3D printing. Regarding 3D printing design, STD 7 reported the challenge of non-realistic user interactivity with 3D textile designs. Dyeing and finishing faced increasing demands for quality products (STD 1, STD 3), production of comfortable fabrics (STD 4), fabric colours (STD 2, STD 3), special requirements for new fibres (STD 3), development of unique designs (STD 3), awareness of the ecological and economic situation (STD 3), environmental pollution control (STD 3, STD 5), high efficiency and low energy demands (STD


Fig. 7. Mapping of challenges found in studies

3, STD 5, STD 11), optimal configuration of processes (STD 3), and solar irradiation for dye water management (STD 11). Raw materials presented two areas of difficulty: fashion trends and fashion size charts. Specifically, fashion trends face the challenges of demand and supply imbalance (STD 2), over-reliance on textile producers (STD 2), dependence on fashion trend books to inspire fabric designers (STD 2), and hindrances to the decision-making process (STD 2). Process challenges centre on batch dyeing and design drafts. Design drafts caused performance limitations in the existing digital design tools (STD 8). Sequence-dependent setups (STD 13), parallel machines (STD 13), arbitrary production sizes (STD 13), incompatible production families (STD 13), non-relational data storage (STD 13), and increasing job-order complexity and variation (STD 10) are other process-related challenges linked to batch dyeing. Industry 4.0 transition challenges include top management's lack of awareness, which necessitates organisational awareness creation (STD 3, STD 5, STD 9), and credibility and variations in customer awareness in the target market (STD 5). The other pressing challenges for the Industry 4.0 transition are the use of traditional and physical sales channels (STD 9), non-detailed product customisation (STD 9), limited ICT budgets (STD 9), lack of digital features in products/services (STD 9), lack of strategic transition implementation plans (STD 9), lack of machine-to-machine communication (STD 9, STD 13), firm-supplier


relationship modification (STD 2, STD 4, STD 5), lack of supplier buy-in (STD 5), and poor waste management systems (STD 3, STD 5, STD 6, STD 11).

RQ 3. What research methods have been used by the current scholarly works?

The current scholarly works use various research methods, as presented in Table 4. We classify the studies into behavioural and non-behavioural-based; most are non-behavioural-based, that is, they do not focus on people's behaviour in a given social context or under controlled observation. The results show that modelling (n = 8) is the primary method used. Modelling is a method used in the physical sciences; mathematical models are a typical example. The other methods used are the interview (n = 4), experiment (n = 3), survey (n = 3), case study (n = 3), and expert review (n = 1). As can be seen in Table 4, six of the studies (STD 1, STD 4, STD 5, STD 9, STD 14, and STD 15) used mixed methods, that is, more than one method, which explains why the total frequency of the methods used is 22. Sample sizes vary by study, and the results reveal that the unit of analysis, that is, individuals or companies, determines the sample size.

RQ 4. What opportunities are there for the textile and fashion industry in the Global South to digitally transform based on documented evidence from the existing literature?

Table 5 harmonises the textile and fashion industry indicators for transitioning to green operations. The green value chain indicators were retrieved from the 16 included studies. From Table 5, we can see that leadership/company innovativeness (n = 7) and water usage efficiency (n = 7) are the topmost indicators for creating a green value chain in the industry. It is well known that the value chain describes activities that are essential to enhancing a competitive edge.
Other indicators, according to their frequencies, are environmentally-friendly equipment/technologies (n = 6), sustainable product design (n = 5), government policy (n = 5), waste removal (n = 4), energy efficiency (n = 4), pollution reduction (n = 2), and renewable materials sourcing (n = 1). The findings reveal that achieving a "green" value chain will involve integrating environmental management strategies into all value chain activities, as presented in Table 5. Other opportunities revealed through the review of the extant literature for promoting the digital transformation of the textile and fashion industry in the Global South indicate that digital transformation can integrate information from separate systems. In particular, there is a need to structure dyeing and finishing (STD 1, STD 3); to use digital tools to enhance design user interactivity and to look into performance and malleability (STD 7); to review decision support systems for dyeing machine scheduling (STD 10); to develop country-specific fashion size charts (STD 12); and to put in place efficient dye wastewater treatment (STD 11).
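Ranking the green value chain indicators by how many of the 16 studies report them is a simple frequency sort. The sketch below encodes the counts from Table 5 in a `Counter` and retrieves the two topmost indicators named in the text:

```python
from collections import Counter

# Indicator frequencies across the 16 included studies (from Table 5).
indicators = Counter({
    "leadership/company innovativeness": 7,
    "water usage efficiency": 7,
    "environmentally-friendly equipment/technologies": 6,
    "sustainable product design": 5,
    "government policy": 5,
    "waste removal": 4,
    "energy efficiency": 4,
    "pollution reduction": 2,
    "renewable materials sourcing": 1,
})

# The two topmost indicators (n = 7 each), as reported in the text.
for name, n in indicators.most_common(2):
    print(f"{name}: {n}")
```

Storing the counts this way also makes cross-checks trivial, e.g. confirming that the nine indicators account for 41 indicator mentions in total.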


Table 4. Methods applied in studies and issues addressed

Study code | Sample size
STD 1 | 3 company case studies; 9 semi-structured interviews
STD 2 | CBIR with five well-known descriptors
STD 3 | NA
STD 4 | 2 innovation-platform case studies (16 semi-structured interviews)
STD 5 | Unspecified number of semi-structured interviews
STD 6 | 42 experts (31 professionals, 5 BI vendors, and 6 academic researchers)
STD 7 | NA
STD 8 | NA
STD 9 | 27 employees
STD 10 | None
STD 11 | None
STD 12 | 150 full-figured women with pear-shaped bodies
STD 13 | None
STD 14 | 54 students
STD 15 | 13 companies
STD 16 | 10 (comprising 4 experts and 6 managers/practitioners)

Method totals: experiment (3), modelling (8), interview (4), survey (3), case study (3), expert review (1).

Table 5. Green value chain indicators for the textile and fashion industry

Green value chain indicator | Number of studies (of 16)
Leadership/company innovativeness | 7
Water usage efficiency | 7
Environmentally-friendly equipment/technologies | 6
Sustainable product design | 5
Government policy | 5
Waste removal | 4
Energy efficiency | 4
Pollution reduction | 2
Renewable materials sourcing | 1


4 Discussion

4.1 Topical Issues and Trends in the Textile and Fashion Industry in the Global South

The textile and fashion industry is a significant contributor to the national economy of many developing countries (STD 1, STD 6, STD 9). Our findings show that China is leading the research on the textile and fashion industry; this is consistent with a systematic review by [7], which investigated environmentally sustainable practices in the textile, apparel and fashion industries and found that most of the research comes from China. However, facing the need for digital transformation, some textile and fashion industries in developing countries have declined and need upgrading (STD 3). On the one hand, the absence of digital transformation in the textile and fashion industry in the Global South is due to the lack of government policies to drive such transformation (STD 4); on the other hand, it is due to a lack of readiness to embrace green and smart production (STD 6, STD 9, STD 10). In some developing countries, the push to adopt sustainable and smart production stems from the need for a 'cleaner and unpolluted global environment' (STD 5), 'global competition for mass customisation to address dynamic customer demands' (STD 10), and energy efficiency (STD 11). Nevertheless, some developing countries have shifted their focus to sustainable development in their industrial sectors, especially textile and fashion design (STD 5). In particular, STD 10 found the global framework, Industry 4.0, not readily implementable in the Global South and proposed a mid-range framework called Industry 3.5. It is noteworthy that Industry 5.0 is already being implemented in the Global North. The Global South textile and fashion industry results reveal a dichotomy between the two regions and suggest developing frameworks and guidelines for standard practices in context.
STD 13 developed a 'Multi-subpopulation Genetic Algorithm with Heuristics Embedded' (MSGA-H) to reduce the makespan and thereby ease textile batch-dyeing scheduling bottlenecks. The study responded to the need to transition the textile and fashion industry to operating under the Industry 4.0 framework. In the same vein, STD 6 extended the Technology, Organisation and Environment (TOE) framework with business intelligence dimensions to facilitate the adoption of advanced technologies in the textile and fashion industry. Overall, our results reveal that although the transition to sustainable and smart production is an ongoing process in the textile and fashion industry in the Global South, there is a paucity of studies investigating this issue. Our finding is consistent with Islam, Perry, and Gill's [7] review, which revealed that research from developing countries is scarce even though the Global South leads in textile, apparel, and fashion production. Green value chain adoption is an opportunity for the textile and fashion industry to transition smoothly to Industry 4.0. Although one can argue that the Industry 5.0 framework has superseded the Industry 4.0 framework, there is a significant digital divide between the highly industrialised Global North and the developing Global South. Our results suggest that research focusing on the digital transformation of the textile and fashion industry started slowly in 2006 and has not grown to date. Industrial revolutions in the manufacturing and creative industries seem to be the main


focus for researchers, although other issues such as decision-making, leadership, and strategic management were also in focus.

4.2 Challenges Confronting the Textile and Fashion Industry in the Global South

There are particular challenges facing the industry, and our findings reveal that some of the reviewed studies focused on proffering solutions to the problems stated in Sect. 4.1. For example, STD 11 proposed 40CdF2-60BaF2-1.0Er2O3, an upconversion luminescence agent enabling efficient use of solar irradiation or interior lighting. An upconversion luminescence agent is a technological approach for treating dye wastewaters using solar energy, particularly in developing countries' textile industries. Similarly, STD 15 examined the comparative competitive priorities of large retailers' slow and fast fashion retail operations in Brazil and found that price and quality are the most relevant priorities in slow fashion, whereas customer relationships and flexibility are the significant priorities in fast fashion. The study therefore proposed procedures for handling the two strategies simultaneously: slow fashion companies should reduce costs and improve quality by implementing online process controls, while fast fashion companies should lower lot sizes, increase the variety of blends and assortments, and bolster customer ties. It is noteworthy that fast fashion consumes a significant amount of water and energy because its processes involve extracting raw materials, manufacturing fibres, dyeing, weaving and washing, fibre burning and recycling, and waste extraction from clothes [6].

Other peculiar challenges were also addressed. STD 2 used the Content-Based Image Retrieval (CBIR) method to inspire the creation of fashion trends. This challenge is unsurprising, considering the varying levels of modernisation across the globe; other factors, such as cultural diversity and personal characteristics, play pivotal roles in fashion choices and decisions. STD 7 demonstrated 'FlexTruss, a design and construction pipeline based on the assembly of modularised truss-like objects fabricated with conventional 3D printers', whose workflow guides the assembly of printed truss modules by threading. STD 8 examined the 'unaligned translation problem between design drafts and real fashion items' and proposed a 'design draft to real fashion item translation network (D2RNet)' capable of generating 'realistic garments with both texture and shape consistency to their design drafts'. STD 12 developed a statistical model of essential body dimensions (bust, waist, and hip) to define a size chart for producing ready-to-wear apparel for curvy, pear-shaped South African women, given how crucial anthropometric body measurement is for the textile and fashion industry. Fashion size charts for full-figured, pear-shaped South African women had been lacking, as different populations have peculiar figure prominence; hence, countries develop size charts specific to their average figures, just as it is popularly projected that "there is no perfect figure". STD 14 explored the use of augmented reality technology for training university students to acquire fashion design skills. STD 16 examined the prospects of Internet of Things (IoT) platforms for implementing the Industry 4.0 framework.
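To make concrete the batch-dyeing scheduling objective that STD 13's MSGA-H minimises (Sect. 4.1): the makespan of a schedule on parallel machines with sequence-dependent setups can be evaluated as below. This is only an illustration; the job data, setup times, and the function name `makespan` are invented for the example, and the genetic algorithm itself would search over candidate schedules scored this way.

```python
def makespan(schedule, proc, setup):
    """Return the makespan of a parallel-machine schedule.

    schedule: {machine: [job, ...]} in processing order
    proc:     {job: processing time}
    setup:    {(prev_job, job): sequence-dependent changeover time}
    """
    finish_times = []
    for jobs in schedule.values():
        t, prev = 0, None
        for job in jobs:
            if prev is not None:
                t += setup.get((prev, job), 0)  # e.g. colour changeover
            t += proc[job]
            prev = job
        finish_times.append(t)
    return max(finish_times, default=0)

# Hypothetical dyeing jobs on two machines.
proc = {"J1": 4, "J2": 3, "J3": 5, "J4": 2}
setup = {("J1", "J3"): 1, ("J2", "J4"): 2}
schedule = {"M1": ["J1", "J3"], "M2": ["J2", "J4"]}
print(makespan(schedule, proc, setup))  # 10  (M1: 4+1+5; M2: 3+2+2)
```

A metaheuristic such as MSGA-H would evolve the assignment and ordering of jobs to machines, using a fitness function of this kind, with constraints for incompatible production families added on top.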


4.3 Research Methods Used in the Current Scholarly Works The research methods used in the current scholarly works on the digital transformation of the textile and fashion design industry are either single methods or mixed methods. The methods range from experimentation, modelling, interview, and survey to case studies. Our findings show that Modelling is the mostly used method, followed by the interview method. About half of our studies are non-behavioural inquiry-based, thus, explaining the use of modelling technique. In a related study, Rahman [11] also applied modelling in his study which sought to provide models of different applications of digital printing, 3D printing, artificial intelligence, and radio frequency identification (RFID) in textile manufacturing. Experiments, surveys, case studies, and expert reviews are the other used methods. In other recent review studies such as [7] and [3], the case study was the most used method. Our findings also show that mixed methods can also be used as one of the approaches for the digital transformation of the textile and fashion design industry. STD 1, STD 4, STD 5, and STD 9 used a combination of methods. This is consistent with Gazzola et al. [5] that also used mixed methods. Overall, we can infer that the nature of the problem of investigation determines the type of method to use. Our study goal was broad whereas other studies streamlined their focus. Conlon’s [3] focus was PLM practice in the textile, apparel, and fashion industries. Islam, Perry, and Gill [7] focused on environmentally sustainable practices in the industry. 4.4 Digital Transformation Opportunities for the Textile and Fashion Industry in the Global South Green value chain adoption is a huge opportunity for the textile and fashion industry to digitally transform their operations, as revealed in our findings. One of the topmost activities for companies to benefit from these opportunities is innovativeness. 
Environmentally sustainable equipment and technologies are another driver for boosting the green value chain in the industry. One recent review study, Rahman [11], reported increasing demand for digital technologies such as 3D printing in the textile and fashion industry to strengthen supply chains and promote customer satisfaction. However, the same study revealed that research offering an in-depth assessment of the application of digital technologies in the industry, especially textile manufacturing and fashion retailing, is scarce. It is well known that the textile and fashion industry is a significant contributor to pollution. Nevertheless, the production of sustainable fibres such as bio-technology fabrics is a great opportunity for sustainable production [12]. Adaptation of industrial revolution frameworks, such as the Industry 3.5 framework proposed by STD 10, is a huge opportunity for the industry's digital transformation processes. It has also been proposed that the industry can harness the power of digital technologies to build an ecosystem facilitating cross-pollination and symbiotic relationships among its members [6, 12]. Such an ecosystem can help coordinate logistics, processes, production, the supply chain, and the value chain for maximum and sustainable outputs and mutual benefits.


A. A. Ogunyemi et al.

4.5 Limitations, Implications, and Future Research

This study is a scoping review. We might have missed other relevant studies because we restricted the search to the databases selected, to studies written in English, and to studies accessible online, among other inclusion criteria, while setting research questions with broad parameters. From the studies conducted so far, it is evident from the demographic data presented in Sect. 2.9 that Egypt and South Africa are the only African countries reported in the literature on the digital transformation of the textile and fashion industry, which is consistent with the review by Islam, Perry, and Gill [7], which included only three studies from Africa. The implication of this paucity of African studies is that researchers need to conduct more research on the pressing issues in the textile and fashion industry, such as environmentally sustainable practices, the roles of digital technologies in the digital transformation of the industry, and product lifecycle management adoption, among other topics. In-depth studies on what may have led to the paucity of literature on the subject in African countries, especially Nigeria, are also needed. Second, practitioners in the textile industry in Nigeria and other countries in the Global South should be sensitised and motivated toward the adoption of digital transformation of the textile industry. Governments at all levels (federal, state, and local) should endeavour to formulate, evaluate, and monitor enabling policies to encourage this adoption. For future studies, a more in-depth systematic review can be carried out, especially one examining the state of the digital transformation of the textile and fashion industries in the Global North and the Global South and the dichotomy between these regions: the challenges, lessons learnt, and opportunities.

5 Conclusions

In this paper, we have considered the review papers on digital transformation, which has impacted the industrial sector, especially the creative industries of textile and fashion design. Our study highlighted a total lack of focus on the Global South in the existing literature. Therefore, we carried out a scoping literature review to map the essential concepts and provide a basis for a comprehensive review in the near future. Our findings are: Industry 4.0 dominated the studies reviewed; technology, raw materials, process, and the Industry 4.0 transition are the documented significant challenges confronting the textile and fashion design industry; and half of the studies are non-behavioural and used the modelling research method. Leadership/company innovativeness and water usage efficiency provide the most significant opportunities for the digital transformation of the textile and fashion industry in the Global South. There is a paucity of studies on the digital transformation of the textile and fashion industry in developing countries. Therefore, we have examined digital transformation trends, challenges, and opportunities in the Global South and shown that there are ample opportunities for the digital transformation of the textile and fashion industry in that region. An interesting line of future research is a systematic review of the Global North and South on the subject matter.


Acknowledgment. The authors thank the Africa Centre of Excellence, OAU ICT Knowledge Driven Park (ACE-OAKPARK), for support.

Funding. This study is part of the digital textile ecosystem development project for the Nigerian textile industry funded by The Nigerian Tertiary Education Trust Fund (TETFUND) 2020 (NRF/CC/EWC00049).

Appendix 1

See Table 6.

Table 6. Overview of the studies

STD 1
Issues addressed: Decision-making processes and strategy formulation
Major outcomes/key findings: The function of leadership and strategy diminished. Five internal factors contributed to the failure: 1) "Seth" management styles and policies; 2) the firm's culture, politics, and internal disputes; 3) hostile human resource policies and an unfriendly working environment; 4) poor financial portfolio management; and 5) operational matters connected to technology, operations, and marketing

STD 2
Issues addressed: Creating fashion trends
Major outcomes/key findings: Tamura texture's precision and recall outperform the other descriptors. Different fashion styles affect the descriptor's precision and recall efficiency

STD 3
Issues addressed: Role of dyeing and finishing technology
Major outcomes/key findings: The optimal dyeing and finishing process design is fundamental for textile plants. The various quality provisions of the plant can set the input and output of the dyeing and finishing technology

STD 4
Issues addressed: Industrial cluster
Major outcomes/key findings: First, population elements are essential factors affecting the governance mode of the innovation medium. Second, two of four technological regime proportions affect the innovation medium's strategic position. An innovation medium facing more technology prospects and appropriability is more likely to be innovation system oriented rather than production system oriented. Third, the governance mode contributes to the strategic position of the innovation medium. Government-dominated innovation media tend to have general-oriented strategic positions and equally stress production and innovation systems

STD 5
Issues addressed: Environmental sustainability and green readiness
Major outcomes/key findings: The demand for green supply chains highlights the necessity of approximating green innovations in materials and processes, and it places additional problems in shifting green, since determining which innovations are more valuable than others and which will become obsolete suffers the same ambiguity faced by companies concerned with high innovation levels. Developing countries will only use these innovations if they have worked elsewhere. Also, each region requires funding for further development and implementation of these innovations to fit, especially in an affordable manner

STD 6
Issues addressed: Industry 4.0 compliance
Major outcomes/key findings: Sustainability (E2), leadership management and support (O1), technology maturity (T1), compatibility (T2), and users' traits (I1) are more significant, with the highest (R-D) values, which shows these determinants have a more significant impact on the whole model than the other determinants

STD 7
Issues addressed: Real-time feedback and design user-based modification for 3D fashion design
Major outcomes/key findings: Application of a computational design tool to use cases in fashion design shows that FlexTruss is a user-friendly and usable design in terms of feedback and design plasticity

STD 8
Issues addressed: Misalignment between design draft and real fashion
Major outcomes/key findings: A novel D2RNet translates design drafts to authentic fashion items and shows promising performance in both shape preservation with respect to the original design drafts and creation of naturalistic texture details; a novel R2DNet solves the inverse task

STD 9
Issues addressed: Industry 4.0 readiness
Major outcomes/key findings: The textile industry is a newcomer to Industry 4.0 implementation, due to deficient digital elements in products and/or services, lacking machine-to-machine communication, the absence of suitable strategies toward Industry 4.0 and its implementation plan, and poor ICT systems budget management

STD 10
Issues addressed: Decision support system for smart production
Major outcomes/key findings: The study produced a decision support system for dyeing machine scheduling to integrate production information among individual systems and provide dyeing machine scheduling for maximising utilisation

STD 11
Issues addressed: Treatment of dye waste water
Major outcomes/key findings: The upconversion luminescence agent can emit five upconversion fluorescent peaks below 387 nm under 488 nm visible light excitation

STD 12
Issues addressed: There is a mismatch between apparel available in the market and existing body types for triangular female body-shaped Africans. No South African anthropometric database exists
Major outcomes/key findings: The two models developed are reliable for three key rim body dimensions (bust, waist, and hips) within the size chart for the full-figured, pear-shaped South African woman. The models confirm that the bust measurement is necessary for determining waist and hip body size measurements. Further, the bust measurement may be used by consumers to predict apparel sizes. A size chart based on the standard "ideal" body shape cannot provide the full-figured, pear-shaped South African woman with the right fit of apparel

STD 13
Issues addressed: The problem of textile batch dyeing scheduling arises due to the short product lifecycle and demand for smart fabric production
Major outcomes/key findings: The proposed MSGA-H can solve the parallel batch processing machine scheduling problem efficiently and effectively. It is also robust

STD 14
Issues addressed: How to use augmented reality technology to develop fashion design skills
Major outcomes/key findings: There was a statistically significant difference between the performance of the two groups in favour of the experimental group that studied through augmented reality technology

STD 15
Issues addressed: Fashion marketability
Major outcomes/key findings: In slow fashion operations, price and quality are the two outstanding competitive criteria, whereas customer relationship management and flexibility are the two outstanding competitive criteria in fast fashion operations. Competitive preferences may impact performance. Financial performance might focus on cost, flexibility, and quality; prioritising delivery and quality should be the focus for non-financial performance

STD 16
Issues addressed: Identifying the leading IoT platforms presently used by companies and verifying their versatility and field of application
Major outcomes/key findings: Industry 4.0 implementation demands joint public sector action led by the government, businesses, and the university. Only 23% of the companies surveyed do not use any platform to explore company data, although they are aware of the Industry 4.0 concept, while 47% of the companies confirmed utilising two or more platforms simultaneously in various business areas


Appendix 2

See Table 7.

Table 7. References of the included studies

STD 1: Qureshi JA, Shaikh AM, Seaman C (2021) Leadership mindset and the fall of once giant family-run textile exporting businesses. Glob Bus Organ Excell 40:41–55. https://doi.org/10.1002/joe.22129

STD 2: Kawinakrathiti K, Phimoltares S (2014) A comparative study of CBIR descriptors on innovative application of fashion image. In: 2014 4th International Conference on Digital Information and Communication Technology and Its Applications, DICTAP 2014. IEEE, pp 164–168

STD 3: Miao Z, Lin W (2016) Based on goal programming model of optimization study of dyeing and finishing technology. In: Proceedings – 2015 International Conference on Intelligent Transportation, Big Data and Smart City, ICITBS 2015, pp 148–151

STD 4: Gong L, Jiang S (2011) Does one size fit all? Explaining the governance mode and strategic position of cluster innovation platform: A comparative case study of Zhili children's garment cluster and Shaoxing textile cluster. In: PICMET: Portland International Center for Management of Engineering and Technology, Proceedings. IEEE, pp 1–15

STD 5: Ibrahim SE, Ahmed KH (2011) Key drivers for sustainable operations in developing countries: A textile case study from Egypt. In: PICMET: Portland International Center for Management of Engineering and Technology, Proceedings. IEEE, pp 1–8

STD 6: Ahmad S, Miskon S, Alabdan R, Tlili I (2021) Statistical Assessment of Business Intelligence System Adoption Model for Sustainable Textile and Apparel Industry. IEEE Access 9:106560–106574. https://doi.org/10.1109/ACCESS.2021.3100410

STD 7: Sun L, Li J, Luo D, et al. (2021) Fashion Design with FlexTruss Approach. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, pp 1–4

STD 8: Han Y, Yang S, Wang W, Liu J (2020) From Design Draft to Real Attire: Unaligned Fashion Image Translation. In: MM 2020 – Proceedings of the 28th ACM International Conference on Multimedia, pp 1533–1541

STD 9: Dikhanbayeva D, Aitzhanova M, Shehab E, Turkyilmaz A (2022) Analysis of Textile Manufacturing SMEs in Kazakhstan for Industry 4.0. Procedia CIRP 107:888–893. https://doi.org/10.1016/j.procir.2022.05.080

STD 10: Ku C-C, Chien C-F, Ma K-T (2020) Digital transformation to empower smart production for Industry 3.5 and an empirical study for textile dyeing. Comput Ind Eng 142:1–11. https://doi.org/10.1016/j.cie.2020.106297

STD 11: Wang J, Zhang G, Zhang Z, et al. (2006) Investigation on photocatalytic degradation of ethyl violet dyestuff using visible light in the presence of ordinary rutile TiO2 catalyst doped with upconversion luminescence agent. Water Res 40:2143–2150. https://doi.org/10.1016/j.watres.2006.04.009

STD 12: Afolayan OO, Zwane PE, Mason AM (2021) Statistical modelling of key body dimensions in developing the size chart for the South African pear-shaped women. J Consum Sci 49:52–63

STD 13: Huynh N-T, Chien C-F (2018) A hybrid multi-subpopulation genetic algorithm for textile batch dyeing scheduling and an empirical study. Comput Ind Eng 125:615–627. https://doi.org/10.1016/j.cie.2018.01.005

STD 14: Elfeky AIM, Elbyaly MYH (2021) Developing skills of fashion design by augmented reality technology in higher education. Interact Learn Environ 29:17–32. https://doi.org/10.1080/10494820.2018.1558259

STD 15: Sellitto MA, Valladares DRF, Pastore E, Alfieri A (2022) Comparing Competitive Priorities of Slow Fashion and Fast Fashion Operations of Large Retailers in an Emerging Economy. Glob J Flex Syst Manag 23:1–19. https://doi.org/10.1007/s40171-021-00284-8

STD 16: Sinisterra KVB, Mejía SM, Molano JIR (2017) Industry 4.0 and Its Development in Colombian Industry. In: 4th Workshop on Engineering Applications, WEA 2017, Cartagena, Colombia, September 27–29, 2017, Proceedings, pp 312–323

References

1. Ahmad, S., Miskon, S., Alabdan, R., Tlili, I.: Statistical assessment of business intelligence system adoption model for sustainable textile and apparel industry. IEEE Access 9, 106560–106574 (2021). https://doi.org/10.1109/ACCESS.2021.3100410
2. Arksey, H., O'Malley, L.: Scoping studies: towards a methodological framework. Int. J. Soc. Res. Methodol. 8(1), 19–32 (2005). https://doi.org/10.1080/1364557032000119616
3. Conlon, J.: From PLM 1.0 to PLM 2.0: the evolving role of product lifecycle management (PLM) in the textile and apparel industries. J. Fash. Mark. Manag. 24(4), 533–553 (2020). https://doi.org/10.1108/JFMM-12-2017-0143
4. Dikhanbayeva, D., Aitzhanova, M., Shehab, E., Turkyilmaz, A.: Analysis of textile manufacturing SMEs in Kazakhstan for industry 4.0. Procedia CIRP 107, 888–893 (2022). https://doi.org/10.1016/j.procir.2022.05.080
5. Gazzola, P., Pavione, E., Pezzetti, R., Grechi, D.: Trends in the fashion industry. The perception of sustainability and circular economy: a gender/generation quantitative approach. Sustainability 12(7), 1–19 (2020). https://doi.org/10.3390/su12072809
6. Happonen, A., Ghoreishi, M.: A mapping study of the current literature on digitalization and industry 4.0 technologies utilization for sustainability and circular economy in textile industries. In: Yang, X.-S., Sherratt, S., Dey, N., Joshi, A. (eds.) Proceedings of Sixth International Congress on Information and Communication Technology. LNNS, vol. 217, pp. 697–711. Springer, Singapore (2022). https://doi.org/10.1007/978-981-16-2102-4_63


7. Islam, M.M., Perry, P., Gill, S.: Mapping environmentally sustainable practices in textiles, apparel and fashion industries: a systematic literature review. J. Fash. Mark. Manag. 25(2), 331–353 (2021). https://doi.org/10.1108/JFMM-07-2020-0130
8. Mays, N., Roberts, E., Popay, J.: Synthesising research evidence. In: Fulop, N., et al. (eds.) Methods for Studying the Delivery and Organisation of Health Services. Routledge, London (2001)
9. Morakanyane, R., Grace, A., O'Reilly, P.: Conceptualizing digital transformation in business organizations: a systematic review of the literature. In: 30th Bled eConference: Digital Transformation – From Connecting Things to Transforming our Lives, BLED 2017, pp. 427–444 (2017). https://doi.org/10.18690/978-961-286-043-1.30
10. Qureshi, J.A., Shaikh, A.M., Seaman, C.: Leadership mindset and the fall of once giant family-run textile exporting businesses. Glob. Bus. Organ. Excell. 40(6), 41–55 (2021). https://doi.org/10.1002/joe.22129
11. Rahman, M.: Applications of the digital technologies in textile and fashion manufacturing industry. Tech. Rom. J. Appl. Sci. 3(1), 114–127 (2021)
12. Teunissen, J., Bertola, P.: Fashion 4.0. Innovating fashion industry through digital transformation. Res. J. Text. Appar. 22(4), 352–369 (2018)

Electrical Big Data's Stream Management for Efficient Energy Control

Jean Gane Sarr(B), Ndiouma Bame, and Aliou Boly

Cheikh Anta Diop University, Dakar Fann, BP 5005, Senegal
{jeangane.sarr,ndiouma.bame,aliou.boly}@ucad.edu.sn

Abstract. Energy is crucial for any activity of a country. Electrical organizations therefore need to analyze the data generated as streams by network equipment to support decision making. These data are voluminous and fast, which makes it impossible to process or store them with conventional methods. Hence the need for a tool to exploit these electrical data, both from users' consumption and from the network. In this paper, we propose a tool that summarizes data streams using data cube structures and supports the decisions required by both the customer and the supplier. The tool is composed of a data stream summary model and two algorithms used to create, load, and update the data cubes. The proposal was implemented with Big Data tools that allow the summary to scale up. To demonstrate its effectiveness, a detailed experimental evaluation over a real electrical data stream is presented.

Keywords: energy data · big data · data cube · electrical data · data stream · summary

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 414–429, 2023. https://doi.org/10.1007/978-3-031-34896-9_25

1 Introduction

The need to manage electrical energy in developing countries constitutes a real challenge. Indeed, all other sectors, such as health, education, and telecommunications, crucially depend on electricity. With the advent of smart meters, electricity consumption can be measured remotely by electricity suppliers such as EDF in France, ENEL in Italy, or SENELEC in Senegal, with a configurable period between measurements (seconds, minutes, hours, ...). This generates a continuous stream of electrical data, arriving at high velocity and volume. Sensors can thus be connected to the supplier's information system (IS) to allow billing, control, and aggregation of load curves, among other things. The analysis of customers' meter readings can prove to be of great use both for the supplier and for the customer. The customer could use this analysis, for example, to detect the causes of high power consumption (a damaged freezer, a high number of household appliances, ...). The supplier could use these data, for example, to profile consumption and propose prices adapted to each customer, to anticipate electrical needs, to react to demand in the event of a critical situation, or to supervise its network, assess its overall efficiency, and issue alerts, among other things. In addition, the recorded measures allow analysis of the load curves corresponding to the evolution of power consumption over time intervals ranging from one second to many days for a well-defined period. A communicating meter can thus transmit, during its operation, a record (tuple) at a customizable time interval. This tuple can contain the meter number, the reading time, the date, voltages, intensities, active power, and other energy attributes. A stream of consumption data is therefore composed of events coming from several pieces of equipment. Data streams are often described by multiple qualitative and quantitative attributes from different objects. These data can be stored in data structures called cubes [1], where the qualitative objects can be viewed as dimensions and the quantitative objects as measures. They are also unloaded from the system if no retention mechanism exists that would allow querying them in the future. In this sense, this paper presents a summary technique that allows electrical data streams to be analyzed with Online Analytical Processing (OLAP) [2] techniques, using NoSQL [3] tools that make it possible to deal with data stream constraints such as storage and processing and to scale up. In the remainder of this article, Sect. 2 presents a study of some propositions for the processing of electrical data streams. Then, in Sect. 3, the proposed summary and the architecture are described. The data stream used and its modelling, as well as some use cases, are presented in Sect. 4. The results obtained are given in Sect. 5. Finally, in Sect. 6, an assessment and future directions are discussed.
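As an illustration of the meter tuple described above, such a record could be modelled as a small value type; the field names and types here are our own assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MeterReading:
    """One tuple emitted by a communicating meter (illustrative fields)."""
    meter_id: str        # meter number
    timestamp: datetime  # reading date and time
    voltage: float       # volts
    intensity: float     # amperes
    active_power: float  # watts

# A consumption data stream is then a sequence of such events
# produced by several meters:
reading = MeterReading("M-001", datetime(2022, 12, 5, 10, 30), 230.0, 4.3, 989.0)
```

Qualitative fields (meter, time) would play the role of dimensions and quantitative fields (voltage, intensity, power) the role of measures in the cube structures discussed next.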

2 State of the Art

Many relevant works on the processing of electrical data streams have been carried out [4–10]. The authors of [4] presented an experiment performed with two public Data Stream Management Systems (DSMS), STREAM and Telegraph [6], based on queries, with the aim of proposing a new distributed data stream management system. This work allows exploring the usage of these frameworks but does not propose a summary. In [7], the authors proposed an approach for real-time analysis of the energy consumption of manufacturing equipment based only on electrical energy data streams. This approach followed the paradigm of event-driven systems and used complex event processing methods; however, it did not retain data. In [8], event stream processing techniques are applied to automate the monitoring and analysis of energy consumption in manufacturing systems, and methods to reduce usage according to the specific patterns discerned are discussed. In these works, however, the processing is performed only on the fly: there is no mechanism that retains any part of the observed data stream, which is why the data are unloaded after analysis. Another proposal for the processing of electrical data streams is PQStream [9], a data stream processing system that determines the quality of electrical energy. This system was designed for the processing and management of electrical data streams from the Turkish electricity transmission system. However, its output is stored in a relational database that does not allow multidimensional analysis or scaling. In [10], the authors presented an approach related to Big Data and the Internet of Things involving models that learn gradually. They used a strategy that consists of storing the first events of the stream and applying a set of learning techniques on them, making the streaming algorithms ready to handle incoming events. In contrast, this proposition does not allow querying the unloaded data, because of the lack of a data storage structure. These works constitute real advances in the processing of electrical data streams. Nevertheless, they do not address their multidimensional aspect, nor do they propose summary techniques that preserve certain relevant data from these streams for future exploitation. Indeed, with data streams, the data are unloaded from the system after each expiration of the time windows, yet the end user may want to query past data in order to make decisions; with these systems, such queries cannot be answered. In addition, to satisfy the storage and processing constraints related to data streams, the system must be able to evolve, which is not the case with these proposals. Thus, this paper presents a summary of multidimensional data streams, implemented on electrical data with Big Data tools, to deal with scalability and cope with storage and processing limits for future analysis.

3 Summary Model and Architecture

This section describes the proposition, which consists of multi-level cascading cubes with two algorithms to create, load, and update these cubes. A cube is a data structure that involves axes of analysis such as equipment, time, or geographic area, known as dimensions, used to analyze insights called measures, such as power, voltage, and intensity, gathered into a fact table [1]. A cube can be modelled as a star schema (where dimensions have no links) or as a snowflake schema (where dimensions may be linked) [1]. For the implementation of this model, a complete architecture based on a Big Data approach is proposed.

3.1 A Cascading Cube Model

The data streams are often characterized by several dimensions and facts. Thus, it becomes necessary to model them in a multidimensional way for OLAP applications. In this sense, a multidimensional data stream summary technique is proposed. These summaries consist of cascading data cubes with algorithms to create and update the different cubes.

The Summary Definition. The proposed summary is composed of different sub-summaries, each corresponding to a cube. Let S be the set of summaries presented in Fig. 1 and Tw a tilted-time window [11] (where the most recent data are registered at the finest granularity and more distant data are registered at a coarser granularity). We also define the set C of cubes, arranged so as to form a cascade. In other words, the summary is composed of cubes having different levels of data aggregation following the time granularity. Therefore, we have:

– Tw = {Tw1, Tw2, ..., Twn}, where each Twi in this time dimension corresponds to a time window and Twi < Twi+1.
– C = {C1, C2, ..., Cn}, where each Ci (1 ≤ i ≤ n) corresponds to a summary (cube of the cascade) computed after the time window Twi. In the set C, Ci and Ci+1 are disjoint, and each Ci+1 is the result of the update function (presented in the following section) applied to Ci.
– The global summary is then defined as the union of the cubes Ci calculated during the time windows Twi: S = ∪ {(Ci, Twi)}.

The expiration of a window triggers the aggregation and propagation algorithm of the model. The different sizes of these time windows thus serve as timers to aggregate and propagate data from one cube level to another. The cubes of this model differ in how long they retain their data before aggregating and propagating it to the cube with the next higher temporal value, as well as in the aggregation level of their data. The temporal granularity is therefore not the same at all levels of the cascade, and it can be customized with minimum (gTMin, minimal time window parameter) and maximum (gTMax, maximal time window parameter) limits, presented in Fig. 1. The number of cubes in the cascade is defined by the gTMax parameter. For example, when gTMin = minute and gTMax = day, the time levels are minute, hour, and day, so the cascade contains 3 cubes.

Fig. 1. Multi-level cascade of cubes.
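The derivation of the cascade's levels from gTMin and gTMax can be sketched as follows; this is our own illustrative reading of the parameters, assuming a fixed ordering of time granularities, not the authors' code.

```python
# Ordered time granularities assumed for the illustration.
GRANULARITIES = ["second", "minute", "hour", "day", "week", "month"]

def cascade_levels(gt_min: str, gt_max: str) -> list[str]:
    """Return the time levels of the cascade, one per cube."""
    lo = GRANULARITIES.index(gt_min)
    hi = GRANULARITIES.index(gt_max)
    return GRANULARITIES[lo:hi + 1]

# gTMin = minute, gTMax = day -> 3 cubes, as in the text's example.
print(cascade_levels("minute", "day"))  # ['minute', 'hour', 'day']
```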

The Algorithms of the Model. The initialization and updating of the different cubes of the cascade are performed by two algorithms. The first algorithm, presented in Fig. 2, creates and loads the data cubes. The second is the update algorithm, described in Fig. 3, used to propagate the aggregated data between two successive cubes of the cascade. Both algorithms are customizable so that the user's preferences can be taken into account; they are presented as follows.


Initialization Algorithm. This algorithm is responsible for the creation of each data cube of the model and is presented in Fig. 2. For the first level of the cascade, the function verifies whether the table corresponding to the data cube already exists; if not, it creates the cube's table, where column families correspond to each dimension and to the fact(s) [1] defined from the received data structure, before loading data. After creating this data cube, the function injects data into the corresponding table.

Fig. 2. Initialization of summaries.
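The create-if-missing-then-load behaviour of the initialization algorithm can be sketched in a few lines. This is a hedged, in-memory approximation under our own assumptions: a "cube table" is represented as a dict whose keys are column families (one per dimension plus one per fact), not the authors' actual NoSQL implementation.

```python
def init_cube(catalog: dict, cube_name: str, dimensions: list, facts: list) -> dict:
    """Create the cube's table if it does not exist, then return it."""
    if cube_name not in catalog:
        # One column family per dimension and per fact, as in the text.
        catalog[cube_name] = {cf: [] for cf in dimensions + facts}
    return catalog[cube_name]

def load(cube: dict, event: dict) -> None:
    """Inject one event's values into the matching column families."""
    for cf, value in event.items():
        cube[cf].append(value)

catalog = {}
c1 = init_cube(catalog, "C1", ["equipment", "time"], ["power"])
load(c1, {"equipment": "M-001", "time": "10:30", "power": 989.0})
```

Calling `init_cube` again for "C1" would find the table already present and simply return it, mirroring the existence check described above.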

The Update Algorithm. The data cube update task is performed by this function, also referred to here as f_update and presented in Fig. 3. It is executed after each time window expiration to update the different cubes, and works in several steps. First, it discards the data contained in cube Ci+1. Second, it generates a new data stream from the data contained in cube Ci. Next, it runs a continuous query over this new stream to aggregate these data according to the new time window granularity. Fourth, it injects the results into the table of cube Ci+1. The update function applies these steps between all the cubes of the cascade in a descending way. Assuming three cubes Ci, Ci+1, and Ci+2, the process can be defined as follows: discard data from Ci+2; Ci+2 = f_update(Ci+1); discard data from Ci+1; Ci+1 = f_update(Ci); discard data from Ci; load new data into Ci. Discarding the data of the last cube keeps the size of the system bounded.

Fig. 3. Summaries (cubes) updating process.
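The descending propagation above can be sketched as follows. This is a simplified, in-memory illustration under our own assumptions: each cube is a mapping from a time key to a list of measures, and the aggregation (mean over "HH:MM" keys rolled up to the hour "HH") is illustrative only, not the authors' continuous query.

```python
from statistics import mean

def f_update(ci: dict) -> dict:
    """Aggregate Ci's contents at the next coarser time granularity."""
    coarser = {}
    for key, values in ci.items():
        parent = key.rsplit(":", 1)[0]   # e.g. minute key "10:30" -> hour "10"
        coarser.setdefault(parent, []).extend(values)
    return {k: [mean(v)] for k, v in coarser.items()}

def propagate(c1: dict, c2: dict, c3: dict) -> None:
    """Descending update over a 3-cube cascade, as described in the text."""
    c3.clear(); c3.update(f_update(c2))  # discard Ci+2, Ci+2 = f_update(Ci+1)
    c2.clear(); c2.update(f_update(c1))  # discard Ci+1, Ci+1 = f_update(Ci)
    c1.clear()                           # discard Ci, ready for new data

c1 = {"10:15": [980.0], "10:45": [1000.0]}
c2, c3 = {}, {}
propagate(c1, c2, c3)
print(c2)  # {'10': [990.0]}
```

Clearing the finest cube last mirrors the paper's point that discarding the oldest data keeps the overall size of the summary bounded.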

3.2 Architecture

The architecture, presented in Fig. 4, consists of different layers or phases, ranging from the ingestion of data streams, through their processing and storage, to their visualization. These phases define a multilayer architecture whose levels are interdependent. At each level of this architecture, different tools manipulate the data streams for well-defined processing.

Fig. 4. The global architecture.

3.3 Data Stream Ingestion Layer

The data stream ingestion layer connects the various data stream sources (for example, the electricity consumption data collected in real time from the meters of all the network equipment: substations, transformers, feeders, ...) to the processing layer. These data streams are collected and then injected into the system by tools that work in producer/consumer mode using a queue. This step may involve tools such as Kafka [12] or Flume [13]. In this work, Kafka is used to implement this layer because it can handle voluminous data from various sources quickly. It is also simple to deploy and can be linked with many other processing tools.

3.4 Data Stream Processing Layer

This processing can be performed in two ways, namely stream or batch processing [14]. In batch processing, data are collected and grouped into blocks of a certain temporal granularity (second, minute, hour, ...) and then injected into a processing system. Batch processing is best suited when data streams arrive with a delay, for example when the data source only provides its information every 30 min; typically, it can be employed to process all the electrical measures taken by the meters every 10 min. This type of processing is also suitable when processing large volumes of data to obtain detailed information matters more than obtaining fast analysis results. For batch processing, various distributed platforms provide scalable processing on clusters. To process the data streams as they arrive in the system, it is necessary to perform on-the-fly processing on them in order to draw knowledge from them. In fact, the available resources do not allow storing them in their entirety. The continuous processing of data streams is usually done on clusters of distributed machines in order to scale, which ensures a certain availability of processing resources (memory and CPU) [20]. This layer can be implemented with tools like Apache Storm or Spark Streaming [21].

3.5 Storage Layer

This layer is used to store the summaries built by the processing layer. The implementation of this layer is ensured by a model consisting of cascading cubes representing the summary of the data streams obtained after processing. The first level is a cube that stores detailed data, and the other levels aggregate the data according to the time window granularity. The data migration between the cubes of the cascade is performed using an update algorithm presented later. This data stream summary layer makes it possible to visualize the history, or a part of it, in order to give decision-makers the ability to perform analyses as well as to query past data. This layer is achievable with solutions such as HBase, Hive or Cassandra [14]. In this work, HBase [15], a NoSQL column-oriented database, is used for the implementation of this layer. The reason for this choice is that HBase helps to deal with scalability and gives high performance in read/write tasks. Indeed, NoSQL databases support a variety of data models that are ideal for building applications that require large data volumes and low latency or response times. They provide flexible schemas that allow handling multiple data formats (non-structured, semi-structured or structured) [15]. NoSQL databases can scale out using commodity hardware, which gives them the ability to support increased traffic [18,19]. In addition, NoSQL databases can grow larger while keeping high performance, which makes them more suitable for evolving datasets like data streams [16]. They are automatically replicated, which gives high availability, and they are designed for distributed data stores with large storage needs, which makes them a good choice for big data. These solutions involve tools such as HBase, Hive or Cassandra for the management of NoSQL databases and are most often used to store Big Data [15]. They are grouped into four categories depending on the type of representation implemented: key-value-oriented, column-oriented, document-oriented and graph-oriented models can thus be distinguished. Column-oriented databases were used in various propositions [16,17] to store data streams. Indeed, they are built for highly analytical, complex-query tasks and are most often suitable for large data models. They can also be used to implement data warehouses. HBase [16] works very well for the real-time analysis of large data streams.

3.6 Data Visualization Layer

The visualization layer allows the extraction of crucial information for analysis with real-time data stream mining or learning tools, to allow strategic and/or tactical decision-making (detecting outliers [17], detecting fraud, ...). Thus, to help the analysis of the data streams, the obtained information must be described with a certain number of types of representation, such as tables, graphs and curves. These different displays, combined, constitute a dashboard. In this sense, this phase of data analysis is the one in which intrinsic patterns can be detected and relationships and knowledge extracted. It thus helps the decision-makers to easily exploit the results obtained from the data stream analysis. SAP HANA and Power BI [22] are tools that can be used to carry out this analysis. Power BI is employed for the needs of this work.

4 Implementation

This section describes the implementation of the summary based on the presented architecture. The data stream used and its modeling are presented. The ingestion is achieved with Kafka, the processing layer with Apache Storm, and the storage layer with HBase [3,20]. The visualization layer is achieved with Power BI Desktop [22].

4.1 Electrical Data Stream

The data stream used in the experiments describes data measured by smart equipment. They are continuously obtained from the different components of the electrical network of Senelec (the national electricity company of Senegal). The database used as source has an initial size of more than 100 GB and is continuously growing with millions of records (measures), each record having 42 columns: 40 numerical values, 1 for the timestamp and 1 for the meter number. This data stream is described by the following parameters:

– The name gives the name of the equipment.
– The equipment type item describes the type of equipment.
– The longitude and latitude headings describe the geographical location of the equipment.
– The MeterNo gives the counter number.
– The dataTstmp gives the timestamp.
– The attributes P8311 (U1), P8312 (U2) and P8313 (U3) give the measured values of the average voltages of phases 1, 2 and 3 in volts (V).
– The attributes P8391 (I1), P8392 (I2) and P8393 (I3) give the measured values of the average intensities of phases 1, 2 and 3 in amperes (A).
– The P8341 (kW) attribute defines the measured value of the imported active power, from which the reactive power (in kW) will be computed with the values of the voltages as well as those of the intensities.
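For illustration, one reading can be carried as a Java record whose fields follow the attribute list above (this is a hedged sketch, not the exact MeasuresCounter bean used later in the implementation; values are kept as String as in that bean):

```java
// Illustrative carrier for one meter reading, mirroring the attribute list above.
public record MeterMeasure(
        String meterNo,    // counter number
        String dataTstmp,  // timestamp of the measure
        String p8311,      // U1: average voltage of phase 1 (V)
        String p8312,      // U2: average voltage of phase 2 (V)
        String p8313,      // U3: average voltage of phase 3 (V)
        String p8391,      // I1: average intensity of phase 1 (A)
        String p8392,      // I2: average intensity of phase 2 (A)
        String p8393,      // I3: average intensity of phase 3 (A)
        String p8341) {    // imported active power (kW)

    public static void main(String[] args) {
        MeterMeasure m = new MeterMeasure("42", "2020-09-27 23:50",
                "229.8", "231.5", "230.1", "12.4", "12.1", "12.9", "8.3");
        System.out.println(m.p8312()); // U2 value of this reading
    }
}
```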

4.2 Use Case

The electrical data streams analysis makes it possible to gain information that helps to know the network behaviour. With these results, the end user can make investments, identify zones that need to be linked to the electrical network, reduce power wastage, or detect frauds, among other analysis tasks. Therefore, in this paper we provide a decision maker with a way to query multidimensional data streams. This proposition then allows the end user to analyse electrical data stream measures (intensities, voltages, powers) along different axes of analysis (equipment, time, geography) with the aim of taking decisions such as which zones require to be linked to the electrical network, how to detect frauds, how to predict production, or how to reduce power wastage, among others. In this sense, we build a summary that gives a great possibility of analysing data streams and is applicable to all use cases. Some cases are defined in relation to the data stream described above. The queries are built on windows defined over the streams emitted in the system by the higher layers of the architecture. These queries are formulated as follows:

– Query 1: determines the average Phase 2 voltage of all equipment. The result of this query allows, for example, the detection of the causes of abnormal or excessive consumption, by spotting values lower or greater than these measures.
– Query 2: gives the maximum, minimum and average values for the phase 2 voltage of the HTA transformers. The supplier could use this query, for example, to anticipate electrical needs, to react to the power needs of customer demand in the advent of a critical situation, or to supervise its network.
– Query 3: provides U2 values for all equipment on 09-27-2020, for example. This query can help to detect the equipment that excessively consumes power on a precise day.
– Query 4: gets the maximum and minimum after getting the average values of U2 for equipment whose name contains “Classic” for the period between 09-27-2020 at 23:50 and 09-28-2020 at 04:20, for example. This query can be used to detect outliers [17].
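Assuming each reading exposes an equipment type and a numeric U2 value, the shape of Query 2 (min, max and average of U2 for HTA transformers) can be sketched with stdlib streams; the actual system evaluates such queries continuously over time windows, and the type label used here is illustrative:

```java
import java.util.*;
import java.util.stream.*;

public class Query2Sketch {
    // (equipment type, U2 value) pairs; in the real stream these come from the meters.
    record Reading(String equipmentType, double u2) {}

    // Min, max and average of U2 restricted to HTA transformers (Query 2).
    static DoubleSummaryStatistics query2(List<Reading> readings) {
        return readings.stream()
                .filter(r -> r.equipmentType().equals("HTA transformer"))
                .mapToDouble(Reading::u2)
                .summaryStatistics();
    }

    public static void main(String[] args) {
        List<Reading> sample = List.of(
                new Reading("HTA transformer", 229.0),
                new Reading("feeder", 215.0),           // filtered out
                new Reading("HTA transformer", 233.0));
        DoubleSummaryStatistics s = query2(sample);
        System.out.printf("min=%.1f max=%.1f avg=%.1f%n",
                s.getMin(), s.getMax(), s.getAverage());
    }
}
```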

4.3 Modeling

In the following, the star model [1] described in Fig. 5 is presented. It has been developed from the presented data stream. This model contains dimensions, which are the axes of analysis, and a fact table. These dimensions are the following: the “DimEquipment” dimension describes each equipment present in the network, with attributes such as the meter number and the equipment name. The “DimTime” dimension stores different time granularities such as data tstmp, minute, hour and day. The “DimService” dimension gives the service to which the equipment is attached. The “DimGeo” dimension represents the location of the counter and allows a geographical analysis. The “DimEquipmentType” dimension categorizes the equipment as source station, substation, feeder or transformer. Finally, the model has a central fact table, “FactEnergy”, which groups together a certain number of measures to be analyzed. More formally, we have the following diagram.


Fig. 5. Star schema based on the used data stream

4.4 Data Stream Retrieval

The source data stream is retrieved by this part of the system, which is built with a Kafka event producer. This producer is implemented in Java. It defines the Kafka configuration using a java.util.Properties object and uses the send method of the class org.apache.kafka.clients.producer.Producer to emit events read from the source into a Kafka topic. A topic is similar to a folder in a filesystem, the events being the files in that folder; it works as a queue. Before writing events to a topic, the topic must be created and given a name with the following command:

– bin/kafka-topics.sh --create --topic fluxDBTestUpdate --bootstrap-server localhost:9092

Thus, the producer first retrieves events from the data source. Then, it sends them to the Kafka topic, where they are taken by a Storm spout (consumer) that sends them to the various bolts for processing.
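The producer/consumer path through the topic can be sketched with a stdlib queue standing in for Kafka (the real code calls Producer.send against a broker; nothing below is Kafka-specific, and the event strings are illustrative):

```java
import java.util.*;
import java.util.concurrent.*;

public class TopicSketch {
    // Push events through a queue (the stand-in topic) and drain them in order,
    // mimicking the producer -> topic -> spout path described above.
    static List<String> passThrough(List<String> sourceEvents) {
        BlockingQueue<String> topic = new LinkedBlockingQueue<>();
        sourceEvents.forEach(topic::offer);   // stands in for producer.send(...)
        List<String> consumed = new ArrayList<>();
        topic.drainTo(consumed);              // stands in for the spout consuming the topic
        return consumed;
    }

    public static void main(String[] args) {
        List<String> out = passThrough(List.of("42;2020-09-27 23:50;231.5",
                                               "42;2020-09-27 23:51;230.9"));
        System.out.println(out.size()); // 2 events, delivered in publication order
    }
}
```

The queue preserves FIFO order, which is the property the pipeline relies on when the spout forwards events to the bolts.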

4.5 The Summary Construction and Update

The initialization and update algorithms previously presented for the creation and updating of the summary’s cubes are implemented in this part. This layer ensures the processing of the tuples read from the Kafka topic. To achieve these tasks, different functions performed by Storm bolts are used to retrieve these data. They also execute continuous queries over the data streams. In the final step, the presented initialization function is run to create the cube data tables (multidimensional summaries) in HBase and to load the obtained results into them. After retrieving the data injected by a spout, a bolt is defined with the goal of translating the data into a JavaBean having the following prototype: MeasuresCounter(String id, String meterNo, String dataTstmp, String P8311, String P8312, String P8313, String P8391, String P8392, String P8393, String P8341). The instantiated JavaBean is transmitted to a second bolt whose role is to check the existence of the table of the first cube (summary) in HBase. If the table does not exist, a method defines the HBase table structure and creates it; this method describes its column families as well as the columns of the HBase table. Then data are inserted into the previously created table. The process of updating the cubes (summaries) is performed in different steps. In the first step, the data of the cube Ci are retrieved via a new Kafka producer. After aggregation, they are loaded into a new Kafka topic (a queue) named fluxDBTestUpdate. Then, through a new spout, we load these data into an update bolt. This update bolt performs its processing via queries executed in the Esper engine, following time windows corresponding to the different time values of the different levels of the cascade. Indeed, each updated cube corresponds to a well-defined time window. To do this, a Java class called EsperOperation is defined. The constructor of the EsperOperation class initializes the Esper listener and defines the Esper query to be executed, presented in Fig. 6. For example, if we want to update the data of the cube corresponding to the last 30 min of electrical measures (cube Ci), the Esper query buffers the events over 30 min and then returns the result of the aggregate functions applied to them for each equipment during the 30 min. For data propagation, the bolt that achieves the update task calls the data insertion bolt to load the results into an HBase table corresponding to the cube to be updated (cube Ci+1). Fixed time windows are used here.
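Assuming fixed (tumbling) windows, the per-equipment aggregation performed by the Esper query can be approximated by bucketing timestamps into 30-min windows; field names, the window length and the averaging function are illustrative, not the EsperOperation code itself:

```java
import java.util.*;
import java.util.stream.*;

public class WindowAggregate {
    record Measure(String meterNo, long tsMillis, double u2) {}

    static final long WINDOW_MS = 30L * 60 * 1000; // 30-min tumbling window

    // Average U2 per (meter, window start): the shape of the result that the
    // update bolt would load into the next cube C(i+1).
    static Map<String, Double> aggregate(List<Measure> buffer) {
        return buffer.stream().collect(Collectors.groupingBy(
                m -> m.meterNo() + "#" + (m.tsMillis() / WINDOW_MS) * WINDOW_MS,
                Collectors.averagingDouble(Measure::u2)));
    }

    public static void main(String[] args) {
        List<Measure> buffer = List.of(
                new Measure("42", 0L, 230.0),
                new Measure("42", 60_000L, 234.0),        // same 30-min window
                new Measure("42", WINDOW_MS + 1, 240.0)); // next window
        System.out.println(aggregate(buffer).size()); // 2 windows for meter 42
    }
}
```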

Fig. 6. An Esper query sample

5 Results and Discussion

This section presents some results obtained after executing some queries using the time and equipment dimensions. Fig. 7 presents the phase 2 voltage (U2) over the whole network, Fig. 8 the phase 2 voltage for electrical HTA transformers, Fig. 9 the phase 2 voltage for 09/27/2020, and Fig. 10 the phase 2 voltage between 09-27-2020 at 23:50 and 09-28-2020 at 04:20. In each of these figures, we also represent the MAX, MIN and AVG phase 2 voltage values.


5.1 Results Presentation

Fig. 7. Phase 2 voltage (U2) measure over the electrical network

Fig. 7 shows the voltage measured over the whole network. It also shows the max, min and average of the phase 2 voltage. These results allow the supplier to supervise the equipment load. Fig. 8 gives the maximum, minimum and average values for the phase 2 voltage of the HTA transformers. It shows the ability of the summary to answer queries with filters on other dimensions. This query can help the supplier to supervise electricity transportation. Fig. 9 shows the phase 2 voltage for the day 27-09-2020. It shows that the summary can aggregate data while also applying a filter. This query demonstrates the summary's ability to analyze the network data on a particular date. Fig. 10 shows that the summary can be used to analyze a certain part of the electrical network.


Fig. 8. Phase 2 voltage (U2) measure for HTA transformers

Fig. 9. Phase 2 voltage (U2) measure for 27-09-2020


Fig. 10. Phase 2 voltage (U2) measured values with maximum, minimum and average for equipment whose name contains “Classic”, between timestamps “09-27-2020 23:50” and “09-28-2020 04:20”

5.2 Discussion

In this section, a certain number of queries have been executed against the summaries. We observed from the results that the waterfall cube model was able to meet the demands of a user who would like to view stream data along different dimensions. The results obtained with the summaries show that the contribution of this model differs from the works presented in Sect. 2. Indeed, it can answer queries using different dimensions over data streams, with filtering and aggregation. Moreover, this model can scale up; this ability comes from the fact that it is built with Big Data tools. This functionality is not offered by the propositions studied in the literature. In this sense, the model can cope with CPU and memory resource constraints.

6 Conclusion

Analyzing electrical data streams can give the opportunity to take decisions that will help to develop different sectors of a developing country. However, it is impossible to efficiently process and store these multidimensional data streams with existing methods. Therefore, in this paper, different propositions on electrical data stream processing have been studied. This study shows that these different interesting systems do not propose a summary technique for analyzing electrical data streams in the future but only process them on the fly. Also, these works do not consider the multidimensional aspect of electrical data streams. In this sense, a generic cascading cubes data model and an architecture have been proposed and implemented with Big Data tools. The architecture includes ingestion, processing, storage and visualization layers. Similarly, the generic cascading cubes data model and its OLAP modeling have been described. The implementation of the different layers of the architecture was also presented to test the waterfall model over a real electrical data stream. Finally, some queries have been executed on the summaries. Their execution shows that the proposed model meets the expectations of decision-making users. The perspectives are to study and develop data mining and machine learning algorithms to analyze the data present in the obtained summaries.

References

1. Zafar, K.M., Ghulam, M., Nadeem, S., Syed, W., Junaid, Q., Shaista, S.: A review of star schema and snowflake schema, pp. 129–140, May 2020. ISBN: 978-981-15-5231-1. https://doi.org/10.1007/978-981-15-5232-8_12
2. Zhan, C., et al.: AnalyticDB: real-time OLAP database system at Alibaba cloud. Proc. VLDB Endow. 12(12), 2059–2070 (2019). https://doi.org/10.14778/3352063.3352124
3. Wingerath, W., Gessert, F., Ritter, N.: NoSQL & real-time data management in research & practice. In: Meyer, H., Ritter, N., Thor, A., Nicklas, D., Heuer, A., Klettke, M. (eds.) BTW 2019 - Workshopband, pp. 267–270. Gesellschaft für Informatik, Bonn (2019). https://doi.org/10.18420/btw2019-ws-28
4. Abdessalem, T., Chiky, R., Hébrail, G., Vitti, J.: Traitement de données de consommation électrique par un Système de Gestion de Flux de Données, pp. 521–532 (2007)
5. Arasu, A., et al.: STREAM: the Stanford data stream management system. In: Garofalakis, M., Gehrke, J., Rastogi, R. (eds.) Data Stream Management. DSA, pp. 317–336. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-540-28608-0_16
6. Chandrasekaran, S., et al.: TelegraphCQ: continuous data stream processing for an uncertain world. In: Proceedings of CIDR (2003)
7. Chiotellis, S., Grismajer, M.: Analysis of electrical power data streams in manufacturing. In: Dornfeld, D., Linke, B. (eds.) Leveraging Technology for a Sustainable World. Springer, Berlin (2012). https://doi.org/10.1007/978-3-642-29069-5_90
8. Dornfeld, D., Vijayaraghavan, A.: Automated energy monitoring of machine tools. CIRP Ann. Manuf. Technol. 59, 21–24 (2010)
9. Küçük, D., et al.: PQStream: a data stream architecture for electrical power quality (2015). arXiv:1504.04750
10. Lobo, J.L., Ballesteros, I., Oregi, I., Del Ser, J., Salcedo-Sanz, S.: Stream learning in energy IoT systems: a case study in combined cycle power plants. Energies 13, 740 (2020). https://doi.org/10.3390/en13030740
11. Pitarch, Y., et al.: Multidimensional data streams summarization using extended tilted time windows. In: INA: Frontiers of Information Systems and Network Applications, Bradford, United Kingdom, pp. 102–106, May 2009
12. Kreps, J., Narkhede, N., Rao, J.: Kafka: a distributed messaging system for log processing. In: Proceedings of NetDB 2011, Athens, Greece, June 2011


13. Vohra, D.: Apache Flume, pp. 287–300 (2016). ISBN: 978-1-4842-2198-3. https://doi.org/10.1007/978-1-4842-2199-0_6
14. Sarr, J.G., Boly, A., Bame, N., et al.: Data stream summary in big data context: challenges and opportunities. Adv. Sci. Technol. Eng. Syst. J. 6(4), 414–430 (2021)
15. Ahmed, R., Khatun, A., Ali, A., Sundaraj, K.: A literature review on NoSQL database for big data processing 7(2) (2018). https://doi.org/10.14419/ijet.v7i2.12113
16. Zheng, T., Chen, G., Wang, X., Chen, C., Wang, X., Luo, S.: Real-time intelligent big data processing: technology, platform, and applications. Sci. China Inf. Sci. 62(8), 1–12 (2019). https://doi.org/10.1007/s11432-018-9834-8
17. Duraj, A., Szczepaniak, P.S.: Outlier detection in data streams - a comparative study of selected methods. In: Proceedings of the 25th International Conference KES2021, Procedia Computer Science, vol. 192, pp. 2769–2778 (2021). ISSN 1877-0509. https://doi.org/10.1016/j.procs.2021.09.047
18. Khalid, M., Kjell, O., Tore, R.: Wrapping a NoSQL datastore for stream analytics, pp. 301–305, August 2020. https://doi.org/10.1109/IRI49571.2020.00050
19. Mahmood, K., Orsborn, K., Risch, T.: Comparison of NoSQL datastores for large scale data stream log analytics, pp. 478–480, June 2019. https://doi.org/10.1109/SMARTCOMP.2019.00093
20. Shaikh, S.A., Kitagawa, H., Matono, A., Mariam, K., Kim, K.-S.: GeoFlink: an efficient and scalable spatial data stream management system. IEEE Access 10, 24909–24935 (2022). https://doi.org/10.1109/ACCESS.2022.3154063
21. Hoseiny Farahabady, M., Taheri, J., Zomaya, A.Y., Tari, Z.: Energy efficient resource controller for Apache Storm. Concurr. Comput. Pract. Exp. (2021). https://doi.org/10.1002/cpe.6799
22. Amrapali, B., Upadhyay, A.K.: Microsoft Power BI. Int. J. Soft Comput. Eng. (IJSCE) 7(3), 14–20 (2017). ISSN: 2231-2307

Mobile Money Phishing Cybercrimes: Vulnerabilities, Taxonomies, Characterization from an Investigation in Cameroon

Alima Nzeket Njoya1,2, Franklin Tchakounté1,2,3(B), Marcellin Atemkeng3, Kalum Priyanath Udagepola4, and Didier Bassolé5

1 Department of Mathematics and Computer Science, Faculty of Science, University of Ngaoundere, Ngaoundere, Cameroon
2 Cybersecurity With Computational and Artificial Intelligence Group (CyComAI), Ngaoundere, Cameroon
{j.ntsama,f.tchakounte}@cycomai.com
3 Department of Mathematics, Rhodes University, Grahamstown 6140, South Africa
[email protected]
4 Department of Information and Computing Sciences, Scientific Research Development Institute of Technology, Loganlea, Australia
[email protected]
5 Laboratory of Mathematics and Computer Science, University of Joseph Ki-Zerbo, Ouagadougou, Burkina Faso

Abstract. Mobile Money (MM) technologies are popular in developing countries where people are unbanked, and they are exploited as a means of financial transactions in the economy. Sophisticated cyber-phishing techniques successfully target MM accounts. Related countermeasures are rare, and the existing ones are so technical that people without minimal knowledge cannot be helped. Making the knowledge around cybercrime facts available is therefore relevant, since it provides a good basis for further technical research. In this vein, this paper dissects phishing cybercrime strategies observed within the Cameroonian cyberspace. We provide identified vulnerabilities, a process design of the generic MM attack, taxonomies of attacks, and a classification of the latter based on criteria. Findings about commonalities and dissimilarities reveal two aspects to really consider when designing solutions: emotion and interactions.

Keywords: Mobile money · phishing · vulnerabilities · taxonomies · emotions · interactions · cybercrimes · Cameroon

1 Introduction

Mobile payment or Mobile Money (MM) allows people to accumulate, send and receive money using their mobile phone without having a bank account [1]. This technology is widely and effectively used in many countries where populations are still unbanked and where banking services are unavailable and/or in crisis, such as during the Covid-19 pandemic, when access to societal facilities (transport, trade, hospital, …) was limited [1]. According to the World Bank, this technology is a vector of economic growth and therefore of the objectives of sustainable development in developing countries [2]. In sub-Saharan Africa, the world's most popular mobile payment region, 64.15% of the global transaction volume was recorded in 2019, followed by 19.7% growth in 2020 [3]. In Cameroon, it generated 17.5% of the gross domestic product (GDP) in 2017 [4–6] and made it possible to control the risks associated with savings operations in households [4, 7, 8]. However, the popularity of Mobile Money grows along with its attractiveness to cyber-scammers, who are experts in stealing money and sensitive information. Indeed, they use social engineering techniques such as phishing [9] to manipulate victims and get them to unknowingly disclose their confidential data [10]. Based on the psychology of the user, phishing caused the Cameroonian economy to lose 12.2 billion in 2021, i.e., double the figure of 2019 [12]. It is one of the worst cybercrimes, for which effective user-centered solutions must be provided [9]. Many works exist to fight against this scourge. While some, more technical, exploit emerging technologies such as artificial intelligence [12–17], others, which require minimal knowledge in ICT [18, 19], are intended for the education and awareness of people. However, the complexities related to both categories of solutions leave victims open to attack, which leads us to think that it would be useful to dig into the actual unfolding of the attacks. This could thus contribute to adding new ingredients to the technical solutions and to orienting awareness in a way that is more comprehensible from the point of view of the consumer.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 430–445, 2023. https://doi.org/10.1007/978-3-031-34896-9_26
This work provides a characterization layer of Mobile Money cyber-crimes with a particular investigation of mobile network operators (MNO) in Cameroon. More particularly, we make three contributions.

• First, we provide threats identified and justified within the mobile payment services. We match them to the security services which are violated, such as confidentiality, integrity and availability, to name a few.
• Second, we design a four-step process flow to represent general mobile money cybercrimes. Each step is elucidated in relation to the vulnerabilities of the first contribution.
• Third, we provide a taxonomy of MM attacks based on observations. They have been classified based on defined criteria, and the findings constitute solid knowledge of commonalities and dissimilarities to consider when designing solutions.

The remainder of the paper is structured as follows. Related works are presented in Sect. 2. Section 3 describes some background on mobile money motivations and social engineering aspects. Section 4 explains how we proceed with the acquisition of cybercrimes. Section 5 describes the vulnerabilities observed in the mobile payment systems. The process flow of the generic MM attack is described in Sect. 6. Section 7 shows how taxonomies of attacks are created. Section 8 concerns the classification of these taxonomies into classes based on specific criteria. In the same section, findings are explained and key aspects for solutions are discussed. The document ends with a conclusion, and perspectives are mentioned.


2 Related Works

Phishing detection is a crowded domain of research. However, specific phishing such as the one targeting mobile money remains not deeply visited. In what follows, the main trends of approaches are described. A cybercrime is an activity which can be identified thanks to features observed during its process. Researchers therefore look inside vectors such as emails, phone calls and URLs to determine whether a set of fields with specific values is used during the attack. For a human, this exercise becomes complicated when there are many features (a hundred, for example) to examine. Authors therefore rely on artificial intelligence to automate the process. For instance, Gandotra and Gupta [12] apply machine learning algorithms to 4898 phishing and 6157 normal webpages, each page structured in 30 features, with a high phishing detection accuracy. Extensive surveys of such solutions exist in terms of deep learning [13, 14] and machine learning [15]. Emergent technologies are also targets for phishing. Some other authors investigate phishing arising in blockchains. MP-GCN [16] is designed to identify phishing in Ethereum networks. After modelling the transactions as a graph, the authors adopt feature engineering and dimensionality reduction to feed a graph convolutional network. The authors of [17] use ensemble learning on collected Ethereum transactions, which are structured as labelled graphs. Educational games are designed to improve the phishing reconnaissance of players. In this vein, Panga et al. propose an educational mobile game for teenagers in Tanzania [18]. Likewise, a game-based training has been built and contextually evaluated in [19]. Moreover, telecom operators and media owners spread sensitization through media channels. In contrast to these works, detection performance is not the concern of this study. We are not looking to build more complex solutions which could not be used by ordinary people. Rather, we are interested in understanding in depth and characterizing the processes of cybercrimes. In so doing, consumers will be more aware of cybercrime intentions and cybersecurity professionals better informed. We believe that if we provide people with dissected components and justifications, their comprehension will improve and their mistrust will therefore be elevated. This work is thus complementary to sensitization efforts.

3 Background

In this section, some background concerning mobile money and social engineering is presented. Ondrus and Pigneur [20] define mobile payment as a transaction of monetary value between two parties, through a mobile device capable of securely processing a financial transaction over a wireless/telecom network. According to Mbiti and Weil [21], mobile payment is a service offered by a mobile telephone network, allowing users to deposit funds in their personal account, to make fund transfers by short messages, to make withdrawals and to pay bills, to name but a few. In general, a mobile payment system is made up of the following entities [22]: (i) the Customer (C) is anyone who looks for services from the merchant, such as the transfer of money and the payment of invoices, to name a few; (ii) the Merchant (M) provides the service; (iii) the Acquirer (A) is the institution responsible for managing the merchant's account and verifying the payment instrument filed; (iv) the Issuer (I) is the financial institution that manages the customer's account and provides electronic payment instruments to be used by the customer; (v) the Payment Gateway (PG) is the entity acting as intermediary between the buyer and the issuer. In general, issuers are mobile network operators (MNO). According to Bahri-Domon [23], mobile payment was initiated in Cameroon for the very first time in 2011. In his study, the author presents four important platforms for mobile money services in Cameroon, namely MTN Mobile Money, Orange Money, Express Union Mobile Money and Nexttel. Although banking institutions increasingly turn to mobile money services, MTN and Orange remain the two MNO dominating this sector. Together they have 5.4 million registered users [24]. Hence, cybercrimes targeting these two issuers are the concern of this work. This economic sector attracts malicious people who multiply social engineering (SE) strategies. Social engineering is helpful to them because they need to lure their victims and manipulate their psychology. The final aim is in fact to put the victim in a situation of confidence where any fake request will be considered and fulfilled. The popular SE vector is email, in which fake content, links and attachments are embedded. Since we are talking about unbanked people who hardly use emailing services, phishing related to mobile money, and therefore cybercrimes through mobile phones, is considered. In the following, the process flow of mobile money cybercrime is designed.

4 Collection of Cybercrimes

This study is concerned with mobile money cybercrimes perpetrated through mobile phones, that is, cybercrimes which target mobile money accounts. For that, we investigated situations of cybercrime that happened to people in the country, in our environment, as well as those we met in person. The approach was to investigate complaints about MM phishing in social media spheres such as Facebook, WhatsApp and Twitter. We also enriched this knowledge with advertisements and awareness-raising artifacts released in media channels such as newspapers, television and radio. With the advantage of being experts in the area, it was straightforward to discern interesting requirements from the stories. Since it was impossible to obtain information directly from telecom operators due to confidentiality, we had to keep an informal, permanent watch for phishing stories. Apart from scrutinizing media and social media, we sometimes asked genuine, informal questions of people who seemed to have already been exposed. When stories were similar, we discarded duplicates; by discarding we mean grouping two similar phishing stories into the same category.


N. N. Alima et al.

5 Threats Found in MNO

Here, we describe certain threats discovered during the investigation concerning money transactions. But first, the transaction process should be presented. The prior requirement is that the customer creates an MM wallet used for withdrawal and deposit transactions. For that, subscribers must be identified using valid identity documents in conformance with the law [25]. Then a confidential code or Personal Identification Number (PIN) is created by the user to secure the wallet. A country-specific code called Unstructured Supplementary Service Data (USSD) identifies the service and must be dialed to start the process. For instance, #150# refers to money transfer in Orange and *126# in MTN. Once the user has dialed the code, the steps are guided until he specifies the amount and the receiver number [26]; the name of the receiver is then displayed, asking him to confirm the operation. Finally, the process requires entering the PIN code. All those steps can be seen as equivalent to a sequence of codes. Table 1 illustrates six main threats (ti) with their description and violated security services.

Table 1. Threats

| Threat ID | Threat | Description | Service violated |
|---|---|---|---|
| t1 | Disclosure of name | The attacker can simulate the intention to make a transfer just to capture the names of people | Confidentiality |
| t2 | Sequence-of-code incomprehension | Each MNO provides its own way of accessing services through sequences of codes starting with the USSD. For a simple user, their semantics are unknown; he/she can therefore be misled into entering a sensitive value somewhere and leaking information | Integrity and availability |
| t3 | Presence of unidentified SIM cards | Until now, there are sellers of obsolete pre-identified or even unidentified SIM cards. Cybercrimes are catalyzed by these bad actors who maintain the black market of fake SIM cards. People are therefore able to repudiate a bad action. Really controlling this means identifying these markets [25, 27] | Authentication and non-repudiation |
| t4 | Non-compliance during registration | This threat is about (i) using fake ID cards, (ii) exploiting lost identity cards and (iii) identity usurpation [28]. It seems difficult for MNOs to control this aspect since there is no centralized, connected identification system. Moreover, human controllers during registration are not very efficient when there are many customers to serve. These three cases grant authorization to do bad actions under fake credentials; the perpetrators will therefore be able to repudiate the cybercrime | Confidentiality, authentication, authorization and non-repudiation |
| t5 | Phone and pocket stolen | People tend to store PIN codes in the phone or in their pockets. If these are lost, people will obviously steal the information | Confidentiality, integrity, availability |
| t6 | Initiation of transaction | We observed that one can initiate a transaction on behalf of someone else. This relies on t1, which is helpful to obtain some credentials. One cannot proceed without the PIN, and the concerned person will be alerted as well | Confidentiality |
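The guided USSD dialogue described above can be viewed as a fixed sequence of prompts ending with PIN confirmation. A minimal sketch follows; the service codes #150# and *126# come from the text, while the step names, the example phone number and the validation logic are illustrative assumptions:

```python
# Illustrative USSD money-transfer dialogue: each guided step collects one value.
SERVICE_CODES = {"#150#": "Orange money transfer", "*126#": "MTN money transfer"}

STEPS = ["receiver_number", "amount", "confirm_name", "pin"]

def run_transfer(code, answers):
    """Walk the guided steps; return the collected transaction, or None if
    the service code is unknown or the dialogue is abandoned mid-way."""
    if code not in SERVICE_CODES:
        return None
    tx = {"service": SERVICE_CODES[code]}
    for step in STEPS:
        if step not in answers:
            return None
        tx[step] = answers[step]
    return tx

tx = run_transfer("#150#", {
    "receiver_number": "6XXXXXXXX",  # hypothetical placeholder number
    "amount": 5000,
    "confirm_name": "yes",
    "pin": "1234",
})
print(tx["service"])  # Orange money transfer
```

This makes explicit why threat t2 matters: a user who does not understand the semantics of each step can be misled into typing a sensitive value (such as the PIN) at the wrong point in the sequence.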


Fig. 1. Diagram of threats and violated services

Figure 1 synthesizes the content of Table 1. As we can see in this figure, only one security service (confidentiality) is violated by threats t1 and t6. Threats t2 and t3 each violate two security services: integrity and availability for threat t2; authentication and non-repudiation for threat t3. Regarding threat t5, we observe that up to three security services are violated: confidentiality, integrity and availability. Finally, threat t4 violates all the security services listed. This means that threats t4 and t5 are the most dangerous during the money transaction process.
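The threat-to-service mapping of Table 1 can be encoded in a few lines, which makes the "most dangerous threats" observation mechanically checkable (a sketch; threat IDs and service names are taken directly from Table 1):

```python
# Violated security services per threat, as listed in Table 1.
violated = {
    "t1": {"confidentiality"},
    "t2": {"integrity", "availability"},
    "t3": {"authentication", "non-repudiation"},
    "t4": {"confidentiality", "authentication", "authorization", "non-repudiation"},
    "t5": {"confidentiality", "integrity", "availability"},
    "t6": {"confidentiality"},
}

# Rank threats by the number of security services they violate.
ranking = sorted(violated, key=lambda t: len(violated[t]), reverse=True)
print(ranking[:2])  # ['t4', 't5']: the two most dangerous threats
```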

6 Mobile Money Cybercrime Process Flow

Based on the cybercrimes previously characterized, the process flow illustrated in Fig. 2 has been designed to represent mobile money cybercrimes. It includes four steps. First, an attacker investigates the alleged victim in order to gather some basic information needed to carry out the attack. Second, the attacker uses this information to contact the victim and coax him by playing on his emotions. Third, the attack is launched without the victim suspecting anything, and the attacker obtains the victim's personal code or a money deposit made by the victim. Fourth, whatever the attacker has obtained, the final stage is to disappear without leaving any trace. Figure 3 gives another representation of the process flow. As we can see in this figure, during the manipulation and execution stages the attacker exploits many more threats than during the other stages (gathering information and planning, and ending the attack). This means that manipulation and execution include more decisive and complex steps to complete than the other stages. Indeed, the attacker is likely to win if he succeeds in manipulating the


Fig. 2. Process flow

Fig. 3. Process flow diagram

victim. The attacker fails if his technique is not smart enough and the potential victim detects it. Gathering Information and Planning the Attack: The attacker defines his target, which can be an individual or an organization, and collects details about them by physically visiting them, monitoring them, or collecting information through social networks, the Web, physical media, relatives and friends. He then chooses a feasible means of communication, a call or an SMS, to launch the phishing attack and get in touch with the victim. Once this attack technique is selected, the attacker can proceed to the next stage, which is manipulation. In this stage, the target is selected either knowingly or unknowingly, in both cases through a mobile phone number. In the first case, the attacker may have been investigating the victim on social media such as Facebook for quite a long time, collecting enough information and learning weaknesses based on the victim's interests, since people divulge their phone numbers and daily lives. For instance, a person who just had a promotion, or a person with high social standing, can be susceptible and of interest. As information such as the phone number is easily available on the Web, the attacker just has to retrieve the name and/or surname through threat t1 described in Table 1. Threat t4 is also a source


exploitable by the attacker at this stage, since such attackers make calls with fake identities to obtain personal information about the target. Manipulation: The attacker contacts the victim by means of an SMS or a call. The goal of making contact is to develop a relationship of trust with the victim, a relationship that might lead them to see the attacker as someone familiar whom they could trust or need to communicate with. To build this relationship and hook the victim, the attacker gains the victim's confidence and prepares the ground for his attack. One of the keys for the attacker is the use of the exact name obtained through t1 and t4. During this phase, the attacker asks a series of questions or holds a conversation that motivates the target to follow the path he wants. He begins by asking the target very neutral questions to which the answers will most likely be yes or no, then moves on to a few open questions, accompanied by a few closed questions, while directing the victim towards the final goal, which is the scam. With good communication and psychological skills, he manipulates the victim's emotions by adapting the conversation to his environment and situation. He listens, for example, to the tone of the victim's voice; he makes a joke and checks that the victim laughs or is amazed. He can also create emergency situations related to the victim's experience. For example, he can simulate crying to make the victim panic and act as he pleases. Threat t6 can enable successful manipulation. Execution of the Attack: In this phase, the attacker is ready to launch his attack because, through the emotions activated in the victim during the manipulation phase, he is confident of having gained the victim's trust.
He therefore creates a scenario in which he leads the victim to take an action that activates this attack phase (this may be disclosing personal information, giving the PIN code without realizing it, or making a deposit to the attacker). According to Johnson-Laird and Oatley [29], a well-crafted script creates an atmosphere in which the victim feels comfortable disclosing information that they would not normally disclose. Social engineers design different scenarios in different attacks, and each scenario aims to activate specific emotions in the victim, emotions that can be exploited by the scammer to succeed in the attack. A good scenario therefore manipulates emotions to establish a situation of trust and lead the victim to do what the attacker wants. Developing the relationship and executing the attack thus involves the victim directly disclosing sensitive information about themselves or doing things they normally would not do. It is in these stages that the victim's emotions are aroused, influenced and manipulated. A social engineer is very adept psychologically, socially, technically and emotionally, as he uses pretense, tricks and influence to control and manipulate the emotions of the victim and drive him towards his goal [29]. The attack is most often conducted with a series of calls and different people involved in the process to really lure the victim; the case studies will show the details. This stage is favored by all the threats in Table 1. Here, the attacker can restart manipulation if it seems the victim has learned the tricks. End of the Attack: The end of the attack is marked by the fact that the victim has transferred the content of the wallet to the attacker or has mistakenly provided the PIN code, so that the attacker empties the victim's account. In case the victim discovers the fraud, the action which follows is to end the phone calls and cancel the ongoing transfer if it was about to be launched.
What generally allows alert victims to detect the fraud is to


take advice from relatives; some experiences may reveal similar strategies. So once the attacker succeeds or fails in executing his attack, he can simply walk away, cover his tracks as much as possible, and prepare for the next attack. The information collected, the scenario used, and the skills acquired during this attack are updated for the next one. The attacker may have to reiterate other manipulation techniques if he feels that the victim suspects something (see the back arrow), for example by involving a fake relative of the victim in the conversation.
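The four stages and the back arrow from execution to manipulation can be modelled as a small state machine (a sketch; the state names paraphrase Fig. 2, and the transition set is an assumption based on the text):

```python
# Allowed transitions between attack stages, including the back arrow
# from execution to manipulation when the victim becomes suspicious.
TRANSITIONS = {
    "gathering": {"manipulation"},
    "manipulation": {"execution"},
    "execution": {"manipulation", "end"},  # retry manipulation or finish
    "end": set(),
}

def is_valid_run(stages):
    """Check that a sequence of stages follows the process flow."""
    for cur, nxt in zip(stages, stages[1:]):
        if nxt not in TRANSITIONS.get(cur, set()):
            return False
    return True

print(is_valid_run(["gathering", "manipulation", "execution", "end"]))  # True
print(is_valid_run(["gathering", "manipulation", "execution",
                    "manipulation", "execution", "end"]))               # True: one retry
print(is_valid_run(["gathering", "execution"]))                          # False: no manipulation
```

Representing the flow this way also anticipates the state-transition modelling suggested later for building defenses.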

7 Taxonomizing Cybercrimes

In this section, we have compiled and reviewed MM phishing stories into seven taxonomies (Fig. 4). These taxonomies are essentially obtained from observations coupled with expertise; some statements are supported by the literature.

Fig. 4. Taxonomies

MM Identity Theft (T1): This is a form of mobile money crime committed by a friend, relative or fraudster who steals the owner's financial information, such as the PIN, to complete transactions. According to Bosamia [30], when a customer's mobile phone is stolen, attackers use all the sensitive data stored on it, including the PIN code, and control the device. The mobile money PIN code stored on the phone allows them to access the MoMo account and thus perform fraudulent transactions. MM Authentication Attack (T2): This is a mobile money crime where attackers target and attempt to exploit the mobile money authentication process by applying a brute-force attack or a weak-PIN attack. This is consistent with the findings of Mtaho [31], who found that attackers use many means to gain access to users' accounts and take advantage of weak PIN reset procedures, making PINs easy to guess, smudge, or spy on. This result is also consistent with the study conducted by Bosamia [30], which reported that most mobile money systems are not adequately protected, giving cyber fraudsters the opportunity to apply reverse engineering (RSE) to attack hard-coded passwords or PINs and encryption keys and steal customers' money. MM Phishing Attack (T3): This is a form of mobile money crime where fraudsters pose as employees of the mobile money service provider, calling or texting users to reveal their data, including the PIN code, for an alleged update. This is in line with observations by Bosamia [32], who also found that fraudsters carry out sophisticated attacks by either


emailing, texting or calling mobile money users to disclose their personal and financial information. MM Vishing Attack (T4): This is a form of mobile money fraud in which fraudsters use voice calls to trick users and mobile wallet agents into revealing their personal financial information, such as a PIN. This confirms the findings of previous studies by Saxena et al. [33] and Maseno et al. [34], who observed that attackers use anonymous phone calls or fake promotions to trick users into disclosing their PINs or other sensitive personal information, which is then used to steal from their mobile money accounts. MM Smishing Attack (T5): This is a form of mobile money fraud where fraudsters send emotionally charged, delirious text messages to trick users and mobile money agents into revealing their mobile money account information, including the PIN code. This result is described in previous studies by Maseno et al. [34], where fraudsters send fake text messages to mobile money users and agents and then take them through different stages, which later result in money being transferred from their account to the fraudsters' account. It is also consistent with studies by Akomea-Frimpong et al. [35], Gilman and Joyce [36] and Lonie [37], who reported that fraudsters posing as employees of mobile service providers send fake text messages to customers claiming they have won a promotional prize, and that to claim the prize they must send money to the fraudster's number. MM Agent Fraud (T6): Mobile money agents also experience fraud from attackers and users, thereby threatening the security of the platform. This finding is consistent with work by Buku and Mazer [38] and Lonie [37], in which they found that common acts of fraud experienced by agents include losses in the agent's account resulting from unauthorized use, misuse of PINs, and a fraudster impersonating an agent to gain unauthorized access to the agent's checking account.
MM USSD Vulnerabilities (T7): The greatest risk of the USSD system is that information transported on the communication channel is not encrypted, making USSD data vulnerable to attack [39]. This is consistent with Mtaho's submission [31], who noted that during the verification process the client enters the PIN, which travels through the USSD system to the server in plain text; attackers using sniffing software such as Wireshark can therefore intercept it.

8 Classification

Table 2 provides a classification of the taxonomies according to six criteria. The first criterion is the vector used to deliver the attack: it can be an SMS, where the attacker shapes a false message to deceive the victim, or a phone call through which the attacker coaxes the victim. The second criterion is the threat facilitating the success of the cybercrime; the different threats are explained in Table 1. The third criterion covers the emotional aspects incited in the victim during the attack: for example, during a call the cybercriminal may want to put the victim in a state of trust with respect to his recommendations. The fourth criterion includes the key elements without which the attack would not succeed. The fifth criterion indicates whether a multitude of interactions (multi-stage) was probably needed for the attack to convince the victim, or only one interaction (one-stage). The sixth criterion highlights the categories of victims concerned.

Table 2. Classification

| Taxonomy | Attack vector | Threat | Emotional aspects from victim | Key elements for a successful attack | Interaction type | Victims |
|---|---|---|---|---|---|---|
| T1 | Stolen device | t5 | None | Available PIN | None | Consumer |
| T2 | Malicious attempts | t6 | None | Brute force, weak PIN | None | Consumer |
| T3 | SMS, voice calls | t3, t4 | Confidence | Identity usurpation | Multi-stage | Consumer, MNO MM agents |
| T4 | Voice calls | t1, t2, t3, t4, t5, t6 | Confidence | Manipulation tips | Multi-stage | Consumer |
| T5 | SMS | t6 | Delirious, happy, surprise | Content of SMS | One-stage | Consumer |
| T6 | Voice calls, malicious attempts | t2, t3, t6 | Confidence | Bad use of PIN code, impersonation | One-stage | MM agent |
| T7 | Uncovered channel | Lack of encryption | None | Sniffing | None | Consumer |

Figure 5 synthesizes the taxonomies with their threats. We mentioned above that the MM vishing attack (T4) is a form of mobile money fraud in which fraudsters use voice calls to trick users and mobile wallet agents into revealing their personal financial information, such as a PIN. As we observe in Fig. 5, T4 is the only kind of attack that exploits all the threats, which means that the MM vishing attack can be qualified as the most dangerous of all the taxonomies mentioned above.


Fig. 5. Taxonomies and their threats
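The taxonomy-to-threat mapping of Fig. 5 and Table 2 can be checked mechanically (a sketch; the threat sets are read from Table 2, and T7 is omitted because it exploits a channel weakness rather than one of the six listed threats):

```python
ALL_THREATS = {"t1", "t2", "t3", "t4", "t5", "t6"}

# Threats exploited per taxonomy, as read from Table 2.
exploits = {
    "T1": {"t5"},
    "T2": {"t6"},
    "T3": {"t3", "t4"},
    "T4": ALL_THREATS,
    "T5": {"t6"},
    "T6": {"t2", "t3", "t6"},
}

# Which taxonomies exploit every listed threat?
full_coverage = [t for t, ts in exploits.items() if ts == ALL_THREATS]
print(full_coverage)  # ['T4']: vishing exploits every threat
```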

8.1 Findings

Several observations follow from the table; we present them criterion by criterion and then as a whole. Most mobile money attacks are perpetrated through text messages and calls. Indeed, these means easily hook victims, who are held by the attacker's manipulated voice or written text. Threat t6 recurs most often, probably because it is in vogue for its ease of deployment. We note that some attacks are technical (T7) and linked to the issuer's system. Imagine that the target is someone who could be deceived: the victim receives a message asking him to consult his balance because he has won a sum of 100,000 CFA; the immediate consultation makes it possible to validate a transaction that the attacker had initiated on the victim's account. We also see that the security services targeted are integrity, because the electronic wallet is emptied; confidentiality, because the attack scenarios require identity theft; and services related to non-repudiation, since the attacker uses credentials that are not his. To achieve this, it is necessary to trigger in the victim emotions of trust, because he must believe everything that comes from the attacker; emotions of panic, through deadlines to be met or through messages about death or delusional events; and pleasant surprise, when it comes to winnings that positively impact the victim's life. It should be noted that during the attacks these emotions aim to make the victim a slave to the attacker. Criterion 4 reveals that the PIN code is the key element sought by attackers, together with identity-theft tricks to reinforce emotions. Through criterion 5, we note that the attacker will in some cases be required to repeat the interactions, possibly with different accomplices, to prevent the victim from noticing. The victims most affected are individuals with an electronic wallet, but they can also be agents of the mobile payment service. This shows that this type of cybercrime can also affect expert people.


Typically, the attacker seeks the PIN either directly from the victim or through threats related to theft and loss. The first case is the most popular and requires emotional manipulation.

8.2 What Aspects Would be Interesting for Solutions Against MM Phishing

Table 2 reveals two things to consider. The first concerns the interactions between the victim and the attacker. We believe that what happens there must be monitored over time and represented in the form of state transitions. The valued connections will then likely bring out characteristics that indicate similarities between attacks. The second concerns emotions. Emotion is an important element that can be extracted from the human voice through very specific features. Researchers can try to characterize an MM phishing interaction based on the emotions of the two interlocutors; we believe this would show how the emotion of the victim adapts to that of the cybercriminal. The emotions that emerge may be able to characterize a situation of deception or manipulation. These two aspects can be handled by artificial intelligence techniques such as deep learning, computer vision or even reinforcement learning.
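One way to operationalize this emotion-based characterization is to encode a call as a time-ordered sequence of (speaker, emotion) events and flag suspicious patterns. The following is a deliberately naive sketch: the emotion labels and the flagging rule are illustrative assumptions, not a validated detector.

```python
# A call transcript as (speaker, emotion) events over time (hypothetical data).
call = [
    ("attacker", "friendly"),
    ("victim", "neutral"),
    ("attacker", "urgent"),
    ("victim", "panic"),
    ("attacker", "reassuring"),
    ("victim", "trust"),
]

def suspicious(events):
    """Flag calls where the victim's emotion swings through both panic and
    trust, a pattern the classification associates with manipulation."""
    victim_emotions = [e for who, e in events if who == "victim"]
    return "panic" in victim_emotions and "trust" in victim_emotions

print(suspicious(call))  # True
```

A real system would replace the hand-written rule with a model trained on voice-derived emotion features, as suggested above.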

9 Conclusion and Perspectives

This study investigated cybercrimes directed at mobile payment. It was conducted in Cameroon and made it possible to identify the threats related to the payment system, to characterize a mobile-payment attack in general, to create taxonomies of attacks according to the scenarios observed, and finally to classify these taxonomies according to a set of criteria. A discussion highlighted the key elements that defenders can consider in the implementation of possible solutions. This work provides a good basis for understanding the kinds of attacks that plague countries where the population is unbanked. As a perspective, MM attack strategies in other countries will be investigated in order to enrich current knowledge.

References

1. Aker, J.: Using mobile money to help the poor in developing countries. The Fletcher School, Tufts University. https://econofact.org/using-mobile-money-to-help-the-poor-in-developing-countries. Accessed 01 Aug 2022
2. Banque mondiale: World Bank report: electronic transactions are of vital importance for economic growth (in French). https://www.banquemondiale.org/fr/news/press-release/2014/08/28/world-bank-report-digital-payments-economic-growth. Accessed 01 Aug 2022
3. Agence Ecofin: Sub-Saharan Africa generated 64.15% of worldwide mobile money transactions in 2019 (in French). https://www.agenceecofin.com/monetique/0804-75539-l-afrique-subsaharienne-a-genere-64-15-des-transactions-mondiales-par-mobile-money-en-2019. Accessed 01 Aug 2022
4. Finmark: FinScope Consumer Survey Highlights Cameroon 2017. Finmark Trust, Johannesburg, South Africa (2017). https://finmark.org.za/system/documents/files/000/000/220/original/Cameroon-pocket-guide_English.pdf?1601984244. Accessed 01 Aug 2022
5. Andzongo, S.: Cameroon: mobile money transactions surged to FCFA 3,500bn in 2017. Investiraucameroun.com. https://www.businessincameroon.com/finance/3108-8300-cameroon-mobile-money-transactions-surged-to-cfa3-500bn-in-2017. Accessed 01 Aug 2022
6. Tengeh, R.K., Gahapa Talom, F.S.: Mobile money as a sustainable alternative for SMEs in less developed financial markets. J. Open Innov. Technol. Mark. Complex. 6, 163 (2020). https://doi.org/10.3390/joitmc6040163
7. Amponsah, E.O.: The advantages and disadvantages of mobile money on the profitability of the Ghanaian banking industry. Texila Int. J. Manag. 4, 1–8 (2018)
8. Must, B., Ludewig, K.: Mobile money: cell phone banking in developing countries. Policy Matters J. 7, 27–33 (2010)
9. Jain, A.K., Gupta, B.B.: A survey of phishing attack techniques, defence mechanisms and open research challenges. Enterp. Inf. Syst. 16(4), 527–565 (2022)
10. Wang, Z., Zhu, H., Sun, L.: Social engineering in cybersecurity: effect mechanisms, human vulnerabilities and attack methods. IEEE Access 9, 11895–11910 (2021)
11. Bangda, B.: Cybercrime cost Cameroon 12.2 billion in 2021 (in French). EcoMatin. https://ecomatin.net/la-cybercriminalite-fait-perdre-122-milliards-s-au-cameroun-en2021/. Accessed 01 Aug 2022
12. Gandotra, E., Gupta, D.: An efficient approach for phishing detection using machine learning. In: Giri, K.J., Parah, S.A., Bashir, R., Muhammad, K. (eds.) Multimedia Security. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-15-8711-5_12
13. Do, N.Q., Selamat, A., Krejcar, O., Herrera-Viedma, E., Fujita, H.: Deep learning for phishing detection: taxonomy, current challenges and future directions. IEEE Access, 36429–36463 (2022)
14. Catal, C., Giray, G., Tekinerdogan, B., Kumar, S., Shukla, S.: Applications of deep learning for phishing detection: a systematic literature review. Knowl. Inf. Syst. 64, 1457–1500 (2022)
15. Quang, D.N., Selamat, A., Krejcar, O.: Recent research on phishing detection through machine learning algorithm. In: Fujita, H., Selamat, A., Lin, J.-W., Ali, M. (eds.) IEA/AIE 2021. LNCS (LNAI), vol. 12798, pp. 495–508. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-79457-6_42
16. Yu, T., Chen, X., Xu, Z., Xu, J.: MP-GCN: a phishing nodes detection approach via graph convolution network for Ethereum. Appl. Sci. 12, 7294 (2022)
17. Chen, W., Guo, X., Chen, Z., Zheng, Z., Lu, Y.: Phishing scam detection on Ethereum: towards financial security for blockchain ecosystem. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Special Track on AI in FinTech, pp. 4506–4512 (2020)
18. Panga, R.C.T., Marwa, J., Ndibwile, J.D.: A game or notes? The use of a customized mobile game to improve teenagers' phishing knowledge: case of Tanzania. J. Cybersecur. Priv. 2, 466–489 (2022)
19. Kävrestad, J., Hagberg, A., Nohlberg, M., Rambusch, J., Roos, R., Furnell, S.: Evaluation of contextual and game-based training for phishing detection. Future Internet 14, 104 (2022)
20. Ondrus, J., Pigneur, Y.: An assessment of NFC for future mobile payment systems. In: Proceedings of the International Conference on the Management of Mobile Business (2007)
21. Mbiti, I.M., Weil, D.N.: Mobile banking: the impact of M-Pesa in Kenya. NBER Working Paper No. w17129 (2011)
22. Téllez, J., Zeadally, S.: Mobile Payment Systems: Secure Network Architectures and Protocols. Computer Communications and Networks. Springer (2017)
23. Bahri-Domon, Y.: Mobile money ready for take-off in Cameroon. Mediamania. https://www.businessincameroon.com/pdf/BC28.pdf. Accessed 01 Aug 2022
24. Mbodiam, B.R.: In Cameroon, MTN and Orange wage a fierce commercial war on the mobile money market. https://www.businessincameroon.com/telecom/0202-6860-in-cameroon-mtn-and-orange-wage-a-fierce-commercial-war-on-the-mobile-money-market. Accessed 01 Aug 2022
25. BRM: Cameroon: a decree to limit numbers of SIM per subscriber and prohibit sale of SIM cards on streets. https://www.businessincameroon.com/telecom/0110-5670-cameroon-a-decree-to-limit-numbers-of-sim-per-subscriber-and-prohibit-sale-of-sim-cards-on-streets. Accessed 01 Aug 2022
26. Orange: Transfert d'argent (money transfer). https://www.orange.cm/fr/om-gestion-de-compte/transfert-d-argent.html. Accessed 01 Aug 2022
27. BRM: Mobile telephone subscribers' identification: telecom regulator ART admonishes Cameroonian operators. https://www.businessincameroon.com/public-management/1204-11454-mobile-telephone-subscribers-identification-telecom-regulator-art-admonishes-cameroonian-operators. Accessed 01 Aug 2022
28. Atabong, A.B.: Cameroon: SIM card shutdown piles on pressure for telcos. https://itweb.africa/content/lLn14Mmj9RKqJ6Aa. Accessed 01 Aug 2022
29. Johnson-Laird, P.N., Oatley, K.: Basic emotions, rationality, and folk theory. Cogn. Emot. 6(3–4), 201–223 (1992)
30. Bosamia, M.P.: Mobile wallet payments recent potential threats and vulnerabilities with its possible security measures. In: Proceedings of the 2017 International Conference on Soft Computing and its Engineering Applications (icSoftComp-2017), Changa, India, pp. 1–7 (2017)
31. Mtaho, A.B.: Improving mobile money security with two-factor authentication. Int. J. Comput. Appl. 109, 9–15 (2015)
32. Bosamia, M.P.: Mobile wallet payments recent potential threats and vulnerabilities with its possible security measures. In: Proceedings of the 2017 International Conference on Soft Computing and its Engineering Applications (icSoftComp-2017), Changa, India, pp. 1–7 (2017)
33. Saxena, S., Vyas, S., Kumar, B.S., Gupta, S.: Survey on online electronic payments security. In: Proceedings of the 2019 Amity International Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, pp. 746–751 (2019)
34. Maseno, E.M., Ogao, P., Matende, S.: Vishing attacks on mobile platform in Nairobi County, Kenya. Int. J. Adv. Res. Comput. Sci. Technol. 5, 73–77 (2017)
35. Akomea-Frimpong, I., Andoh, C., Akomea-Frimpong, A., Dwomoh-Okudzeto, Y.: Control of fraud on mobile money services in Ghana: an exploratory study. J. Money Laund. Control 22, 300–317 (2018)
36. Gilman, L., Joyce, M.: Managing the risk of fraud in mobile money (2012). http://www.gsma.com/mmu. Accessed 01 Aug 2022
37. Lonie, S.: Fraud risk management for mobile money: an overview (2017). https://www.chyp.com/wp-content/uploads/2018/06/Fraud-Risk-Management-for-MM-31.07.2017.pdf. Accessed 01 Aug 2022
38. Buku, M., Mazer, R.: Fraud in mobile financial services: protecting consumers, providers, and the system. https://www.cgap.org/publications/fraud-mobile-%EF%AC%81nancial-services. Accessed 01 Aug 2022
39. ITU: Security testing for USSD and STK based digital financial services applications. https://figi.itu.int/wp-content/uploads/2021/04/Security-testing-for-USSD-and-STK-based-Digital-Financial-Services-applications-1.pdf. Accessed 01 Aug 2022

Software Vulnerabilities Detection Using a Trace-Based Analysis Model

Gouayon Koala1(B), Didier Bassole1, Telesphore Tiendrebeogo2, and Oumarou Sie1

1 Laboratoire de Mathématiques et d'Informatique, Université Joseph Ki-Zerbo, Ouagadougou, Burkina Faso
[email protected]
2 Laboratoire d'Algèbre, de Mathématiques Discrètes et d'Informatique, Université Nazi Boni, Bobo-Dioulasso, Burkina Faso
https://www.ujkz.bf, https://www.univ-bobo.gov.bf

Abstract. Over the years, digital technology has grown considerably. With this growth, information systems security has increasingly become a major concern. In this paper, we propose an analysis model based on application execution traces. This model makes it possible to improve the detection of vulnerabilities in applications. After evaluating each of the tracing techniques, we derived a model that takes these techniques into account and combines them with machine learning techniques. In this way, applications undergo several analyses, which reduces the effect of the evasion techniques used by hackers to circumvent proposed solutions. We focused on Android applications because of their increasing popularity, with a variety of services and features offered, making them a favourite target for hackers. These hackers use every means to exploit the slightest flaw in applications; unfortunately, the proposed solutions remain insufficient and sometimes ineffective in the face of their determination.

Keywords: Execution traces · Vulnerabilities · Tracing techniques · Applications · Attacks

1 Introduction

The role that applications play in computer systems makes them essential to progress. The use of applications, although indispensable for companies and individuals, makes them a privileged target for malicious attacks [1]. These attacks exploit known or unknown vulnerabilities in the applications, and the solutions proposed in the literature remain insufficient in view of the growth of attacks. Attacks and exploits are increasingly motivated by financial gain. These motives have led to changes in the nature of the attacks and in the procedures used by cybercriminals, who employ techniques that make malware more sophisticated in order to evade the detection or protection systems in place [2].

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 446–457, 2023. https://doi.org/10.1007/978-3-031-34896-9_27


However, when applications contain flaws, cybercriminals exploit them to their advantage. How do we combine the protection of application data with the continuous innovation of these applications? This is the question that mobilises researchers in their quest to solve these data security problems. Several avenues are being explored and several solutions are being experimented with in order to protect data [2–5]. Unfortunately, these efforts come up against the determination of hackers, which means that existing solutions are either insufficient or ineffective in the face of the rise of hacking techniques [1]. This study contributes to the search for solutions that reduce the effects of malicious actions on applications. For this purpose, we have chosen a dynamic approach based on the analysis of application traces during their execution: we propose a model for analysing mobile application traces in order to improve data protection. The aim is to improve the detection of vulnerabilities in mobile applications, which in turn improves data protection. The rest of this document is organised as follows. Section 2 presents work related to our study, while Section 3 provides a comparative study of different application tracing tools. In Section 4 we describe our approach, with a model based on the collection of application execution trace data. Section 5 concludes with a synthesis of our contribution and our future work.

2 Related Works

2.1 Malware Evolution

According to the Gartner report [6], the number of applications has increased considerably in recent years; they are used worldwide and in all areas of activity. Application development is highly competitive, as developers come up with applications whose features meet users' needs [5,6], and almost all organisations have adopted the new uses offered by these applications. In addition to reduced development time, developers must deal with increasingly complex platforms. Indeed, Cueva et al. [7] have shown in their study of embedded multimedia applications that the increase in hardware complexity has led to an increase in application complexity. Moreover, private and confidential information is routinely stored in these applications, which makes them prone to vulnerabilities and malicious attacks [8].

Several software vulnerabilities are detected every year. Studies [1,6] show that malware threats are increasing with the continued expansion of the functionality of associated devices. Some applications are harmful and contain vulnerabilities that put users' data at risk. It is therefore necessary to detect vulnerable and malicious software. Protection mechanisms based on signatures (antivirus), firewalls, etc. are sometimes necessary, but have become ineffective over the years against new threats from cybercriminals [2–4,9,10]. The constant evolution of threats and of malware attacks that exploit software vulnerabilities requires new techniques to analyse and detect these vulnerabilities. In the remainder of this section, we study these techniques, particularly tracing techniques.

2.2 Tracing

Having efficient and robust applications remains a challenge for developers and researchers, and traditional application protection measures have shown their limits. In the quest to improve the security of software data, researchers have explored solutions based on application traces [11–13]. The purpose of tracing an application is to obtain precise information on its behaviour, and most of this information is precisely dated [11]. The study by Hassan et al. [14] on the traces of web applications shows the possibility of obtaining information such as the date, time, duration and IP address of the web applications used. Zhou et al. [15] add that important information can be collected and transmitted using the TCP/IP protocol; this information makes it possible to link each IP address to the applications running on the device. The content of the traces is thus essentially accurate and can be recorded and analysed for security purposes. To obtain this information, tracing relies on events that are captured when the system reaches certain states [11,16]. The collection of traces is fluid and only slightly modifies the application's behaviour. The events are produced using trace points (see Sect. 4). Several tracing techniques have been used in previous work, including debugging, profiling and logging. In practice, these techniques are similar: all of them present the user with information about the monitored system so that the user has a better understanding of its behaviour. However, the corresponding tools differ in several aspects.
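Conceptually, a trace point is just a call site that emits a timestamped event whenever execution reaches it. The idea can be sketched in a few lines of Python (an illustration of the principle only, not LTTng or any real tracer; `TRACE`, `tracepoint` and the event names are invented for the example):

```python
import functools
import time

# Global in-memory event buffer standing in for a tracer's ring buffer.
TRACE = []

def tracepoint(name):
    """Decorator acting as a trace point: each time the wrapped function
    is reached, a timestamped event is appended to the trace."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            TRACE.append({"ts": time.monotonic_ns(), "event": name, "args": args})
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@tracepoint("app:open_file")
def open_file(path):
    return "opened " + path

open_file("/etc/hosts")
open_file("/tmp/data")
print([e["event"] for e in TRACE])  # ['app:open_file', 'app:open_file']
```

In a real tracer the buffer lives in shared memory and is drained by a separate daemon; the decorator above only illustrates where events come from and why they barely perturb the traced program.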

2.3 Debugging, Profiling and Logging Techniques

Debugging. According to Gruber's work [17], debugging is a way of checking software functions to determine whether the software does what it should do. It is used by developers during the validation of an application to discover and, where possible, eliminate bugs. Debugging therefore has a different purpose than tracing: it allows us to understand the reasons for the errors that occurred during the execution of a program [18]. This is why debugging techniques are used for development purposes; they make it possible to show the user precise and useful messages for understanding the causes of system errors. Belkhiri [18] shows in his work that some bugs can remain hidden until integration and are therefore difficult to diagnose and solve. The authors of [17–20] explain that frameworks with advanced features are needed to debug and optimise applications. One of the limitations of debugging is the step-by-step inspection of the system state. In addition, debugging only identifies bugs that are easy to find, so debugged applications can still be vulnerable. Finally, debugging is less useful during the execution of an application, so vulnerabilities may escape it. It is therefore essential to find optimal solutions for diagnosing vulnerabilities and optimising computer programs.

Profiling. Profiling provides users with general information about the system. For Gruber [17], profiling allows one to verify that the application being analysed is doing what it should be doing at the time it is supposed to. Profiling also provides statistics on memory usage and execution time for the monitored system [17,19]. While profiling can identify bottlenecks in application execution, it does not take into account the individual events that occur while the system is being monitored: it presents the user with averages corresponding to the behaviour observed during that time, but not the order in which events occurred. Furthermore, unlike tracing, the number of results produced by profiling an application does not increase as the profiling time increases. Profiling therefore does not provide enough information to successfully diagnose problems in an application.

2.4 Logging

Logging makes it possible to record valuable information about the events of a system in the course of its activity, such as access to a system or file, or the modification or transfer of data [21]. The works [21–23] show that logging resembles tracing in the sense that the aim is also to record events occurring in a system. However, the data recorded is very high level, unlike tracing events, which can be very low level. According to these studies, the frequency of recording in logging is very low compared to that of tracing, and the information collected typically consists of errors reported in the program output. Given the new architectures that digital technology is adopting, such as mobile devices, it is difficult for logging alone to solve these problems effectively: one must both manage the interactions between different services and record the path of a trace that passes through various functions. Logging tools are thus relatively inefficient for recording frequent events. We will therefore consider tracing tools as optimised logging tools capable of gathering much more information. In sum, debugging, profiling and logging tools do not provide enough information for an optimal diagnosis of software vulnerabilities. Tracing, which subsumes these tools, is an alternative and inclusive solution. Several tracing tools have been proposed for the visualisation of application traces.
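The contrast can be seen with the standard Python logging module: a handful of high-level records per run, rather than a continuous stream of low-level events (the logger name and messages below are invented for the illustration):

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

log = logging.getLogger("app.transfer")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False  # keep the records local to this handler

# A few high-level milestones and errors, not every event in the system.
log.info("file transfer started: report.pdf")
log.error("transfer failed: connection reset")

print(buf.getvalue(), end="")
# INFO app.transfer: file transfer started: report.pdf
# ERROR app.transfer: transfer failed: connection reset
```

A tracer, by contrast, would record every system call or function entry behind that transfer, with nanosecond timestamps, which is exactly the granularity the diagnosis of vulnerabilities needs.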

3 Tracing and Visualisation Tools

3.1 Applications Tracing Tools

Tracers are tools responsible for collecting events occurring during system execution and storing them in an orderly fashion in a trace. They are used to understand the functional or interactive behaviour of the system through relevant and valuable data. There are several tracers, which differ in the volume of data collected, the associated system and the behaviours represented. These tracers can save events that occur in the operating system (kernel tracing) or in applications (user-space tracing) within the same trace. Kernel tracing provides information about the execution of a system or an application, while user-space tracing allows an application to be traced by observing its own behaviour. We are therefore interested in user-space tracing: on the one hand, it allows the source code of an application to be modified to insert trace points in order to analyse the application's behaviour; on the other hand, we want to collect more application-specific information. We will study the main user-space tracers based on the performance of each tracer.

Strace. Strace is a powerful, simple and easy-to-use tracer. It provides its user with a record of the system calls that a program makes. This tracer is offered in many distributions and can provide statistics on the time a program spends in the kernel [24]. To obtain an Strace trace, the program must be launched by passing the command to be traced as a parameter to strace. It is possible to specify filters and restrict the trace to certain system calls. One of the advantages of Strace is that it can easily record the opening and closing of files by any program [25]. However, Strace uses only one output file per process, which makes it very inefficient on parallelised programs, since it takes a lock to write the collected information. In addition, Strace performs poorly when tracing system calls, as it introduces two extra system calls for each traced system call; this can slow down the execution time of the program.

DTrace. DTrace is a tracer that allows us to run scripts. DTrace has been made compatible with Mac OS X and Linux environments by a team at Oracle [24]. A user can instrument system calls without having to recompile their application code. Since DTrace attaches to probes in the kernel, it is possible to trace all processes running on the system. This tracer is part of Apple Inc.'s development support software. It is known to have no impact on the traced system when trace points are disabled, and it uses dynamic instrumentation of user-space applications. DTrace has its own C-like language to describe the actions to be taken when a predicate is met, which provides a unified interface for user-space application tracing. In terms of performance, DTrace's tracing overhead increases when tracing applications, due to the number of threads involved in the concurrent-execution protection mechanisms used during tracing [25,26].

http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html
http://dtrace.org/blogs/about/
https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/
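Strace records follow the common `name(args) = return` shape, which lends itself to simple post-processing; a sketch in Python (the sample lines are illustrative, not output from a specific run):

```python
import re
from collections import Counter

# Typical strace-style records: "syscall(args) = return" (illustrative sample).
SAMPLE = """\
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
read(3, "\\177ELF"..., 832) = 832
close(3) = 0
openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 4
"""

LINE = re.compile(r"^(\w+)\((.*)\)\s+=\s+(-?\d+)")

def parse(trace_text):
    """Yield (syscall, return_value) pairs from strace-style output."""
    for line in trace_text.splitlines():
        m = LINE.match(line)
        if m:
            yield m.group(1), int(m.group(3))

counts = Counter(name for name, _ in parse(SAMPLE))
print(counts["openat"])  # 2
```

Counting and filtering syscalls like this is exactly the kind of "opening and closing of files" record the tracer makes easy to exploit.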


SystemTap. SystemTap is a tracer recommended mainly for system administrators, to give them easy access to kernel tracing and statistics as soon as a problem occurs. It is a tracing tool that combines trace generation with the analysis of results. Like DTrace, SystemTap allows the use of scripts to define actions when a trace point is reached; scripts can be attached to the beginning or end of functions or, in some cases, to system calls [26]. The difference is that SystemTap scripts are compiled to native code and run faster than DTrace's bytecode-compiled scripts. Compared with Strace, SystemTap analyses the activity of all the processes in the system: it can analyse all interactions of the same type for all running processes, whereas Strace only analyses one or a few specific processes. This provides more information for identifying problems caused by the activity of other processes. However, the synchronisation techniques used by SystemTap increase the tracing overhead [25]. The on-the-fly calculation of statistics and the flexibility of the tracer are an advantage when diagnosing performance problems, but the traces are often large and are written to a text file or directly to a console. Moreover, the analysis of the results is done during the tracing process, so it is impossible to perform more complex or customised analyses afterwards. For these reasons, the use of SystemTap is of limited interest for our study.

LTTng-UST. The Linux Trace Toolkit Next Generation User-Space Tracer (LTTng-UST) is the component of the LTTng kernel tracer (https://lttng.org) that runs entirely in user mode. This tracer has been designed to minimise its impact on the traced system. It uses the same architecture as LTTng and relies on trace points inserted at arbitrary locations in the code, which send information about the state of the system to the tracer when they are encountered during execution of the instrumented program. The program is dynamically linked at run time to the liblttng-ust library. To scale efficiently to multiple cores, this tracer allocates a circular buffer for each execution core, which allows programs with parallelised execution to be instrumented without penalising the different execution threads [24]. The buffers are managed through atomic operations and are shared with a background process responsible for transferring the stored data to disk or sending it across the network, flushing the buffers before they fill up. However, the buffers can fill too quickly if, for example, the network or disk throughput is lower than the event generation rate; in that case new events overwrite older ones. A buffer snapshot can be triggered when necessary, for example when an error occurs or when too much latency is detected, so that a trace of the few seconds preceding the incident becomes available for analysis. By default, LTTng prefers to lose events rather than block the process while waiting for the disk or the network, in order not to slow down the execution of the program. The advantage is reduced system disruption and no need to manage the storage of large traces. With the modules developed so far, LTTng-UST can instrument programs written in C/C++, Java and even Python. Furthermore, it is easy to combine traces produced in user space by LTTng-UST with kernel traces produced by LTTng. Tracing with LTTng-UST does not require any system calls, which makes it very efficient: events are written by applications to buffers in shared memory, and a single daemon collects the events generated by all instrumented applications on the system.

3.2 Viewing Traces

The visualisation tools provide appropriate views of the execution of programs in order to make sense of the information gathered. Raw trace data can be displayed, but when the traced program becomes complex, it is difficult to get an overview of the application's execution in order to detect functionality or performance problems. Several trace visualisation tools have been proposed to analyse events. In this study, we favour tools that visualise traces generated in the CTF format, even though such traces are binary and therefore difficult to read directly.

Babeltrace. Babeltrace is an LTTng trace visualisation tool used to print the content of a trace to the console. This tool makes it easy to read traces without performing any analysis. In addition to the libbabeltrace library, which allows other programs to read LTTng traces, it can display CTF traces as ordered textual events. These events are precisely timestamped, with the name of the event and the arguments recorded at the moment the trace point was encountered during system execution. An advantage of this tool is that it can convert traces from CTF format to another format and vice versa, while providing a programming interface in C or Python. This makes it possible to filter the desired events for analysis; the programming interfaces also allow binary data to be converted into data structures that can be modified and consumed by other algorithms. The weakness of Babeltrace lies in the slow reading of CTF traces when the number of events is high.

Trace Compass. Formerly known as the Tracing and Monitoring Framework plugin within the Eclipse environment, Trace Compass is a popular trace visualisation and analysis tool that supports several trace formats, including CTF. It is designed to quickly process very large traces, and it processes events using finite state machines in order to present different results.
This tool provides clear and differentiated views of the recorded traces. The conversion of traces in CTF format to a human-readable format is also done using state machines, which are very efficient for handling large numbers of recorded events [25]; the construction of the state tree is particularly optimised. When a trace is loaded into the software, the state tree is computed for the interval visible in the current view, which allows the tool to display traces quickly and users to explore the recorded events efficiently. In addition, various views have been developed to analyse the traced system and display statistics about it over the traced period. To obtain richer views, it is possible to add Java code to the project or even attach scripts written in Python that process the visualised traces without modifying the source code. Trace Compass is therefore a preferred tool for visualising recorded traces and for conducting fast and powerful analyses. A limitation of Trace Compass is that each analysis is confined to its own view, so it is necessary to combine the information that the different analyses offer about the same resource.

scripts written in Python to process the visualized traces without modifying the source code. Trace Compass is therefore a preferred tool for visualizing recorded traces and for conducting fast and powerful analyses. A limitation of Trace compass is that each analysis is confined to its own view. It is therefore necessary to combine the information that each analysis offers about the same resource.

4 Approach

4.1 Choice of Applications

In the rest of our work, we restrict our study to mobile applications, particularly Android applications. Our choice is motivated by the popularity of Android [8,27], the growing number of vulnerabilities in applications (Gartner report, [27]), the threats to which users are exposed [15,28], and the inadequacy of existing solutions for protecting the data that passes through these applications [1,9,29]. Functions that were previously reserved for hardware are now handled by mobile applications. In addition, the openness of Android benefits developers, and the fact that applications are free to download and use has helped attract users [8,15]. According to Gartner's digital report, Android leads with over 82% of the mobile market and over 2 billion shipments by 2021. This expansion of applications has also created data security challenges. Several studies show that vulnerabilities in applications make Android the most vulnerable system and the one most targeted by malicious attacks [2,4,8,27,30]. The scale of attacks targeting Android users is considerable: these attacks exploit vulnerabilities whose increasing number has caused enormous damage to users. Unfortunately, the threat of malware grows every year, and new malware such as ransomware has evolved in sophistication to evade existing scanning techniques. The complexity of malware, its evolution, the increase in the damage caused by its attacks and the inadequacies of traditional protection mechanisms prompt us to explore execution traces to improve the protection of application data on this system. In addition, few works have focused on the tracing of mobile applications.

4.2 Using Machine Learning Techniques

Given the number of vulnerabilities in Android applications, our approach combines machine learning techniques with tracing for vulnerability detection. Machine learning is one of the newer approaches used to detect vulnerabilities in Android applications [30,31]. Its algorithms, combined with our approach, will improve the identification of software vulnerability risks in Android and thus reduce the threat of malware. After recovering the events and visualising the obtained traces, we will extract features from these traces to build vectors suitable for analysis by the algorithms.

https://www.gartner.com/en/information-technology/insights/top-technology-trends/top-technology-trends-ebook


In addition, we will build a dataset to train our model to learn from the data. To the vectors we will apply Logistic Regression (LR), Linear Discriminant Analysis (LDA), K-Nearest Neighbours (KNN), Decision Tree Classifier (DTREE-CART), Naive Bayes (NB), MLP Classifier (MLP), Random Forest Classifier (RFOREST) and Support Vector Machine (SVM). We will then compare the results of these algorithms on vulnerability detection from execution traces. In this way we will obtain results that improve the accuracy, efficiency and effectiveness of vulnerability detection.
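Since the dataset and the experiments with these eight algorithms are future work, the shape of such a comparison can be sketched with standard-library Python only, using a toy 1-nearest-neighbour classifier and a majority-class baseline as stand-ins (all data here is synthetic, not from any trace):

```python
import math
import random

random.seed(0)

# Synthetic 2-D feature vectors: class 0 clusters near (0, 0), class 1 near (5, 5).
data = [([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(40)] + \
       [([random.gauss(5, 1), random.gauss(5, 1)], 1) for _ in range(40)]
random.shuffle(data)
train, test = data[:60], data[60:]

def knn1(x):
    """Toy 1-nearest-neighbour classifier standing in for the real algorithms."""
    return min(train, key=lambda t: math.dist(x, t[0]))[1]

def majority(_x):
    """Baseline: always predict the most frequent training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(clf):
    return sum(clf(x) == y for x, y in test) / len(test)

for name, clf in [("1-NN", knn1), ("majority", majority)]:
    print(name, round(accuracy(clf), 2))
```

The real comparison would swap the two stand-in classifiers for the eight scikit-learn estimators over the same `accuracy`-style harness, with cross-validation on the trace-derived dataset.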

4.3 Model Proposed

In the model we propose in Fig. 1, our approach extends tracing techniques to Android applications and combines them with machine learning techniques. The model has two stages of analysis of Android applications. To implement our Android application tracing approach, we will collect applications (both malware and benign) from several sources and instrument their code.

Fig. 1. Model using Android application execution traces

The first step is to use tracepoints. Each tracepoint makes a call containing information about the state of the application; with the insertion of tracepoints, we obtain data on the operation of the traced application and a description of its interactions. To capture the events, that is, the data exported by the tracepoints, we selected the LTTng-UST tracing tool. This tracer is known for its low overhead and its ability to trace applications running in user space. The collected events are stored in a trace in an orderly fashion. We then use the visualisation tool Trace Compass to convert the collected binary CTF data into a readable form, and we analyse its content to identify possible precursor behaviours of attacks, based on functional and interactive deviations. Tracing and visualisation tools can provide an effective first pass, but they are not sufficient on their own to identify vulnerabilities, which encourages us to add learning techniques to refine the examination of the traces. The second step is therefore the extraction of features from the CTF files. These features are used to construct feature vectors for analysis with the algorithms listed previously. The aim is to convert the data into vectors of numbers for model learning, so that the model can identify the origin of undesirable application behaviour. Using the Python interface, the events in the trace can be read and converted into dictionary objects; after various transformation steps, these objects can be used to feed the machine learning algorithms.
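The feature-extraction step, turning an ordered event stream into a fixed-length numeric vector, can be as simple as counting event types over a fixed vocabulary (the event names below are invented for the illustration):

```python
from collections import Counter

# A fixed vocabulary of event types defines the vector dimensions.
VOCAB = ["fs:open", "fs:read", "net:connect", "net:send", "proc:exec"]

def vectorize(trace):
    """Map an ordered list of event names to a fixed-length count vector."""
    counts = Counter(trace)
    return [counts[name] for name in VOCAB]

benign  = ["fs:open", "fs:read", "fs:read"]
suspect = ["net:connect", "net:send", "net:send", "proc:exec"]

print(vectorize(benign))   # [1, 2, 0, 0, 0]
print(vectorize(suspect))  # [0, 0, 1, 2, 1]
```

Richer encodings (event n-grams, inter-event timings) fit the same mould: each trace becomes one numeric row that the learning algorithms can consume.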

5 Conclusion

New threats against digital systems call for new approaches to improve data protection. In this paper we have presented a model to improve the detection of vulnerabilities in Android applications. Our model combines tracing and machine learning techniques to analyse the execution traces of Android applications. Such traces have proven valuable for analysing applications, by comparing intended functionality with runtime behaviour. For this purpose, we selected LTTng-UST as the tracer to capture and record events and Trace Compass as the visualisation tool. To these powerful tools we proposed adding machine learning algorithms, which provide detailed information on application behaviour as well as on the trace structure. The main contribution of this paper is a model that combines user-space tracing techniques with machine learning techniques for the advanced analysis of Android applications. In addition, the paper provides a comparative study of tracing techniques, tracers and runtime trace visualisation tools. Our approach aims to limit the actions of malware that uses evasion techniques, and it will significantly improve the detection of vulnerabilities and of real problems in Android applications. In the future, we plan to compare the results obtained by applying our model with other approaches, to evaluate its success rate and effectiveness. The objective is to have applications that are resistant to malware evasion techniques.

References

1. Nguyen, K.D.T., Tuan, T.M., Le, S.H., Viet, A.P., Ogawa, M., Minh, N.L.: Comparison of three deep learning-based approaches for IoT malware detection. In: 10th International Conference on Knowledge and Systems Engineering (2018)


2. Dehkordy, D.T., Rasoolzadegan, A.: A new machine learning-based method for Android malware detection on imbalanced dataset. Multimedia Tools Appl. 80(16), 24533–24554 (2021). https://doi.org/10.1007/s11042-021-10647-z
3. Lin, G., Wen, S., Han, Q.-L., Zhang, J., Xiang, Y.: Software vulnerability detection using deep neural networks: a survey. In: Proceedings of the IEEE (2020). https://doi.org/10.1109/JPROC.2020.2993293
4. Dong, S., et al.: Understanding Android obfuscation techniques: a large-scale investigation in the wild. In: Beyah, R., Chang, B., Li, Y., Zhu, S. (eds.) SecureComm 2018. LNICST, vol. 254, pp. 172–192. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01701-9_10
5. Garcia, J., Hammad, M., Malek, S.: Lightweight, obfuscation-resilient detection and family identification of Android malware. ACM Trans. Softw. Eng. Meth. (TOSEM) (2018)
6. https://www.gartner.com/en/information-technology/insights/top-technology-trends/top-technology-trends-ebook
7. Cueva, P.L., Bertaux, A., Termier, A., Méhaut, J.F., Santana, M.: Debugging embedded multimedia application traces through periodic pattern mining. In: Proceedings of the Tenth ACM International Conference on Embedded Software, EMSOFT 2012, pp. 13–22 (2012). https://doi.org/10.1145/2380356.2380366
8. Koala, G., Bassolé, D., Zerbo/Sabané, A., Bissyandé, T.F., Sié, O.: Analysis of the impact of permissions on the vulnerability of mobile applications. In: International Conference on e-Infrastructure and e-Services for Developing Countries, AFRICOMM 2019, pp. 3–14 (2019). https://doi.org/10.1007/978-3-030-41593-8_1
9. Ghaffarian, S.M., Shahriari, H.R.: Software vulnerability analysis and discovery using machine-learning and data-mining techniques: a survey. ACM Comput. Surv. 50(4), 1–36 (2017)
10. Lei, T., Qin, Z., Wang, Z., Li, Q., Ye, D.: EveDroid: event-aware Android malware detection against model degrading for IoT devices. IEEE Internet Things J. (2019). https://doi.org/10.1109/JIOT.2019.2909745
11. Lebis, A.: Capitaliser les processus d'analyse de traces d'apprentissage : modélisation ontologique et assistance à la réutilisation. Thèse, Sorbonne Université (2020). https://tel.archives-ouvertes.fr/tel-02164400v2
12. Galli, T., Chiclana, F., Siewe, F.: Quality properties of execution tracing, an empirical study. Appl. Syst. Innov. 4, 20 (2021). https://doi.org/10.3390/asi4010020
13. Hojaji, F., Mayerhofer, T., Zamani, B., Hamou-Lhadj, A., Bousse, E.: Model execution tracing: a systematic mapping study. Softw. Syst. Model. 18(6), 3461–3485 (2019). https://doi.org/10.1007/s10270-019-00724-1
14. Hassan, N.A., Hijazi, R.: Digital Privacy and Security Using Windows. Apress, Berkeley, CA (2017). https://doi.org/10.1007/978-1-4842-2799-2
15. Zhou, D., Yan, Z., Fu, Y., Yao, Z.: A survey on network data collection. J. Netw. Comput. Appl. 116, 9–23 (2018). https://doi.org/10.1016/j.jnca.2018.05.004
16. Lazar, J., Feng, J.H., Hochheiser, H.: Chapter 12 - Automated data collection methods. In: Research Methods in Human Computer Interaction, 2nd edn., pp. 329–368. Elsevier (2017). https://doi.org/10.1016/B978-0-12-805390-4.00012-1
17. Gruber, F.: Performance debugging toolbox for binaries: sensitivity analysis and dependence profiling, pp. 3–10 (2020). https://tel.archives-ouvertes.fr/tel-02908498
18. Belkhiri, A.: Analyse de performances des réseaux programmables, à partir d'une trace d'exécution (2021). https://publications.polymtl.ca/9988/1/2021_AdelBelkhiri.pdf


19. Venturi, H.: Le débogage de code optimisé dans le contexte des systèmes embarqués, pp. 13–40 (2008)
20. Iegorov, O.: Data mining approach to temporal debugging of embedded streaming applications, pp. 89–95 (2018). https://tel.archives-ouvertes.fr/tel-01690719
21. Bationo, Y.J.: Analyse de performance des plateformes infonuagiques. École Polytechnique de Montréal, pp. 19–28 (2016)
22. Reumont-Locke, F.: Méthodes efficaces de parallélisation de l'analyse de traces noyau (2015). https://publications.polymtl.ca/1899/1/2015_FabienReumontLocke.pdf
23. Ravanello, A.: Modeling end user performance perspective for cloud computing systems using data center logs from big data technology. Thesis (2017)
24. Kouamé, K.G., Ezzati-Jivan, N., Dagenais, M.R.: A flexible data-driven approach for execution trace filtering. In: IEEE International Congress on Big Data (BigData Congress), New York, NY, USA (2015). https://doi.org/10.1109/bigdatacongress.2015.112
25. Bationo, Y.J., Ezzati-Jivan, N., Dagenais, M.R.: Efficient cloud tracing: from very high level to very low level. In: IEEE International Conference on Consumer Electronics (ICCE 2018), Las Vegas, NV, USA (2018). https://doi.org/10.1109/icce.2018.8326353
26. Ezzati-Jivan, N., Bastien, G., Dagenais, M.R.: High latency cause detection using multilevel dynamic analysis. In: Annual IEEE International Systems Conference (SysCon), Vancouver, Canada (2018). https://doi.org/10.1109/syscon.2018.8369613
27. Agrawal, P., Trivedi, B.: A survey on Android malware and their detection techniques. In: IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT) (2019). https://doi.org/10.1109/ICECCT.2019.8868951
28. Qamar, A., Karim, A., Chang, V.: Mobile malware attacks: review, taxonomy and future directions. Future Gener. Comput. Syst. 97, 887–909 (2019). https://doi.org/10.1016/j.future.2019.03.007
29. Zhou, Q., Feng, F., Shen, Z., Zhou, R., Hsieh, M.-Y., Li, K.-C.: A novel approach for mobile malware classification and detection in Android systems. Multimedia Tools Appl. 78(3), 3529–3552 (2018). https://doi.org/10.1007/s11042-018-6498-z
30. Sestili, C.D., Snavely, W.S., VanHoudnos, N.M.: Towards security defect prediction with AI (2018). arXiv:1808.09897. http://arxiv.org/abs/1808.09897
31. Fernández, A., García, S., Galar, M., Prati, R.C., Krawczyk, B., Herrera, F.: Imbalanced classification for big data. In: Learning from Imbalanced Data Sets, pp. 327–349. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-98074-4_13

Subscription Fraud Prevention in Telecommunication Using Multimodal Biometric System

Freddie Mathews Kau1 and Okuthe P. Kogeda2(B)

1 Tshwane University of Technology, Pretoria 0001, South Africa
2 University of the Free State, Bloemfontein 9300, South Africa

[email protected]

Abstract. The South African telecommunications market has reached a saturation point; as a result, telecommunication companies spend most of their budget on customer acquisition and retention, and very little on fraud prevention or detection systems. This spending pattern has caused an increase in fraud, making it the most significant source of revenue leakage in telecommunications, where the leading fraud type is subscription fraud. Subscription fraud has a direct negative impact on the company's revenue, employees' bonuses, and customers' credit status. Although current fraud systems can detect subscription fraud, they cannot identify the fraudster. This enables the fraudster to commit fraud using the same or multiple identity documents during the contract application process without being detected. In trying to change the spending pattern and prevent subscription fraud, we sought to determine the impact of subscription fraud on mobile telecommunication companies. We designed, developed, and implemented a Multimodal Biometrics System (MBS) using Python, SQLite3, and JavaScript to enable telecommunication companies to capture and store customer faces and fingerprints and use them for verification before approving a contract. We used the Principal Component Analysis (PCA) algorithm to reduce the dimension of the face and fingerprint images; PCA outperformed the Independent Component Analysis and Linear Discriminant Analysis algorithms. For image matching, we used the PCA-based representation for local features (PCA-SIFT) algorithm, which outperformed the Scale Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) algorithms. MBS achieved a biometric matching accuracy of 94.84%. MBS is easy to implement and cost-effective. The system can help identify the fraudster, prevent subscription fraud, and reduce revenue leakage.
Keywords: Subscription Fraud · Telecommunication · Fingerprint · Biometrics · Face Biometrics · PCA Algorithm · Multimodal Biometrics System

1 Introduction

The South African telecommunication market comprises well-known Information and Communication Technology (ICT) companies: Telkom and Neotel, mainly fixed-line or fixed-wireless providers, and Mobile Telephone Network, Vodacom, and Cell C, primarily

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023
Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 458–472, 2023. https://doi.org/10.1007/978-3-031-34896-9_28


mobile telecommunication providers. Fixed-line has over 1.8 million subscribers, and mobile telecommunication has over 103 million subscribers [1]. South Africa's population comprises about 59.62 million people [2]. This shows that mobile telecommunication in South Africa (SA) has moved from a fast-growing to a saturated state. With this saturation and the high usage of cellular phones come several challenges, such as revenue growth and fraud. Casey et al. [3] warned that as people and organizations become more dependent on mobile phones, computer criminals focus more on how they can victimize individuals and break into corporate networks. The growth and high usage of technology do not tame fraud. KPMG [4] points out that technological advances are a double-edged sword: they provide companies with more powerful tools for defence against fraud, while also enabling fraudsters to discover areas of vulnerability to attack. Fraud in mobile telecommunication is defined as illegal access to a mobile operator's network and the use of its services for unlawful gain to the detriment of the network operator and/or its subscribers [5]. Wieland, cited by [6], indicates that fraud is the telecommunication industry's largest and most significant revenue leakage area. The survey by [7] shows that telecommunications companies lost over $38 billion due to fraud. The report further lists the top five fraud methods: subscription fraud (SF), private branch exchange (PBX) hacking, dealer fraud, service abuse, and account takeover. SF is the main contributor to revenue leakage in mobile telecommunication, and its impact is enormous [22]. SF is defined as acquiring telecommunications services using fraudulently obtained subscriber documents or false identification [8]. Both prepaid and postpaid subscribers can be victims of SF.
The fact that SF uses a fake identity makes it the most concerning fraud, because mobile telecommunication companies cannot confidently identify subscribers beyond a reasonable doubt. "Mobile devices such as cell phones and smartphones have become an integral part of people's daily lives, and as such, they are prone to facilitating criminal activities or otherwise being involved when crimes occur" [3]; this means that incorrect identification of a subscriber indirectly defeats the ends of justice. Kabari et al. [9] state that its complexity makes subscription fraud hard to detect. According to [4], weak fraud controls are the biggest issue for companies victimized by fraud. They further state that companies are not investing in more robust anti-fraud controls mainly because of economic challenges. Another problem mobile telecommunication faces is that one fraudster can commit multiple SFs. In their report, Estévez et al. [6] discovered that a fraudster committed seven fraud cases within three months without being detected. Telecommunication fraud is the number one enemy in the industry mainly because, with other revenue losses, companies can still recover money from subscribers, whereas with fraud a company loses both the cost and the expected revenue. Money lost through fraud can be irrecoverable, especially if the fraud committed is below the fraud insurance excess. This background showed us that there was a need for a system that could prevent SF and proactively stop multiple SFs by the same fraudster. We sought to prevent SF and identify a fraudster beyond a reasonable doubt using the multimodal biometric


traits: fingerprints and face. Therefore, our proposed system, connected to a customer relationship management (CRM) system, was used to conduct identity verification using a fingerprint biometric scanner for the fingerprint and a webcam for the face. Our research was conducted on South African mobile telecommunication, focusing on postpaid SF. The rest of the paper is organized as follows. In Sect. 2, an overview of subscription fraud is provided. In Sect. 3, biometrics is described. In Sect. 4, an outline of the research methodology is provided. In Sect. 5, we discuss testing results. In Sect. 6, the conclusion and future work are presented.

2 Overview of Subscription Fraud

Subscription fraud is the acquisition of telecom services using fraudulently obtained subscriber documents or false identification [8]. A fraudulently obtained identity document here corresponds to identity theft, which occurs when a fraudster steals the identity documents of an existing person without their consent to commit fraud [10]. False identification, defined as identity fraud, occurs when a fraudster creates an identity document for a non-existing person [10]. Three types of SF methods are named by [11]: a) SF (Identity) is the utilization of a real identity, without the knowledge of its owner, to obtain goods and services with no intention to pay; SF (Identity) contributed 2 billion USD to revenue loss in 2016 [11]. b) SF (Application) is the creation of false details to gain access to goods and services with no intention to pay; SF (Application) contributed 1.9 billion USD to revenue loss in 2016 [11]. c) SF (Credit Muling/Proxy) is the utilization of real identity details to obtain goods and services with no intention to pay; SF (Credit Muling/Proxy) contributed 1.8 billion USD to revenue loss in 2016 [11].

3 Biometrics

Biometrics, a word made up of two Greek roots, bios meaning life and metrics meaning measure, is defined as the measurement of the physical or behavioural biological characteristics of a person [12]. For many decades, biometric characteristics have been used for identification and authentication, mainly because they are more reliable in determining a person's identity than tokens and ID cards, which can be misplaced or shared [13]. There are two categories of biometric traits [14]:
i. Behavioural Traits
Behavioural traits are biometric traits related to the behaviour of a person. Commonly used examples are voice print and signature dynamics [15].
ii. Physiological Traits
Physiological traits are biometric traits related to the body's shape and differ from person to person. The commonly used physical traits are [15]: face, fingerprint, iris pattern, and DNA, while the most prevalent behavioural traits are voice and signature dynamics. In South Africa, the Department of Home Affairs (DHA) also


uses two biometric traits, face and fingerprint, to issue ID documents [16]. For this reason, we chose to use fingerprints and faces in our study, because they can be verified against the central database of the DHA.
a) Face
Face traits are less intrusive and commonly used in biometrics; as humans, we recognize each other by face. Most algorithms have a high failure rate caused by facial expressions and different angles, and slow performance when retrieving images from the DB [17].
b) Fingerprint
The fingerprint is the mother biometric and the most widely used one. What makes fingerprints robust is that they remain constant for a person's entire life. The advent of several inkless fingerprint scanning technologies, coupled with the exponential increase in processor performance, has taken fingerprint recognition beyond criminal identification applications to several civilian applications such as access control, time and attendance, and computer login [18].
iii. Multimodal and Unimodal Biometrics
Multimodal biometrics is the use of multiple biometric traits [19], while unimodal biometrics is the use of one biometric trait. Multimodal systems remove the disadvantages of unimodal biometric systems by combining different biometric traits [20]. According to M. Khan and J. Zhang, as cited by [21], the performance of unimodal biometric systems has to contend with various problems, such as background noise, signal noise and distortion, and environment or device variations.

4 Research Methodology

i. Principal Component Analysis
PCA is one of the most popular and successful techniques for image recognition and compression, used for feature extraction and data representation. It achieves this by identifying the directions of maximum variance in the data and minimizing reconstruction error. The PCA algorithm involves the following steps [22]: get the data, subtract the mean, find the covariance matrix, calculate eigenvectors and eigenvalues, form a feature vector, and derive the new dataset.

Step 1: Subtract the mean
Once the data on which to perform PCA has been identified, e.g. Z = [11 121 30 33], the first thing PCA does is subtract the mean from the data. The mean is given by:

\bar{Z} = \frac{\sum_{i=1}^{n} Z_i}{n}    (1)

Z refers to the entire dataset of numbers; to refer to a particular element, say 33, we write Z_4. The symbol n denotes the number of elements in dataset Z, and \bar{Z} is the mean of Z as calculated in Eq. (1). The standard deviation, Eq. (2), is used to scale the mean-subtracted data:

s = \sqrt{\frac{\sum_{i=1}^{n} (Z_i - \bar{Z})^2}{n - 1}}    (2)
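Equations (1) and (2) can be checked numerically on the example dataset Z = [11 121 30 33] (a quick NumPy illustration of ours, not part of the paper's implementation):

```python
import numpy as np

Z = np.array([11, 121, 30, 33])
mean = Z.mean()        # Eq. (1): sum of elements divided by n -> 48.75
s = Z.std(ddof=1)      # Eq. (2): sample standard deviation (n - 1 denominator)
adjusted = Z - mean    # Step 1: mean-subtracted data used by the later steps
print(mean, round(s, 2))  # 48.75 49.14
```

Note the `ddof=1` argument, which selects the n - 1 denominator of Eq. (2) rather than NumPy's default population formula.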


Step 2: Find the covariance matrix
The second step of PCA is to calculate the covariance matrix, using Eq. (3):

cov(X, Y) = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{n - 1}    (3)

where cov(X, Y) is the covariance for a 2-dimensional dataset (X, Y). Since not all datasets are two-dimensional, when a dataset has more than two dimensions Eq. (4) is used to calculate the covariances and place them in a matrix:

C^{n \times n} = (c_{i,j}), \quad c_{i,j} = cov(Dim_i, Dim_j)    (4)

where C^{n \times n} is the matrix with n rows and n columns and Dim_x is the x-th dimension.

Step 3: Calculate eigenvectors and eigenvalues
Equation (5) is used to calculate the eigenvectors and eigenvalues:

A v = \lambda v    (5)

where \lambda is an eigenvalue of the n \times n matrix A and v is an eigenvector of A.

Step 4: Form a feature vector
Equation (6) is used to choose the vectors:

FeatureVector = (eigenvector_1, \ldots, eigenvector_n)    (6)

Step 5: Derive the new dataset
The final step of PCA is to form a new dataset from the principal components based on the eigenvectors chosen in Step 4, using Eq. (7):

FinalData = RowFeatureVector \times RowDataAdjust    (7)

where RowFeatureVector is the matrix of eigenvectors transposed into rows, and RowDataAdjust is the mean-adjusted data, transposed.

ii. Algorithm
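The five PCA steps can be sketched with NumPy as follows (a minimal illustration under variable names of our choosing; the paper's actual implementation is not shown here):

```python
import numpy as np

def pca(data, n_components):
    """Minimal PCA: mean-subtract, covariance, eigendecomposition,
    feature vector, projection (Steps 1-5, Eqs. (1)-(7))."""
    mean = data.mean(axis=0)
    adjusted = data - mean                     # Step 1: subtract the mean
    cov = np.cov(adjusted, rowvar=False)       # Step 2: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # Step 3: eigenvectors/values
    order = np.argsort(eigvals)[::-1][:n_components]
    feature_vector = eigvecs[:, order]         # Step 4: top eigenvectors
    return adjusted @ feature_vector           # Step 5: FinalData (Eq. (7))

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])
scores = pca(X, 1)
print(scores.shape)  # (6, 1)
```

Keeping only the eigenvectors with the largest eigenvalues is what reduces the dimension of the face and fingerprint images while retaining the key points with the most variance.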


MBS Algorithm
1. Input (frm, ID, F)
   a. frm: Fingerprint images
   b. ID: Identity Number or Passport
   c. F: Face Image
2. C: Customer type (new or existing customer)
3. Output:
4. Outcome of Verification
5. PCA Procedure:
   For each fingerprint frm do
       frm = PCA_function(frm)
   End For
   End PCA
6. Biometric_Verification(C, frm, R), where R is a retrieved fingerprint from the database
7. If M := Match, Return M
8. If M := MisMatch, Return M
9. Store_Biometrics(ID, F, frm)
10. Delete_Biometrics(ID, F, frm)
11. END MBS Computation

a) Input
As shown in line 1 of the MBS algorithm, the key inputs of the algorithm are the fingerprints (frm), ID number (ID), and face image (F) of the customer. The input is used to authenticate or register; which of the two is performed is determined by the input C: CustomerType in line 2.
b) Biometric Verification
In line 2, the agent in the store specifies the customer type C, which can be a new customer or an existing customer; based on this information, the MBS algorithm decides whether to register or verify the customer. MBS performs biometric registration when C = New and verification when C = Existing.
c) Retrieve
In line 7, MBS retrieves biometric information from a set of databases, namely CRM, Fraud, and DHA, and stores it in R ← D[]. The retrieve(var) function, which is invoked by the Biometric_Verification(C, frm, R) procedure, accepts only one variable var, where var can be ALL or an ID number (a South African ID number or passport number):

IF var = ALL THEN
    Retrieve all the fingerprints in the database and store them in R ← D[]
ELSE IF var = ID number THEN
    Retrieve only the fingerprints linked to the ID number and store them in R ← D[]
END IF
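Since the system was implemented with SQLite3, the retrieve(var) logic above can be sketched as follows (the table name and schema here are illustrative assumptions, not the paper's actual database design):

```python
import sqlite3

def retrieve(conn, var):
    """R <- D[]: all fingerprints when var == 'ALL', otherwise only
    those linked to the given South African ID or passport number."""
    cur = conn.cursor()
    if var == "ALL":
        cur.execute("SELECT id_number, fingerprint FROM biometrics")
    else:
        cur.execute("SELECT id_number, fingerprint FROM biometrics "
                    "WHERE id_number = ?", (var,))
    return cur.fetchall()

# usage sketch with an in-memory database and dummy records
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE biometrics (id_number TEXT, fingerprint BLOB)")
conn.executemany("INSERT INTO biometrics VALUES (?, ?)",
                 [("8001015009087", b"\xab"), ("9202204800086", b"\xcd")])
print(len(retrieve(conn, "ALL")))            # 2
print(len(retrieve(conn, "8001015009087")))  # 1
```

In the real system the same query shape would be issued against the CRM, Fraud, and DHA databases in turn.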


d) PCA procedure
In line 5, the system performs PCA on both the biometric information captured by the store agent and the biometrics retrieved from the relevant database, storing the results in frm and R respectively.
e) Matching
Euclidean and Mahalanobis distances were used for matching. Euclidean distance measures spatial distance, while Mahalanobis distance measures similarity.

1. First, the Euclidean distance T_d(\Phi_j, \Phi_k) between the vector \Phi_j of the captured biometric information and the vector \Phi_k of the database biometric information is calculated using Eqs. (8) and (9):

\Phi = \begin{bmatrix} \Phi_{11} & \Phi_{12} & \cdots & \Phi_{1M} \\ \Phi_{21} & \Phi_{22} & \cdots & \Phi_{2M} \\ \vdots & & & \vdots \\ \Phi_{M1} & \Phi_{M2} & \cdots & \Phi_{MM} \end{bmatrix} \rightarrow \Phi_k := \begin{bmatrix} \Phi_{1k} \\ \Phi_{2k} \\ \vdots \\ \Phi_{Mk} \end{bmatrix} \text{ and } \Phi_j := \begin{bmatrix} \Phi_{1j} \\ \Phi_{2j} \\ \vdots \\ \Phi_{Mj} \end{bmatrix}    (8)

T_d(\Phi_j, \Phi_k) := \sqrt{\sum_{b=1}^{n} (\Phi_j - \Phi_k)^2}    (9)

2. Secondly, the Mahalanobis distance T_e(F_{xv}, \Phi_k) between the vector of captured biometric information PF_{xv} and the biometric information \Phi_{1,2,\ldots,M} stored in the database is calculated, rewritten as indicated in Eqs. (10) and (11):

T_e(F_{xv}, \Phi_k) = \sqrt{(\Phi_k - PF_{xv})^T [\lambda_{kk}]^{-1} (\Phi_k - PF_{xv})}    (10)

where:

\Phi_k := \begin{bmatrix} \Phi_{1k} \\ \Phi_{2k} \\ \vdots \\ \Phi_{Mk} \end{bmatrix} \text{ and } PF_{xv} = \begin{bmatrix} PF_{11} \\ PF_{21} \\ \vdots \\ PF_{M1} \end{bmatrix}    (11)

\lambda_{kk} are the eigenvalues of the matrix corresponding to the eigenvectors described in Eq. (5).

3. Lastly, the system verifies whether F_x matches the biometric information in our databases; because a full database search is not always necessary, the maximum search value is set as given by Eq. (12):

W_L := \frac{1}{2} \max[T_d(\Phi_j, \Phi_k)]    (12)


There are three scenarios in which a match is confirmed, as indicated in Eq. (13):

a. If \min[T_e(F_{xv}, \Phi)] \le W_L then M := Match, Return M
b. If \min[T_M(F_{xv}, \Phi)] \le a\lambda_{11} then M := Match, Return M
c. If \min[T_e(F_{xv}, \Phi)] \le \beta\lambda_{11} then M := Match, Return M    (13)

If none of the three scenarios is met, then Biometric_Verification(C, frm, R) returns M := MisMatch. Lines 3, 4, 7, 8, 9, and 10 are triggered by Biometric_Verification(C, frm, R) in line 6; it returns M := Match in line 7 and M := MisMatch in line 8. Line 9 is the function store_biometric(ID, F, frm), which stores the ID number, face image, and fingerprint after verification or authentication: if the fingerprint is successfully authenticated, the information is stored in the CRM DB; otherwise it is stored in the Fraud DB. The function delete_biometric(ID, F, frm) in line 10 deletes the ID number, face image, and fingerprint that were temporarily stored in the CRM Biometric DB.
iii. Data Collection and Description
a) Face Images Collection - The face images were obtained from the AT&T dataset, which contains four hundred (400) images: ten (10) images per person from forty (40) individuals.
b) Fingerprint Images Collection - The fingerprint images were obtained from the FVC2004 (Fingerprint Verification Competition) dataset, which consists of eight (8) fingerprints per person collected from thirty (30) random people.
c) Telecommunication CRM Data Collection - The fraud data was obtained from a South African mobile telecommunication company. The dataset covers postpaid customers from January to April 2017 and has 1510 churn cases with 38 fields.
iv. Biometric Live Data Capture
a) Capture Face Image - An internal webcam was used to capture the face image and store it locally in the working directory before saving it in the database.
b) Capture Index and Middle Fingerprint - Both fingerprints were captured using the 'BioMini Slim 2' fingerprint scanner, which is FBI PIV and FBI Mobile ID FAP20 certified; the scanner also provides advanced live finger detection technology, which detects various spoofing materials including film, paper, glue, silicone, rubber, and clay.
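The two-stage matching of Eqs. (8)–(13) can be sketched as follows (a simplified illustration: the feature dimensions, eigenvalues, and data are synthetic assumptions of ours, and the a/β eigenvalue thresholds of Eq. (13) are omitted for brevity):

```python
import numpy as np

def euclidean(a, b):
    # Eq. (9): spatial distance between two PCA feature vectors
    return np.sqrt(np.sum((a - b) ** 2))

def mahalanobis(a, b, eigvals):
    # Eq. (10): distance weighted by the inverse PCA eigenvalues
    d = a - b
    return np.sqrt(d @ np.diag(1.0 / eigvals) @ d)

def verify(captured, database, eigvals):
    # Eq. (12): search threshold = half the maximum Euclidean distance
    w_l = 0.5 * max(euclidean(captured, r) for r in database)
    # Eq. (13a): match when the closest Mahalanobis distance is below W_L
    best = min(mahalanobis(captured, r, eigvals) for r in database)
    return "Match" if best <= w_l else "MisMatch"

rng = np.random.default_rng(0)
database = [rng.normal(size=4) for _ in range(5)]
eigvals = np.array([2.0, 1.5, 1.0, 0.5])
print(verify(database[0] + 0.01, database, eigvals))  # Match
```

A captured vector that is a small perturbation of a stored one sits well inside the W_L threshold, which is the behaviour the match scenarios are designed to capture.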


v. Face Image Lifecycle
Figure 1 shows the lifecycle a captured face image goes through before matching is performed. In Step 1, the image is captured. Step 2 checks whether the captured image is a face: it determines whether the image contains a face and two eyes. In Step 3, only the face is extracted from the image and the unnecessary portion is cut away; this step is necessary because, in cases where we have a full-body image, it ensures that only the face is cut out and used in the model. In Step 4, the last step, PCA is applied to the captured image; this is the image the system uses to do recognition tests. In this study, we used and recommend an image size of 200 × 200 for testing.

Fig. 1. Face image lifecycle

vi. Fingerprint Image Lifecycle
Table 1 shows the lifecycle a fingerprint image goes through before the captured image is verified against any image stored in the database. In Step 1, a fingerprint image is captured. In Step 2, PCA is performed on the captured image; this is the image our system uses to do recognition tests.

Table 1. Fingerprint image lifecycle (Step 1: captured image; Step 2: PCA-processed image)


5 Testing Results and Discussions

i. Face Accuracy Test
Face accuracy evaluation consists of recognition before applying PCA, PCA face recognition, and a full database face accuracy test. In Fig. 2, the recognition/match test is done before applying PCA to the image. The captured image (left) and the database image (right) were used to do the verification. Both images had 91 key points, and all the key points were matched by our system.

Fig. 2. Face recognition before PCA

Figure 3 shows the final face test conducted: PCA image recognition. PCA reduced the number of key points from 91 to 65, and when the match was done, all 65 key points were matched successfully.

Fig. 3. Face recognition with PCA image

ii. Full Database Face Accuracy Test
For the full database test, all faces from the database were selected for recognition against a captured image; the model was 93% accurate. The initial tests of the system were not always accurate because different hairstyles introduced variance. The introduction of Step 2 of the face image lifecycle in Fig. 1 improved the system's accuracy to 93%, better than the earlier 80%. Table 2 shows the results of a full database test for the face image.

Table 2. Face CRM images compared with DHABIO database images results table

Row ID | Results description              | Items
1      | Correct                          | 71
2      | Wrong                            | 5
3      | Total Test Images                | 76
4      | Accuracy Percent                 | 93.42
5      | Total Person                     | 38
6      | Total Train Images               | 76
7      | Total Time Taken for recognition | 0.015625 s
8      | Time Taken for one recognition   | 0.000205 s
9      | Training Time                    | 7.46875 s

iii. Fingerprint Match Testing Fingerprint accuracy evaluation consists of fingerprint recognition before applying PCA, PCA fingerprint recognition, and a full database fingerprint accuracy test.

Fig. 4. Fingerprint recognition before PCA

Figure 4 shows the accuracy test done before applying PCA. Images of the same finger were used for both the captured image on the left and the database image on the right. Both images had 364 key points, and our system successfully matched all the key points.

Fig. 5. PCA Fingerprint recognition


Figure 5 shows the PCA testing done on the fingers used in Fig. 4. PCA reduced the number of key points from 364 to 283, and the system successfully matched all 283 key points.
iv. Full Database Fingerprint Accuracy Test
For the full database test, all fingerprints from the DHABIO database were selected and used to do recognition against a captured image. The system was 95% accurate. Table 3 shows the results of a full database test for fingerprint images.

Table 3. CRM fingerprint compared with DHABIO database images results table

Row ID | Results description              | Items
1      | Correct                          | 72
2      | Wrong                            | 4
3      | Total Test Images                | 76
4      | Accuracy Percent                 | 94.84%
5      | Total Person                     | 38
6      | Total Train Images               | 76
7      | Total Time Taken for recognition | 0.015625 s
8      | Time Taken for one recognition   | 0.0002067 s
9      | Training Time                    | 7.46875 s

v. Performance of PCA
We compared the performance of the proposed algorithm with other algorithms based on accuracy. The algorithms are the Independent Component Analysis (ICA) algorithm, the Linear Discriminant Analysis (LDA) algorithm, and Principal Component Analysis (PCA). Table 4 shows the comparison between PCA, ICA, and LDA based on accuracy, key-point recovery, and execution time.

Table 4. Comparison results between PCA, ICA, and LDA

Evaluation         | PCA       | ICA        | LDA
Accuracy           | 94.84%    | 93.2%      | 94%
Keypoints Recovery | 47.40%    | 44.47%     | 45.75%
Execution Time     | 7.46875 s | 10.26437 s | 7.59831 s

a) PCA - PCA reduced the number of key points of the image and still retained 47.4% of the key points with the most variance. Even with reduced key points, we could still obtain the desired accuracy of 94.84% at an execution time of 7.46875 s for a full database evaluation, as illustrated in Table 4. This makes the deployment of the algorithm cost-effective by saving space and the number of required CPUs.


The Principal Component Analysis algorithm was chosen to solve subscription fraud based on its best accuracy, key-point recovery, and fast execution time.
b) ICA - we compared the proposed algorithm with ICA. The idea behind ICA is to minimize statistical dependence between components. We compared our algorithm with ICA because both reduce dimensionality and both are widely used for recognition; however, ICA first executes PCA and then selects the best components of the data. This makes the algorithm slower than PCA, demands more processing power, and makes the system more expensive to deploy. PCA outperforms ICA on accuracy as well: ICA is 93.2% accurate, recovered 44.47% of the key points, and is about 2.8 s slower than PCA.
c) LDA - LDA is similar to PCA, since both models focus on linear combinations of variables that best describe the data; the goal of LDA is to explicitly model the differences between classes of data. LDA is 94% accurate and recovered 45.75% of the key points, so PCA outperformed LDA. On the other hand, LDA was more accurate and faster than ICA.
Table 5 shows the comparison between the matching algorithms Scale Invariant Feature Transform (SIFT), Oriented FAST and Rotated BRIEF (ORB), and the PCA-based representation for local features (PCA-SIFT), evaluated on accuracy, False Acceptance Rate (FAR), False Rejection Rate (FRR), and execution time.

Table 5. Comparison between PCA-SIFT, SIFT, and ORB

Evaluation     | PCA-SIFT  | SIFT       | ORB
Accuracy       | 94.84%    | 93%        | 0%
FAR            | 0.2791    | 0.329      | N/A
FRR            | 0.0269    | 0.0279     | N/A
Execution Time | 7.46875 s | 12.67435 s | N/A

a) PCA-SIFT was chosen to do matching for our system. PCA-SIFT uses PCA to replace the gradient histogram method in SIFT, producing vectors that are smaller than those produced by SIFT. PCA-SIFT is 94.84% accurate, with a FAR of 0.2791, an FRR of 0.0269, and an execution time of 7.46875 s, as illustrated in Table 5.
b) SIFT - we compared the efficiency and accuracy of our matching algorithm with SIFT. SIFT is a feature detector proven to be very efficient in recognition applications; it identifies the key points in the image and uses them to do recognition. As illustrated in Table 5, the results show that SIFT is 93% accurate, with a FAR of 0.329, an FRR of 0.0279, and an execution time of 12.67435 s. PCA-SIFT is more accurate and faster than SIFT, and SIFT is CPU-intensive and costly to deploy.
c) ORB - ORB uses FAST to detect the key points of the image and the BRIEF descriptor. ORB is cost-effective since it is free for everyone to use, and it is very fast at matching and identifying key points. However, this algorithm is unable to identify the key points in a PCA image.


6 Conclusion and Future Work

In this study, we designed and built an MBS that prevents subscription fraud; we also managed to identify fraudsters and save their details to prevent future recurrences of subscription fraud. We successfully demonstrated how telecommunication companies could capture the biometric information of potential customers and verify it against the biometric information stored in the central database of the DHA before approving a contract. We compared the performance of PCA with LDA and ICA, and we compared our matching algorithms, finding that PCA-SIFT is better than SIFT and ORB. In the future, the integration of MBS into the police system can be investigated.

References
1. ICASA: The state of the ICT sector in South Africa. Independent Communications Authority of South Africa, p. 115, March 2021
2. StatsSA: Statistical release P0302: mid-year population estimates 2020, pp. 1–22. Stats SA, July 2020
3. Casey, E., Turnbull, B.: Digital evidence on mobile devices. Digit. Evid. Comput. Crime 3, 1–44 (2011)
4. KPMG: Global profiles of the fraudster, pp. 1–27. KPMG, May 2016
5. Ogundile, O.O.: Fraud analysis in Nigeria's mobile telecommunication industry. Int. J. Sci. Res. Publ. 3(2), 1–4 (2013)
6. Estévez, P.A., Held, C.M., Perez, C.A.: Subscription fraud prevention in telecommunications using fuzzy rules and neural networks. Expert Syst. Appl. 31(2), 337–344 (2006)
7. CFCA: 2015 global fraud loss survey. Report, p. 26 (2015)
8. Van Heerden, J.H.: Detecting fraud in cellular telephone networks, December 2005
9. Kabari, L.G., Ajuru, I., Harcourt, P.: Telecommunications subscription fraud detection using Naïve Bayesian network. Int. J. Comput. Sci. Math. Theory 2(2), 1–10 (2016)
10. Koops, B.-J., Leenes, R.: Identity theft, identity fraud and/or identity-related crime: definitions matter. Datenschutz und Datensicherheit - DuD 30(9), 553–556 (2006). https://doi.org/10.1007/s11623-006-0141-2
11. CFCA: 2017 global fraud loss survey. Report, p. 26 (2018)
12. Adeoye, O.: Multi-mode biometric solution for examination malpractices in Nigerian schools. Int. J. Comput. Appl. 4(7), 20–27 (2010)
13. Shah, D., Haradi, V.: IoT based biometrics implementation on Raspberry Pi. Procedia Comput. Sci. 79, 328–336 (2016)
14. Ugale, A., Ingole, A.: Bimodal biometric recognition using PCA. Int. J. Innov. Res. Comput. Commun. Eng. 4(6), 2257–2263 (2016)
15. Bhattacharyya, D., Ranjan, R., Alisherov, F., Choi, M.: Biometric authentication: a review. Int. J. Serv. Sci. Technol. 2(3), 13–28 (2009)
16. Republic of South Africa: Department of Home Affairs - Identity Documents. http://www.dha.gov.za/index.php/civic-services/identity-documents.
Accessed 23 June 2018
17. JKCS: Biometrics white paper, pp. 1–46 (2012)
18. Ratha, N.K., Senior, A., Bolle, R.M.: Automated biometrics. In: Singh, S., Murshed, N., Kropatsch, W. (eds.) ICAPR 2001. LNCS, vol. 2013, pp. 447–455. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44732-6_46
19. Kale, P.G., Khandelwal, C.S.: IRIS & finger print recognition using PCA for multi modal biometric system, pp. 78–81 (2016)


20. Jain, A.K., Hong, L., Kulkarni, Y.: A multimodal biometric system using fingerprint, face, and speech. International Journal of Computer Science and Mathematical Theory 1(4), 182–187 (1999)
21. Wang, Z., Wang, E., Wang, S., Ding, Q.: Multimodal biometric system using face-iris fusion feature. J. Comput. 6(5), 931–938 (2011)
22. Kau, F.M., Kogeda, O.P.: Impact of subscription fraud in mobile telecommunication companies. In: IEEE 2019 Open Innovations Conference, Cape Town, South Africa, 2–4 October 2019, pp. 42–47. Cape Peninsula University of Technology (2019). ISBN: 978-1-7281-3464-2. https://doi.org/10.1109/OI.2019.8908261

Use of Artificial Intelligence in Cardiology: Where Are We in Africa?

Fatou Lo Niang1(B), Vinasetan Ratheil Houndji2, Moussa Lô1, Jules Degila2, and Mouhamadou Lamine Ba3

1 Université Gaston Berger, Saint-Louis, Senegal
[email protected]
2 Université d'Abomey-Calavi, Abomey-Calavi, Benin
3 Université Cheikh Anta Diop de Dakar, Dakar, Senegal

Abstract. Cardiovascular diseases are the leading cause of death. Their inherently silent nature makes them challenging to detect early, and the management of these diseases requires many resources. Meanwhile, Artificial Intelligence (AI) in cardiology has recently shown its ability to fill the gap. Indeed, several scoring methods and prediction models have been developed to understand the different aspects of these pathologies. The purpose of this paper is to review the state of the art of the use of AI and digital technologies in cardiology in developing countries and to assess the place of Africa. We conducted a bibliometric analysis of 222 papers and an in-depth study of 26 papers using real and local databases. The words arrhythmia, cardiovascular disease, deep learning, and machine learning come up most often. Support vector machine algorithms, decision-tree-based ensembles, and convolutional neural networks are the most used. Among the 26 papers studied, only one comes from Africa, 24 from Asia, and one is a joint work between researchers from Uganda and Brazil. The results show that countries using these AI-based methods often have accessible health databases, and collaborations between health specialists and universities are frequent. The finding on the African studies is that they focused, in most instances, on medical research to find risk factors or statistics on the epidemiology of heart disease.

Keywords: Artificial Intelligence · Cardiology · Cardiovascular disease · Algorithm · Machine Learning · Africa

1 Introduction

We are grateful to AFD for funding this research work. We would also like to thank ACE-SMIA, ACE-MITIC and DSTN for their support and the members of the AI4CARDIO project for their helpful suggestions and remarks to improve this work.

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved.
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 473–486, 2023. https://doi.org/10.1007/978-3-031-34896-9_29

Cardiovascular diseases (CVDs) are the leading cause of death in the world, according to the World Health Organization (WHO).


Approximately 17.7 million deaths are due to cardiovascular diseases, which represents 31% of total world mortality. Of the 17 million premature deaths (under age 70) caused by non-communicable diseases, 82% occur in low- and middle-income countries, and 37% are attributable to cardiovascular diseases [6]. Several risk factors, e.g., a sedentary lifestyle, high cholesterol, obesity, and hypertension, can explain this high prevalence of CVDs worldwide. Prevention and early treatment are necessary to better fight the complications related to cardiovascular diseases. For instance, technology is currently used in the medical field for the prevention of stroke. In this direction, real progress has been made in theoretical and applied research on cardiovascular diseases, and particularly in the application of Artificial Intelligence (AI) in developing countries, notably in Asia. Indeed, AI techniques are used in several countries for the understanding, prevention, and management of stroke. Such techniques are often built on machine learning models that predict cardiovascular anomalies with high accuracy. In practice, notable advances are also visible in the medical field with intelligent systems, such as the one implemented in a cardiology department in the Netherlands, which gives access not only to historical statistics on interventions and aortic valve operations but also to estimates of surgical risk. Another example, the DeepECG4U project [12], targets the prediction of the dangers of the cardiac arrhythmia called Torsade de Pointes and was set up through a collaboration between IRD ("Institut de Recherche pour le Développement"), Sorbonne University, and AP-HP ("Assistance Publique – Hôpitaux de Paris"). Finally, the Volta Medical company in Marseille aims to use Artificial Intelligence to predict the most severe forms of atrial fibrillation [40].
Unfortunately, in Africa, it is not easy to access the reliable socio-demographic and clinical data needed to better understand and prevent CVDs. According to [1], the disease is not considered a public health problem in the African continent despite its increasing mortality and morbidity rates. The WHO provides guidelines for diagnosing cardiovascular diseases [30]; however, there are still disparities in managing the diseases in low-income countries. This paper provides an in-depth review of the progress made in developing countries regarding the use of AI to tackle CVD issues. This review helps us identify and present the barriers to the effective use of AI for better management and prevention of CVDs in African countries. We propose answers to the following open question: what can we do in Africa to have reliable, available, and exploitable health databases so as to propose intelligent systems that can be adapted and allow remote management? The rest of this paper is organized as follows. Firstly, Sect. 2 describes the methodology used for collecting the most relevant research papers and details the proposed multi-dimensional review of the state of the art of current studies on CVDs and AI, with the purpose of gaining more insight into risk factors and specific diseases. We also review the AI models and their performance measures with respect to the pathology of interest and the data sources used. Then, Sect. 3 discusses the limitations of the existing African studies with regard to the state of the art in the rest of the world.

2 Multi-dimensional Data Profiling Framework

To motivate the relevance of our study, we refer the reader to a recent survey [2] of research works worldwide on cardiovascular diseases and the use of AI. In our framework, we first perform a typical bibliometric analysis of 222 papers before restricting ourselves to a subset of them, selected based on database provenance (see Fig. 1). We then proceed to an in-depth analysis considering various aspects related to CVD challenges and AI.

2.1 Targeted Data Collection

Considering that we aim to illustrate through this review the need to analyze the state of the art from diverse areas and to see where Africa stands on the application of AI in cardiology compared to other continents, we have targeted research papers whose topics cover the development of intelligent systems for cardiology. We searched for studies performed in developing countries, according to the list available on the website DonnéesMondiales.com (https://www.donneesmondiales.com/pays-voie-developpement.php), in combination with the keywords ("cardiovascular disease" OR "heart disease") AND ("artificial intelligence" OR "machine learning") on PubMed and ScienceDirect from 2000 to 2020. This process resulted in 3713 records, to which several filters were applied after removing the duplicates, as summed up in Fig. 1.

2.2 Filtering Out the Most Relevant Research Papers

We started by eliminating the papers not related to cardiovascular diseases and the papers that did not exploit databases from the target areas. We then conducted an in-depth analysis of the 26 remaining papers, 16 from PubMed and 10 from ScienceDirect. We also looked at sources such as IEEE, which allowed us to find other papers on cardiology research in Africa.
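The screening steps above (deduplication, then topic filtering) can be sketched as a small pipeline; the record structure, identifiers, and matching keywords below are hypothetical illustrations, not the actual screening tool used in the study:

```python
# Sketch of the paper-screening pipeline: deduplicate records, then keep
# only papers whose titles relate to cardiovascular diseases.
# Record fields and keyword list are hypothetical.
records = [
    {"id": "pmid:1", "title": "Machine learning for arrhythmia detection", "db": "PubMed"},
    {"id": "pmid:1", "title": "Machine learning for arrhythmia detection", "db": "PubMed"},  # duplicate
    {"id": "sd:7", "title": "Deep learning for crop yield prediction", "db": "ScienceDirect"},
    {"id": "sd:9", "title": "Heart failure prognosis with gradient boosting", "db": "ScienceDirect"},
]

CVD_TERMS = ("cardiovascular", "heart", "arrhythmia", "stroke")

# Step 1: remove duplicates by identifier.
seen, deduped = set(), []
for rec in records:
    if rec["id"] not in seen:
        seen.add(rec["id"])
        deduped.append(rec)

# Step 2: keep only papers related to cardiovascular diseases.
selected = [r for r in deduped if any(t in r["title"].lower() for t in CVD_TERMS)]

print([r["id"] for r in selected])
```

In the study itself, a further manual filter kept only papers exploiting real and local databases, which is harder to automate and was done by reading the papers.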

2.3 Multi-dimensional Analysis of the Literature

In this section, we introduce and detail a multi-dimensional analysis of the existing research studies about the use of AI and digital technologies for CVDs worldwide, with a focus on Africa. We start by presenting a common bibliometric analysis.

Fig. 1. Methodology of collection of the relevant research papers

Basic Bibliometric Analysis. VOSviewer is free software that facilitates the visualization of information about journals, scientific papers, etc. It allows the analysis of co-occurrences of titles, keywords, or authors. VOSviewer helped us generate maps of the most frequent terms. We considered only the keywords that appear at least 5 times and obtained the results depicted in Fig. 2(a). The expression "deep learning" has 16 occurrences and "machine learning" 15, both with a total link strength of 8. The word "classification" appears 15 times and "ecg" 13 times. Among diseases, "coronary artery disease" appears 10 times with a total link strength of 8, followed by "arrhythmia", which appears 8 times with a total link strength of 6. The only machine learning model that appears is "support vector machine", with 6 occurrences. A co-occurrence analysis performed on the titles and abstracts of the 222 papers yielded the most frequent terms with a minimum of 25 occurrences. 32 terms are concerned, but 19 are retained as being part of the 60% most relevant. In Fig. 2(b), two clusters are formed: the green cluster, which mainly concerns the models and the data, counts 10 terms, while the yellow cluster, formed around the pathologies, counts 9 terms.
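The keyword counting behind such a map can be sketched in a few lines. The keyword lists below are hypothetical stand-ins for the terms extracted from the 222 papers; the code only illustrates how occurrence counts and "total link strength" (the sum of a term's co-occurrence counts with all other terms) are obtained:

```python
from itertools import combinations
from collections import Counter

# Hypothetical keyword lists, one per paper.
papers = [
    ["deep learning", "arrhythmia", "ecg"],
    ["machine learning", "classification", "ecg"],
    ["deep learning", "classification", "coronary artery disease"],
]

occurrences = Counter()    # how many papers mention each keyword
cooccurrences = Counter()  # how many papers mention each unordered keyword pair

for keywords in papers:
    unique = sorted(set(keywords))
    occurrences.update(unique)
    cooccurrences.update(combinations(unique, 2))

# "Total link strength" of a keyword = sum of its co-occurrence counts.
link_strength = Counter()
for (a, b), n in cooccurrences.items():
    link_strength[a] += n
    link_strength[b] += n

print(occurrences["deep learning"], link_strength["deep learning"])
```

VOSviewer additionally lays out and clusters the resulting co-occurrence network, which is what produces the colored maps in Fig. 2.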

Fig. 2. Results of the bibliometric analysis (Color figure online)

In-Depth Analysis. To go further than the common bibliometric analysis of the state of the art, we propose an in-depth analysis of the 26 selected papers to gain deeper insight into the most studied CVDs using AI, common risk factors, developed algorithms, and their performance measures.

Research on Specific CVDs Using AI. Table 1 presents the list of diseases studied in the 26 selected papers. We notice that arrhythmia is the pathology most studied with machine learning methods. Heart failure and stroke follow, with machine learning and deep learning, respectively, as the most used AI methods. In more developed countries, other disorders may be the priorities for their populations. For example, atherosclerosis, left ventricular hypertrophy [36], atrial fibrillation [38], coronary artery disease, deep vein thrombosis, abdominal aortic aneurysm, and type 2 diabetes [32] have already been predicted with machine learning or deep learning models. This shows that each region will have to perform studies on local data to identify the top priorities for its population and thus propose solutions accordingly. This raises the following question: what is the most urgent pathology to study using AI in Africa?

Research on Risk Factors Using AI. Cardiovascular diseases are often difficult to diagnose in time and can have disastrous consequences (e.g., death or physical handicap). They may have already altered some of the body's functions when detected late. Knowing the risk factors that cause these complications can help prevent and detect them earlier. Studies have been conducted to this end, in different areas and on different population samples. Studies of the factors that can promote or increase the risk of CVDs help identify which

Table 1. Heart diseases with commonly used AI methods

Pathologies                  | Machine Learning | Deep Learning | References
Arrhythmia                   | 4                | 2             | [10, 13, 25, 28, 42, 45]
Heart failure                | 3                | 2             | [9, 17, 19, 39, 46]
Stroke                       | 2                | 3             | [5, 11, 15, 20, 44]
Cardiac disease              | 3                | 1             | [7, 31, 35, 37]
Rheumatic heart disease      | –                | 1             | [22]
Coronary disease             | –                | 1             | [21]
Left ventricular dysfunction | –                | 1             | [8]
Valvular heart disease       | 1                | –             | [33]
Peripheral arterial disease  | 1                | –             | [16]

factors are most considered or neglected in research related to CVDs. Risk factors can be discovered by exploring comorbidities or physiological factors. To distinguish sick patients from healthy ones, factors such as renal insufficiency and hyper-coagulation are crucial [3]. Gender can also be a parameter that helps to understand the treatment of CVDs [41] in relation to the country's income level. The standard of living can have an impact that promotes cardiovascular disease: in low-income countries, populations do not always have access to primary health care, so getting the information needed to prevent cardiovascular diseases is often difficult. The social level must be considered for middle- or low-income countries; for instance, those in sub-Saharan Africa do not necessarily have the same factors as other areas [47]. Changes in risk factors in middle- and low-income countries can be due to urbanization or adaptation to new lifestyle habits. Considering social and family factors is also essential because it can improve the management of patients [43], as shown by a risk score developed and compared to the Framingham score. Most of the research conducted in Africa in this direction is epidemiological [18]. Socio-economic level, poor oral hygiene, and morbid associations are factors favoring arrhythmia according to [14], who find a low socio-economic status in 14 cases (82.3%) of the 17 patients and poor oral hygiene in 14 cases (93.3%) of the 15 patients examined. When we compare research on risk factors in Africa with that of other areas, we realize that the methods used are quite different. Some developing countries, such as China and India, have available databases that facilitate this task, whereas in Africa the first obstacle is the availability of the required data. Indeed, there are few health structures with computerized equipment for managing CVDs.
Machine Learning and Deep Learning Models. AI is used in cardiology worldwide, and developing countries have integrated it well into the management of cardiovascular diseases over the last 20 years. Classification, prediction, and data mining are some of the techniques that allow better management and understanding of certain pathologies. To construct models, one needs input data: Table 2 shows that most studies used data from hospitals in the countries concerned or from national registries. For machine learning, algorithms based on decision trees, such as Random Forest [33] and gradient boosting trees [17,28], and Support Vector Machines [7,16,29] are the most used. We also observe the use of the multi-marker algorithm [46] and the sparse decomposition algorithm [25]. For deep learning, convolutional neural networks are widely used. New algorithms have also been set up, such as a deep learning algorithm that predicts mortality due to acute heart failure [19] and an Adaptive Neuro-Fuzzy Inference System (ANFIS) built for the identification of Congenital Heart Defects (CHD) [37]. Regarding the results in Table 2, we may note that these new algorithms do not perform better than those based on standard algorithms. Several measures have been used to evaluate the models' performance. Accuracy evaluates the overall correctness of the model as the percentage of correct predictions. Sensitivity measures the true positive rate of the model: the higher it is, the better the model predicts for sick subjects. Specificity measures the rate of true negatives, identifying the subjects who are not sick: the higher it is, the better the model predicts for patients who do not have the disease. The metrics AUC (Area Under the Curve), F1 score, precision, and the Kappa coefficient are also used.

The Case of Africa. The application of AI in health requires the availability of reliable and up-to-date data. One of the main research challenges regarding the application of artificial intelligence in Africa remains the lack of data. Indeed, IT tools are poorly integrated into hospital management, and data collection is not done regularly, leading to a lack of work on AI and health.
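The standard performance measures discussed above (accuracy, sensitivity, specificity, F1 score) are all derived from a binary confusion matrix. A minimal sketch, on toy labels rather than any dataset from the reviewed studies:

```python
# Performance measures computed from a binary confusion matrix (1 = sick).

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # share of correct predictions
    sensitivity = tp / (tp + fn)                 # true positive rate (sick subjects found)
    specificity = tn / (tn + fp)                 # true negative rate (healthy subjects cleared)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Toy example: 10 patients.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(metrics(y_true, y_pred))
```

The AUC, by contrast, is computed from the model's scores rather than from hard predictions, which is why it is reported separately in the reviewed studies.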
In a systematic review [2] of 451 references on machine learning and heart disease published between 2012 and 2015 and categorized by country, no African country appeared in the results. Africa, with its lack of health facilities and adequate care [24], should be at the forefront of AI and health research to prevent, help diagnose, provide early care, and monitor patients in areas without such structures. Information about cardiovascular diseases in Africa comes mostly from prospective studies conducted by medical teams (Table 3). In the study by Sène Diouf et al. [34] on understanding comatose stroke and evaluating the survival of patients, the results show that coma is a leading condition and the first cause of death in neurology in Dakar. The study was conducted between 2006 and 2007 in the neurological intensive care department of CHU Fann in Dakar on 105 patients admitted to the department. The results show a mortality estimated at 82.9%, a survival of 71 days, and a survival at day 90 of 9.5%. The mean age was 61.9 ± 14.2 years. The study also shows that ischemic strokes accounted for 51.4% and hemorrhagic strokes for 48.6%. People between 61 and 70 years of age were most at risk, with 40% of cases. On the other hand, people coming directly from their homes represent the largest proportion of cases, with 38.1%, which may be due to late detection or management [14].

Table 2. Different studies from low- and middle-income countries about CVDs

Type of model    | Reference | Algorithm                                                        | Studied anomaly
Machine learning | [42]      | Lasso-logistic model                                             | Malignant arrhythmia
Machine learning | [29]      | Co-SVM                                                           | Cardiac disease
Machine learning | [17]      | Gradient boosting model                                          | Heart failure
Machine learning | [39]      | Multi-modality machine learning                                  | Heart failure
Machine learning | [46]      | Multi-marker model                                               | Heart failure
Machine learning | [11]      | Machine learning risk prediction                                 | Stroke
Machine learning | [20]      | Machine learning classification models                           | Stroke
Machine learning | [13]      | CatBoost                                                         | Heart rate
Machine learning | [7]       | SVM (linear)                                                     | Myocardial infarction
Machine learning | [33]      | Random Forest                                                    | Valvular heart disease
Machine learning | [35]      | Classification and regression tree (CART), logistic regression, SVM | Cardiac disease
Machine learning | [31]      | Empirical thresholding logistic regression                       | Cardiac disease
Machine learning | [16]      | SVM (Doppler spectrogram-based expert system)                    | Peripheral arterial disease
Machine learning | [25]      | Sparse decomposition algorithm                                   | Arrhythmia
Machine learning | [28]      | Gradient boosting tree                                           | Heart rate
Deep learning    | [19]      | Deep learning algorithm for predicting mortality of AHF          | Acute heart failure
Deep learning    | [8]       | Deep convolutional neural network                                | Left ventricular dysfunction
Deep learning    | [9]       | 3D convolutional neural network                                  | Acute left heart failure
Deep learning    | [22]      | Video-based deep learning (CNN)                                  | Rheumatic heart disease
Deep learning    | [45]      | Neural network model                                             | Arrhythmia
Deep learning    | [10]      | Deep learning neural network                                     | Cardiac arrhythmia
Deep learning    | [44]      | Deep-learning-based computed tomography perfusion                | Acute ischemic stroke
Deep learning    | [21]      | BioBERT (bidirectional encoder representations from transformers for biomedical text mining) | Coronary insufficiency
Deep learning    | [5]       | Deep learning (automated ASPECTS)                                | Acute ischemic stroke
Deep learning    | [37]      | Adaptive Neuro-Fuzzy Inference System (ANFIS)                    | Truncus Arteriosus congenital heart defect
Deep learning    | [15]      | Convolutional neural network                                     | Stroke

Table 3. Heart disease studies in Africa

Reference | Focus of the study | Research type      | Disease of interest | Year
[23]      | Recommendation     | Survey             | –                   | 2001
[34]      | Prevention         | Prospective study  | Comatose stroke     | 2008
[24]      | Recommendation     | Survey             | –                   | 2010
[1]       | Risk factors       | Survey             | –                   | 2010
[26]      | Risk factors       | Case-control study | –                   | 2021

3 Discussion

Regarding research and studies on this subject, it is not always easy to find concrete results that bring together health personnel and researchers in informatics or AI in Africa. Nevertheless, publications concerning African countries are available to all and can serve as a lever to improve the management and prevention of CVDs. While strategies for the management and prevention of stroke are far from reaching their goals given the burden of the disease, we can see in Table 3 that most of the studies conducted in Africa are medical or retrospective studies. There is therefore a lack of investigation in the digital domain concerning the prevention, diagnosis, and management of CVDs. It is necessary to find early management strategies but also to introduce solutions allowing remote follow-up of patients. To achieve this, new technologies must be included in the management and prevention of CVDs, as pointed out by [1], who believe that modern information and communication techniques should be used to develop telemedicine. According to [4], there is a need for Africa to connect its cardiology. A significant part of the research related to CVDs concerns recommendations, which let us know the concepts to consider before jumping in, whether about prevention, treatment, management, or behaviors to adopt when in doubt. The WHO believes that "People with heart disease or at high risk of heart disease (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidemia, or existing disease) require early detection and management, including psychological support and medication, as appropriate" [6]. In the management component, some risk factors may be overlooked or not considered at all. Several other aspects, such as living conditions or access to health care, must also be taken into consideration because each population, depending on its diversity, may react differently to the risks of cardiovascular disease. Mosca et al. suggest conducting trials based exclusively on women and limited to studies of unique or predominantly female conditions, considering gender-specific living conditions, and publishing gender-specific analyses to reduce the prevalence gap [27].
Research is being conducted to facilitate management and to better understand the disease. Indeed, studies must be conducted on the methodologies used for the management of CVDs, with comparisons introducing factors that have often been neglected. Asian countries are well ahead in the use of AI for cardiology: in Table 1, all the research mentioned comes from there, except one study from Tunisia and another that is joint work between Brazil and Uganda. It is therefore natural to ask what is slowing down countries like ours from getting involved and integrating these technologies into the management of cardiovascular diseases. Looking at the sources of data used in each study (Table 4), some studies used data from field collection; others used government databases accessible for exploitation and for better integration of intelligent techniques to find the features that characterize their population with respect to cardiovascular diseases; and others used hospital databases, which can provide the clinical highlights that identify patients, through collaboration with health structures. The method of collecting these data may depend on the final objectives of the study, but having databases already available, or collaborating with health structures, makes it possible to have a history of the disease for the given area and a follow-up of cases, and thus to make better diagnoses, classifications, and predictions with intelligent tools and up-to-date data. There is a lack of this kind of policy, based on the establishment of national or hospital databases, in Africa; as in the study, one must fall back on the collection of data specific to the purpose of the study, which may limit the overall impact and sustainability of the research. Therefore, quality data are needed for Africa to develop new technologies for cardiology. Collaborations between specialists and digital technicians must be encouraged and popularized to set up tools adapted to Africa. What can we do in Africa to have reliable, available, and exploitable health databases to propose intelligent systems that can be adapted and allow remote management? There are difficulties in accessing medical data in Africa, but we can highlight other blockages. There is a need to establish collaborations between specialists in cardiology and computer scientists, which will allow the establishment of platforms for data management and a better understanding of the urgent needs of cardiology in Africa.

Table 4. Data sources for the different studies and their countries

References | Data Sources                                                                                   | Countries
[42]       | Anhui HF cohort                                                                                | China
[29]       | International Heart Hospital, Rizhao                                                           | China
[17]       | Nine public hospitals for heart failure from Hong Kong                                         | China
[31]       | Tel-Aviv Sourasky Medical Center                                                               | Israel
[16]       | Collaborating medical center database                                                          | India
[39]       | The Joint Chinese University                                                                   | China
[46]       | Cardiology Department of Anzhen Hospital                                                       | China
[11]       | 10 areas in China                                                                              | China
[8]        | Tri-Service General Hospital, Taipei                                                           | Taiwan
[20]       | National stroke screening                                                                      | China
[19]       | Korean AHF registry                                                                            | Korea
[13]       | National Institute of Technology Rourkela                                                      | India
[7]        | Medical College and Hospital, Kolkata                                                          | India
[33]       | Tunisian University Hospital La Rabta                                                          | Tunisia
[35]       | B & J Super Speciality Hospital, Navi Mumbai, and other hospitals in Navi Mumbai               | India
[9]        | Affiliated Hangzhou First People's Hospital                                                    | China
[22]       | Brazil and Uganda                                                                              | Brazil and Uganda
[45]       | The Second Medical Center and National Clinical Research Center for Geriatric Diseases         | China
[10]       | China Physiological Signal Challenge (CPSC) 2018                                               | China
[44]       | Department of Medical Imaging Centre, First People's Hospital of Xianyang                      | China
[21]       | Taichung Veterans General Hospital                                                             | Taiwan
[5]        | Huaxi Hospital of Sichuan University; Hangzhou First People's Hospital of Zhejiang University  | China
[37]       | Nearby hospitals and medical diagnostic centers                                                | India
[15]       | Himalayan Institute of Medical Sciences (HIMS), Dehradun                                       | India
[28]       | Indian volunteers                                                                              | India
[25]       | Tehran arrhythmia clinic database                                                              | Iran

4 Conclusion

In this work, we have conducted a review to understand what blocks the use of AI in cardiology in Africa compared to other continents. We found that research in other developing countries is facilitated by the availability of data and the existence of health care structures in which to conduct tests. The gap is felt all the more in Africa because public health data are almost absent and collaborations between hospitals, professionals, and academics are insufficiently valued.


References

1. Adoukonou, T.A., et al.: Prise en charge des accidents vasculaires cérébraux en Afrique subsaharienne. Revue Neurologique 166(11), 882–893 (2010). https://doi.org/10.1016/j.neurol.2010.06.004
2. Ahsan, M.M., Siddique, Z.: Machine learning-based heart disease diagnosis: a systematic literature review. Artif. Intell. Med. 128, 102289 (2022). https://doi.org/10.1016/j.artmed.2022.102289
3. Al-Absi, H.R.H., Refaee, M.A., Rehman, A.U., Islam, M.T., Belhaouari, S.B., Alam, T.: Risk factors and comorbidities associated to cardiovascular disease in Qatar: a machine learning based case-control study. IEEE Access 9, 29929–29941 (2021). https://doi.org/10.1109/ACCESS.2021.3059469
4. Bonny, A., et al.: Cardiac arrhythmias in Africa: epidemiology, management challenges, and perspectives. J. Am. Coll. Cardiol. 73(1), 100–109 (2019). https://doi.org/10.1016/j.jacc.2018.09.084
5. Cao, Z., et al.: Deep learning derived automated ASPECTS on non-contrast CT scans of acute ischemic stroke patients. Hum. Brain Mapp. 43(10), 3023–3036 (2022). https://doi.org/10.1002/hbm.25845
6. Cardiovascular diseases (CVDs). https://www.who.int/fr/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds)
7. Chakraborty, A., Sadhukhan, D., Pal, S., Mitra, M.: Automated myocardial infarction identification based on interbeat variability analysis of the photoplethysmographic data. Biomed. Sig. Process. Control 57, 101747 (2020). https://doi.org/10.1016/j.bspc.2019.101747
8. Chen, H.Y., et al.: Artificial intelligence-enabled electrocardiography predicts left ventricular dysfunction and future cardiovascular outcomes: a retrospective analysis. J. Personalized Med. 12(3), 455 (2022). https://doi.org/10.3390/jpm12030455
9. Chen, J., Gao, Y.: The role of deep learning-based echocardiography in the diagnosis and evaluation of the effects of routine anti-heart-failure Western medicines in elderly patients with acute left heart failure. J. Healthc. Eng. 2021, 4845792 (2021). https://doi.org/10.1155/2021/4845792
10. Chen, T.M., Huang, C.H., Shih, E.S., Hu, Y.F., Hwang, M.J.: Detection and classification of cardiac arrhythmias by a challenge-best deep learning neural network model. iScience 23(3), 100886 (2020). https://doi.org/10.1016/j.isci.2020.100886
11. Chun, M., et al.: Stroke risk prediction using machine learning: a prospective cohort study of 0.5 million Chinese adults. J. Am. Med. Inform. Assoc. 28(8), 1719–1727 (2021). https://doi.org/10.1093/jamia/ocab068
12. DeepECG4U: intelligence artificielle au service de la santé cardiaque. IRD website
13. Dhananjay, B., Sivaraman, J.: Analysis and classification of heart rate using CatBoost feature ranking model. Biomed. Sig. Process. Control 68, 102610 (2021). https://doi.org/10.1016/j.bspc.2021.102610
14. Diao, M., et al.: Cardiopathies rhumatismales évolutives: à propos de 17 cas colligés au CHU de Dakar (2005)
15. Gautam, A., Raman, B.: Towards effective classification of brain hemorrhagic and ischemic stroke using CNN. Biomed. Sig. Process. Control 63, 102178 (2021). https://doi.org/10.1016/j.bspc.2020.102178
16. Jana, B., Oswal, K., Mitra, S., Saha, G., Banerjee, S.: Detection of peripheral arterial disease using Doppler spectrogram based expert system for point-of-care applications. Biomed. Sig. Process. Control 54, 101599 (2019). https://doi.org/10.1016/j.bspc.2019.101599
17. Ju, C., et al.: Derivation of an electronic frailty index for predicting short-term mortality in heart failure: a machine learning approach. ESC Heart Fail. 8(4), 2837–2845 (2021). https://doi.org/10.1002/ehf2.13358
18. Kimani, K., Namukwaya, E., Grant, L., Murray, S.A.: What is known about heart failure in sub-Saharan Africa: a scoping review of the English literature. BMJ Support. Palliat. Care 7(2), 122–127 (2017). https://doi.org/10.1136/bmjspcare-2015-000924
19. Kwon, J.M., et al.: Artificial intelligence algorithm for predicting mortality of patients with acute heart failure. PLoS ONE 14(7), e0219302 (2019). https://doi.org/10.1371/journal.pone.0219302
20. Li, X., Bian, D., Yu, J., Li, M., Zhao, D.: Using machine learning models to improve stroke risk level classification methods of China national stroke screening. BMC Med. Inform. Decis. Making 19, 261 (2019). https://doi.org/10.1186/s12911-019-0998-2
21. Li, Y.H., Lee, I.T., Chen, Y.W., Lin, Y.K., Liu, Y.H., Lai, F.P.: Using text content from coronary catheterization reports to predict 5-year mortality among patients undergoing coronary angiography: a deep learning approach. Front. Cardiovasc. Med. 9, 800864 (2022). https://doi.org/10.3389/fcvm.2022.800864
22. Martins, J.F.B.S., et al.: Towards automatic diagnosis of rheumatic heart disease on echocardiographic exams through video-based deep learning. J. Am. Med. Inform. Assoc. 28(9), 1834–1842 (2021). https://doi.org/10.1093/jamia/ocab061
23. Mendez, G.F., Cowie, M.R.: The epidemiological features of heart failure in developing countries: a review of the literature. Int. J. Cardiol. 80(2–3), 213–219 (2001). https://doi.org/10.1016/S0167-5273(01)00497-1
24. Mocumbi, A.O.H., Ferreira, M.B.: Neglected cardiovascular diseases in Africa. J. Am. Coll. Cardiol. 55(7), 680–687 (2010). https://doi.org/10.1016/j.jacc.2009.09.041
25. Mohammadi, F., Sheikhani, A., Razzazi, F., Ghorbani Sharif, A.: Non-invasive localization of the ectopic foci of focal atrial tachycardia by using ECG signal based sparse decomposition algorithm. Biomed. Sig. Process. Control 70, 103014 (2021). https://doi.org/10.1016/j.bspc.2021.103014
26. Mohammed, E.M., Alnory, A.: Bivariate analysis of cardiovascular disease risk factors in Gezira state, Sudan (2019): a hospital-based case-control study. In: 2020 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE), Khartoum, Sudan, pp. 1–6. IEEE, February 2021. https://doi.org/10.1109/ICCCEEE49695.2021.9429659
27. Mosca, L., Barrett-Connor, E., Wenger, N.K.: Sex/gender differences in cardiovascular disease prevention: what a difference a decade makes. Circulation 124(19), 2145–2154 (2011). https://doi.org/10.1161/CIRCULATIONAHA.110.968792
28. Nayak, S.K., Pradhan, B.K., Banerjee, I., Pal, K.: Analysis of heart rate variability to understand the effect of cannabis consumption on Indian male paddy-field workers. Biomed. Sig. Process. Control 62, 102072 (2020). https://doi.org/10.1016/j.bspc.2020.102072
29. Peng, J., et al.: Research on application of data mining algorithm in cardiac medical diagnosis system. Biomed. Res. Int. 2022, 7262010 (2022). https://doi.org/10.1155/2022/7262010
30. Prévention des maladies cardiovasculaires: Guide de poche pour l'évaluation et la prise en charge du risque cardiovasculaire (diagrammes OMS/ISH de prédiction du risque cardiovasculaire pour la sous-région africaine de l'OMS AFR D, AFR E). https://apps.who.int/iris/handle/10665/43848
31. Reychav, I., Zhu, L., McHaney, R., Arbel, Y.: Empirical thresholding logistic regression model based on unbalanced cardiac patient data. Procedia Comput. Sci. 121, 160–165 (2017). https://doi.org/10.1016/j.procs.2017.11.022
32. Rezaee, M., Putrenko, I., Takeh, A., Ganna, A., Ingelsson, E.: Development and validation of risk prediction models for multiple cardiovascular diseases and Type 2 diabetes. PLoS ONE 15(7), e0235758 (2020). https://doi.org/10.1371/journal.pone.0235758
33. Salah, I.B., De la Rosa, R., Ouni, K., Salah, R.B.: Automatic diagnosis of valvular heart diseases by impedance cardiography signal processing. Biomed. Sig. Process. Control 57, 101758 (2020). https://doi.org/10.1016/j.bspc.2019.101758
34. Sène Diouf, F., et al.: Survie des accidents vasculaires cérébraux comateux à Dakar (Sénégal). Revue Neurologique 164(5), 452–458 (2008). https://doi.org/10.1016/j.neurol.2008.01.007
35. Shirole, U., Joshi, M., Bagul, P.: Cardiac, diabetic and normal subjects classification using decision tree and result confirmation through orthostatic stress index. Inform. Med. Unlocked 17, 100252 (2019). https://doi.org/10.1016/j.imu.2019.100252
36. Sparapani, R., et al.: Detection of left ventricular hypertrophy using Bayesian additive regression trees: the MESA. J. Am. Heart Assoc. 8(5), e009959 (2019). https://doi.org/10.1161/JAHA.118.009959
37. Sridevi, S., Nirmala, S.: ANFIS based decision support system for prenatal detection of Truncus Arteriosus congenital heart defect. Appl. Soft Comput. 46, 577–587 (2016). https://doi.org/10.1016/j.asoc.2015.09.002
38. Tiwari, P., Colborn, K.L., Smith, D.E., Xing, F., Ghosh, D., Rosenberg, M.A.: Assessment of a machine learning model applied to harmonized electronic health record data for the prediction of incident atrial fibrillation. JAMA Netw. Open 3(1), e1919396 (2020). https://doi.org/10.1001/jamanetworkopen.2019.19396
39. Tse, G., et al.: Multi-modality machine learning approach for risk stratification in heart failure with left ventricular ejection fraction ≤ 45%. ESC Heart Fail. 7(6), 3716–3725 (2020). https://doi.org/10.1002/ehf2.12929
40. Volta Medical lève 23 millions d'euros pour son IA en cardiologie, January 2021
41. Walli-Attaei, M., et al.: Variations between women and men in risk factors, treatments, cardiovascular disease incidence, and death in 27 high-income, middle-income, and low-income countries (PURE): a prospective cohort study. The Lancet 396(10244), 97–109 (2020). https://doi.org/10.1016/S0140-6736(20)30543-2
42. Wang, Q., et al.: Machine learning-based risk prediction of malignant arrhythmia in hospitalized patients with heart failure. ESC Heart Fail. 8(6), 5363–5371 (2021). https://doi.org/10.1002/ehf2.13627
43. Woodward, M., Brindle, P., Tunstall-Pedoe, H., SIGN Group on Risk Estimation: Adding social deprivation and family history to cardiovascular risk assessment: the ASSIGN score from the Scottish Heart Health Extended Cohort (SHHEC). Heart 93(2), 172–176 (2007). https://doi.org/10.1136/hrt.2006.108167
44. Yang, Y., Yang, J., Feng, J., Wang, Y.: Early diagnosis of acute ischemic stroke by brain computed tomography perfusion imaging combined with head and neck computed tomography angiography on deep learning algorithm. Contrast Media Mol. Imaging 2022, 5373585 (2022). https://doi.org/10.1155/2022/5373585
45. Yin, M., et al.: Influence of optimization design based on artificial intelligence and Internet of Things on the electrocardiogram monitoring system. J. Healthc. Eng. 2020, 8840910 (2020). https://doi.org/10.1155/2020/8840910

486

F. L. Niang et al.

46. Yuan, H., et al.: Development of heart failure risk prediction models based on a multi-marker approach using random forest algorithms. Chin. Med. J. 132(7), 819–826 (2019). https://doi.org/10.1097/CM9.0000000000000149 ˆ 47. Yusuf, S., Reddy, S., Ounpuu, S., Anand, S.: Global burden of cardiovascular diseases: part I: general considerations, the epidemiologic transition, risk factors, and impact of urbanization. Circulation 104(22), 2746–2753 (2001)

Towards ICT-Driven Tanzania Blue Economy: The Role of Higher Learning Institutions in Supporting the Agenda

Abdi T. Abdalla1(B), Kwame Ibwe1, Baraka Maiseli1, Daudi Muhamed2, and Mahmoud Alawi3

1 Department of Electronics and Telecommunications Engineering, University of Dar es Salaam, Dar es Salaam, Tanzania
[email protected]
2 Department of Technology & Innovation, WTC Zanzibar, Zanzibar, Tanzania
3 Department of Telecommunication, Electronics and Computer Engineering, Karume Institute of Science and Technology, Zanzibar, Tanzania

Abstract. On different occasions, the President of Tanzania and the President of the Revolutionary Government of Zanzibar have emphasized the importance of the Blue Economy in socio-economic development. Inspired by their statements, and noting the reports of previous studies, we argue that a sustainable Blue Economy can be achieved through effective utilization of Information and Communication Technology (ICT). This work investigated the contribution of ICT to sustaining the Tanzania Blue Economy. To this end, we extensively reviewed the curricula of selected Tanzanian Higher Learning Institutions (HLIs) to investigate the ICT programs in such institutions. Results show that approximately 29% of the 24 selected universities lack ICT programs, and the remaining universities offer an inadequate number of ICT programs. This observation suggests a need to reform the curricula of HLIs by incorporating more ICT programs, especially those directly linked with the Blue Economy. In addition, researchers should be encouraged and supported to undertake multidisciplinary ICT research focused on maximizing the potential benefits of marine and fresh waters while preserving the environment. Our work opens interesting research opportunities on the application of ICT as an enabler to foster and sustain the Blue Economy. We have established possible research directions in machine learning, data science, and electronics for researchers to seize and advance the Blue Economy.

Keywords: Blue Economy · Information and Communication Technology · Higher Learning Institutions

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved. R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 487–497, 2023. https://doi.org/10.1007/978-3-031-34896-9_30

1 Introduction

The effective utilization of Information and Communication Technology (ICT) marks a key determinant for both individual business success and national economic growth. The emergence of the information age, in particular, emphasizes the importance of ICT in facilitating the nation's socio-economic development through education, research


and innovation. The Tanzania National ICT Policy 2016 [1] and the National Development Vision 2025 [2] acknowledge that the nation can significantly accelerate its socio-economic development and gain global competitiveness through the development, utilization, and exploitation of ICT.

For years, the Tanzanian government has been expanding access to social and economic services through the transformation of ICT infrastructures. The government has constructed the National Fiber Optic Cable network and the National ICT Broadband Backbone to achieve its grand ICT vision. The backbone infrastructure enhances the usage of computer and mobile ICT applications, including e-government, e-learning, e-health, and e-commerce, and supports research, innovation, and future ICT services [1]. The government believes that ICT, if properly exploited, can positively influence productivity, lower prohibitively high operational costs, and promote competitiveness in industries, hence strengthening the economy and increasing employment opportunities [3].

Any country should invest in the sustainable exploitation of its natural resources to create medium- and long-term employment opportunities, reduce poverty, and improve human wellbeing [4]. The driving force of Tanzania's sustainable transformation can be measured through investment in the country's natural endowments. Tanzania has, in the recent past, been doing well in attracting local and foreign investments. However, the challenge remains to scale up the amount and quality of investments in line with workable economic models [3]. While the Market Economy offers many advantages, such as innovation and entrepreneurial opportunities, it also suffers from inequitable distribution of wealth, poor working conditions, and environmental degradation. Similarly, the Green Economy1 has remained dormant due to innumerable challenges: high production costs, poor technology, employment and business losses, and lack of awareness among the general public.

Subsequently, the Blue Economy has been identified as a more appropriate model for sustainable socio-economic development [5]. The Blue Economy, also referred to as the ocean or maritime economy, is a concept that simultaneously encourages social inclusion, environmental sustainability, strengthening of maritime ecosystems, transparent governance, and economic growth and development [6]. The Blue Economy has diverse components, including established traditional ocean industries, such as fisheries, tourism, and maritime transport, as well as new and emerging activities, such as offshore renewable energy, aquaculture, seabed extractive activities, and marine biotechnology and bioprospecting. Increased and sustained utilization of Blue Economy resources has the potential to boost the economy of the society [7].

The enormous potential of the Blue Economy can be maximized through ICT, a general-purpose technology providing an indispensable platform upon which further productivity-enhancing changes, such as service, product, and process innovations, can be based [8]. Digital technologies may be sustainably exploited using ICT to advance the Blue Economy. The prospective pioneers of this agenda are Higher Learning Institutions (HLIs). In essence, HLIs have the potential to realize the concepts of the Blue Economy by properly utilizing the abundance of young minds [9].

1 One that results in improved human well-being and social equity, while significantly reducing environmental risks and ecological scarcities. It is low carbon, resource efficient, and socially inclusive (UNEP, 2011).

HLIs have the responsibility to


produce skilled manpower and conduct cutting-edge research in all strategic areas of the Blue Economy. The government has been supporting HLIs through the establishment of ICT Centers of Excellence, the empowerment of local researchers and innovators in ICT, and the promotion of new products [3].

In view of these facts, several research questions come to mind: Are the curricula in HLIs tailored to support the Blue Economy agenda? Is the number of ICT experts produced by HLIs sufficient to support the agenda? How can Tanzania harness the massive potential of ICT to advance the Blue Economy, hence promoting sustainable socio-economic development? Researchers agree that ICT should be used to address societies' pressing needs, such as hunger, poverty, and poor health conditions, through research and innovation.

This paper discusses the possible roles that ICT could play in sustaining the Tanzania Blue Economy agenda. Specifically, we focus on HLIs and their role in applying sophisticated emerging technologies to realize and sustain the Tanzania Blue Economy. Recognizing the potential role of HLIs in advancing technology, we investigate current technological developments and trends in applying ICT solutions to the conservation and sustainability of marine resources for inclusive socio-economic development. Besides, the paper recommends potential reforms in HLIs, as key stakeholders and frontrunners of technology, and structures for managing data from segments of the Blue Economy. These recommendations may support informed decision making and policy reforms.

2 Role of ICT in Blue Economy

ICT can play a major role in sustaining the Blue Economy if positively exploited to solve related challenges. With the advancement and proliferation of digital technologies, researchers and practitioners may exploit the capabilities of ICT to accelerate the success of the Blue Economy in Tanzania. ICT can automate the business processes of all sectors of the Blue Economy to enhance efficiency and optimize productivity while simultaneously promoting ocean health and the country's economic growth. This article elaborates possible applications of ICT in fishing, aquaculture, fish marketing, and tourism.

2.1 ICT in Fishing

The use of ICT can reinvigorate the fishing industry, ensure its sustainability, and improve the welfare of Tanzanian fishers, hence advancing the country's overall economy. In Tanzania, small-scale fishers rely on their fishing experience to plan the next fishing sites. This approach does not guarantee the availability of adequate fish and may frustrate fishers and waste their limited resources. To address this challenge, ICT-based fish finder gadgets can be applied to instantaneously and accurately locate fish-rich sites [10]. Consequently, we can reduce the unnecessary costs incurred in the manual fish-finding approach, where fishers may move to a number of locations searching for the best places.

Using state-of-the-art technologies, such as Machine Learning (ML), highly accurate algorithms can be devised to locate fish sites and their corresponding species from previous fishing information [11]. This low-cost approach can be achieved


through careful observation of fishers to track their fishing records, which may serve as a training dataset for the ML models. Figure 1(a) shows a sample mobile application that gives fish types and their locations. This application was developed by researchers of the Signal Processing Research Group at the University of Dar es Salaam.

Furthermore, ICT can facilitate emergency reporting and localization of fishers [12]. A number of cases of missing or deceased small-scale fishers have been reported due to emergencies, including boat sinking. If such events could be communicated without delay, the lives of fishers could be saved, thereby improving the reliability of the small-scale fishing industry. With the current state of ICT, a mobile application can be developed to track all registered boats and report vital information about the boats and fishers to a control center for timely assistance. Emergency cases, including overfishing and boat sinking or fire tragedies due to technical challenges, can be seamlessly exchanged between communicating parties to allow provision of appropriate assistance. Figure 1(b) shows a sample mobile-based system that tracks registered boats (vessels) and allows the exchange of critical messages. This system can reduce the risks of losing fishers and their fishing resources.

2.2 ICT in Aquaculture

ICT can provide cost-effective tools to automate the entire business process of aquaculture and improve its efficiency and productivity. ICT can, without human intervention, facilitate measuring, monitoring, and controlling of water quality parameters (e.g., temperature, salinity, and pH) to ensure optimal fish growth and a reduced fish mortality rate. It has been widely reported in the literature that water quality and feeding behavior can adversely affect the fish growth rate. This adds extra operational cost, as the fish will need more time before attaining the market weight [13].
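The automated water-quality monitoring described above can be made concrete with a minimal sketch; the parameter names and acceptable ranges below are hypothetical illustrations, not values from an actual deployment.

```python
# Minimal sketch of pond water-quality monitoring (hypothetical thresholds).
# Each parameter is checked against an acceptable range; out-of-range readings
# are collected so the system can alert the farmer or trigger an actuator.

ACCEPTABLE_RANGES = {
    "temperature_c": (26.0, 30.0),  # illustrative range for a tropical species
    "salinity_ppt": (30.0, 35.0),
    "ph": (6.5, 8.5),
}

def check_reading(reading):
    """Return a list of (parameter, value, low, high) tuples for violations."""
    alerts = []
    for param, (low, high) in ACCEPTABLE_RANGES.items():
        value = reading.get(param)
        if value is None:
            continue  # sensor offline; a real system would flag this too
        if not (low <= value <= high):
            alerts.append((param, value, low, high))
    return alerts

# Example: a reading whose pH has drifted below the acceptable range
alerts = check_reading({"temperature_c": 28.0, "salinity_ppt": 32.0, "ph": 5.9})
```

In a deployed system the same check would run on a schedule against live sensor data and feed the alerting and actuation layers of the mobile application.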
Using ICT solutions, the user can monitor the pond status, such as the water level, in real time and respond appropriately to maintain optimum operating conditions. The system can also establish semi-automated or automated feeding schedules that take into account the age of the fish. Besides, the system can report in a timely manner all critical issues that may jeopardize fish growth. Such a sophisticated system can significantly reduce manpower without compromising productivity, an advantage that may maximize profit. The system may, in addition, create employment in the aquaculture industry and benefit both fishers and the country. Figure 1(c) depicts a mobile application that monitors the health of a pond.

2.3 Digital Fish Market

Traditionally, fish are introduced into markets at fixed geographical sites, necessitating the travel of customers to the selling points without knowledge of the market status. Physical markets are challenged by a number of factors, including inaccessibility during unfavorable conditions and insufficient information on product availability, leading to wastage of time and resources. Such markets provide no direct link between sellers and potential customers. To alleviate these challenges, mobile fish markets have been widely applied, whereby sellers use vehicles to reach potential customers. These movable


shops, unaffordable to many local fishers, unnecessarily add extra costs attributable to human resources as well as the fuel and maintenance costs of the vehicles.

ICT may provide advanced tools for the development of an online interactive market that considers the existing business processes, such as auctions, to complement the physical market. Figure 1(d) shows a sample online fish market system. In the online interactive market, the buyer is not obliged to travel yet can view all the available market products and swiftly buy the selected products. Also, with the online system, the seller can generate timely reports for business analysis. Through this system, buyers can confirm transactions online, and a reliable courier will be notified for delivery services, hence improving flexibility, adding convenience, and significantly reducing cost to the buyer. The system can create employment opportunities across various players in the Blue Economy ecosystem.
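The order flow of such an online interactive market can be sketched as follows; the `Listing` class, the product names and prices, and the courier notification are invented for illustration and do not describe an existing system.

```python
# Sketch of an online fish market: sellers post listings, buyers place orders,
# and a courier is notified once an order is confirmed (all names hypothetical).

class Listing:
    def __init__(self, seller, species, kg_available, price_per_kg):
        self.seller = seller
        self.species = species
        self.kg_available = kg_available
        self.price_per_kg = price_per_kg

notifications = []  # stand-in for SMS/push notifications sent to couriers

def place_order(listing, buyer, kg):
    """Reserve stock, compute the amount due, and notify a courier."""
    if kg > listing.kg_available:
        raise ValueError("insufficient stock")
    listing.kg_available -= kg
    total = kg * listing.price_per_kg
    notifications.append(f"deliver {kg} kg {listing.species} to {buyer}")
    return total

# A buyer views a listing and confirms a transaction online
tuna = Listing("Mwanza Co-op", "tuna", kg_available=50, price_per_kg=8000)
amount_due = place_order(tuna, "Hotel Zanzibar", kg=10)
```

The same flow, backed by a database and a payment gateway, would give the seller the timely business reports and the buyer the remote ordering described above.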

Fig. 1. Sample mobile applications: (a) fish types and locations; (b) vessel tracking; (c) aquaculture monitoring; (d) digital fish market.
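The ML-based fish-site prediction discussed in Sect. 2.1 could, for example, be bootstrapped from fishers' historical catch records. The sketch below substitutes a simple nearest-neighbour rule for a trained model and uses invented records; a production system would train a proper ML classifier on real trajectories and catches [11].

```python
# Sketch of fish-site prediction from historical catch records (Sect. 2.1).
# A 1-nearest-neighbour rule over hypothetical past catches illustrates how
# previous fishing information can suggest the likely species at a site.

import math

# (latitude, longitude, month) -> dominant species caught (invented records)
CATCH_RECORDS = [
    ((-6.80, 39.30, 3), "sardine"),
    ((-6.16, 39.20, 3), "tuna"),
    ((-6.82, 39.35, 9), "octopus"),
]

def predict_species(lat, lon, month):
    """Predict the likely species at a site from the nearest past record."""
    def distance(record):
        (rlat, rlon, rmonth), _ = record
        # crude metric mixing space and season (illustrative weighting)
        return math.hypot(rlat - lat, rlon - lon) + 0.1 * abs(rmonth - month)
    return min(CATCH_RECORDS, key=distance)[1]

likely = predict_species(-6.81, 39.31, 3)
```

The value of the approach lies in the data pipeline: every logged trip enlarges the training set, so predictions improve as fishers use the application.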

2.4 ICT in Tourism

Development of the tourism industry depends on effective, high-speed ICT infrastructure and software applications. The tourism management system requires tourists' data to be accessed at different levels. The tourism attraction sites and other products should be made readily visible through technology-based dissemination mechanisms, including dedicated broadcasting channels and social networks, to ensure that the intended information reaches a wider community. Customers should be able to share information and ratings on destinations, the quality of services in hotels and restaurants, and environmental and social conditions. Current challenges in the tourism industry include undertrained tour guides, which lowers tourists' security and safety, a consequence that may reduce the number of potential tourists. To overcome such challenges, mobile applications can be developed to automate the processes by providing auto-guides and translations (multilingual customized geo-maps).


3 State of the Blue Economy in Tanzania

There have been commendable efforts by the Tanzanian government to establish the Blue Economy in the country [14, 15]. The National Five-Year Development Plan 2021/2022–2025/2026 (FYDP III) has established several strategies to ensure positive transformation of the fisheries industries, including utilization of the Blue Economy potentials of marine and fresh waters. The FYDP III has recommended effective methods to achieve a sustainable Blue Economy, hence promoting freshwater and deep-sea fishing, marine and freshwater conservation, and aquaculture. For instance, the government has been urged to facilitate fishing activities through the procurement of fishing vessels and the construction of fishing harbors.

On different occasions, the current presidents of Tanzania, H. E. Samia Suluhu Hassan, and Zanzibar, H. E. Hussein Mwinyi, have strongly supported the Blue Economy as a feasible approach to address most socio-economic problems. In Zanzibar, the initiative to establish the Blue Economy Policy2 started in October 2020 to ensure sustainable utilization of marine resources. Similar efforts are expected to be duplicated in the Tanzania mainland. Generally, these efforts provide a promising future for Tanzania to accomplish the goal of the FYDP III: "realizing competitiveness and industrialization through human development."

Tanzania and France are members of the Indian Ocean Rim Association (IORA)3, a dynamic inter-governmental organization with two primary objectives: strengthening regional cooperation and sustainable development within the Indian Ocean. As IORA members, the two countries will interact more closely in activities related to the Blue Economy: tourism; safety and security; trade and investment; academic, scientific, and technological exchanges; fisheries management; disaster risk management; and women's economic empowerment.
These encouraging efforts by the government provide a promising future for Tanzania to achieve sustainable socio-economic development through the Blue Economy. Some HLIs in Tanzania have been engaged in various ways to foster the Blue Economy. The Open University of Tanzania (OUT), for instance, has designed curricula in response to the Blue Economy4. Emphasizing the initiative, H. E. Hussein Mwinyi commended the achievements and challenged OUT to produce more Blue Economy curricula to address the growing national and societal needs and aspirations.

2 http://www.planningznz.go.tz/.
3 https://www.iora.int/en.
4 https://www.out.ac.tz/out-design-curricula-in-response-to-the-blue-economy/.

Despite the existing efforts, our investigation reveals that more work is needed by HLIs to manifest the envisaged Blue Economy. OUT, in particular, has demonstrated a way towards this promising agenda. For the case of the Tanzania mainland, HLIs may need to adopt the efforts demonstrated by OUT. In this work, we investigated 24 HLIs to learn whether their curricula contain the components necessary for students to conceptualize and advance the Blue Economy (Fig. 2). Specifically, our goal centered on the contribution of ICT programs offered by HLIs towards a sustainable Blue Economy. The rationale for this goal is that ICT plays a primary role in providing suitable technological tools to promote and enhance the development of the Blue


Economy. Results from Fig. 2, however, show that HLIs still have a long way to go to sufficiently support the Blue Economy through ICT programs. Approximately 29% of the twenty-four selected universities lack ICT programs, and the remaining universities offer an inadequate number of ICT programs. This observation further supports the need to reform the curricula of HLIs by incorporating more ICT programs, especially those directly linked with the Blue Economy. We should aim to generate a considerable number of well-trained ICT students who can foster our vision of achieving a sustainable Blue Economy.

Migration towards the Blue Economy requires collaborative efforts by academia, government, and industry. These entities should work together in varied capacities to realize the envisaged transformation. Supported by the Tanzanian government, industries, stakeholders, and universities must undertake advanced research on the Blue Economy. Researchers should be encouraged and supported to undertake multidisciplinary research focused on maximizing the potential benefits of marine and fresh waters while preserving the environment.

Fig. 2. ICT programs offered by selected higher learning institutions in Tanzania (a bar chart comparing, for each institution, the number of ICT programs with the total number of programs offered; institutions covered: UDSM, UDOM, SUA, MUST, DIT, IFM, SUZA, KIST, ZU, SUMAIT University, SJUCET, SAUT, MNMA, IPA, DMI, MUM, ATC, Ardhi University, KIUT, Mzumbe University, NIT, UAUT, KCMU College, and MUHAS).

4 Recommendations

The introduction of ICT into the Blue Economy sector cannot produce the intended benefits if unaccompanied by complementary organizational changes. A significant reform of the existing management structures should be made to create a conducive platform


to practically realize the concept of the Blue Economy. The government should establish a one-stop data center in which data infrastructures, including data servers and associated systems, will be maintained. Furthermore, adequate ICT human resources (experienced developers and technicians) should be prepared to undertake the technical tasks of the center. In this regard, the education system and curricula should be adjusted accordingly to support the proposed concept.

4.1 Curricula Adjustment

Sustainability of the Blue Economy requires the preparation of skilled personnel to support the ICT-related matters needed to foster it. ICT experts and professionals majoring in the Blue Economy should be trained well, preferably through local universities and research centers. The Commonwealth of Learning5 has shown the importance of establishing courses on the Blue Economy to promote socio-economic development. Universities and national research centers should establish new academic and professional programs at all levels to foster the implementation of the concept:

Curricula Review. Local universities should review their curricula, especially the existing Blue Economy-related programs, such as fisheries and aquaculture, marine sciences, fish markets, and tourism, to ensure that candidates acquire relevant ICT skills so that graduates can digitize their respective business processes. Emerging technologies, including artificial intelligence, big data analytics, and the internet of things, should be considered as potential fields with which experts should be equipped.

Professional Training. ICT, like other technologies, evolves with time. Hence, professionals should update their skills regularly to cope with the changes. Well-trained professionals can equip trainers with state-of-the-art technologies and practical knowledge.
There should be well-defined training schedules and calendars, with appropriate budgets to support the mission, achieved through the engagement of HLIs and stakeholders in the Blue Economy.

4.2 Blue Research in ICT

Higher learning institutions and other research organizations in Tanzania should invest heavily in relevant research to maximize the utilization of ICT in the Blue Economy sectors. The research centers should undertake applied research and disseminate their findings to the respective stakeholders. Possible research areas include community networking, big data handling and visualization, automation and control systems, and machine learning-based fish monitoring, among others. Associated data would then be managed in a dedicated data center. Lessons may be taken from universities and research institutes that undertake research on the Blue Economy: the University of Seychelles6, the Middlebury Institute of International Studies7, and Australia's Blue Economy Cooperative Research Center8.

5 https://vussc.col.org/index.php/2020/06/01/blue-econony/.
6 https://beri.unisey.ac.sc/.
7 https://www.middlebury.edu/institute/academics/centers-initiatives/center-blue-economy.
8 https://eprints.utas.edu.au/37099/.


4.3 The Blue Data Center

Efficient information-sharing systems among resource users and fishery managers are essential for the sustainable management of aquatic resources (Fig. 3). In that regard, a data infrastructure, the so-called Blue Data Center, needs to be established through the ministry dealing with the Blue Economy. The data center should encompass all data related to the Blue Economy for research and decision making. The data should be accessible remotely by different users, with the data access level dictated by their respective roles in the business processes. This recommendation aligns with the FYDP III, which highlights the necessity of a National Data Center. The expected data shall include, but are not limited to, the following aspects:

Fisheries Information. A fisheries information system is expected to store fishing information, including the types and locations of catches, the types and capacities of boats, and fish weights. This information can be utilized to generate fish maps for investigating fish behavior with respect to time and other physical parameters.

Digital Fish Market Information. The data center will also host the digital market platform. With the current trend, where the physical market dominates, farmers and customers operate with little market information. The market status, which is highly affected by external factors, can hardly be determined. With the introduction of ICT-based markets, domestic and industrial customers can view the market status and place their orders without the need to physically visit the selling points.

Aquaculture Information. All registered ponds and their respective locations, records of unhealthy fish, measurements of water quality parameters (salinity, pH, and ammonia), and water levels and temperatures, as well as the amount of food reserve, any other related information, and security alerts.

Tourism Information.
All tourists' information, including the airports of their flight origins, the number of days spent at each tourism site, and their personal opinions on different matters associated with tourism.
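Role-dictated access to the Blue Data Center, as described above, can be sketched as follows; the roles, field names, and visibility rules are assumptions made for illustration, not a proposed schema.

```python
# Sketch of role-based filtering for Blue Data Center records: each role sees
# only the fields its business process needs (role and field lists are
# hypothetical examples, not a standard).

VISIBLE_FIELDS = {
    "fishery_manager": {"boat_id", "catch_kg", "species", "location"},
    "researcher": {"catch_kg", "species", "location"},  # no boat identities
    "public": {"species"},
}

def view_record(record, role):
    """Return a copy of the record restricted to the fields the role may see."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

catch = {"boat_id": "TZ-0451", "catch_kg": 120, "species": "sardine",
         "location": "Mafia Channel"}
public_view = view_record(catch, "public")
```

The same idea scales to the fisheries, market, aquaculture, and tourism datasets listed above: one shared store, with each user class granted a role whose field set matches its responsibilities.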

Fig. 3. Blue Economy data communication infrastructure.


5 Conclusion

In this paper, we have reviewed various concepts of ICT as a tool to support the Blue Economy. Evidently, advancements in ICT can facilitate the digitization of the Blue Economy ecosystem to efficiently and effectively sustain the agenda, hence attaining the intended benefits. Using the course catalogues and organizational structures of the relevant departments in the selected institutions, we reviewed the course curricula and research activities. Results show that, to fully exploit the power of the Blue Economy, serious reforms of course curricula and organizational structures are urgently needed. These reforms will ensure sufficient ICT skills to digitalize the business processes of the Blue Economy. In addition, the effectiveness of related Blue Economy operations will be enhanced. We have also proposed some potential research avenues that may be pursued to achieve a sustainable Blue Economy in Tanzania. HLIs may explore the recommended research avenues and act appropriately towards the realization of the envisaged ICT-driven Blue Economy. An immediate response by HLIs may accelerate the government agenda of building an inclusive and sustainable Blue Economy.

Glossary for Fig. 2
UDSM – University of Dar es Salaam
UDOM – University of Dodoma
SUA – Sokoine University of Agriculture
MUST – Mbeya University of Science and Technology
DIT – Dar es Salaam Institute of Technology
IFM – Institute of Finance Management
SUZA – The State University of Zanzibar
KIST – Karume Institute of Science and Technology
ZU – Zanzibar University
SJUCET – St. Joseph University of Tanzania
SAUT – St. Augustine University of Tanzania
MNMA – Mwalimu Nyerere Memorial Academy
IPA – Institute of Public Administration
DMI – Dar es Salaam Maritime Institute
MUM – Muslim University of Morogoro
ATC – Arusha Technical College
KIUT – Kampala International University in Tanzania
NIT – National Institute of Transport
UAUT – United African University of Tanzania
KCMU – Kilimanjaro Christian Medical University
MUHAS – Muhimbili University of Health and Allied Sciences

References
1. Sedoyeka, E., Sicilima, J.: Tanzania national fibre broadband backbone: challenges and opportunities. Int. J. Comput. ICT Res. 10(1), 61–92 (2016)
2. Mango, C.: Local people participation through equity in private mobile telecommunication companies - towards effecting the Tanzania national development vision of 2025: an assessment of legal barriers. SSRN Electron (2015)
3. URT: National Five Year Development Plan 2021/22–2025/26 - Realising Competitiveness and Industrialisation for Human Development. United Republic of Tanzania (2021)
4. Zonienė, A., Valiulė, V.: Tvaraus investavimo galimybės plėtojant mėlynąją ekonomiką [Sustainable investment opportunities in developing the blue economy]. Reg. Form. Dev. Stud. 33(1), 164–171 (2021)

5. Tonković, A.: Gunter Pauli: Plava ekonomija. 10 godina, 100 inovacija, 100 milijuna radnih mjesta. Izvješće podneseno Rimskom klubu [Gunter Pauli: The Blue Economy. 10 years, 100 innovations, 100 million jobs. Report submitted to the Club of Rome]. Sociologija i Prostor 51(1), 150–154 (2013)
6. Spamer, J.: Riding the African blue economy wave: a South African perspective. In: 2015 4th IEEE International Conference on Advanced Logistics and Transport, France, pp. 59–64. IEEE (2015)
7. Slay, H., Dalvit, L.: Red or blue? The importance of digital literacy in African rural communities. In: 2008 International Conference on Computer Science and Software Engineering, Wuhan, China, pp. 675–678. IEEE (2008)
8. Spiezia, V.: Are ICT users more innovative? An analysis of ICT-enabled innovation in OECD firms. OECD J. 2011(1), 1–21 (2011)
9. Cross, M., Adam, F.: ICT policies and strategies in higher education in South Africa - national and institutional pathways. High. Educ. Policy 20(1), 73–95 (2007). https://doi.org/10.1057/palgrave.hep.8300144
10. Haag, M., Lambert, J., Waddell, J., Crampton, W.: A new and improved electric fish finder with resources for printed circuit board fabrication. Neotrop. Ichthyol. 17(4), 1–10 (2019)
11. Li, Y., Chen, N., Chen, L.: Fishing techniques classification based on Beidou trajectories and machine learning. In: ACM International Conference Proceeding Series, Marseille, France, pp. 123–126 (2020)
12. Janet, M.: The impact of radio and mobile-wireless (ICT) in fishing industry: a case-study of Muttom region in Kannyakumari district. Int. J. Inf. Sci. Comput. 5(2), 1–6 (2011)
13. Hamed, S., Jiddawi, N., Poj, B.: Effect of salinity levels on growth, feed utilization, body composition and digestive enzymes activities of juvenile silver pompano Trachinotus blochii. Int. J. Fish. Aquat. Stud. 4(6), 279–283 (2016)
14. Semboja, J.: Realizing the blue economy in Zanzibar: potentials, opportunities and challenges. Uongozi Inst. (2021)
15. Hafidh, H., Mkuya, S.: Zanzibar and the establishment of blue economy strategies. J. Resour. Dev. Manag. 74(1), 34–39 (2021)

Author Index

A
Abdalla, Abdi T. 381, 487
Abdouna, Mahamat 115
Adebayo, Isaiah O. 3
Adigun, Matthew O. 3
Ahmat, Daouda 115, 160
Ajayi, Olasupo 341
Aka, Ahoua Cyrille 64
Alawi, Mahmoud 487
Atemkeng, Marcellin 317, 430
Awoyelu, I. O. 391

B
Ba, Mouhamadou Lamine 473
Bagula, Antoine 341
Bah, Alassane 268
Bakare, K. O. 391
Bakkali, Mohammed 381
Bame, Ndiouma 414
Bassolé, Didier 430, 446
Biaou, B. O. S. 87
Bikienga, Moustapha 209
Bissyandé, Tegawendé F. 115
Boly, Aliou 414
Borgou, Mahamat 160
Bouityvoubou, Henri 149
Boulou, Mahamadi 33

D
Degila, Jules 18, 473
Diop, Cheikh Talibouya 268
Diop, Ibrahima 193
Diop, Idy 268
Diyaolu, I. J. 391

E
Effa, Joseph Yves 317

F
Faissal, Ahmadou 317
Fotso Kuate, Franck Arnaud 317

G
Gouho, Bi Jean Baptiste 64

H
Hecke Kuwakino, Augusto Akira 149
Houndji, Vinasetan Ratheil 473

I
Ibwe, Kwame 487

K
Kahenga-Ngongo, Ferdinand 341
Kau, Freddie Mathews 458
Keupondjo, Satchou Gilles Armel 64
Koala, Gouayon 446
Kogeda, Okuthe P. 458
Konfé, Abdoul-Hadi 101

L
Lô, Moussa 473

M
Mabele, Leonard 285
Magumba, Mark Abraham 221, 247
Maiseli, Baraka 487
Malo, Sadouanouan 193
Megnigbeto, Aurel 18
Moutsinga, Octave 149
Mudali, Pragasen 3
Mugeni, Gilbert 285
Muhamed, Daudi 487

N
Namatovu, Hasifah Kasujja 221, 247
Ndiaye, Maodo 268
Ndlovu, Lungisani 369
Ngaruiya, Njeri 301
Niang, Fatou Lo 473
Ntantiso, Lutho 341
Ntsama, Jean Emmanuel 317
Ntshangase, Sthembile 369
Nzeket Njoya, Alima 430

O
Ogundare, B. S. 87
Ogunyemi, A. A. 391
Okalas Ossami, Dieu-Donné 149
Okuthe, J. A. 76
Oluwatope, A. O. 87, 391
Olwal, Thomas 301
Ouedraogo, Frédéric T. 209
Ouedraogo, Tounwendyam Frédéric 358
Oumtanaga, Souleymane 64

P
Poda, Pasteur 101

R
Ronoh, Kennedy 285, 301

S
Sabas, Arsène 18
Sall, Ousmane 149
Sanou, Bakary Hermane Magloire 33
Sanou, Bernard Armel 101
Sarr, Jean Gane 414
Sevilla, Joseph 285
Sie, Oumarou 446
Somé, Doliére Francis 50
Sonoiya, Dennis 285
Sore, Safiatou 209
Sow, Doudou 268
Stofile, Akhona 369

T
Tall, Hamadoun 50
Tapsoba, Théodore Marie Yves 50
Tchakounte Tchuimi, Dimitri 317
Tchakounté, Franklin 317, 430
Terzoli, A. 76
Tiendrebeogo, Telesphore 446
Toé, Elisée 50
Traore, Yaya 193, 209
Trawina, Halguieta 193

U
Udagepola, Kalum Priyanath 317, 430

W
Wasige, Edward 285

Y
Yélémou, Tiguiane 33, 50
Youssouf, Adoum 160

Z
Zinsou, K. Merveille Santi 268
Zongo, Pengwendé 358

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2023. Published by Springer Nature Switzerland AG 2023. All Rights Reserved
R. A. Saeed et al. (Eds.): AFRICOMM 2022, LNICST 499, pp. 499–500, 2023. https://doi.org/10.1007/978-3-031-34896-9