Machine Learning and the Internet of Things in Education: Models and Applications (Studies in Computational Intelligence, 1115) 3031429230, 9783031429231

This book is designed to provide a rich research hub for researchers, teachers, and students to ease research hassles/challenges.


English Pages 289 [280] Year 2023


Table of contents:
Preface
Contents
About the Editors
Introduction to Machine Learning and IoT
1 Artificial Intelligence (AI)
2 Internet of Things
3 Conclusion and the Future of AI and IoT
References
Deep Convolutional Network for Food Image Identification
1 Introduction
2 CNN Architecture
3 Design of FRCNN System
3.1 Food-101 Dataset
3.2 Model Architecture
3.3 Simulations
4 Conclusion
References
Face Mask Recognition System-Based Convolutional Neural Network
1 Introduction
1.1 Objectives and Scope
1.2 Significance of Real-Time Face Mask Detection
2 Literature Review
3 Data Preprocessing
4 Proposed Model (CNN)
4.1 Splitting of the Data into Training and Test Sets
4.2 Activation Function
4.3 Convolutional Layers
4.4 Fully-Connected Layers
4.5 Face Detection and Cropping
4.6 Blob from Image (RGB Mean Subtraction)
4.7 The Architecture of the Application
4.8 Training of the Face Mask Detection Model
4.9 Simulation of the Proposed Model
5 Conclusion and Future Work
References
Fuzzy Inference System Based-AI for Diagnosis of Esophageal Cancer
1 Introduction
2 Related Literature Review
3 Materials and Methods
4 Results and Discussion
5 Conclusion and Recommendation
References
Skin Detection System Based Fuzzy Neural Networks for Skin Identification
1 Introduction
2 Review of Related Study
3 Proposed Algorithm
3.1 Structure of the System
3.2 Fuzzy Neural System for Skin Detection
3.3 Cross Validation
3.4 Parameter Updates and Fuzzy Classification
3.5 Learning Using Gradient Descent
4 Simulation Studies
5 Conclusions
References
Machine Learning Based Cardless ATM Using Voice Recognition Techniques
1 Introduction
1.1 Machine Learning
1.2 The Purpose of Using Voice Recognition in an ATM and Its Advantages
2 Related Works
3 Proposed System
3.1 System Classifier
3.2 Simulation and Result
4 Conclusion
References
Automated Classification of Cardiac Arrhythmias
1 Introduction
2 Review of Existing Works
3 Dataset Analysis
3.1 Feature Selection Method
4 Cardiac Arrhythmia Classification
4.1 Fuzzy Neural Network for Classification of Cardiac Arrhythmia
4.2 Naïve Bayes Based Machine Learning Technique
4.3 Radial Basis Function Networks (RBFN) Technique
4.4 Experimental Result Comparison of the Proposed Algorithms
5 Conclusion and Future Work
References
A Fuzzy Logic Implemented Classification Indicator for the Diagnosis of Diabetes Mellitus in TRNC
1 Introduction
2 Methodology
2.1 Database
3 Classification
3.1 Fuzzy Logic
3.2 Fuzzy Logic Solution Approach
3.3 Reasons for Preference of Fuzzy Logic
3.4 Fuzzy Sets
3.5 MF (Membership Functions)
3.6 Variables of Linguistic
3.7 Classification Results
4 Implementation
4.1 System Design
5 Experimental Results
5.1 Rules of Fuzzy
5.2 GUI System Design
5.3 Accuracy Checking
6 Conclusion
References
Implementation and Evaluation of a Mobile Smart School Management System—NEUKinderApp
1 Introduction
2 Proposed Method
2.1 NEUKinderApp Architecture
2.2 Material Design and Optimization
2.3 Backend Customization
3 Result Analysis
3.1 Explored Devices
3.2 Response Times
3.3 Comparative Result Analysis
4 Conclusion
References
The Emerging Benefits of Gamification Techniques
1 Introduction
1.1 Artificial Intelligence (AI), and Implementation of Gamification
2 Methods of Gamification and Engineering Design
3 Conclusion
References
A Comprehensive Review of Virtual E-Learning System Challenges
1 Introduction
2 Related Works
3 Challenges Encountered During Covid-19 Pandemic with E-Learning System
4 Research Methodology
5 Discussion
6 Recommendations
7 Conclusion
References
A Semantic Portal to Improve Search on Rivers State’s Independent National Electoral Commission
1 Introduction
2 Literature Review
3 The Research Methods Used in This Report
3.1 The Study Philosophy
3.2 The Research Approach
3.3 System Analysis, Design, and Architecture
3.4 System Specifications (Domain of the System)
3.5 Ontology Processing
3.6 System Design
3.7 The Election Ontology
3.8 How the System Works
4 Evaluation and Testing
4.1 Analyzing and Evaluation Results of the Proposed and Baseline System
5 Conclusion, Limitation, and Recommendations
References
Implementation of Semantic Web Service and Integration of e-Government Based Linked Data
1 Introduction
1.1 The Open Government Data (OGD)
1.2 Linked Open Data (LOD)
2 Methodology
2.1 System Description
2.2 System Analysis
2.3 The Proposed System
3 System Implementation and Evaluation
3.1 Brief Description of the United Kingdom (UK) Government
3.2 SPARQL Endpoint for Linked Data
3.3 System Evaluation and Discussion
4 Conclusion
References
Application of Zero-Trust Networks in e-Health Internet of Things (IoT) Deployments
1 Introduction
2 E-Health Security Challenges
3 Proposed Model
4 Results
5 Conclusion
References
IoT Security Based Vulnerability Assessment of E-learning Systems
1 Introduction
2 E-learning Vulnerabilities
3 E-learning Vulnerability Assessment
4 Vulnerability Risk Space and Threat Landscape of E-learning system
5 Conclusion
References
Blockchain Technology, Artificial Intelligence, and Big Data in Education
1 Introduction
2 Extent of Past Work
3 Materials and Procedures
4 Results and Discussion
5 Conclusion
References
Sustainable Education Systems with IOT Paradigms
1 Introduction
2 Scope of Previous Works
3 Methodology
4 Results and Discussion
4.1 Methods Using Relational Ontology
4.2 Relationship-Based Epistemologies
4.3 Ethical Methods Based on Relationships
5 Conclusions
References
Post Covid Era-Smart Class Environment
1 Introduction
2 Educational Sensors
3 Ubiquitous Learning
4 Learning Management Systems (LMS)
5 Unified Learning
6 Virtual Classrooms
7 Cloud-Based Smart Classroom
8 Education and the Covid-19 Era
9 Artificial Intelligence in Education
9.1 Intelligent Tutoring Systems
9.2 Smart Classrooms: Good or Bad?
10 Conclusion
References

Studies in Computational Intelligence 1115

John Bush Idoko Rahib Abiyev   Editors

Machine Learning and the Internet of Things in Education Models and Applications

Studies in Computational Intelligence Volume 1115

Series Editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

John Bush Idoko · Rahib Abiyev Editors

Machine Learning and the Internet of Things in Education Models and Applications

Editors John Bush Idoko Department of Computer Engineering Near East University Nicosia, Cyprus

Rahib Abiyev Department of Computer Engineering Near East University Nicosia, Cyprus

ISSN 1860-949X ISSN 1860-9503 (electronic) Studies in Computational Intelligence ISBN 978-3-031-42923-1 ISBN 978-3-031-42924-8 (eBook) https://doi.org/10.1007/978-3-031-42924-8 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Preface

This book showcases several machine learning techniques and Internet of Things technologies, particularly for learning purposes. The techniques and ideas demonstrated in this book can be explored by targeted researchers, teachers, and students to ease their learning and research exercises and goals in the fields of Artificial Intelligence (AI) and the Internet of Things (IoT). The AI and IoT technologies enumerated and demonstrated in this book will enable systems to be simulative, predictive, prescriptive, and autonomous; interestingly, the integration of these technologies can further advance emerging applications from assisted, to augmented, and ultimately to self-operating smart systems. This book focuses on the design and implementation of algorithmic applications in the fields of artificial intelligence and the Internet of Things, with pertinent applications. The book further depicts the challenges and the role of technology in machine learning and the Internet of Things for teaching, learning, and research purposes. The theoretical and practical applications of AI techniques and IoT technologies featured in the book include, but are not limited to: algorithmic and practical aspects of AI techniques and IoT technologies for scientific problem diagnosis and recognition, medical diagnosis, e-health, e-learning, e-governance, blockchain technologies, optimization and prediction, industrial and smart office/home automation, and supervised and unsupervised machine learning for IoT data and devices.

Nicosia, Cyprus

John Bush Idoko Rahib Abiyev


Contents

Introduction to Machine Learning and IoT (John Bush Idoko and Rahib Abiyev)
Deep Convolutional Network for Food Image Identification (Rahib Abiyev and Joseph Adepoju)
Face Mask Recognition System-Based Convolutional Neural Network (John Bush Idoko and Emirhan Simsek)
Fuzzy Inference System Based-AI for Diagnosis of Esophageal Cancer (John Bush Idoko and Mohammed Jameel Sadeq)
Skin Detection System Based Fuzzy Neural Networks for Skin Identification (Idoko John Bush and Rahib Abiyev)
Machine Learning Based Cardless ATM Using Voice Recognition Techniques (John Bush Idoko, Mansur Mohammed, and Abubakar Usman Mohammed)
Automated Classification of Cardiac Arrhythmias (John Bush Idoko)
A Fuzzy Logic Implemented Classification Indicator for the Diagnosis of Diabetes Mellitus in TRNC (Cemal Kavalcıoğlu)
Implementation and Evaluation of a Mobile Smart School Management System—NEUKinderApp (John Bush Idoko)
The Emerging Benefits of Gamification Techniques (John Bush Idoko)
A Comprehensive Review of Virtual E-Learning System Challenges (John Bush Idoko and Joseph Palmer)
A Semantic Portal to Improve Search on Rivers State’s Independent National Electoral Commission (John Bush Idoko and David Tumuni Ogolo)
Implementation of Semantic Web Service and Integration of e-Government Based Linked Data (John Bush Idoko and Bashir Abdinur Ahmed)
Application of Zero-Trust Networks in e-Health Internet of Things (IoT) Deployments (Morgan Morgak Gofwen, Bartholomew Idoko, and John Bush Idoko)
IoT Security Based Vulnerability Assessment of E-learning Systems (Bartholomew Idoko and John Bush Idoko)
Blockchain Technology, Artificial Intelligence, and Big Data in Education (Ramiz Salama and Fadi Al-Turjman)
Sustainable Education Systems with IOT Paradigms (Ramiz Salama and Fadi Al-Turjman)
Post Covid Era-Smart Class Environment (Kamil Dimililer, Ezekiel Tijesunimi Ogidan, and Oluwaseun Priscilla Olawale)

About the Editors

John Bush Idoko graduated from Benue State University, Makurdi, Nigeria, where he obtained a B.Sc. degree in Computer Science in 2010. He then began an M.Sc. program in Computer Engineering at Near East University, North Cyprus. After receiving the M.Sc. degree in 2017, he started a Ph.D. program in the same department that year. During his postgraduate programs at Near East University, he worked as a Research Assistant in the Applied Artificial Intelligence Research Centre. He obtained his Ph.D. in 2020 and is currently an Assistant Professor in the Computer Engineering Department, Near East University, Cyprus. His research interests include, but are not limited to, AI, machine learning, deep learning, computer vision, data analysis, soft computing, advanced image processing, and bioinformatics.

Rahib Abiyev received the B.Sc. and M.Sc. degrees (First Class Hons.) in Electrical and Electronic Engineering from Azerbaijan State Oil Academy, Baku, in 1989, and the Ph.D. degree in Electrical and Electronic Engineering from the Computer-Aided Control System Department of the same university, in 1997. He was a Senior Researcher with the research laboratory “Industrial Intelligent Control Systems” of the Computer-Aided Control System Department. In 1999, he joined the Department of Computer Engineering, Near East University, Nicosia, North Cyprus, where he is currently a Full Professor and the Chair of the Computer Engineering Department. In 2001, he founded the Applied Artificial Intelligence Research Centre, and in 2008, he created the “Robotics” research group. He is currently the Director of the Research Centre. He has published over 300 papers in related subjects. His current research interests include soft computing, control systems, robotics, and signal processing.


Introduction to Machine Learning and IoT

John Bush Idoko and Rahib Abiyev

Abstract Smart systems built on machine learning/Internet of Things technologies are systems that have the ability to reason, calculate, learn from experience, perceive relationships and analogies, store and retrieve data from memory, understand complicated concepts, solve problems, speak fluently in plain language, generalize, classify, and adapt to changing circumstances. To make decisions and build smart environments, smart systems combine sensing, actuation, signal processing, and control. The Internet of Things (IoT) is being developed significantly as a result of the real-time networked information and control that smart systems provide. Smart systems are the next generation of computing and information systems, combining artificial intelligence (AI), machine learning, edge/cloud computing, cyber-physical systems, big data analytics, pervasive/ubiquitous computing, and IoT technologies. Recent years have brought some significant hurdles for smart systems due to the wide variety of AI applications, IoT devices, and technologies. A few of these challenges include the development and deployment of integrated smart systems and the effective and efficient use of computing technologies.

Keywords Artificial intelligence · Machine learning · Deep learning · Neural networks · Internet of things

1 Artificial Intelligence (AI)

The goal of artificial intelligence (AI), a subfield of computer science, is to build machines or computers that are as intelligent as people. Artificial intelligence is not as new as we might imagine: the Turing test was created by Alan Turing in 1950, so the field dates back at least to that year. Later, in the 1960s, ELIZA, the first chatbot computer program, was developed [1].

A world chess champion was defeated by the chess computer IBM Deep Blue in 1997, with Deep Blue winning two of the six games, the champion winning one, and the other three ending in draws [2]. Apple unveiled Siri as a digital assistant in 2011 [2]. OpenAI was launched in 2015 by Elon Musk and associates [3, 4]. According to John McCarthy, one of the founding fathers of AI, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs." Artificial intelligence can thus be seen as a way of teaching a computer, a computer-controlled robot, or a piece of software to think critically, much as an intelligent person might. To develop intelligent software and systems, it is essential to first comprehend how the human brain functions and how individuals learn, make decisions, and collaborate to solve problems [5]. Some of the objectives of AI are:

1. to develop expert systems that behave intelligently, learn, demonstrate, explain, and provide their users with guidance; and
2. to add human intelligence to machines so that they comprehend, think, learn, and act like people.

Artificial intelligence is a science and technology based on fields such as computer science, engineering, mathematics, biology, linguistics, and psychology. One of its major focuses is the development of computer abilities akin to human intelligence, such as problem-solving, learning, and reasoning. AI is a broad and expanding field with many subfields, among them machine learning and deep learning [6–24]. Figure 1 illustrates a transitive subset of artificial intelligence. In a nutshell, machine learning is the idea that computers can use algorithms to improve their creativity and predictions such that they more closely mimic human thought processes [7]. Figure 2 shows a typical machine learning model's learning process. Machine learning involves a number of learning paradigms:

a. Supervised learning: Machines are made to learn through supervised learning, which involves feeding them labelled data.

Fig. 1 Subset of AI

Fig. 2 Learning process of a machine learning model

Machines are trained in this process by being given access to a vast amount of data and taught to interpret it [8–14]. For example, the machine is presented with a variety of images of dogs shot from numerous perspectives, with various colour variations, breeds, and many other differences. For the machine to learn to analyze the data from these various dog images, its "insight" must grow. Eventually, the machine will be able to predict whether a given image is a dog, even from a completely different image that was not included in the labelled dataset of dog images it was fed earlier.

b. Unsupervised learning: Unsupervised learning algorithms, in contrast to supervised learning, evaluate data that has not been assigned a label. This means that in this scenario, we are teaching the computer to interpret and learn from a series of data whose meaning is incomprehensible to the human eye. The computer searches for patterns in the data and makes its own decisions based on those patterns. It is important to note that the conclusions reached here are generated by the computer from an unlabelled dataset.

c. Reinforcement learning: Reinforcement learning is a machine learning approach that depends on feedback. In this method, the machine is fed a set of data and asked to predict what it might be. The machine receives feedback about its errors if it draws an incorrect conclusion from the incoming data. For instance, if you give it an image of a basketball and it erroneously identifies the basketball as a tennis ball or something else, it learns from that feedback and can then recognize even a completely different image of a basketball.

d. Deep learning, on the other hand, is the idea that computers can mimic the steps a human brain takes to reason, evaluate, and learn. A neural network is used in the deep learning process as a component of an AI's thought process. Deep learning requires a significant amount of data for training, as well as a very powerful processing system.
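To make the supervised and unsupervised paradigms above concrete, the following minimal sketch (not from the chapter; it assumes the scikit-learn package and uses its bundled Iris data) trains a classifier on labelled examples and then clusters the same measurements without any labels:

```python
# Supervised vs. unsupervised learning on the same data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model is fed labelled pairs (X, y).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the model sees only X and must find structure on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```

The classifier is scored against held-out labels, whereas the clustering step never sees y at all; that difference is exactly the distinction between the first two paradigms described above.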

Application areas of AI include:

a. Expert Systems: These applications combine hardware, software, and specialized data to convey reasoning and advice. They offer users explanations and recommendations.
b. Speech Recognition: Certain intelligent systems are able to hear and understand language as people speak it, including the meanings of sentences. They can manage a variety of accents, background noise, slang, changes in human voices brought on by a cold, etc.
c. Gaming: In strategic games such as chess, tic-tac-toe, and poker, where machines may consider numerous possible positions based on heuristic knowledge, AI plays a key role.
d. Natural Language Processing: Makes it possible to communicate with a computer that can understand human natural language.
e. Handwriting Recognition: Handwriting recognition software reads text written with a pen on paper or a stylus on a screen. It can recognize the letter shapes and convert the text into editable form.
f. Intelligent Robots: Robots can complete the jobs that humans assign to them. They are equipped with sensors that can identify physical data in real time, including light, heat, temperature, movement, sound, bumps, and pressure. To demonstrate intelligence, they have powerful processors, numerous sensors, and a large amount of memory. They also have the capacity to learn from their mistakes and adapt to new surroundings.
g. Vision Systems: These systems can recognize, decipher, and comprehend visual input on a computer. Examples include the use of a spy plane's images to create a map or spatial information, the use of clinical expert systems by doctors to diagnose patients, and the use of computer software by law enforcement to identify criminals from stored portraits created by forensic artists.

2 Internet of Things

In the Internet of Things, computing can be done whenever and wherever you want. In other terms, the Internet of Things (IoT) is a network of interconnected objects (things) embedded with sensors, actuators, software, and other technologies in order to connect and exchange data with other objects over the internet [15]. As seen in Fig. 3, IoT is the nexus of the internet, devices, and data. In 2020, there were 16.5 billion connected things globally, excluding computers and portable electronic devices (such as smartphones and tablets). IoT gathers information from the numerous sensors embedded in vehicles, refrigerators, spacecraft, etc. There is enormous potential for creative IoT applications across a wide range of sectors as sensors become more ubiquitous. Components of an IoT system:

a. Sensor: a linked device that enables sensing of the physical properties of the scenario or controlled environment, whose values are converted to digital data.


Fig. 3 IoT

b. Actuator: a linked device that makes it possible to take action within a given environment.
c. Controller: a connected device implementing an algorithm to transform input data into actions.
d. Smart things: sensors, actuators, and controllers working together to create digital devices that provide service functions (potentially implemented by local/distributed execution platforms and M2M/Internet communications).

Application areas of IoT include automated transport systems, smart security cameras, smart farming, thermostats, smart televisions, baby monitors, children's toys, refrigerators, automatic light bulbs, and many more. A minimal sketch of how these components interact follows.
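The sketch below wires the sensor/controller/actuator roles listed above into one loop (an illustration, not a reference design). It assumes the paho-mqtt package (1.x API) and a reachable MQTT broker; the broker address, topic names, and simulated sensor are all hypothetical:

```python
# Fake temperature sensor -> controller rule -> fan actuator, over MQTT.
import random
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical broker

for _ in range(10):
    temperature = 20.0 + random.random() * 10  # stand-in for a real sensor
    client.publish("home/livingroom/temperature", f"{temperature:.1f}")
    # Controller: actuate when the sensed value crosses a threshold.
    if temperature > 27.0:
        client.publish("home/livingroom/fan", "ON")   # actuator command
    else:
        client.publish("home/livingroom/fan", "OFF")
    time.sleep(1)

client.disconnect()
```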

3 Conclusion and the Future of AI and IoT

IoT and AI have recently experienced exponential growth. These fields are going to be so significant and influential that they will substantially alter and improve the society we live in; we cannot even begin to fathom how enormous and influential they will be in the near future. With AI and its rapidly expanding applications in our daily lives, there is still a lot to learn. It would be wise to adjust to this rapidly changing world and acquire AI- and IoT-related skills; to improve this world, we should learn and grow in the same ways that AI does. The use of AI and IoT in education can be very beneficial: they could be utilized to analyze data on individuals' perspectives, capabilities, preferences, and shortcomings in order to create curricula, tactics, and schedules that are appealing, well-suited, and inclusive of most, if not all, adults and children. Future modes of transportation will also change as a result of AI applications. In addition to self-driving automobiles, self-flying planes and drones that conveniently deliver your meals faster and better are also being developed.

The fear of automation replacing jobs is one of the main AI-related worries. However, it is possible that AI will create more employment opportunities than it replaces; by creating new job categories, it will alter how people work.

References

1. Retrieved February 27, 2023, from https://news.harvard.edu/gazette/story/2012/09/alan-turing-at-100/
2. Retrieved March 01, 2023, from https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
3. Retrieved March 03, 2023, from https://openai.com/blog/introducing-openai/
4. Retrieved March 03, 2023, from https://www.bbc.com/news/technology-35082344
5. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
6. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
7. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410.
8. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
9. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
10. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
11. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of conferences (Vol. 16, p. 02004). EDP Sciences.
12. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
13. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022, May). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers-Bridge Engineering (pp. 1–8). Thomas Telford Ltd.
14. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622
15. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.
16. Dimililer, K., & Bush, I. J. (2017, September). Automated classification of fruits: Pawpaw fruit as a case study. In Man-machine interactions 5: 5th international conference on man-machine interactions, ICMMI 2017 held at Kraków, Poland, October 3–6, 2017 (pp. 365–374). Cham: Springer International Publishing.
17. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of conferences (Vol. 9, p. 03002). EDP Sciences.
18. Abiyev, R., Idoko, J. B., & Arslan, M. (2020, June). Reconstruction of convolutional neural network for sign language recognition. In 2020 International conference on electrical, communication, and computer engineering (ICECCE) (pp. 1–5). IEEE.
19. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690.
20. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th international conference on theory and application of fuzzy systems and soft computing—ICAFS-2018 13 (pp. 239–248). Springer International Publishing.
21. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and fuzzy techniques for emerging conditions and digital transformation: Proceedings of the INFUS 2021 conference, held August 24–26, 2021 (Vol. 2, pp. 273–280). Springer International Publishing.
22. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020, August). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International conference on transportation and development 2020 (pp. 194–203). Reston, VA: American Society of Civil Engineers.
23. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022, November). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th information technology for education and development (ITED) (pp. 1–5). IEEE.
24. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th international conference on theory and application of fuzzy systems and soft computing (ICAFS-2018) (Vol. 10, pp. 978–3).

Deep Convolutional Network for Food Image Identification

Rahib Abiyev and Joseph Adepoju

Abstract Food plays an integral role in human survival, and it is crucial to monitor our food intake to maintain good health and well-being. As mobile applications for tracking food consumption become increasingly popular, having a precise and efficient food classification system is more important than ever. This study presents an optimized food image recognition model known as FRCNN, which employs a convolutional neural network implemented in Python's Keras library without relying on a transfer learning architecture. The FRCNN model underwent training on the Food-101 dataset, comprising 101,000 images of 101 food classes, with a 75:25 training/validation split. The results indicate that the model achieved a testing accuracy of 92.33% and a training accuracy of 96.40%, outperforming the baseline model that used transfer learning on the same dataset by 8.12%. To further evaluate the model's performance, we randomly selected 15 images from 15 different food classes in the Food-101 dataset and achieved an overall accuracy of 94.11% on these previously unseen images. Additionally, we tested the model on the MA Food dataset, consisting of 121 food classes, and obtained a training accuracy of 95.11%. These findings demonstrate that the FRCNN model is highly precise and capable of generalizing well to unseen images, making it a promising tool for food image classification.

Keywords Deep convolutional network · Food image recognition · Transfer learning

1 Introduction

Food is a vital component of our daily lives, as it provides the body with essential nutrients and energy to perform basic functions, such as maintaining a healthy immune system and repairing cells and tissues. Given its significance in health-related issues, food monitoring has become increasingly important [5]. Unhealthy eating habits may lead to the development of chronic diseases such as obesity, diabetes, and hypercholesterolemia. According to the World Health Organization (WHO), the global prevalence of obesity more than doubled between 1980 and 2014, with 13% of individuals being obese and 39% of adults overweight. Obesity may also contribute to other conditions, such as osteoarthritis, asthma, cancer, diabetes mellitus type 2, obstructive sleep apnea, and cardiovascular disorders [4]. This is why experts have stressed the importance of accurately assessing food intake in reducing the risks associated with developing chronic illnesses. Hence, there is a need for a highly accurate and optimized food image recognition system. Such a system involves training a computer to recognize and classify food items using one or more combinations of machine learning algorithms.

Food image recognition is a complex problem that has attracted much interest from the scientific community, prompting researchers to devise various models and methods to tackle it. Although food recognition is still considered challenging due to the need for models that can handle visual data and higher-level semantics, researchers have made progress in developing effective techniques for food image classification. One of the earliest methods used for this task was the Fisher Vector, which employs the Fisher kernel to analyse the visual characteristics of food images at a local level. The Fisher kernel uses a generative model, such as the Gaussian Mixture Model, to encode the deviation of a sample from the model into a unique Fisher Vector that can be used for classification. Another technique is the bag-of-visual-words (BOW) representation, which uses vector quantization of affine-invariant descriptors of image patches. Additionally, Matsuda et al. [15] proposed a comprehensive approach for identifying and classifying food items in an image that involves using multiple techniques to identify potential food regions, extract features, and apply multiple-kernel learning with non-linear kernels for image classification. Bossard et al. [6] introduced a new benchmark dataset called Food-101 and proposed a random-forest-based method that mines discriminative components across multiple food classes. Their approach outperformed other methods such as BOW, IFV, RF, and RCF, except for CNN, according to their experimentation results.

Over the years, these techniques have been successful in food image classification and identification tasks. However, with the progress in computer vision and machine learning and enhanced processing speed, image recognition has undergone a transformation [7, 14, 18]. In the current literature, deep learning algorithms, especially CNNs, have been extensively used for this task due to their unique properties, such as sparse interaction, parameter sharing, and equivariant representation. As a result, the CNN has become a popular method for analysing large image datasets, including food images, and has demonstrated exceptional accuracy [9, 10, 13, 16, 17]. The use of CNNs in food image classification has shown significant progress in recent years [1–3, 18, 20]. Researchers have achieved high accuracy rates using pretrained models, such as AlexNet and EfficientNetB0, as well as through the development of novel deep CNN algorithms. These approaches have been tested on various datasets, including the UEC-Food100, UEC-Food256, and Food-101 datasets.
DeepFood, developed by Liu et al., achieved a 76.30% accuracy rate on the UEC-Food100 dataset, while Hassannejad et al. outperformed this with an accuracy of 81.45% on the same dataset using Google's Inception V3 architecture. Mezgec and Koroušić Seljak modified the well-known AlexNet structure to create NutriNet, which achieved a classification performance of 86.72% on over 520 food and beverage categories. Similarly, Kawano and Yanai [11] used a pre-trained model similar to the AlexNet architecture and were able to achieve an accuracy of 72.26%, while Christodoulidis et al. [8] introduced a novel deep CNN algorithm that obtained 84.90% accuracy on a custom dataset. Finally, VijayaKumari et al. [19] achieved a best accuracy of 80.16% using the pre-trained EfficientNetB0 model, which was trained on the Food-101 dataset.

The studies mentioned above have shown that CNNs have immense potential for accurately classifying food images, which could have numerous practical applications, such as dietary monitoring and meal tracking. However, while these findings are promising, there is still ample room for improvement, and the primary aim of this research is to propose a highly accurate and optimized model for food image recognition and classification. In this paper, we introduce a new CNN architecture called FRCNN that is specifically designed for food recognition. Our proposed system boasts high precision and greater robustness across different food databases, making it a valuable tool for real-world applications in the field of food image recognition.

Here is how this paper is structured: in Sect. 2, we describe the methodology used to develop our food recognition system. In Sect. 3, we provide the details of the FRCNN design and architecture, including the dataset and proposed model structure, and give an overview of the simulation results. Finally, in Sect. 4, we present the conclusion of our work.

2 CNN Architecture

Convolutional neural networks (CNNs) are a type of deep artificial neural network used for tasks like object detection and identification in grid-patterned input such as images. CNNs have a structure similar to ANNs, with a feedforward architecture that splits nodes into layers, each layer passing its output on to the next. They use backpropagation to learn and update weights, which reduces the loss function and the error margin. CNNs see images as a grid-like layout of pixels, and their layers detect basic patterns like lines and curves before advancing to more complex ones. CNNs are commonly used in computer vision research due to features like sparse interaction, parameter sharing, and equivariant representation. Convolution, pooling, and fully connected layers make up most CNNs, with feature extraction typically taking place in the first two layers and the outcome mapped into the fully connected layer (Fig. 1).

One of the most essential layers in a CNN is the convolution layer, which applies filters to the input image to extract features like edges and corners. The output of this layer is a feature map that is passed to the next layer for further processing. The pooling layer, in turn, is responsible for reducing the spatial dimensions of the feature maps generated by the convolutional layer, thus reducing the computational complexity of the network.

Pooling can be performed using different techniques such as max pooling, sum pooling, or average pooling. The final layer in a typical CNN is the fully connected or dense layer, which takes the output of the convolution and pooling layers and performs classification using an activation function such as softmax to generate a probability distribution over the different classes. The dense layer connects all the nodes in the previous layer to every node in the current layer, making it a computationally intensive part of the network. By combining these layers, CNNs can extract complex features from images and achieve high accuracy in tasks like object detection and classification [2]. After determining the CNN's output signals, the learning of the network parameters θ starts. A loss function is applied to train the CNN; it can be represented as:

performed using different techniques such as max pooling, sum pooling or average pooling. The final layer in a typical CNN is the fully connected or dense layer, which takes the output of the convolution and pooling layers and performs classification using an activation function such as the SoftMax to generate the probability distribution over the different classes. The dense layer connects all the nodes in the previous layer to every node in the current layer, making it a computationally intensive part of the network. By combining these layers, CNNs can extract complex features from images and achieve high accuracy in tasks like object detection and classification [2]. After determining CNN’s output signals the learning of network parameters θ starts. The loss function is applied to train CNN. The loss function can be represented as: L=

N 1 E l(θ ; y(i ) , o(i) ) N i=1

(1)

where oi and yi are current output and target output signals, correspondingly. Using the loss function the unknown parameters θ {( are determined. With the use ) } of training examples consisting of input–output pairs x (i) , y(i ) ; i ∈ [1, .., N ] the learning of θ parameters is carried out to minimize the value of the loss function. For this purpose, Adam optimizer (Kingma & Jimmy, 2015) learning algorithm is used in the paper. For the efficient training of CNN, a large volume of training pairs is required. In the paper, food image data sets are used for training if CNN.
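Equation (1) simply averages a per-sample loss over the N training pairs. As a small numerical illustration (not from the chapter), the snippet below evaluates Eq. (1) with cross-entropy as the per-sample loss l:

```python
# Mean loss over N examples, Eq. (1), with cross-entropy as l(theta; y, o).
import numpy as np

def cross_entropy(y, o, eps=1e-12):
    # y: one-hot target y^(i); o: predicted class probabilities o^(i)
    return -np.sum(y * np.log(o + eps))

targets = np.array([[1, 0, 0], [0, 1, 0]])               # y^(i)
outputs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])   # o^(i)

N = len(targets)
L = sum(cross_entropy(y, o) for y, o in zip(targets, outputs)) / N
print(f"L = {L:.4f}")
```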


3 Design of FRCNN System

3.1 Food-101 Dataset

Bossard et al. [6] developed the Food-101 dataset, comprising pictures gathered from foodspotting.com, an online platform that allows users to share photos of their food, along with its location and description. To create the dataset, the authors selected the top 101 foods that were regularly labelled and popular on the website. They chose 750 photos for each of the 101 food classes for training, and 250 images for testing (Fig. 2).
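As a convenience for readers who want to reproduce this setup, Food-101 is also packaged in TensorFlow Datasets. The sketch below (an assumption about tooling, not the authors' pipeline) loads it and confirms the split sizes implied by the 750/250 per-class division:

```python
# Load Food-101 via TensorFlow Datasets and check the split sizes.
import tensorflow_datasets as tfds

(train_ds, val_ds), info = tfds.load(
    "food101",
    split=["train", "validation"],
    as_supervised=True,  # yields (image, label) pairs
    with_info=True,
)
print(info.splits["train"].num_examples)       # 75750 = 101 * 750
print(info.splits["validation"].num_examples)  # 25250 = 101 * 250
print(info.features["label"].num_classes)      # 101
```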

3.2 Model Architecture

The FRCNN model's architecture is similar to the standard CNN design, with a block consisting of a convolution layer, batch normalization, another convolution layer, batch normalization, and max pooling. The remaining five layers of the FRCNN model have a similar structure, as depicted in Fig. 3. After the fourth layer's max pooling, the model goes through flattening, a dense layer, batch normalization, a dropout layer, two fully-connected layers, and finally the classification layer. The architecture of the FRCNN model is illustrated in Fig. 3, and the proposed FRCNN model is presented in Table 1.

During the development of the FRCNN model, various factors were considered. Initially, the focus was on extracting relevant information from the input data, which was achieved using convolutional and pooling layers. This approach allowed the model to analyse images at varying scales, which reduced dimensionality and helped to identify significant patterns. Secondly, the FRCNN model was designed with efficiency in mind: computational resources and memory usage were optimized by applying techniques like weight sharing and data compression, along with fine-tuning the number of layers and filters for optimal performance. Lastly, to ensure that the model generalizes well and avoids overfitting, the training dataset used to train the model was of high quality, and regularization methods were used. This approach enabled the FRCNN model to achieve exceptional performance even on unseen data, making it an ideal tool for object detection and recognition tasks.

The FRCNN model was trained on the Food-101 dataset, where the model's training was performed on the training subset, and its performance was subsequently evaluated on the test subset. In evaluating the FRCNN model, the Food-101 dataset was partitioned into 75% for training and 25% for testing purposes. The FRCNN model's performance was assessed using metrics such as accuracy, precision, recall, and F1 score. Accuracy is determined by calculating the number of true positive, true negative, false positive, and false negative predictions made by the model. Precision and recall are measures of the model's ability to correctly identify positive instances, while the F1 score combines both measures to evaluate the model's overall performance.

Fig. 2 Food-101 dataset preview


Fig. 3 Proposed FRCNN model architecture
A higher F1 score indicates better performance, with the model striking a balance between precision and recall. The formulas for accuracy, precision, recall, and F1 score are given below:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (2)$$

$$\text{Precision} = \frac{TP}{TP + FP} \qquad (3)$$

$$\text{Recall} = \frac{TP}{TP + FN} \qquad (4)$$

$$F1 = \frac{2(\text{Precision} \times \text{Recall})}{\text{Precision} + \text{Recall}} \qquad (5)$$
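These four quantities can also be computed directly with scikit-learn; the toy labels below are illustrative stand-ins, not data from the chapter:

```python
# Eqs. (2)-(5) via scikit-learn on a small made-up label set.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```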

3.3 Simulations

The FRCNN model was trained on a high-performance computer with an Intel® Core™ i9-9900K processor, 32 GB of RAM, and an Nvidia GeForce RTX 2080 Ti GPU with 11 GB of GDDR6 memory and 4352 CUDA cores. The training was done using the Anaconda development environment, and the minimum and maximum training times were 942 and 966 s, respectively. This training environment enabled fast training of the FRCNN model. The Food-101 dataset, which contains 101,000 images, was used to train and validate the FRCNN model. The dataset was divided into training and validation sets, with 75% used for training and 25% used for validation. This resulted in 75,750 images being used for training and 25,250 images being used for validation. After training, Fig. 4 shows the training and validation accuracy of the FRCNN model, while Fig. 5 shows its training and validation loss.
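The 75/25 split described above can be reproduced with Keras' ImageDataGenerator, which the chapter's conclusion mentions for pre-processing. The directory path and layout (one sub-folder per class) are assumptions in this sketch:

```python
# 75/25 train/validation split of an image folder with ImageDataGenerator.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.25)

train_gen = datagen.flow_from_directory(
    "food-101/images",        # hypothetical dataset path
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
    subset="training",        # 75% of the images
)
val_gen = datagen.flow_from_directory(
    "food-101/images",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
    subset="validation",      # remaining 25%
)
# model.fit(train_gen, validation_data=val_gen, epochs=...)
```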


Table 1 FRCNN model

Layer (type)                                   Output shape
conv2d (Conv2D)                                (None, 224, 224, 32)
batch_normalization (BatchNormalization)       (None, 224, 224, 32)
conv2d_1 (Conv2D)                              (None, 224, 224, 32)
batch_normalization_1 (BatchNormalization)     (None, 224, 224, 32)
max_pooling2d (MaxPooling2D)                   (None, 112, 112, 32)
conv2d_2 (Conv2D)                              (None, 112, 112, 64)
batch_normalization_2 (BatchNormalization)     (None, 112, 112, 64)
conv2d_3 (Conv2D)                              (None, 112, 112, 64)
batch_normalization_3 (BatchNormalization)     (None, 112, 112, 64)
max_pooling2d_1 (MaxPooling2D)                 (None, 56, 56, 64)
conv2d_4 (Conv2D)                              (None, 56, 56, 128)
batch_normalization_4 (BatchNormalization)     (None, 56, 56, 128)
conv2d_5 (Conv2D)                              (None, 56, 56, 128)
batch_normalization_5 (BatchNormalization)     (None, 56, 56, 128)
max_pooling2d_2 (MaxPooling2D)                 (None, 28, 28, 128)
conv2d_9 (Conv2D)                              (None, 28, 28, 256)
batch_normalization_6 (BatchNormalization)     (None, 28, 28, 256)
conv2d_10 (Conv2D)                             (None, 28, 28, 256)
batch_normalization_7 (BatchNormalization)     (None, 28, 28, 256)
max_pooling2d_3 (MaxPooling2D)                 (None, 14, 14, 256)
conv2d_12 (Conv2D)                             (None, 14, 14, 512)
batch_normalization_8 (BatchNormalization)     (None, 14, 14, 512)
conv2d_13 (Conv2D)                             (None, 14, 14, 512)
batch_normalization_9 (BatchNormalization)     (None, 14, 14, 512)
max_pooling2d_4 (MaxPooling2D)                 (None, 7, 7, 512)
flatten (Flatten)                              (None, 25,088)
dense (Dense)                                  (None, 1024)
batch_normalization_10 (BatchNormalization)    (None, 1024)
dropout (Dropout)                              (None, 1024)
dense_1 (Dense)                                (None, 512)
dense_2 (Dense)                                (None, 256)
dense_3 (Dense)                                (None, 101)

Total params: 34,241,125
Trainable params: 34,235,109
Non-trainable params: 6,016


Fig. 4 FRCNN model training and validation plot

Fig. 5 FRCNN model loss function plot


Table 2 compares the FRCNN model with other models that use CNN or transfer-learning methodologies on the Food-101 dataset. Below are the results of testing the FRCNN model's ability to perform well on new data, obtained by downloading random food images from the internet within the Food-101 dataset classes (Fig. 6).

Accuracy (%)

Bossard et al.

Random Forest

50.76

Özsert Yi˘git and Özyildirim

Convolution Neural Network (CNN)

73.80

Liu et al.

GoogleNet

77.40

VijayaKumari et al.

EfficientNetB0

80.16

Attokaren et al

Inception V3

86.97

Hassannejad et al

Inception V3

88.28

FRCNN model

Convolution Neural Network (CNN)

96.40

Fig. 6 Test results


Overall, these results show the ability of the FRCNN model to generalize well to unseen data, making the model suitable for food recognition tasks.

4 Conclusion

Food image recognition is a complex task that involves training a machine learning model to identify and classify food images. A convolutional neural network (CNN) architecture was used in this study to develop a food image classification model called FRCNN. FRCNN has five layers, each with batch normalization, convolution, and pooling operations to increase accuracy. The model was trained on Food-101, a dataset with 101 food classes and 101,000 images. The performance of the system was further improved by using additional methods such as kernel regularization and kernel initialization, and by pre-processing the data using the ImageDataGenerator function. In the food classification task, the final model attained an accuracy of 96.40%, proving that deep CNNs can be built from scratch and perform just as well as pretrained models.

References

1. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089. https://doi.org/10.3390/app10124089
2. Abiyev, R. H., & Arslan, M. (2019). Head mouse control system for people with disabilities. Expert Systems, 37(1). https://doi.org/10.1111/exsy.12398
3. Abiyev, R. H., & Ma'aitah, M. K. S. (2018). Deep convolutional neural networks for chest diseases detection. Journal of Healthcare Engineering, 1–11. https://doi.org/10.1155/2018/4168538
4. Akhi, A. B., Akter, F., Khatun, T., & Uddin, M. S. (2016). Recognition and classification of fast food images. Global Journal of Computer Science and Technology, 18.
5. Attokaren, D. J., Fernandes, I. G., Sriram, A., Murthy, Y. V. S., & Koolagudi, S. G. (2017). Food classification from images using convolutional neural networks. In TENCON 2017—2017 IEEE region 10 conference. https://doi.org/10.1109/tencon.2017.8228338
6. Bossard, L., Guillaumin, M., & Van Gool, L. (2014). Food-101—Mining discriminative components with random forests. Computer Vision—ECCV, 446–461. https://doi.org/10.1007/978-3-319-10599-4_29
7. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252. https://doi.org/10.3233/jifs-190353
8. Christodoulidis, S., Anthimopoulos, M., & Mougiakakou, S. (2015). Food recognition for dietary assessment using deep convolutional neural networks. In New trends in image analysis and processing—ICIAP 2015 workshops (pp. 458–465). https://doi.org/10.1007/978-3-319-23222-5_56
9. Hassannejad, H., Matrella, G., Ciampolini, P., De Munari, I., Mordonini, M., & Cagnoni, S. (2016). Food image recognition using very deep convolutional networks. In Proceedings of the 2nd international workshop on multimedia assisted dietary management. https://doi.org/10.1145/2986035.2986042
10. Kagaya, H., Aizawa, K., & Ogawa, M. (2014). Food detection and recognition using convolutional neural network. In Proceedings of the 22nd ACM international conference on multimedia. https://doi.org/10.1145/2647868.2654970
11. Kawano, Y., & Yanai, K. (2014). Food image recognition with deep convolutional features. In Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing: Adjunct publication. https://doi.org/10.1145/2638728.2641339
12. Kiourt, C., Pavlidis, G., & Markantonatou, S. (2020). Deep learning approaches in food recognition. Learning and Analytics in Intelligent Systems. https://doi.org/10.1007/978-3-030-49724-8_4
13. Liu, C., Cao, Y., Luo, Y., Chen, G., Vokkarane, V., & Ma, Y. (2016). DeepFood: Deep learning-based food image recognition for computer-aided dietary assessment. Inclusive Smart Cities and Digital Health. https://doi.org/10.1007/978-3-319-39601-9_4
14. Liu, S., Li, S. Z., Liu, X. M., & Zhang, H. B. (2010). Entropy-based action features selection using histogram intersection kernel. In 2010 2nd international conference on signal processing systems. https://doi.org/10.1109/icsps.2010.5555433
15. Matsuda, Y., Hoashi, H., & Yanai, K. (2012). Recognition of multiple-food images by detecting candidate regions. In 2012 IEEE international conference on multimedia and expo. https://doi.org/10.1109/icme.2012.157
16. Mezgec, S., & Koroušić Seljak, B. (2017). NutriNet: A deep learning food and drink image recognition system for dietary assessment. Nutrients, 9(7), 657. https://doi.org/10.3390/nu9070657
17. Özsert Yiğit, G., & Özyildirim, B. M. (2018). Comparison of convolutional neural network models for food image classification. Journal of Information and Telecommunication, 2(3), 347–357. https://doi.org/10.1080/24751839.2018.1446236
18. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907. https://doi.org/10.3390/app112210907
19. VijayaKumari, G., Vutkur, P., & Vishwanath, P. (2022). Food classification using transfer learning technique. Global Transitions Proceedings, 3(1), 225–229. https://doi.org/10.1016/j.gltp.2022.03.027
20. Yanai, K., & Kawano, Y. (2015). Food image recognition using deep convolutional network with pre-training and fine-tuning. In 2015 IEEE international conference on multimedia and expo workshops (ICMEW). https://doi.org/10.1109/icmew.2015.7169816

Face Mask Recognition System-Based Convolutional Neural Network

John Bush Idoko and Emirhan Simsek

Abstract The use of face masks has been widely acknowledged as an effective measure in preventing the spread of COVID-19. Scientists argue that face masks act as a barrier, preventing virus-carrying droplets from reaching other individuals when coughing or sneezing. This plays a crucial role in breaking the chain of transmission. However, many people are reluctant to wear masks properly, and some may not even be aware of the correct way to wear them. Manual inspection of a large number of individuals, particularly in crowded places such as train stations, theaters, classrooms, or airports, can be time-consuming, expensive, and prone to bias or human error. To address this challenge, an automated, accurate, and reliable system is required. Such a system needs extensive data, particularly images, for training purposes. The system should be capable of recognizing whether a person is not wearing a face mask at all, wearing it improperly, or wearing it correctly. In this study, we employ a convolutional neural network (CNN)-based architecture to develop a face mask detection/recognition model. The model achieved an impressive accuracy of 97.25% in classifying individuals into the categories of wearing masks, wearing them improperly, or not wearing masks at all. This automated system offers a promising solution to efficiently monitor and enforce face mask usage in various settings, contributing to public health and safety.

Keywords Face mask · Face detection · Machine learning · Convolutional neural network

J. B. Idoko (B) Applied Artificial Intelligence Research Centre, Department of Computer Engineering, Near East University, Nicosia 99138, Turkey e-mail: [email protected] E. Simsek Department of Computer Engineering, Near East University, Nicosia 99138, Turkey © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things in Education, Studies in Computational Intelligence 1115, https://doi.org/10.1007/978-3-031-42924-8_3


1 Introduction

In recent years, the field of computer vision and image processing has witnessed the emergence of a crucial technology known as face mask detection. The global spread of infectious diseases, particularly the COVID-19 pandemic, has underscored the significance of wearing face masks as a preventive measure. In response to this need, real-time face mask detection systems have been developed to monitor and ensure compliance with mask-wearing guidelines across various settings, including public spaces, workplaces, and transportation hubs [1].

Face detection and recognition have long been areas of interest within the realms of computer science and artificial intelligence. Traditional approaches to face detection employed methods such as the Viola-Jones algorithm and Haar cascades, which aimed to identify faces within images or video frames. However, the advent of deep learning and convolutional neural networks (CNNs) has revolutionized face detection techniques, enabling more accurate and efficient results. These advancements have paved the way for the development of sophisticated face mask detection systems [2–5].

The motivation behind the development of real-time face mask detection systems stems from several factors:

a. Public Health and Safety: Wearing face masks has proven to be an effective measure in curbing the transmission of contagious diseases, particularly respiratory illnesses like COVID-19. Real-time face mask detection systems offer a practical solution for enforcing mask-wearing protocols in public areas, thereby reducing the risk of disease spread and ensuring public health and safety [1].
b. Compliance Monitoring: Monitoring compliance with mask-wearing guidelines manually can be labor-intensive and prone to errors. Automated face mask detection systems provide a reliable means of real-time monitoring, ensuring adherence to mask mandates in various settings, including public venues, workplaces, and educational institutions.
c. Security and Surveillance: Integrating face mask detection technology with existing surveillance systems enhances security measures. By identifying individuals without masks in sensitive areas, such as airports or government facilities, potential security risks can be identified and addressed promptly.
d. Streamlining Processes: Automating the process of face mask detection eliminates the need for manual inspection and intervention, saving time and resources. Real-time systems can be deployed in high-traffic areas, such as airports or shopping centers, to efficiently manage crowds and maintain safety protocols.
e. Technological Advancements: Advancements in computer vision, deep learning algorithms, and hardware capabilities have made real-time face mask detection systems more feasible and accurate. The availability of powerful GPUs and dedicated neural network architectures has facilitated the deployment of these systems on a larger scale, enabling their integration into various domains and applications.

Real-time face mask detection systems represent a technologically advanced solution to the pressing need for enforcing mask-wearing protocols, ensuring public health and safety, and streamlining processes in diverse environments. Leveraging state-of-the-art computer vision techniques and deep learning algorithms, these systems contribute to global efforts in mitigating the spread of infectious diseases and enhancing security measures [6]. Throughout this project, we will delve into the methodologies, implementation, and evaluation of a real-time face mask detection system, showcasing its effectiveness and potential applications.

1.1 Objectives and Scope

The main objectives of the graduation project on real-time face mask detection are as follows:

a. Develop a real-time face mask detection system that is accurate and efficient: The project aims to create a system capable of detecting whether individuals are wearing face masks in real-time. This involves processing live video streams or images and providing immediate feedback on mask compliance.
b. Explore and evaluate different face detection and recognition techniques: The project seeks to investigate various algorithms and models for face detection, analyzing their strengths and limitations. The objective is to identify the most effective approach for accurately detecting faces in real-time.
c. Train and optimize a deep learning model for face mask detection: The project involves training a deep learning model, such as a convolutional neural network (CNN), to classify whether a detected face is wearing a mask or not. Techniques like data augmentation and transfer learning will be used to enhance the model's performance.
d. Implement a real-time face mask detection system: The project aims to develop a software application or system that can seamlessly integrate with video streams or cameras. This system will perform real-time face mask detection, processing frames or images in real-time and providing visual or auditory feedback on mask compliance.

The scope of the graduation project on real-time face mask detection includes the following aspects:

a. Face detection and localization: The project will focus on detecting and localizing faces in images or video frames. It will explore different face detection algorithms, such as Haar cascades or deep learning-based methods like SSD or Faster R-CNN.
b. Mask classification: The project will involve training a deep learning model, typically a CNN, to classify whether a detected face is wearing a mask or not [7]. The model will be trained using a labeled dataset of images and optimized for high accuracy and real-time performance.
c. Real-time implementation: The project will develop a real-time face mask detection system that can process video streams or live camera feeds. The system will provide immediate feedback on mask compliance through visual indicators or alerts, allowing for real-time monitoring and enforcement of mask-wearing guidelines.
d. Evaluation and performance analysis: The project will evaluate the performance of the face mask detection system in terms of accuracy, speed, and robustness. It may compare the developed system with existing approaches or datasets to assess its effectiveness.
e. Software implementation: The project will primarily focus on the software implementation aspects of the real-time face mask detection system. While hardware requirements and considerations may be discussed, detailed hardware design or development is beyond the project's scope.

1.2 Significance of Real-Time Face Mask Detection

Real-time face mask detection holds great significance in various fields and scenarios, particularly in relation to public health and safety. The following are the key reasons why real-time face mask detection is important:

a. Disease Prevention and Control: Real-time face mask detection systems are crucial in preventing and controlling the spread of infectious diseases, especially respiratory illnesses like COVID-19 [8]. These systems quickly identify individuals who are not wearing masks in public spaces, workplaces, or crowded areas, thereby enforcing mask-wearing guidelines and reducing the risk of disease transmission.
b. Monitoring and Enforcement of Compliance: Automated face mask detection systems enable real-time monitoring of mask compliance in different settings. They provide a reliable method for ensuring that people adhere to mask-wearing guidelines, allowing authorities, employers, or organizations to identify and address instances of non-compliance promptly. This proactive approach helps maintain a safer environment for individuals and reduces the need for manual monitoring.
c. Public Safety and Confidence: Real-time face mask detection systems contribute to enhancing public safety by promoting the use of face masks in high-risk areas. By creating a visible and efficient mechanism for enforcing mask mandates, these systems instill confidence among individuals, reassuring them that necessary measures are in place to protect their well-being.
d. Efficient Resource Allocation: Automated face mask detection systems streamline the process of monitoring mask compliance, saving time and resources. Manual inspection and intervention can be resource-intensive and prone to human error. Real-time systems optimize resource allocation by automating the detection process, allowing human resources to focus on other critical tasks.
e. Security and Risk Mitigation: Integrating face mask detection technology with existing surveillance systems enhances security measures in various environments. By identifying individuals without masks in sensitive areas such as airports, train stations, or government facilities, potential security risks can be promptly addressed. This integration helps safeguard public safety and supports effective risk mitigation strategies.
f. Real-Time Decision-Making: Real-time face mask detection systems provide immediate feedback on mask compliance, enabling timely decision-making. This information can be utilized by authorities, facility managers, or public health officials to implement prompt interventions, such as issuing alerts, enforcing penalties, or adjusting crowd control measures. These proactive actions contribute to better overall management of public spaces.
g. Public Health Education and Awareness: Real-time face mask detection systems serve as educational tools by promoting public health awareness and education. They create visible reminders of the importance of wearing masks and contribute to a culture of responsible behavior during disease outbreaks. By continuously reinforcing the message, these systems help educate the public and foster long-term habits of mask usage.

In summary, real-time face mask detection systems have significant implications for disease prevention, compliance monitoring, public safety, efficient resource allocation, security, real-time decision-making, and public health education. By leveraging computer vision and deep learning technologies, these systems play a crucial role in ensuring mask compliance and mitigating the spread of infectious diseases, contributing to a safer and healthier environment for individuals and communities.

The remainder of the research is organized as follows: Sect. 2 presents the literature review; dataset analysis is described in Sect. 3; Sect. 4 presents the proposed face mask recognition model; and the conclusion is presented in Sect. 5.

2 Literature Review

The field of face recognition and detection has advanced significantly with the introduction of deep learning algorithms and convolutional neural networks (CNNs) [9–11]. These techniques enable accurate and efficient identification and localization of faces in images or video frames. This section provides a summary of the main techniques explored in the literature for face recognition and detection.

a. Traditional Approaches: Traditional approaches to face detection and recognition include methods like the Viola-Jones algorithm and Haar cascades. These methods employ machine learning techniques and features, such as Haar-like features and AdaBoost, to detect faces based on specific patterns or characteristics.
b. Local Binary Patterns (LBP): Local Binary Patterns is a texture descriptor used for face detection and recognition. It captures local texture patterns in an image by comparing the pixel values of neighboring pixels. LBP is favored for its computational simplicity and robustness.

c. Histogram of Oriented Gradients (HOG): HOG is a popular technique for object detection, including face detection. It calculates the distribution of gradient orientations in an image to extract local gradient information. HOG features are then used to train a classifier, such as Support Vector Machines (SVM), for face detection [12].
d. Deep Learning-Based Approaches: Deep learning techniques, particularly CNNs, have revolutionized face recognition and detection. CNNs learn hierarchical representations of faces, allowing for better feature extraction and discrimination. Several noteworthy deep learning-based face detection models include:
   a. Multi-task Cascaded Convolutional Networks (MTCNN): MTCNN is a widely used face detection algorithm that employs a cascaded architecture of CNNs [13–16]. It simultaneously performs face detection, facial landmark localization, and face alignment, making it highly efficient.
   b. Single Shot MultiBox Detector (SSD): SSD is an object detection framework that can be adapted for face detection. It uses a single CNN for both generating region proposals and performing classification, enabling real-time face detection with high accuracy [17].
   c. You Only Look Once (YOLO): YOLO is another real-time object detection algorithm that can be applied to face detection. It divides an image into a grid and directly predicts bounding boxes and class probabilities from the grid cells, achieving fast and accurate face detection [18].
   d. Cascade Classifiers: Cascade classifiers, such as the Haar cascades implementation in OpenCV, have been widely employed for face detection. These classifiers utilize multiple stages, each consisting of weak classifiers, to progressively reject non-face regions [19]. Cascade classifiers are known for their fast inference speed and robust performance.

The literature on face recognition and detection techniques provides valuable insights into the strengths and limitations of different approaches. The choice of technique depends on factors such as accuracy requirements, computational efficiency, and real-time performance. In the context of real-time face mask detection, selecting an appropriate face detection technique is crucial as it forms the basis for subsequent mask classification. The literature review helps identify the most suitable techniques to be explored and evaluated in the graduation project, considering their performance, reliability, and potential for real-time implementation.

Face mask detection has gained significant attention in recent times, and several methods have been proposed in the literature to address this task. These methods leverage computer vision techniques and deep learning algorithms to detect and classify whether a person is wearing a face mask or not. Here, we provide an overview of some prominent existing methods for face mask detection:

a. Convolutional Neural Networks (CNNs): CNNs have been widely employed for face mask detection. These deep learning models are trained on large datasets of annotated images containing individuals with and without masks [20]. The CNNs learn to extract features from the face region and classify whether a mask is present or not. Transfer learning techniques can also be applied by fine-tuning pre-trained CNN models such as VGGNet, ResNet, or MobileNet for face mask detection.
b. Haar Cascades: Haar cascades, a popular technique for face detection, can be adapted for face mask detection as well. Haar cascades utilize machine learning algorithms to detect specific patterns in images [21]. By training Haar cascades on datasets containing masked and unmasked faces, the cascades can be used to detect faces and subsequently classify the presence of a face mask.
c. Support Vector Machines (SVM): SVMs have been employed as classifiers in face mask detection systems. These machine learning models are trained on features extracted from face images, such as color, texture, or shape information. SVMs learn to classify whether an individual is wearing a face mask based on the extracted features [22].
d. YOLO (You Only Look Once): YOLO is an object detection algorithm that has been utilized for face mask detection. YOLO divides an image into a grid and predicts bounding boxes and class probabilities directly from the grid cells. By training YOLO on annotated datasets, it can detect faces and classify the presence of face masks in real-time [23].
e. Faster R-CNN (Region-based Convolutional Neural Networks): Faster R-CNN is another popular object detection framework that has been adapted for face mask detection [24]. It employs a region proposal network to generate candidate regions containing faces, and then classifies whether each region contains a face mask or not. Faster R-CNN provides accurate detection and classification of face masks.
f. Ensemble Methods: Some studies have proposed ensemble methods for face mask detection, combining multiple classifiers or models to improve overall performance. These methods utilize the strengths of different algorithms, such as CNNs, SVMs, or Haar cascades, to achieve robust face mask detection results [25].

It is important to note that the performance and accuracy of face mask detection methods may vary depending on factors such as dataset quality, training procedures, and environmental conditions. The selection of a suitable method should consider real-time performance, accuracy, computational requirements, and the specific context of application. Furthermore, ongoing research in the field continues to explore new techniques and improvements to existing methods, paving the way for more effective and efficient face mask detection systems.


3 Data Preprocessing

Data preprocessing refers to the process of preparing a dataset to make it suitable for analysis or machine learning algorithms. During this process, various operations are performed on the dataset to enhance its quality. Data preprocessing is also crucial in the context of face mask detection [26]. The data preprocessing steps involved in the proposed model include:

a. Data collection: The first step is to gather a dataset comprising images of both masked and unmasked faces, i.e., images of individuals wearing masks as well as those without masks.
b. Data cleaning: The dataset needs to be cleaned to reduce noise and remove unwanted elements. For instance, corrupted or low-quality images were eliminated. Additionally, regions outside of the facial area can be cropped.
c. Labeling: Each image in the dataset was labeled as either "masked" or "unmasked." This enables the assignment of accurate labels to the data for training purposes.
d. Data balancing: It is important to have a balanced distribution of masked and unmasked images in the training dataset. If there is an imbalance, techniques such as data augmentation or data reduction can be employed to address this issue.
e. Data resizing: The explored images were resized to a consistent dimension. Typically, images are resized to a standard size to ensure consistency during the training and testing phases.
f. Data normalization: The pixel values of the images usually range from 0 to 255. Data normalization ensures better training outcomes by scaling the pixel values to an appropriate range, such as [0, 1], or by standardizing the pixel values.

These preprocessing steps enhance the quality of the dataset obtained for the face mask detection project and contribute to achieving better results. They help to highlight the dataset's features and enable machine learning algorithms to work more effectively [27]. In the data gathering process, various sources were utilized to obtain images of both masked and unmasked faces, including real-time images obtained in public spaces, existing datasets, and pre-recorded videos. The data gathering process enables the acquisition of accurate, diverse, and reliable datasets for this study. The acquired datasets were utilized to train and validate the proposed model to accurately detect the wearing status of masks. The employed procedure is demonstrated in Fig. 1.

Fig. 1 MaskedFaced-Net data tree

The proposed model was trained using a diverse set of images illustrating different ways of wearing masks, both correctly and incorrectly. The explored dataset comprised 15,000 images. The dataset was carefully organized into categories, including the "Correctly Masked Face Dataset (CMFD)" representing proper mask usage, and the "Incorrectly Masked Face Dataset (IMFD)" consisting of various incorrect mask placements, such as no mask, mouth and chin covered, and nose and mouth covered [2]. The IMFD category, including no mask, mouth and chin uncovered, and nose and mouth uncovered, contained approximately 51% of the entire dataset; the CMFD category, including a properly covered mouth, nose, and chin, constituted approximately 49%. This distribution, expressed in Fig. 2, allows the system to learn and accurately identify different mask-wearing scenarios.

Fig. 2 Dataset labelling

Labeling is a critical step in face mask detection projects to train machine learning models and accurately detect the wearing status of masks. Proper and consistent labeling enables the model to perform effectively and achieve successful results in real-world applications.
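To make the cleaning, resizing, normalization, and labeling steps concrete, the following is a minimal preprocessing sketch; the Dataset folder name, its one-subfolder-per-label layout, and the use of OpenCV and NumPy are illustrative assumptions, not the exact pipeline of this study:

```python
# A minimal preprocessing sketch (assumed layout: one subfolder per label).
import os
import cv2
import numpy as np

DATASET_DIR = "Dataset"      # hypothetical root folder of labeled images
IMG_SIZE = (100, 100)        # standard dimension used by the proposed model

class_dirs = sorted(
    d for d in os.listdir(DATASET_DIR)
    if os.path.isdir(os.path.join(DATASET_DIR, d))
)

data, labels = [], []
for idx, label in enumerate(class_dirs):
    class_dir = os.path.join(DATASET_DIR, label)
    for fname in os.listdir(class_dir):
        img = cv2.imread(os.path.join(class_dir, fname))
        if img is None:      # data cleaning: skip corrupted/unreadable files
            continue
        img = cv2.resize(img, IMG_SIZE)
        data.append(img.astype("float32") / 255.0)  # normalize to [0, 1]
        labels.append(idx)   # integer class index taken from the folder order

X = np.array(data)           # shape: (num_images, 100, 100, 3)
y = np.array(labels)
```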


Fig. 3 Dataset directory structuring

To label and organize images based on categories, follow these steps:

a. Create a folder for your labeled images, like "Dataset" or "LabeledImages".
b. Make subfolders within the main folder for each label or category.
c. Move or copy the images into their respective subfolders based on their labels. For example, place images of masked faces in the "Mask" subfolder and images of each other category in its corresponding subfolder.
d. This process establishes a folder structure that efficiently organizes your labeled images into distinct categories.
e. Each subfolder represents a label, and the images inside belong to that specific label.
f. This folder structure simplifies access and processing of the images, whether for training a machine learning model or conducting image analysis, as demonstrated in Fig. 3; a loading sketch follows this list.
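Assuming that folder-per-label layout, a hypothetical sketch of reading it with Keras (the chapter later mentions TensorFlow for data handling; the helper shown here infers labels from the subfolder names):

```python
# Hypothetical sketch: the folder-per-label layout can be read directly by Keras.
import tensorflow as tf

dataset = tf.keras.utils.image_dataset_from_directory(
    "Dataset",               # root folder created in step a
    image_size=(100, 100),   # images resized on load
    batch_size=32,
)
print(dataset.class_names)   # labels inferred from the subfolder names
```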

4 Proposed Model (CNN)

The proposed CNN consists of two parts: the convolutional part and the fully connected part. The purpose of the convolutional part is to extract and identify various features from the input image during the feature extraction process. The fully connected part then takes the results obtained from the convolutional layers and uses them to predict the image category based on the extracted features [28].

The proposed model follows a specific structure. First, the convolutional layers extract the features from the input image. Then, pooling layers are employed to reduce the size of the convolved feature maps, which helps in reducing computational costs. Finally, fully-connected layers are used to connect the neurons between hidden layers. In this model, the kernel size for the convolutional layers is fixed at 3 × 3, determining the height and width of the 2D convolutional window. The stride, which denotes the step size of the window during convolution, is set to 1. Additionally, a padding of 1 is applied to all the convolutional layers. The filter dimensions vary for each layer, with the first layer having 16 filters, the second layer having 32 filters, and the third layer having 64 filters. The input image dimensions for this model are set to 100 × 100 × 3, representing the height, width, and number of color channels (RGB) of the image. The activation functions used in the model are the Rectified Linear Unit (ReLU) for intermediate layers and Sigmoid for the output layer. After the convolutional process, the feature maps are flattened to form a 1D vector, which serves as the input to the fully connected layers. The fully connected layers consist of two hidden neural network layers. The number of neurons in each layer is calculated using the pyramidal rule formula, which is a common approach for determining the number of neurons in hidden layers [29]. Overall, this model follows a sequential flow, starting with feature extraction using convolutional layers, followed by pooling and fully connected layers. The activation functions and filter dimensions are carefully chosen to achieve effective feature representation and accurate prediction of image categories, as shown in Fig. 4.

Fig. 4 Properties of the proposed model
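The description above maps naturally onto a Keras Sequential model. The sketch below follows the stated layer sizes; the pooling placement and the hidden-layer widths (128 and 32) are illustrative assumptions, since the exact pyramidal-rule values are not given in the text:

```python
# A sketch of the described architecture (layer counts and kernel settings
# follow the text; dense widths are assumptions).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100, 100, 3)),
    layers.Conv2D(16, (3, 3), strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                       # 1D vector fed to the dense part
    layers.Dense(128, activation="relu"),   # two hidden fully-connected layers
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary mask / no-mask output
])
model.summary()
```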

4.1 Splitting of the Data into Training and Test Sets

This refers to the process of dividing the dataset into two separate subsets: the training set and the test set. This is a common practice in machine learning and model evaluation to assess the performance of a model. The process involves:

a. Dataset Preparation: Ensure that your dataset is properly prepared, with features (input data) and corresponding labels (output data) properly aligned.
b. Import Libraries: In your software, import the necessary libraries for data splitting. For example, you can use the train_test_split function from the scikit-learn library in Python.
c. Splitting the Data: Use the data splitting function to divide your dataset into two parts: the training set and the test set. The training set is used to train the machine learning model, while the test set is used to evaluate its performance.
d. Specify the Split Ratio: Typically, you need to specify the ratio or percentage of data to allocate to the test set. For example, you may choose to allocate 80% of the data to the training set and 20% to the test set [30].
e. Randomization: It is important to randomly shuffle the data before splitting to ensure that the training and test sets represent a diverse and unbiased sample of the overall dataset.

Splitting the data into training and testing sets is a crucial step in the training process of a face mask detection CNN. This division is essential to evaluate the model's performance on unseen data and assess its ability to generalize. The steps to accomplish this are:

a. Importing Libraries: Begin by importing the necessary libraries required for data splitting. In Python, popular libraries like TensorFlow (tf.data) can be utilized for this purpose. These libraries provide convenient functions and methods to handle data splitting.
b. Splitting the Data: Utilize the appropriate function or method from the chosen library to divide the data into training and testing sets. The common approach is to perform a random split, where a certain percentage of the data is allocated for testing/validation. A frequently used split ratio is 80% for training and 20% for testing.

By splitting the data, we create two distinct sets: the training set, which is used to train the face mask detection CNN, and the testing set, which is used to evaluate the model's performance [31]. This separation ensures that the model is assessed on data it has not encountered during training, providing a reliable measure of its effectiveness in real-world scenarios. The training set is used to update the model's parameters through iterations, allowing it to learn the patterns and characteristics of face mask presence. Meanwhile, the testing set serves as an independent dataset to assess how well the trained model generalizes to unseen data. Splitting the data into train and test sets enables you to gauge the model's accuracy, precision, recall, and other evaluation metrics. By evaluating the model on unseen data, you gain insights into its performance and can make informed decisions on any necessary modifications or improvements.

In summary, splitting the data into train and test sets is a fundamental step in training a face mask detection CNN. It ensures the model's performance evaluation on unseen data and aids in assessing its ability to generalize [32]. By following the outlined steps and leveraging appropriate libraries, you can effectively divide the data, train the model, and evaluate its performance for accurate face mask detection.
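As a minimal sketch of the 80/20 split described above, using the scikit-learn train_test_split function named in the steps (the X and y arrays are those built in the earlier preprocessing sketch):

```python
# An 80/20 random split with shuffling, as described in steps c-e.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y,                # arrays produced in the preprocessing sketch
    test_size=0.2,       # 20% held out for testing
    shuffle=True,        # randomize order before splitting
    random_state=42,     # fixed seed for reproducibility
)
```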


4.2 Activation Function

An activation function is a mathematical function used to compute the outputs of neurons in an artificial neural network. It takes the input values of each neuron, processes them, and maps the result to a specific range, acting as a "function" that activates the neuron [20]. In the context of the face mask detection project, activation functions are used in deep learning models to introduce non-linearity to the neurons, enabling the model to capture more complex data distributions. There are several types of activation functions, but the most commonly used ones include sigmoid, tanh, and ReLU (Rectified Linear Unit).

a. Sigmoid: The sigmoid function is an S-shaped curve that squeezes the output between 0 and 1. It is often used in binary classification problems.
b. Tanh: The tanh function maps the output to the range between −1 and 1. It includes both positive and negative values and has a wider output range.
c. ReLU (Rectified Linear Unit): The ReLU function outputs the input value if it is greater than zero, and zero otherwise.

In this study, we utilized the ReLU activation function. The choice of activation function can impact the performance and learning capabilities of the model: selecting the appropriate activation function allows the model to better represent the data and achieve better results. In this face mask detection project, activation functions are an important component for introducing non-linearity when computing the outputs of neurons in the deep learning model. This helps the model better understand complex data distributions and successfully perform tasks such as face mask detection.
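For reference, the three functions written out with NumPy; this is a small illustrative sketch, not part of the model code itself:

```python
# The three activation functions described above, written out with NumPy.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes output into (0, 1)

def tanh(x):
    return np.tanh(x)                 # output in (-1, 1)

def relu(x):
    return np.maximum(0, x)           # passes positives, zeroes out negatives

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x), tanh(x), relu(x))
```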

4.3 Convolutional Layers

Convolutional layers are commonly used layers in deep learning models, and are particularly effective for image processing tasks. These layers apply a series of filters to a given image to extract feature maps. The functions of the convolutional layers in this study include:

a. Feature Extraction: Convolutional layers use filters to extract different features from the input image. These features can represent the edges, corners, patterns, or other relevant details in the image. This allows capturing the information necessary for detecting the presence or absence of a face mask.
b. Feature Maps: Convolutional layers generate feature maps as a result of applying each filter. These maps contain pixel intensity values representing each feature, leading to maps that represent higher-level features.
c. Hierarchical Learning: Convolutional layers in deep learning models enable hierarchical learning of features. The initial layers learn simple features (edges, corners), while subsequent layers can represent more complex features (facial shapes, mask presence). This allows the identification of increasingly complex and meaningful features.

Convolutional layers are a critical component of the face mask detection project for image processing and feature extraction [25]. They enable the extraction of the different features required for face mask detection and improve the accuracy of the deep learning model.

4.4 Fully-Connected Layers

In addition to the convolutional layers, many face mask detection models also utilize fully-connected layers. Fully-connected layers are a type of neural network layer where each neuron is connected to every neuron in the previous layer. These layers help in capturing high-level representations and making the final classification decisions. After the convolutional layers extract features from the input image, the output is flattened into a vector and fed into one or more fully-connected layers. These layers have weights associated with each connection, which are learned during the training process. The fully-connected layers apply transformations to the input data, gradually reducing its dimensionality and extracting more abstract features.

The number of neurons in the fully-connected layers can vary depending on the complexity of the problem and the specific architecture of the model [19]. These layers can have different activation functions, such as ReLU (Rectified Linear Unit) or sigmoid, to introduce non-linearity into the network and enable better learning capabilities. The output of the fully-connected layers is then passed through a final layer, typically a softmax layer, which produces the probability distribution over the different classes, i.e., mask and no mask. This distribution indicates the likelihood of the input image belonging to each class.

The fully-connected layers play a crucial role in capturing complex relationships and patterns in the features extracted by the convolutional layers. They enable the model to learn discriminative representations and make accurate predictions for face mask detection. The weights of the fully-connected layers are learned through the optimization process, such as gradient descent, during the training phase of the model. By incorporating fully-connected layers into the architecture, face mask detection models can achieve a higher-level understanding of the input data and make informed decisions about the presence or absence of face masks. These layers contribute to the overall performance and accuracy of the model by capturing more abstract and context-aware representations of the face images.


4.5 Face Detection and Cropping

Face detection and cropping are crucial steps in various computer vision applications, enabling the identification and extraction of faces from images or video frames. The face detection model used in this study is a deep learning model that has been trained specifically for this purpose. The model is loaded using the cv2.dnn.readNetFromCaffe() function, which reads the model architecture and weights from files, allowing the model to be used for face detection tasks. The face detection model is based on a convolutional neural network (CNN) architecture known as the Single Shot MultiBox Detector (SSD) [18]. This architecture has proven to be effective in detecting objects, including faces, in images or video frames. The model has been trained on a large dataset of faces, enabling it to learn and recognize facial features and patterns.

To perform face detection using the loaded model, an image is passed through the model, and the detection results are obtained. The model predicts bounding boxes that enclose the detected faces and provides a confidence score for each detection, indicating the likelihood of a face being present in the predicted region. To filter out low-confidence detections and ensure reliable results, a confidence threshold of 0.5 is applied; any detection below this threshold is discarded. If a face with sufficient confidence is detected, the model extracts the region of interest (ROI) corresponding to the face from the original image. In summary, the face detection model is an instance of a pre-trained deep learning model specifically designed for face detection, as captured in Fig. 5.

Fig. 5 Face detection procedure
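A condensed sketch of this detection pipeline follows. The prototxt and input file names are assumptions (the caffemodel name matches the OpenCV res10 SSD model cited in Sect. 4.6), and the mean-subtraction values are the ones commonly used with that model:

```python
# A sketch of the SSD-based face detection and cropping step.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe(
    "deploy.prototxt",                            # assumed architecture file
    "res10_300x300_ssd_iter_140000_fp16.caffemodel",
)

image = cv2.imread("frame.jpg")                   # hypothetical input frame
h, w = image.shape[:2]

# Blob from image: resize to 300x300 and subtract the per-channel means.
blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 117.0, 123.0))
net.setInput(blob)
detections = net.forward()                        # shape: (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence < 0.5:                          # drop low-confidence boxes
        continue
    box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
    x1, y1, x2, y2 = box.astype(int)
    face_roi = image[y1:y2, x1:x2]                # crop the face region (ROI)
```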


4.6 Blob from Image (RGB Mean Subtraction)

Blob from Image (RGB mean subtraction) is a technique used in image processing and computer vision tasks, including face mask detection. It is employed to normalize the pixel values in an image and is applied during the data preprocessing stage. This phase constitutes:

a. RGB Mean Subtraction: This technique involves subtracting the mean values of the RGB (Red, Green, Blue) components from each pixel in the image. Subtracting the average values from the color values of each pixel captures relative color differences. This method is used to correct the overall color balance in the image and enhance color variations.
b. Blob from Image: The "Blob from Image" process involves converting the image into a format called a "blob." A blob is a data structure that represents the size, shape, and content of an image. The Blob from Image operation rescales the image to a specific size and adjusts the channel order and color balance according to a predefined scheme.

By utilizing Blob from Image (RGB mean subtraction) in face mask detection projects, image data is normalized, resulting in more consistent and reliable outcomes during the processing stage [10]. This technique is employed during the data preprocessing step, before feeding the image data into the model, aiding in the accurate detection of the mask-wearing status.

The accuracy of the model hinges on the quality of the input data it receives. To ensure precise predictions, it is vital to cleanse the data by eliminating flawed images that could lead to incorrect outcomes. Additionally, resizing the images to a standardized dimension of 100 by 100 is crucial for achieving optimal results. This practice not only prevents memory errors but also upholds accuracy, especially when dealing with limited hardware resources. Furthermore, converting the images into a NumPy array format enhances computational speed, enabling quicker processing of the data. To streamline the data processing pipeline, various image manipulation techniques are employed, including face detection, cropping, and blob extraction. Face detection plays a significant role, and it is accomplished using the highly efficient Single Shot MultiBox Detector (SSD) framework. This framework leverages a pre-trained deep neural network model, the floating-point 16 version (res10_300x300_ssd_iter_140000_fp16.caffemodel) developed by OpenCV. It has been meticulously trained on a vast dataset and is adept at identifying facial features and localizing faces within an input image. By utilizing this advanced framework, accurate face detection can be achieved, serving as a crucial step in the overall data processing pipeline, as shown in Fig. 6.


Fig. 6 Blob and OpenCV functions

4.7 The Architecture of the Application

The application architecture follows a structured workflow to provide a seamless user experience:

a. Initialization: The main window, "mainWindow," is created as a QWidget using PyQt. It is configured with a title, icon, and fixed size to establish the application's appearance and dimensions.
b. User Interface Elements: The main window incorporates various UI elements to enhance functionality and visual representation. A QLabel is added to display the application title, providing a clear indication of the application's purpose. Additionally, a QPushButton named "cameraButton" is created, serving as the control button to toggle the camera on and off. A QLabel named "screen" is utilized as the canvas to display the video feed or captured frames.
c. Layout Configuration: The layout is carefully structured to ensure an organized and visually appealing arrangement of the UI elements. The label, cameraButton, and screen widgets are positioned using appropriate layout managers to maintain consistency and responsiveness.
d. Camera Button Functionality: The "cameraButtonClick" function is triggered when the camera button is clicked by the user. This function initiates the camera feed by creating an instance of the "VideoCapture" class. It establishes the connection between the captured frame signal and the "updateImage" slot to enable real-time updates. The capture process is then started to begin retrieving video frames.
e. Image Updating: The "updateImage" function acts as a slot that receives the captured frame signal. It performs the necessary image manipulations, such as converting the frame from BGR to RGB format, creating a QPixmap object from the image, scaling it to fit the screen, and finally updating the QLabel with the updated QPixmap. This process ensures that the video feed or processed frames are displayed seamlessly in real-time.

This application demonstrates the integration of PyQt and OpenCV to create a user-friendly graphical interface [4]. It provides the functionality to open the camera, capture video frames, conduct face detection and mask recognition using OpenCV's capabilities, and display the processed frames instantly. By combining the strengths of both frameworks, this application delivers an interactive and efficient face mask recognition solution, as demonstrated in Fig. 7.

Fig. 7 Architecture of the application
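A condensed PyQt5 skeleton of this architecture is sketched below. The widget names follow the description above, while the window size, layout choice, and the omitted capture thread and mask-recognition call are assumptions made for brevity:

```python
# A minimal PyQt5 skeleton of the described GUI (capture thread omitted).
import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QLabel,
                             QPushButton, QVBoxLayout)
from PyQt5.QtGui import QImage, QPixmap

class MainWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Face Mask Recognition")   # title (step a)
        self.setFixedSize(640, 520)                    # assumed fixed size

        self.title = QLabel("Face Mask Recognition")   # application title label
        self.screen = QLabel()                         # canvas for video frames
        self.cameraButton = QPushButton("Open Camera")
        self.cameraButton.clicked.connect(self.cameraButtonClick)

        layout = QVBoxLayout(self)                     # layout (step c)
        layout.addWidget(self.title)
        layout.addWidget(self.screen)
        layout.addWidget(self.cameraButton)

    def cameraButtonClick(self):
        # Would create the VideoCapture instance and connect its frame
        # signal to updateImage (step d); omitted in this sketch.
        pass

    def updateImage(self, frame_rgb):
        # frame_rgb: an RGB numpy array emitted by the capture thread (step e).
        h, w, ch = frame_rgb.shape
        qimg = QImage(frame_rgb.data, w, h, ch * w, QImage.Format_RGB888)
        self.screen.setPixmap(QPixmap.fromImage(qimg).scaled(self.screen.size()))

app = QApplication(sys.argv)
win = MainWindow()
win.show()
sys.exit(app.exec_())
```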

4.8 Training of the Face Mask Detection Model

The training of a face mask detection model involves the process of training a machine learning or deep learning algorithm to accurately classify whether a person in an image or video is wearing a face mask or not. This process typically consists of the following steps, as depicted in Fig. 8:

Fig. 8 Training procedure of the face mask detector

a. Dataset Collection: The first step in training a face mask detection model is to collect a diverse and representative dataset. This dataset should contain a sufficient number of images or video frames that include individuals wearing face masks and individuals without masks. The dataset can be obtained from various sources, such as publicly available datasets, online repositories, or by collecting and annotating data specifically for the task [6].
b. Data Preprocessing: Once the dataset is collected, it needs to be preprocessed to ensure consistency and quality. Preprocessing steps may include resizing the images to a uniform size, normalizing pixel values, and augmenting the dataset by applying transformations like rotations, translations, or flips. Additionally, the dataset needs to be properly labeled or annotated, indicating the presence or absence of face masks in each image or frame.
c. Model Selection: The next step is to select an appropriate model architecture for face mask detection. This can range from traditional machine learning algorithms like SVMs or decision trees to deep learning models such as convolutional neural networks (CNNs). The choice of model depends on factors such as dataset size, complexity of the problem, computational resources, and desired performance.

40

J. B. Idoko and E. Simsek

d. Model Training: With the dataset prepared and the model architecture selected, the training process begins. During training, the model learns to recognize and differentiate between masked and unmasked faces by optimizing its internal parameters. This is done by feeding batches of labeled training data into the model and adjusting the model's weights based on the calculated loss. The optimization is typically performed using gradient-based optimization algorithms like stochastic gradient descent (SGD) or Adam.
e. Hyperparameter Tuning: The training process involves setting various hyperparameters that govern the behavior and performance of the model. These include the learning rate, batch size, number of epochs, regularization techniques, and network architecture-specific parameters. Hyperparameter tuning is an iterative process of selecting optimal values for these parameters to improve the model's performance.
f. Model Evaluation: Once the training is completed, the trained model is evaluated using a separate validation dataset. The evaluation metrics may include accuracy, precision, recall, F1-score, or others, depending on the specific requirements of the face mask detection task. This evaluation helps assess the model's performance and identify any potential issues like overfitting or underfitting.
g. Fine-tuning and Iteration: Based on the evaluation results, further iterations may be required to improve the model's performance. This can involve fine-tuning the hyperparameters, adjusting the model architecture, collecting additional data, or applying advanced techniques like transfer learning or ensemble methods [29]. The iterative process continues until the desired level of performance is achieved. Steps d–f are sketched in code at the end of this section.

Training a face mask detection model is an iterative and resource-intensive process that requires careful dataset preparation, model selection, and hyperparameter tuning. The success of the training process relies on the availability of high-quality labeled data, proper choice and configuration of the model architecture, and sufficient computational resources for efficient training.

Overall, the PyQt platform provides a powerful and flexible framework for developing GUI applications using Python. Its integration with the Qt framework and extensive set of features make it a popular choice for building cross-platform desktop applications with rich user interfaces [9]. The application interface of the proposed model is shown in Fig. 9. When the open camera button is clicked, the camera becomes activated to capture any object in view.
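The following is a minimal sketch of steps d–f for the proposed CNN. The optimizer, loss, and 20-epoch setting follow the text; the batch size is an illustrative assumption, and model, X_train, X_test, y_train, and y_test refer to the earlier sketches:

```python
# Compile with a chosen optimizer and train (steps d and e).
model.compile(
    optimizer="adam",            # alternatives compared later: sgd, adagrad, rmsprop
    loss="binary_crossentropy",  # binary masked / unmasked target
    metrics=["accuracy"],
)
history = model.fit(
    X_train, y_train,
    validation_data=(X_test, y_test),
    epochs=20,
    batch_size=32,               # illustrative hyperparameter
)

# Evaluate on the held-out set (step f).
test_loss, test_acc = model.evaluate(X_test, y_test)
```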

Fig. 9 Application interface of the proposed system

4.9 Simulation of the Proposed Model

4.9.1 Image with a Correctly Masked Face

This image portrays an individual responsibly wearing a face mask as a protective measure. The person is seen wearing a well-fitted mask that completely covers their nose, mouth, and chin. The mask appears to be made of breathable fabric, with multiple layers, providing an effective barrier against airborne particles [2]. The mask is securely fastened behind the ears with elastic bands, ensuring a snug fit, as captured in Fig. 10. The individual is demonstrating a responsible attitude towards public health by following the recommended guidelines and contributing to the collective effort of preventing the spread of infectious diseases.

Fig. 10 Face mask recognition application (correctly masked face)

4.9.2 Image with an Incorrectly Masked Face

In this image, an individual is depicted without a face mask, leaving their nose and mouth uncovered, as shown in Fig. 11. It is crucial to highlight that not wearing a face mask in public settings poses a potential risk to both the individual and those around them. Without a mask, the person is more susceptible to inhaling or exhaling respiratory droplets that may contain harmful pathogens [7]. It is essential for individuals to be aware of the importance of wearing face masks as a preventive measure, particularly in crowded or enclosed environments, to minimize the risk of transmitting infectious diseases and to protect the well-being of the community.

Fig. 11 Face mask recognition application (incorrectly masked face)

4.9.3 Optimization Algorithm

The optimization algorithm plays a crucial role in determining the values of the weights that minimize the error between the model's predicted outputs and the actual outputs. These algorithms, also known as optimizers, have a significant impact on the correctness, accuracy, and computational efficiency of deep learning models. In this section, we explored several optimizers, including Adam, SGD, Adagrad, and RMSProp, and analyzed their practical performance over 20 epochs. These optimizers are integrated into the face mask detection model as part of the compilation step. Their primary purpose is to enhance the accuracy of the system and improve its speed and efficiency [28]. By experimenting with different optimizers on a dataset consisting of 15,000 images, we were able to assess their impact on the model's performance. The system's performance in terms of accuracy is computed using:

Recog_rate = (Number of items correctly classified / Total number of items) × 100%    (1)
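Equation (1) translates directly into code; the counts below are hypothetical and only illustrate the calculation:

```python
# Direct translation of Eq. (1).
def recognition_rate(num_correct: int, num_total: int) -> float:
    return num_correct / num_total * 100.0

print(recognition_rate(973, 1000))  # hypothetical counts -> 97.3
```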

The results obtained from running the proposed model with the optimizers for 20 epochs are presented in Table 1 and Figs. 12 and 13. These results include the accuracy achieved by the model, the loss observed during the training process, and the computation time required for each optimizer. By examining these metrics, we can gain insights into the performance and effectiveness of each optimizer in the context of face mask detection. The accuracy metric provides an indication of how well the model can correctly classify images with and without face masks. The loss metric, on the other hand, quantifies the discrepancy between the predicted and actual outputs, helping us understand the model's training progress. Finally, the computation time metric measures the speed at which the model can process the dataset using different optimizers. By analyzing the results obtained from these experiments, we can make informed decisions about selecting the most suitable optimizer for the face mask detection model. Factors such as accuracy, loss, and computation time are all important considerations in optimizing the performance and efficiency of the system. Through this analysis, we aim to identify the optimizer that strikes the best balance between accuracy and computational efficiency, enabling us to build an effective and efficient face mask detection solution.

The model underwent 20 epochs and utilized the Adam optimizer during the compilation process. This configuration yielded impressive results in terms of accuracy. As the training progressed, the model's accuracy showed consistent improvement, while the loss steadily decreased. This trend continued until the model had completed the maximum number of epochs specified. The significant increase in accuracy and decrease in loss indicate that the model effectively learned from the training data and made more precise predictions with each epoch.

Table 1 Comparative results between different epochs

Epochs | Loss       | Accuracy | Val. loss | Val. accuracy
1      | 0.2220     | 0.9202   | 0.0755    | 0.9764
5      | 0.0216     | 0.9925   | 0.0692    | 0.9774
10     | 0.0156     | 0.9950   | 0.0458    | 0.9885
15     | 3.9399e−05 | 1.0000   | 0.0600    | 0.9900
20     | 9.4511e−06 | 1.0000   | 0.0694    | 0.9900

Fig. 12 Loss function of the proposed system

Fig. 13 Accuracy of the proposed system

The choice of the Adam optimizer played a crucial role in achieving these favorable outcomes, as it facilitated efficient parameter updates and optimization when compared to the performance of SGD, Adagrad, and RMSProp. Overall, the results demonstrate the model's ability to effectively learn and generalize from the given data, providing a reliable and accurate prediction mechanism, as depicted in Figs. 12 and 13.
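The comparison described above can be reproduced with a short loop; a minimal sketch, assuming the model, training arrays, and Keras setup from the earlier sketches:

```python
# Retrain the same architecture with each optimizer for 20 epochs and record
# accuracy, loss, and wall-clock computation time.
import time
from tensorflow.keras import models

for name in ["adam", "sgd", "adagrad", "rmsprop"]:
    candidate = models.clone_model(model)   # same architecture, fresh weights
    candidate.compile(optimizer=name, loss="binary_crossentropy",
                      metrics=["accuracy"])
    start = time.time()
    candidate.fit(X_train, y_train, validation_data=(X_test, y_test),
                  epochs=20, batch_size=32, verbose=0)
    loss, acc = candidate.evaluate(X_test, y_test, verbose=0)
    print(f"{name}: accuracy={acc:.4f}, loss={loss:.4f}, "
          f"time={time.time() - start:.1f}s")
```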

5 Conclusion and Future Work

In this study, we developed a real-time face mask detection system that leverages deep learning techniques to accurately identify individuals wearing or not wearing face masks. Through the use of a robust dataset and the implementation of a convolutional neural network (CNN), we achieved promising results in detecting face masks in various scenarios. The system demonstrated high accuracy and efficiency in real-time face mask detection, making it suitable for applications in public places, workplaces, and transportation hubs. By effectively detecting face mask compliance, the system can contribute to public health efforts to prevent the spread of infectious diseases such as COVID-19. Furthermore, the system can serve as a valuable tool for monitoring and enforcing mask-wearing guidelines, providing real-time feedback and alerts in situations where mask usage is required. This can help create a safer and more health-conscious environment for individuals and communities.

While this study has successfully developed a real-time face mask detection system, there are several avenues for further exploration and improvement. Potential areas for future work include improved robustness, mask type classification, face mask compliance analysis, deployment and integration, and continuous model improvement, which could involve periodic retraining on updated datasets to adapt to evolving mask styles, patterns, and usage trends.


References

1. Himeur, Y., Al-Maadeed, S., Varlamis, I., Al-Maadeed, N., Abualsaud, K., & Mohamed, A. (2023). Face mask detection in smart cities using deep and transfer learning: Lessons learned from the COVID-19 pandemic. Systems, 11(2), 107. https://doi.org/10.3390/systems11020107
2. Hussain, D., Ismail, M., Hussain, I., Alroobaea, R., Hussain, S., & Ullah, S. S. (2022). Face mask detection using deep convolutional neural network and MobileNetV2-based transfer learning. Communications and Mobile Computing, 2022, Article ID 1536318, 10 pages. https://doi.org/10.1155/2022/1536318
3. Kaur, G., Sinha, R., Tiwari, P. K., Yadav, S. K., Pandey, P., Raj, R., Vashisth, A., & Rakhra, M. (2022). Face mask recognition system using CNN model. Neuroscience Informatics, 2(3), 100035. ISSN: 2772-5286.
4. Amer, F., & Al-Tamimi, M. (2022). Face mask detection methods and techniques: A review. The International Journal of Nonlinear Analysis and Applications (IJNAA).
5. Sharma, S., Sharma, S., & Athaiya, A. (2017). Activation functions in neural networks. Towards Data Science, 6(12), 310–316.
6. Sibi, P., Jones, S. A., & Siddarth, P. (2013). Analysis of different activation functions using back propagation neural networks. Journal of Theoretical and Applied Information Technology, 47(3), 1264–1268.
7. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
8. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
9. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410.
10. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
11. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
12. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
13. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences (Vol. 16, p. 02004). EDP Sciences.
14. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
15. Sundar, K. S., Bonta, L. R., Baruah, P. K., & Sankara, S. S. (2018, March). Evaluating training time of Inception-v3 and Resnet-50,101 models using TensorFlow across CPU and GPU. In 2018 Second.
16. Suresh, K., Palangappa, M. B., & Bhuvan, S. (2021, January). Face mask detection by using optimistic convolutional neural network. In 2021 6th International Conference on Inventive Computation Technologies (ICICT) (pp. 1084–1089). IEEE.
17. Tahir, S. B. U. D., Jalal, A., & Kim, K. (2020). Wearable inertial sensors for daily activity analysis based on Adam optimization and the maximum entropy Markov model. Entropy, 22(5), 579.
18. Vijitkunsawat, W., & Chantngarm, P. (2020, October). Study of the performance of machine learning algorithms for face mask detection. In 2020 5th International Conference on Information Technology (InCIT) (pp. 39–43). IEEE.
19. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022, May). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers-Bridge Engineering (pp. 1–8). Thomas Telford Ltd.

46

J. B. Idoko and E. Simsek

20. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622 21. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240. 22. Dimililer, K., & Bush, I. J. (2017, September). Automated classification of fruits: pawpaw fruit as a case study. In Man-Machine Interactions 5: 5th International Conference on Man-Machine Interactions, ICMMI 2017 Held at Kraków, Poland, October 3–6, 2017 (pp. 365–374). Springer International Publishing. 23. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of Conferences (Vol. 9, p. 03002). EDP Sciences. 24. Abiyev, R., Idoko, J. B., & Arslan, M. (2020, June). Reconstruction of convolutional neural network for sign language recognition. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1–5). IEEE. 25. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690. 26. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing—ICAFS-2018 (Vol. 13, pp. 239–248). Springer International Publishing. 27. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation: Proceedings of the INFUS 2021 Conference, Held August 24–26, 2021 (Vol. 2, pp. 273–280). Springer International Publishing. 28. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020, August). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International Conference on Transportation and Development 2020 (pp. 194–203). American Society of Civil Engineers. 29. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE. 30. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978– 3). 31. Wu, X., Sahoo, D., & Hoi, S. C. (2020). Recent advances in deep learning for object detection. Neurocomputing, 396, 39–64. 32. Yamashita, R., Nishio, M., Do, R. K. G., & Togashi, K. (2018). Convolutional neural networks: An overview and application in radiology. Insights into Imaging, 9(4), 611–629.

Fuzzy Inference System Based-AI for Diagnosis of Esophageal Cancer

John Bush Idoko and Mohammed Jameel Sadeq

Abstract Due to global changes, the incidence and mortality rates of esophageal cancer have risen sharply in recent decades, with about 500,000 new cases. Diagnosing esophageal cancer is a real-life problem involving uncertain data and human error, leaving room for possible misdiagnosis. This study developed a fuzzy intelligent system (FIS) to screen for esophageal cancer and provide a predictive diagnosis. Fuzzy IF-THEN rules were generated from a combination of esophageal symptoms, general risk factors, and diagnostic tests, under expert consideration. MATLAB software was used to design and run the FIS. The data were retrieved from a hospital in Erbil for 7 patients. The system provides a recommendation with each predictive diagnosis: whether a patient is positive or negative for esophageal cancer, or whether something suspicious is wrong with the esophagus. After implementing the data on the FIS, the system showed an overall accuracy of 95.24%, with accuracies of up to 98% for individual patients' predictions. For future studies, it is highly recommended that the fuzzy rules be expanded to include more variables and a larger dataset.

Keywords Esophageal cancer · Artificial intelligence · Fuzzy logic · FIS · MATLAB

1 Introduction

The esophagus is an important organ in the human body. It is a hollow tube connecting the throat to the stomach, through which ingested food passes from the mouth to the stomach for digestion. Esophageal cancer is an aggressive malignancy and a major health burden [1, 2], the 6th most common cause of cancer deaths globally [1],




making it a fatal medical condition. Esophageal cancer can start in the organ lining but can become malignant and spread to nearby organs. Reports have linked several factors to esophageal cancer, such as age, gender, ethnicity, region, and individual lifestyle (smoking, drinking, obesity, etc.) [3, 4]. The annual mortality rate for esophageal cancer in 2018 was above 500,000 [5]. The incidence rate of esophageal cancer has risen sharply in recent decades, with about 500,000 new cases [6, 7]. Unlike mortality, the incidence rate of the cancer varies from region to region in relation to ethnicity and the other factors mentioned previously. Among all malignancies, esophageal cancer is the most variably distributed across nationalities, with a reported 60-fold variation in rates from country to country [8]. Esophageal cancer is of different types, but adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two most common forms, diagnosed in about 95% of patients [5]. The squamous subvariant occurs more in developing countries (like Tanzania), while adenocarcinoma is more prevalent in developed countries like the USA [9, 10]. It is important to determine the prognosis as well as the survival rates of esophageal cancer patients, including early screening/diagnosis and accurate staging [2]. There are several ways to screen and reach an acceptable diagnosis. For effective and efficient diagnosis, as well as staging, imaging techniques such as endoscopy, computed tomography, whole-body positron emission tomography with 18-fluorodeoxyglucose, and endoscopic ultrasound (EUS) are paramount [5]. Unfortunately, despite the many advances in medicine in recent times, esophageal cancer still remains an aggressive malignancy with poor prognosis [2] and poor early diagnosis. It is usually detected at a later or malignant stage, which can be attributed to factors including the lack of early manifestation of clinical symptoms. According to Hopkins Medicine, the way to improve the clinical outlook of esophageal cancer is to devise means of improving accuracy in diagnosis as well as effective staging of the cancer [11], so as to decide on the most useful medical intervention. This study aims to develop an intelligent AI system for diagnosing esophageal cancer. The developed system functions to screen for esophageal cancer and provide an assisted diagnosis. Several symptoms are used as indicators for diagnosing esophageal cancer; these can be vital in predicting the status of the cancer and can assist medical decision making on proper diagnosis, treatment, and prognosis. Of contributing importance is the role of risk factors in screening individuals for esophageal cancer, as they can reveal the susceptibility or vulnerability of an individual to the occurrence of the disease. An AI system is therefore needed that can combine the symptoms, risk factors, demography, and other important indicators to diagnose esophageal cancer, especially at an early stage. This will prevent ignorance or misdiagnosis (due to uncertainty and human error) of the cancer before it reaches an advanced stage. The study utilized the principles of fuzzy logic to develop a fuzzy intelligent system (FIS) that diagnoses esophageal cancer. Fuzzy logic is an excellent choice for dealing with real-world (uncertain) medical data, because it uses natural language [12, 13]. Artificial intelligence models that use fuzzy logic principles have been shown to be highly sensitive, accurate, and flexible in dealing with vague data. Such models can also tolerate errors and dynamism in the dataset.



2 Related Literature Review

AI has been used to solve several real-life issues relating to esophageal cancer in different applications. In [14], Wang and associates created an AI system on the principles of fuzzy logic, developed with the aim of predicting the esophageal cancer risk associated with each individual in the study. The input considers several factors, while the output predicts the risk score of esophageal cancer. The dataset utilized in the study consists of numerical values of serum concentrations of C-reactive protein (CRP) and albumin, collected from 271 patients who had been confirmed to have esophageal cancer but had not yet been treated with radiotherapy. The result shows that applying fuzzy logic to the dataset improved esophageal cancer prediction for complete one-year survival (area under curve (AUC) = 0.773). The study also found fuzzy logic to perform better than the Glasgow prognostic score (AUC = 0.745), and concluded that fuzzy logic is a useful tool that can help analyze and predict the outcomes of esophageal cancer patients. In [15], Hamed developed a fuzzy logic algorithm, called adaptive fuzzy Petri net (AFPN), to tackle issues of esophageal cancer. The AFPN is a system that shows the prognosis of the disease, using IF-THEN rules to achieve the predictive outcome for esophageal cancer. The input variables are the serum concentrations of C-reactive protein and albumin. Simulations and experimental results demonstrate the effectiveness and performance of the proposed algorithm. In [16], Scrobotă and associates applied the principles of fuzzy logic to assess cancers affecting the mouth, among which esophageal cancer was the focus. The main aim of the study was to use fuzzy logic to estimate oral cancer risk, to improve early detection and management. They used clinical data and data from the histopathological exams of 16 patients who were already diagnosed with oral potentially malignant disorders. The study utilized fuzzy logic to interpret the values in the input numerical data (MDA and DONORS_PROTONS) and, based on a set of rules, to assign values to the output (ILLNESS RISK). The result found that the highest MDA value associated with the lowest DONORS_PROTONS value in the variation fields corresponds to the highest value of ILLNESS RISK, and vice versa. Li and colleagues in [17] identified the implications of manual segmentation of medical images, especially in esophageal cancer, and therefore utilized semantic segmentation to automatically segment pathological sections of esophageal cancer images. The main contribution of their study was establishing a comprehensively labeled dataset containing 1388 patches that were marked with cancer cells. These patches were divided into normal (958) and abnormal (430). The study proceeded to test the dataset on DeeplabV3, FCN + ResNet, Unet, etc. The results revealed FCN + ResNet as the best-performing model. Huang and colleagues in [7] attempted to diagnose esophageal cancer in its early stage using AI. A new AI system was developed by Du and associates in [18] to solve esophageal issues. The study was the first of its kind, and the system is referred to as an efficient channel attention deep dense convolutional neural network (ECA-DDCNN). The ECA-DDCNN is capable of classifying gastroscopic images of the esophagus. In the classification, the system is



Fig. 1 Diagrammatic illustration of the proposed model

able to group them into healthy, pre-cancer, early cancer, and advanced esophageal cancer. The participants of the study included 4,077 patients, from whom 20,965 gastroscopic images were gathered for training and testing the system. After the system was run on the collected data, it achieved an overall accuracy of 90.63%; compared with other similar AI models, the proposed ECA-DDCNN performs better. Figure 1 shows a diagram illustrating the AI system proposed by Du and associates [18]. Fang and associates in [19] developed a semantic segmentation model with the aim of diagnosing esophageal cancer at an early stage. The segmentation model predicts and labels esophageal cancer using some of the early factors that are helpful in screening for the cancer. A total of 165 esophagus images were used (75 white-light images (WLI) and 90 narrow-band images (NBI)). These included images of a normal or healthy esophagus, as well as two other categories: esophageal dysplasia and squamous cell carcinoma of the esophagus. Results show that within a mean time of 111 ms, the AI system was able to predict the state of the esophagus, whether normal/healthy, dysplasia, or squamous cell carcinoma, with overall accuracies of 84.724% and 82.377% for NBI and WLI, respectively.



3 Materials and Methods

MATLAB software was used to design and run the intelligent system for screening and diagnosing esophageal cancer. The MATLAB tools used in this study include the fuzzy inference system and the graphical user interface (GUI), as used in [20]. The dataset was retrieved from a hospital in Erbil (due to a confidentiality request by the hospital, its name is not mentioned in this study). The dataset was divided into the following categories of input variables: symptoms, general risk factors, and histopathology test data, for 7 patients. Generally, the risk factors of esophageal cancer show the level of vulnerability of an individual to esophageal cancer. They include age, gender, ethnicity, region, general lifestyle, etc. Symptoms are those physical or clinical manifestations in a patient that may be related to esophageal cancer. Symptoms that may indicate the occurrence of the cancer include dysphagia, weight loss, chest pain, coughing or hoarseness, digestive problems, and the esophageal conditions GERD and Barrett's esophagus. These symptoms, individually or collectively, can indicate esophageal cancer, and a combination of the symptoms and the general risk factors may provide better screening. The dataset also included data from the following: video-fluoroscopic swallowing studies (VFSS), endoscopic confocal microscopy (ECM), esophagogastroduodenoscopy (EGD) endoscopy with biopsy, and endoscopic ultrasonography (EUS). Both the dataset and the IF-THEN rules of the study are provided in a repository. Figure 2 shows the fuzzy IF-THEN rules for age as a risk factor for esophageal cancer. After the dataset is fed into the developed AI system, the expected output of the system is positive (P), suspicious (S), or negative (N), according to the fuzzy logic IF-THEN rules that were established under expert consideration. These rules, which ascertain the ranges and values of all input variables, were incorporated into the MATLAB (matrix laboratory) software. The GUI provides an easy technique for inputting esophageal cancer patients' data. The system provides a recommendation with each predictive diagnosis for further procedure. For a positive (P) diagnosis, the system recommends staging of the cancer and other possible recommendations useful for selecting the best therapy options. For a suspicious (S) diagnosis, the system recommends further screening to arrive at a conclusive diagnosis. Lastly, for a negative (N) diagnosis, there is no need for further evaluation, hence there is no recommendation. Figure 3 depicts a screenshot of the input crisp variables and the output of the system. The dataset was input into the FIS, and the system provides an output, the predictive diagnosis, which is compared to the real result provided by the hospital. Figure 4 shows the first main page of the developed FIS; the left side of the page is where data are input, while the right side provides the output (the esophageal cancer predictive diagnosis). From the main page shown in Fig. 4, the user can input patients' data into the various dialogs in the left pane. To generate output, the user clicks on the result button to simulate the input variables and provide a predictive



Fig. 2 Fuzzy rules membership plots for age risk

diagnosis (output) on the right side of the main page. The system performs a comparative analysis of the patient's risk factors, symptoms, and diagnostic tests, and provides a diagnosis by utilizing the fuzzy IF-THEN rules (including the range of each variable) established by the medical practitioner. Instances of such variables are presented in Table 1, which shows the original data with the diagnosis score. From the diagnosis score, we used the simple arithmetic (diagnosis score/10) × 100 to get the percentage diagnosis (for example, a diagnosis score of 9.19 corresponds to 91.9%). Therefore, our system does not incorporate any algorithm that calculates the accuracy.
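The FIS itself was built with MATLAB's Fuzzy Logic Toolbox and is not reproduced in the chapter. As a rough, non-authoritative illustration of the same idea, the Python sketch below assembles a toy fuzzy inference system with the scikit-fuzzy library; the two inputs, the membership thresholds, and the three rules are hypothetical stand-ins for the expert-defined rule base described above.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Hypothetical inputs: age (cf. Fig. 2) and dysphagia severity; output: diagnosis score 0-10.
age = ctrl.Antecedent(np.arange(0, 101, 1), 'age')
dysphagia = ctrl.Antecedent(np.arange(0, 11, 1), 'dysphagia')
score = ctrl.Consequent(np.arange(0, 10.1, 0.1), 'score')

# Illustrative membership functions; the thresholds are assumptions, not the study's.
age['low'] = fuzz.trimf(age.universe, [0, 0, 45])
age['high'] = fuzz.trapmf(age.universe, [45, 50, 100, 100])   # [>= 50] is high risk
dysphagia['mild'] = fuzz.trimf(dysphagia.universe, [0, 0, 5])
dysphagia['severe'] = fuzz.trimf(dysphagia.universe, [4, 10, 10])
score['negative'] = fuzz.trimf(score.universe, [0, 0, 4])
score['suspicious'] = fuzz.trimf(score.universe, [3, 5, 7])
score['positive'] = fuzz.trimf(score.universe, [6, 10, 10])

rules = [
    ctrl.Rule(age['high'] & dysphagia['severe'], score['positive']),
    ctrl.Rule(age['high'] & dysphagia['mild'], score['suspicious']),
    ctrl.Rule(age['low'] & dysphagia['mild'], score['negative']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['age'] = 50          # Patient 1 in the study is 50 years old
sim.input['dysphagia'] = 8
sim.compute()
print(sim.output['score'])     # crisp score; (score / 10) * 100 gives the percentage
```

A full system along these lines would add the remaining symptom, risk-factor, and diagnostic-test inputs as further Antecedent variables, with one Rule object per expert IF-THEN statement.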

4 Results and Discussion

The predictive diagnosis of esophageal cancer by the FIS is presented in the output dialog box as positive, negative, or suspicious, with the ability to differentiate between the AC and SCC esophageal cancer subtypes. For a Positive (P) prediction, the system detects the presence of esophageal cancer cells from a combination of all the patient's input


Fig. 3 A screenshot of the input crisp variables and the output of the system

Fig. 4 Input and output dialog boxes shown in the main page of the FIS system


Table 1 Excerpt of explored dataset

Patients   Diagnosis      Diagnosis score
P_1        Positive AC    9.19
P_2        Positive AC    9.32
P_3        Positive SCC   9.29
P_4        Negative       9.69
P_5        Suspicious     9.12
P_6        Positive AC    9.70

variables, and recommends a staging test to determine the stage of the cancer. The system provides a Negative (N) prediction of esophageal cancer if the comparative simulation of a patient's input variables outputs it as such. Lastly, the system can also provide a Suspicious (S) output when it is indecisive between a negative and a positive prediction. This means the system's result is ambiguous, and it recommends a repeat of the previous clinical screenings/tests, or a more advanced medical examination, to reach a more precise diagnosis. Concisely, a Suspicious (S) output indicates that the patient may not have cancer, but rather a noncancerous tumor or another esophageal condition (such as GERD or Barrett's esophagus) that may require further screening. The intelligent predictive results/diagnoses of esophageal cancer for the six patients' data implemented in this study are provided and discussed in the following sections. The result of Patient 1, provided in Fig. 5, is based on the data retrieved from the Erbil hospital and follows the fuzzy IF-THEN rules. Patient 1 is male and aged 50 years, falling in the interval [≥ 50], a high-risk age group according to the experts, which is incorporated in the fuzzy rules. Men are at a 75% higher risk of esophageal cancer compared to females. The patient is obese (high risk of AC), has indigestion issues (medium risk), Barrett's esophagus (30 to 40 times the risk of AC), and GERD (5 times the risk of AC). Furthermore, Patient 1 is in the Asian (AS) region, which carries a risk of the SCC esophageal subtype. Data from the hospital show that Patient 1 has odynophagia from the VFSS test (YES), the ECM indicated cancer (cancer), imaging was performed through EGD, and staging was recommended by the EUS procedure. For Patient 1, the FIS predicted the diagnosis to be POSITIVE for esophageal cancer with the AC subtype. The system recommends that the patient undergo a procedure to check the Gleason score and the stage of the cancer's development. The results of Patient 1 to Patient 6 are tabulated in Table 2. A summary of the Patient 1 to Patient 6 results shows an overall system accuracy of 95.24%, with individual patient accuracies as shown in Table 2. The overall accuracy is the overall ability of the system to predict the diagnostic outcome of esophageal cancer according to the fuzzy rules and the patient data retrieved from the Erbil hospital, while the individual accuracy is the system's individual prediction, providing a diagnosis for each patient that matches the diagnosis provided by the hospital. The FIS developed in this study has performed considerably well with an overall accuracy of 95.24%,



Fig. 5 Predictive diagnostic result of patient 1 showing positive cancer (AC)

Table 2 Summary of all patient predictive diagnostic results for esophageal cancer

Patients    Diagnosis (output)   Recommendation        Accuracy (%)
Patient 1   Positive AC          Staging               93.91
Patient 2   Positive AC          Staging               94.24
Patient 3   Positive SCC         Staging               94.01
Patient 4   Negative             No need for staging   98.20
Patient 5   Suspicious           Further screening     92.53
Patient 6   Positive AC          Staging               98.52

which in comparison with the accuracies in [18] (90.63%) and [19] (84.72%), is a commendable performance.

5 Conclusion and Recommendation

In this research, we explored an esophageal cancer patients' dataset, proposed an AI system [21–40] with fuzzy IF-THEN rules, and implemented the system for predictive diagnosis of esophageal cancer. The tools used for the actualization of the objectives of this study include the MATLAB (matrix laboratory) software, which provides the graphical user interface (GUI) function tool. Data were collected for a total of 7 patients from a hospital in Erbil. After the fuzzy logic rules were established and implemented in the AI system, the dataset was input to run the system. It is of vital importance to note that the predictive results follow the established fuzzy IF-THEN rules, which were attained through expert consideration. Our results obtained a high accuracy of up to 98% for each patient's predictive diagnosis, and



an overall accuracy of 95.24% for all patients. This gives our study a competitive advantage compared to the accuracies in [18] (90.63%) and [19] (84.72%). Above all, this study reveals the capabilities of fuzzy logic in the assistive diagnosis not only of cancer, but of other ailments that involve uncertain data. The developed system was successful due to its linking of all vital input variables (symptoms, risk factors, and diagnostic tests) to reach a predictive diagnosis of esophageal cancer. For future studies, it is highly recommended that the fuzzy logic rules be expanded to include more variables and scenarios, so that esophageal cancer can be detected early, before it reaches an advanced or late stage. In addition, it is important to point out the limitation of the dataset, which is a collection of only 7 patients' data. Of these 7 patients, only 6 patients' data were complete; hence only 6 patients' data were used in this study, which is very limited. Therefore, future studies should collect as much data as possible from different geographical locations, ages, ethnicities, and genders.

References

1. Bray, F., Ferlay, J., Soerjomataram, I., Siegel, R. L., Torre, L. A., & Jemal, A. (2018). Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 68(6), 394–424. 2. Corona, E., Yang, L., Esrailian, E., et al. (2021). Trends in esophageal cancer mortality and stage at diagnosis by race and ethnicity in the United States. Cancer Causes and Control, 32, 883–894. 3. Huang, F. L., & Yu, S. J. (2018). Esophageal cancer: Risk factors, genetic association, and treatment. Asian Journal of Surgery, 41, 210–215. 4. Buckle, G. C., Mmbaga, E. J., Paciorek, A., Akoko, L., Deardorff, K., Mgisha, W., Mushi, B. P., Mwaiselage, J., Hiatt, R. A., Zhang, L., & Van Loon, K. (2022). Risk factors associated with early-onset esophageal cancer in Tanzania. JCO Global Oncology, 2022, 8. 5. Thakkar, S., & Kaul, V. (2020). Endoscopic ultrasound staging of esophageal cancer. Gastroenterology & Hepatology, 16(1). 6. Ferlay, J., Soerjomataram, I., Dikshit, R., et al. (2015). Cancer incidence and mortality worldwide: Sources, methods and major patterns in GLOBOCAN 2012. International Journal of Cancer, 136(5), E359–E386. 7. Huang, J., Koulaouzidis, A., Marlicz, W., Lok, V., Chu, C., Ngai, C. H., Zhang, L., Chen, P., Wang, S., Yuan, J., et al. (2021). Global burden, risk factors, and trends of esophageal cancer: An analysis of cancer registries from 48 countries. Cancers, 13, 141. 8. Salek, R., Safa, E. B., Hamid, S. S., et al. (2009). A geographic area with better outcome of esophageal carcinoma: Is there an effect of ethnicity and etiologic factors? Oncology, 77, 172–177. 9. Arnold, M., Ferlay, J., van Berge Henegouwen, M. I., et al. (2020). Global burden of esophageal and gastric cancer by histology and subsite in 2018. Gut, 69, 1564–1571. 10. Simba, H., Tromp, G., Sewram, V., Mathew, C. G., Chen, W. C., & Kuivaniemi, H. (2022). Esophageal cancer genomics in Africa: Recommendations for future research. Frontiers in Genetics, 13, 864575. 11. Hopkins Medicine. (2022). Warning signs of esophageal cancer. Retrieved April 11, 2023. https://www.hopkinsmedicine.org/kimmel_cancer_center/cancers_we_treat/esophageal_cancer/warning-signs.html



12. Chen, Y., Chen, X., Yu, H., Zhou, H., & Xu, S. (2019). Oral microbiota as promising diagnostic biomarkers for gastrointestinal cancer: A systematic review. OncoTargets and Therapy, 12, 11131–11144. 13. Fraccaro, P., O'Sullivan, D., Plastiras, P., O'Sullivan, H., Dentone, C., Di Biago, A., & Weller, P. (2015). Behind the screens: Clinical decision support methodologies–a review. Health Policy and Technology, 4(1), 29–38. 14. Wang, C., Lee, T., & Fang, C., et al. (2012). Fuzzy logic-based prognostic score for outcome prediction in esophageal cancer. IEEE Transactions on Information Technology in Biomedicine, 16(6). 15. Hamed, R. I. (2015). Esophageal cancer prediction based on qualitative features using adaptive fuzzy reasoning method. Journal of King Saud University–Computer and Information Sciences, 27, 129–139. 16. Scrobotă, I., Băciuț, G., & Filip, A. G., et al. (2017). Application of fuzzy logic in oral cancer risk assessment. Iranian Journal of Public Health, 46(5), 612–619. 17. Li, S., Zheng, L., & Zhang, Y., et al. (2018). Automatic segmentation of esophageal cancer pathological sections based on semantic segmentation. In 2018 International Conference on Orange Technologies (ICOT) (pp. 1–5). https://doi.org/10.1109/ICOT.2018.8705880 18. Du, W., Rao, N., & Dong, C., et al. (2021). Automatic classification of esophageal disease in gastroscopic images using an efficient channel attention deep dense convolutional neural network. Biomedical Optics Express, 12(6), 3066. 19. Fang, Y., Mukundan, A., & Tsao, Y., et al. (2022). Identification of early esophageal cancer by semantic segmentation. Journal of Personalized Medicine, 12(8), 1204. 20. Farokhzad, M. R., & Ebrahimi, I. (2016). A novel adaptive neuro fuzzy inference system for the diagnosis of liver disease. International Journal of Academic Research in Computer Engineering, 1(1), 61–66. 21. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089. 22. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2). 23. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410. 24. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907. 25. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97. 26. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12). 27. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences (Vol. 16, p. 02004). EDP Sciences. 28. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252. 29. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers-Bridge Engineering (pp. 1–8). Thomas Telford Ltd. 30. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622 31. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.



32. Dimililer, K., & Bush, I.J. (2017). Automated classification of fruits: Pawpaw fruit as a case study. In Man-Machine Interactions 5: 5th International Conference on Man-Machine Interactions, ICMMI 2017 Held at Kraków, Poland, October 3–6, 2017 (pp. 365–374). Cham: Springer International Publishing. 33. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of Conferences (Vol. 9, p. 03002). EDP Sciences. 34. Abiyev, R., Idoko, J. B., & Arslan, M. (2020). Reconstruction of convolutional neural network for sign language recognition. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1–5). IEEE. 35. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690. 36. Arslan, M., Bush, I.J., Abiyev, R.H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing—ICAFS-2018 (vol. 13, pp. 239–248). Springer International Publishing. 37. Abiyev, R.H., Idoko, J.B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation: Proceedings of the INFUS 2021 Conference, held August 24–26, 2021. Volume 2 (pp. 273–280). Springer International Publishing. 38. Uwanuakwa, I.D., Isienyi, U.G., Bush Idoko, J., & Ismael Albrka, S. (2020). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International Conference on Transportation and Development 2020 (pp. 194–203). Reston, VA: American Society of Civil Engineers. 39. Idoko, B., Idoko, J.B., Kazaure, Y.Z.M., Ibrahim, Y.M., Akinsola, F. A., Raji, A.R. (2022). IoT based motion detector using raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE. 40. Idoko, J.B., Arslan, M., & Abiyev, R.H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proc. 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978–3).

Skin Detection System Based Fuzzy Neural Networks for Skin Identification

Idoko John Bush and Rahib Abiyev

Abstract Machine learning is an effective technology for big data augmentation and analysis. This research proposes an integrated machine learning algorithm (Skin Detection System) based on a clustering technique and fuzzy neural networks, applied to extremely big datasets for the identification of skin. We explored the Takagi–Sugeno fuzzy network utilizing fuzzy c-means (FCM) clustering and a cross-validation approach with the gradient descent learning algorithm. Using the clustering technique, targeted features were extracted from the large dataset and utilized for the implementation of the Skin Detection System (SDS). We also incorporated a cross-validation technique utilizing the fuzzy c-means and gradient descent algorithms to update the SDS parameters. Finally, the SDS was trained and tested on the domain. The comparative results are provided in Table 3 to depict the efficiency of using the proposed approach for classifying big datasets.

Keywords Skin · Non-skin · Machine learning · Fuzzy neural networks

1 Introduction

Machine learning algorithms have gained considerable importance and popularity in solving different classification, identification, control, recognition and optimisation problems existing in science and engineering. Most of these problems are characterised by huge datasets with missing values. Different approaches have been proposed to efficiently predict outcomes and improve performance. In this paper, we considered one such problem with a huge dataset. To solve it, we designed a machine learning algorithm (Skin Detection System) based on a fuzzy neural system using Takagi–Sugeno–Kang (TSK) type fuzzy rules to distinguish skin from non-skin instances. Skin detection is applied in several fields, with applications




including identity confirmation in security applications, facial characteristics, video conferencing, information access, pictorial information management in the banking sector, and so on. The suggested Skin Detection System (SDS) models systems by combining neural network and fuzzy logic parameters that have been trained on input and target data [1]. The gradient descent rule is the primary notion employed in training the framework. Gradient descent alone is ineffective due to the slowness of the network during training; however, new hybrid methods have been designed by integrating gradient functions with other learning approaches to boost learning speed. In this paper, clustering is employed for feature extraction from the large dataset, and the extracted features are applied as input to the proposed SDS. Fuzzy set theory has found real-time applications in control, classification, and prediction. We implemented the SDS by exploring the fuzzy reasoning process and the neural network structure. Given sufficient fuzzy rules, fuzzy neural systems are known to be estimators capable of approximating all possible non-linear functions with the expected accuracy. Recent research in the neural network and fuzzy system fields suggests that combining these two learning techniques is highly effective in the implementation of non-linear systems. The use of fuzzy clustering for colored image segmentation [2] is a method of pixel-based segmentation, in which the fuzzy system identifies which pixel belongs to which category. The goal here is to construct a fuzzy framework capable of classifying a wider range of colors. To accomplish this, an expert must set the membership functions and rules in line with the training dataset, which is inconvenient and time-consuming, so the final rules may not be sufficient. For this reason, there is a need for an automated method for creating fuzzy membership functions and rules in accordance with the training data. In this paper, we designed a machine learning technique (SDS) based on a fuzzy neural system that can automatically generate membership functions and fuzzy rules to distinguish skin from non-skin. Each pixel is separated into skin and non-skin pixels using fuzzy rules determined from multiple color spaces during the training phase. Pixel-based color has the advantage of not requiring spatial context, allowing for faster processing in detection and tracking applications such as face, human body part, and nudity detection systems, which benefit from skin identification. Furthermore, detecting skin tone aids in the blocking of abusive image or video content. Human skin color is important in human relationships, in addition to its usage in computing. To obtain satisfactory image segmentation, exploring only black and white colors is totally insufficient. We solved this problem by projecting the degree of membership using fuzzy neural system reasoning. The integration of a fuzzy neural system can be accomplished by augmenting the fuzzy system with a neural network to improve some of its characteristics, including adaptability, flexibility, and speed. The SDS architecture enforces the processes of fuzzy reasoning, where network connections correspond to fuzzy reasoning parameters. One of the major and interesting goals that provides seamless meaning for skin modeling frameworks is their simple decision-making rules. We capitalized on this explicit property and implemented a machine learning technique (SDS) based on a fuzzy neural system capable of detecting skin in color images.



2 Review of Related Study

Recently, several research papers have been published in the field of skin detection. Shamir in [3] demonstrated a scenario where the segmentation of pixel colors is linked to a human-perception-based approach. The components of the HSV color space define fuzzy sets and generate a fuzzy logic model targeted at conducting color classification based on human perception. The simple adjustment of the classification by the knowledge-driven model, in light of the necessities of a particular application, and the effectiveness of the algorithm with regard to computational complexity, make the proposed model fit for applications where efficiency is an essential issue. Borji and Hamidi in [4], using fuzzy logic, suggested another technique for colored image segmentation, in which a system was automatically developed for the segmentation and classification of colored images with a minimum error rate and the least number of rules. A widely used learning technique, particle swarm optimization, is utilized to identify optimal membership functions and fuzzy rules, since it prevents early convergence. When compared to other approaches such as ANFIS, this strategy requires less computational load, and the variety of the large training dataset makes the proposed algorithm invariant to illumination noise. Recently, a renowned skin detection choice has been RGB, explored by Brown et al. [5], Barone and Caetano [6], Bergasa et al. [7], Sebe et al. [8], Soriano et al. [9], Schwerdt and Crowley [10], Storring et al. [11], Iraji [12] and Kang et al. [13]. A luminance component Y, as well as two other components X and Z, are depicted as color in the Commission Internationale de l'Eclairage (CIE) framework. The CIE-XYZ values were derived through psychophysical exams and are related to the color matching qualities of the human visual system. Brown et al. [5] used this color space, which has been confirmed. Hamid A. Jalab used a cluster pixel model to create skin segmentation under certain environmental circumstances in his study [14]; changes in complicated backgrounds, sensitivity, and light conditions can be overcome with this proposed model [15]. Kakumanu et al. demonstrated skin modeling and detection in [16] and suggested further research. Outstanding results are produced employing a fuzzy system paired with a support vector machine (FS-FCSVM) [17] in relation to the current topic. IIT Delhi's Rajen Bhatt and Abhinav Dhall first generated this dataset in 2009 [18], when they published an article in the Indian IEEE conference INDICON titled "Efficient skin region segmentation using low complexity fuzzy decision tree model." In the same year, the same team published another paper named "Adaptive digital makeup" at the International Symposium on Visual Computing (ISVC). For real-time video conferencing systems, Leonid Sigal et al. established a unique approach for skin segmentation in [19]. Despite the wide range of skin types, the segmentation proved to be quite reliable, and this strategy worked well even when the brightness changed. In addition, the Markov approach was employed in [20], where Bouzerdoum presented an edge- and color-based algorithm for skin region segmentation in color images. Traditionally, the skin color regions have been used to determine human skin color using Bayesian techniques [21]. Rodrigo Verschae et al. completed skin region segmentation by inspecting the closest pixels [22] because of



the fast processing speed. Han et al. [23] devised a new strategy for resolving the skin segmentation problem: a general model of skin was first built based on a video sequence of gestures, with data collected automatically from numerous frames, and, on an active learning basis, an SVM classifier was used to recognize skin pixels [23]. Mohammad Shoyaib et al. used a color distance map to detect skin in their study [24]. In addition, SCS-PCA [25], a mix of principal components analysis and skin color segmentation, was introduced as a new face identification technique. Bhoyar and Kakde developed a unique neural network symmetric classification approach to distinguish between skin and non-skin pixels in color photographs in [26]. Skin was also detected using K-means data mining methods, and worthwhile results were produced, as shown in [14]. Fuzzy systems are actively used for solving biomedical problems. For instance, in [27], Arnav Chowdhury and Sanjaya Shankar Tripathy presented a fuzzy approach for the extraction of skin regions to detect individual faces; here, Eigenface is used to recognize different faces. Ali et al. in [28] introduced a number of modified fuzzy rules to manage the skin-like pixels issue; the modified fuzzy rules were incorporated with a skin modeling technique to separate non-skin pixels from skin pixels. In [29], Afia et al. presented a performance evaluation of two well-known fuzzy-based skin detection methods, the modified fuzzy c-means (MFCM) algorithm and a fuzzy inference system, on a standard dataset for the classification and identification of skin pixels. Aureli et al. implemented a system for skin detection based on fuzzy integral exploration, which subsumes the execution of more basic combinations of operators, to adapt to the complexity of skin detection under changing lighting conditions [30]. Pujol et al. in [31] proposed a skin detection approach using a fuzzy system in the RGB color space, where fuzzy sets were utilized to model each color channel.

3 Proposed Algorithm

3.1 Structure of the System

In this study, we implemented a machine learning approach (Skin Detection System) based on a fuzzy neural structure to detect skin from non-skin images. At the design phase of the Skin Detection System (SDS), we explored an extremely large domain containing a huge amount of data. Because of this, the design of the skin identification system using the machine learning technique consumed a lot of time; to solve this problem, we introduced a feature extraction module to decrease the learning time. The structure of the proposed skin identification system is depicted in Fig. 1. As demonstrated in Fig. 1, we used a clustering technique to obtain useful features of the data. At the first iteration, the dataset is separated into parts in accordance with the output clusters. As a result of partitioning, we obtained data subsets equal in number to the output clusters. This clustering algorithm is run for each data subset separately



Fig. 1 Structure of the skin detection system

using the input signals [32–35]. As a result, the size of the initial dataset is reduced. Cross-validation is then utilized on the reduced dataset to determine the training and evaluation sets. After these operations, the design of the SDS based on the FNN is implemented using the clustering and gradient descent algorithms. At the training phase, input signals are applied to the SDS input layer, and at the output layer of the network the deviations of the current outputs from the target ones are calculated. We utilized a 10-fold cross-validation approach at the implementation phase; the algorithms used for the proposed system are explained in Sects. 3.2, 3.3, 3.4 and 3.5.

3.2 Fuzzy Neural System for Skin Detection

In this research, we used TSK type fuzzy rules to develop the SDS. Our suggested SDS incorporates fuzzy inference methods, linguistic rules, and neural network learning capabilities. The suggested SDS was created utilizing fuzzy rules with an IF-THEN structure. We achieved the classification system through the neural network's training capabilities, by optimizing the consequent and premise sections of the fuzzy IF-THEN rules. The Takagi–Sugeno–Kang (TSK) type and Mamdani fuzzy rules are the two types of IF-THEN rules used in a fuzzy system. The former employs fuzzy rules with fuzzy antecedent and crisp consequent parts, whereas the latter employs rules with fuzzy antecedent and consequent parts. The model was created using TSK type fuzzy rules. TSK fuzzy systems approximate nonlinear systems with a combination of linear systems and have the following structure:

$$\text{If } x_1 \text{ is } A_{1j} \text{ and } x_2 \text{ is } A_{2j} \text{ and } \ldots \text{ and } x_m \text{ is } A_{mj} \text{ Then } y_j = b_j + \sum_{i=1}^{m} a_{ij} x_i \qquad (1)$$

Here, the system's input and output signals are denoted by $x_i$ and $y_j$ respectively. The input signals are indexed by i = 1, …, m and the rules by j = 1, …, r. The input fuzzy sets are denoted by $A_{ij}$, and $b_j$ and $a_{ij}$ are coefficients. As demonstrated in Fig. 1, the fuzzy neural network structure used



for our proposed system is based on TSK type fuzzy rules. The system is made up of 6 layers, and the $x_i$ (i = 1, …, m) input signals are distributed across the 1st layer. Membership functions are embedded in the 2nd layer, where a linguistic term is mapped to each node. Here, for each input signal entering the system, the degree to which the input value belongs to a fuzzy set is calculated. The Gaussian membership function is utilized to evaluate the linguistic terms:

$$\mu_{1j}(x_i) = e^{-\frac{(x_i - c_{ij})^2}{\sigma_{ij}^2}}, \quad i = 1, \ldots, m, \ j = 1, \ldots, r \qquad (2)$$

Here, $c_{ij}$ and $\sigma_{ij}$ are the center and width of the Gaussian membership functions, respectively, and $\mu_{1j}(x_i)$ is the membership function of the i-th input variable for the j-th term. The third layer is the rule layer of the framework, where the number of nodes equals the number of rules, represented by $R_1, R_2, \ldots, R_r$. We calculated the output signals of this layer using the t-norm min (AND) operation:

$$\mu_j(x) = \bigwedge_i \mu_{1j}(x_i), \quad i = 1, \ldots, m, \ j = 1, \ldots, r \qquad (3)$$

Here, $\bigwedge$ denotes the min operation. The fourth layer is the consequent layer. It includes r linear systems, where linear functions are used to determine the output values of the rules:

$$y_{1j} = \sum_{i=1}^{m} a_{ij} x_i + b_j \qquad (4)$$

The input signals of the fifth layer are the $\mu_j(x)$ signals computed in (3). In this layer, the output signals of the fourth layer are multiplied by the output signals of the third layer, and the j-th node output is calculated as $y_j = \mu_j(x) \cdot y_{1j}$. The output signals of the proposed system are then calculated as:

$$u_k = \sum_{j=1}^{r} w_{jk}\, y_j \Big/ \sum_{j=1}^{r} \mu_j(x), \quad k = 1, \ldots, n \qquad (5)$$

Here, $u_k$ (k = 1, …, n) are the output signals of the system, and $w_{jk}$ are the weight coefficients of the connections between layer 5 and layer 6. Network training begins immediately after the output signals are calculated.
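To make the layer-by-layer computation of Eqs. (2)–(5) concrete, here is a minimal NumPy forward pass of the described TSK-type fuzzy neural network. The dimensions (three inputs for R, G, B; two outputs for skin/non-skin) follow the chapter, while the rule count and the random parameter initialization are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r, n = 3, 5, 2          # inputs (R, G, B), fuzzy rules, outputs (skin / non-skin)

# Premise (antecedent) parameters: Gaussian centers and widths of Eq. (2)
c = rng.uniform(0, 1, (m, r))
sigma = rng.uniform(0.2, 0.5, (m, r))
# Consequent parameters of Eq. (4) and output weights of Eq. (5)
a = rng.normal(0, 0.1, (m, r))
b = rng.normal(0, 0.1, r)
w = rng.normal(0, 0.1, (r, n))

def forward(x):
    """One forward pass for a single normalized sample x of shape (m,)."""
    mu1 = np.exp(-((x[:, None] - c) ** 2) / sigma ** 2)   # layer 2, Eq. (2)
    mu = mu1.min(axis=0)                                  # layer 3, min t-norm, Eq. (3)
    y1 = x @ a + b                                        # layer 4, Eq. (4)
    y = mu * y1                                           # layer 5
    return (y @ w) / mu.sum()                             # layer 6, Eq. (5)

x = np.array([74, 85, 123]) / 255.0    # an RGB pixel from Table 1, scaled to [0, 1]
print(forward(x))                       # two scores; argmax picks skin vs non-skin
```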



3.3 Cross Validation

The cross-validation approach is also commonly called rotation estimation. It is a model validation technique for determining how well a statistical result generalizes to a different dataset. In general, cross-validation is used to test how well a predictive model will perform in situations where prediction is the goal. For a specific prediction problem, a dataset of known data (the training dataset) is typically input to the model, on which training is done, and a dataset of unknown data (the testing dataset) is used to test the model. Cross-validation also averages/combines measures of fit (prediction error) to arrive at a more exact estimate of model prediction performance. In this study, we used the cross-validation technique to automatically generate a dataset from the domain to test the model, with the explicit purpose of limiting overfitting and generating a generalized model. We used 10-fold cross-validation, which involves randomly segmenting the original dataset into 10 equal-sized subsamples. A single subsample from the ten serves as the validation data for the model, while the training dataset is made up of the remaining nine subsamples. The process is then repeated ten times, with each of the ten subsamples serving as the validation dataset exactly once. The average of the 10-fold outcomes is then computed, yielding a single prediction. The main advantage of this method over randomly repeated subsampling is that all observations are used for both training and validation, and each observation is used exactly once for validation.
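The 10-fold procedure described above can be expressed compactly with scikit-learn's KFold splitter. In the sketch below, a k-nearest-neighbors classifier and random data stand in for the SDS and the skin dataset; only the split/average mechanics mirror the chapter.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier  # stand-in model, not the SDS

# Toy stand-in data: 245 samples of 3 features (cf. the reduced skin dataset)
rng = np.random.default_rng(0)
X = rng.random((245, 3))
y = rng.integers(0, 2, 245)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, val_idx in kf.split(X):
    model = KNeighborsClassifier().fit(X[train_idx], y[train_idx])
    # Each of the 10 subsamples is held out exactly once for validation
    scores.append(model.score(X[val_idx], y[val_idx]))

print(np.mean(scores))  # single averaged 10-fold estimate of performance
```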

3.4 Parameter Updates and Fuzzy Classification

The design of fuzzy IF-THEN rules cuts across several techniques, some of which are based on clustering, as demonstrated in [36–41], gradient algorithms [36–38, 41, 42], the least-squares method (LSM) [37, 43], genetic algorithms [39, 40, 44], and particle swarm optimization (PSO) [45]. The design of the proposed system, as seen in Fig. 1, involves determining the unknown parameters of the antecedent and consequent parts of the fuzzy IF-THEN rules (1). In fuzzy rules, the antecedent part represents the input space by dividing it into a set of fuzzy regions, while the consequent part describes the system behavior in those regions. For the design of our proposed system, we used the gradient technique and fuzzy clustering: first, fuzzy clustering is used to design the antecedent (premise) parts, and then the gradient algorithm is used to design the consequent parts of the fuzzy rules. We used fuzzy clustering for the construction of the premise parts because it is an efficient technique. The aim of clustering techniques is the identification of certain classes of data from the original dataset, so that an accurate representation of the system behavior is generated. Here, each cluster center translates into a fuzzy rule for class identification. In [46], we see the design of different clustering algorithms, where fuzzy c-means and subtractive clustering algorithms were designed for fuzzy



frameworks. In this study, the fuzzy c-means (FCM) clustering technique was utilized to construct the premise part of the fuzzy framework, where SDS learning begins with the update of the parameters in the antecedent part of the IF-THEN rules, that is, the parameters of the SDS second layer. To achieve our goal, we applied FCM classification in order to partition the input space and construct the antecedent part of the fuzzy IF-THEN rules. The FCM algorithm uses the following objective function:

$$J_q = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{q}\, d_{ij}^{2}, \quad \text{where } d_{ij} = \|x_i - c_j\|, \ 1 \le q < \infty \qquad (6)$$

Here, q is any real number greater than 1, $u_{ij}$ is the membership degree of $x_i$ in cluster j, $x_i$ is the i-th item of d-dimensional data, $c_j$ is the d-dimensional center of the cluster, and $\|\cdot\|$ is any norm expressing the similarity between the measured data and the cluster centers. Fuzzy classification of the input data is performed by iterating the optimization of the objective function (6), with updates of the cluster centers $c_j$ and the memberships $u_{ij}$. The algorithm follows the steps below:

• Initialize the membership matrix $U = [u_{ij}]$, $U^{(0)}$.
• Calculate the center vectors $C^{(t)} = [c_j]$ from $U^{(t)}$:

$$c_j = \left(\sum_{i=1}^{N} u_{ij}^{q} \cdot x_i\right) \Big/ \sum_{i=1}^{N} u_{ij}^{q} \qquad (7)$$

• Update $U^{(t)}$ to $U^{(t+1)}$:

$$u_{ij} = 1 \Big/ \sum_{k=1}^{C} \left(\frac{d_{ij}}{d_{ik}}\right)^{\frac{2}{q-1}} \qquad (8)$$

• If $\|U^{(t+1)} - U^{(t)}\| < \varepsilon$, then stop; else set t = t + 1 and return to step 2.

The cluster centers are determined based on the partitioning results [47–59]. In the SDS input layer, the cluster centers are mapped to the centers of the membership functions, and the distance between cluster centers determines the membership functions' widths. Following the design of the antecedent sections using fuzzy clustering, parameter update rules are generated for training the parameters of the consequent portions of the fuzzy rules. We used adaptive learning with a gradient learning rate in this study. The adjustable learning rate ensures network convergence and accelerates network learning.
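As a worked illustration of Eqs. (6)–(8), the following self-contained NumPy sketch runs the FCM iteration directly; the fuzzifier q = 2, the tolerance, and the toy data are placeholder choices standing in for the normalized input signals.

```python
import numpy as np

def fcm(X, C, q=2.0, eps=1e-5, max_iter=200, seed=0):
    """Fuzzy c-means per Eqs. (6)-(8): X is (N, d); returns centers (C, d) and U (N, C)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), C))
    U /= U.sum(axis=1, keepdims=True)                   # initialize U^(0)
    for _ in range(max_iter):
        Uq = U ** q
        centers = (Uq.T @ X) / Uq.sum(axis=0)[:, None]  # Eq. (7)
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Eq. (8): u_ij = 1 / sum_k (d_ij / d_ik)^(2/(q-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (q - 1))).sum(axis=2)
        if np.linalg.norm(U_new - U) < eps:             # stop when ||U^(t+1) - U^(t)|| < eps
            return centers, U_new
        U = U_new
    return centers, U

centers, U = fcm(np.random.default_rng(1).random((500, 3)), C=2)
print(centers)   # cluster centers -> centers of the Gaussian membership functions
```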



3.5 Learning Using Gradient Descent

Learning starts by randomly generating the SDS parameters. To generate an accurate SDS model, we carefully trained the network parameters. All parameters of the SDS were subjected to the learning procedure, utilizing the gradient descent algorithm. These parameters are the membership functions of the linguistic values in the network's second layer, as well as the parameters of the fourth and fifth layers. The use of the cross-validation technique for splitting the data into training and testing sets is also part of the SDS design. The adjustment of parameter values is part of the training process. Gradient learning with an adaptive learning rate was used to update the parameters in this investigation; the adaptive learning rate secures the network's convergence and speeds up its learning. Furthermore, momentum is used to accelerate the learning process. The network output error is calculated as follows:

$$E = \frac{1}{2}\sum_{k=1}^{n}\left(u_k^d - u_k\right)^2 \qquad (9)$$

Here, n denotes the number of SDS output signals, and $u_k^d$ and $u_k$ (k = 1, …, n) are the desired and current output values, respectively. The parameters $w_{jk}$, $a_{ij}$, $b_j$ (i = 1, …, m, j = 1, …, r, k = 1, …, n) in the SDS consequent part and the membership function parameters $c_{ij}$ and $\sigma_{ij}$ (i = 1, …, m, j = 1, …, r) in the SDS premise part are constantly updated via:

$$w_{jk}(t+1) = w_{jk}(t) - \gamma \frac{\partial E}{\partial w_{jk}} + \lambda\left(w_{jk}(t) - w_{jk}(t-1)\right);$$
$$a_{ij}(t+1) = a_{ij}(t) - \gamma \frac{\partial E}{\partial a_{ij}} + \lambda\left(a_{ij}(t) - a_{ij}(t-1)\right);$$
$$b_{j}(t+1) = b_{j}(t) - \gamma \frac{\partial E}{\partial b_{j}} + \lambda\left(b_{j}(t) - b_{j}(t-1)\right) \qquad (10)$$

$$c_{ij}(t+1) = c_{ij}(t) - \gamma \frac{\partial E}{\partial c_{ij}} + \lambda\left(c_{ij}(t) - c_{ij}(t-1)\right);$$
$$\sigma_{ij}(t+1) = \sigma_{ij}(t) - \gamma \frac{\partial E}{\partial \sigma_{ij}} + \lambda\left(\sigma_{ij}(t) - \sigma_{ij}(t-1)\right) \qquad (11)$$

$$i = 1, \ldots, m; \ j = 1, \ldots, r; \ k = 1, \ldots, n$$

Here, $\gamma$ denotes the learning rate, $\lambda$ the momentum, m the number of SDS input signals (input neurons), r the number of fuzzy rules, and n the number of output neurons. The derivatives in (10) are derived as:

$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial E}{\partial u_k}\frac{\partial u_k}{\partial w_{jk}} = \left(u_k(t) - u_k^d(t)\right) \cdot y_j \Big/ \sum_{j=1}^{r}\mu_j,$$
$$\frac{\partial E}{\partial a_{ij}} = \sum_k \frac{\partial E}{\partial u_k}\frac{\partial u_k}{\partial y_j}\frac{\partial y_j}{\partial y_{1j}}\frac{\partial y_{1j}}{\partial a_{ij}} = \sum_k \left(u_k(t) - u_k^d(t)\right) \cdot w_{jk}\,\mu_j\, x_i \Big/ \sum_{j=1}^{r}\mu_j,$$
$$\frac{\partial E}{\partial b_{j}} = \sum_k \frac{\partial E}{\partial u_k}\frac{\partial u_k}{\partial y_j}\frac{\partial y_j}{\partial y_{1j}}\frac{\partial y_{1j}}{\partial b_{j}} = \sum_k \left(u_k(t) - u_k^d(t)\right) \cdot w_{jk}\,\mu_j \Big/ \sum_{j=1}^{r}\mu_j \qquad (12)$$

where i = 1, …, m, j = 1, …, r, k = 1, …, n.


The derivatives in (11) are computed by:

\frac{\partial E}{\partial c_{ij}} = \sum_k \frac{\partial E}{\partial u_k} \frac{\partial u_k}{\partial \mu_j} \frac{\partial \mu_j}{\partial c_{ij}}    (13)

\frac{\partial E}{\partial \sigma_{ij}} = \sum_k \frac{\partial E}{\partial u_k} \frac{\partial u_k}{\partial \mu_j} \frac{\partial \mu_j}{\partial \sigma_{ij}}    (14)

Here i = 1, …, m, j = 1, …, r, k = 1, …, n, and

\frac{\partial E}{\partial u_k} = u_k(t) - u_k^d(t); \quad \frac{\partial u_k}{\partial \mu_j} = \frac{y_j - u_k}{\sum_{j=1}^{r} \mu_j}    (15)

\frac{\partial \mu_j(x_i)}{\partial c_{ij}} = \mu_j(x_i) \frac{2 (x_i - c_{ij})}{\sigma_{ij}^2}; \quad \frac{\partial \mu_j(x_i)}{\partial \sigma_{ij}} = \mu_j(x_i) \frac{2 (x_i - c_{ij})^2}{\sigma_{ij}^3}    (16)

Using Eqs. (12)–(16), the derivatives in (10) and (11) are computed and the SDS parameters are updated.
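The update rule of Eqs. (10)–(11) amounts to gradient descent with a momentum term. A minimal sketch, assuming a gradient already computed from Eqs. (12)–(16):

```python
import numpy as np

def momentum_update(w, w_prev, grad, lr=0.01, momentum=0.9):
    """One step of w(t+1) = w(t) - lr * dE/dw + momentum * (w(t) - w(t-1))."""
    w_next = w - lr * grad + momentum * (w - w_prev)
    # Return the updated value and the value to use as w(t-1) on the next step
    return w_next, w
```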

4 Simulation Studies

The SDS algorithms described above are used for skin identification. For this purpose, the dataset characterizing skin was taken from the UCI machine learning repository. The dataset contains values for three input parameters and comprises 245057 data items. The main problem is the accurate identification of skin, i.e., separating skin data from non-skin data. A fragment of the dataset is given in Table 1.

During simulation, the input data is normalized and scaled into the interval 0–1, as in [60–62]. Normalizing the input data enables faster input–output training and reduces training time. After normalization, the data is used as the input signal of the SDS classifier. A minimal normalization sketch is given below.

Because the skin dataset is extremely large, training the system takes considerable computational time. To decrease the training time, important features were extracted from the input data using a clustering technique, and the SDS is learned with these features. To implement classification, the input dataset is sorted according to the output classes and data subsets are obtained for each class. Clustering is then performed individually for each subset, and the resulting datasets are combined to obtain the final dataset. As a result of the clustering operation, 245 data samples were obtained; the number of data samples was thus reduced approximately 1000 times. The dataset obtained from the feature extraction process is used for the design of the SDS; it includes three input and two output signals and is split into training, evaluation, and testing datasets.
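A minimal sketch of the 0–1 scaling step described above (column-wise min–max bounds are our assumption):

```python
import numpy as np

def scale_01(X):
    """Scale each column of X linearly into the interval [0, 1]."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn + 1e-12)  # small offset guards constant columns
```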

Table 1 Fragment from data set

R    G    B    Classes
74   85   123  1
73   84   122  1
72   83   121  1
70   81   119  1
70   81   119  1
69   80   118  1
70   81   119  1
70   81   119  1
170  90   149  2
184  100  158  2
170  87   142  2
176  94   147  2
160  78   131  2
182  100  153  2
182  100  153  2
200  118  170  2
185  105  158  2

Three inputs and two outputs are used to create the SDS. Various numbers of rules are employed to test the system during simulation. The membership functions of the antecedent part and the parameters of the consequent part are initially generated at random. The output signals are determined from the input signals and the SDS structure, and the deviation of the current outputs from the target outputs is computed. From these deviations the root mean square error (RMSE) is calculated. In this paper, the SDS performance is measured using RMSE and the recognition rate. RMSE is computed as:

RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y_i^d - y_i \right)^2}    (17)

where y_i^d and y_i are the target and current output signals and N denotes the number of samples. The RMSE values are used to monitor the learning of the SDS. The recognition rate is computed by:

Recog\_rate = \frac{\text{Number of items correctly classified}}{\text{Total number of items}} \cdot 100\%    (18)
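Eqs. (17) and (18) translate directly into code; the following sketch assumes NumPy arrays of targets and outputs (the names are ours):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, Eq. (17)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def recognition_rate(labels_true, labels_pred):
    """Percentage of correctly classified items, Eq. (18)."""
    return 100.0 * np.mean(labels_true == labels_pred)
```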


Fuzzy c-means clustering and gradient descent algorithms are employed for learning the parameters of the SDS. First, the input is fed to the fuzzy c-means algorithm to select the centers of the membership functions between the input and hidden layers, using the equations in Sect. 3.4. The widths of the membership functions are computed from the distances between their centers. After clustering, the gradient descent approach is used to learn the parameters of the SDS consequent part. Training continues for 1000 epochs, with five, eight, and sixteen rules used in the learning process. Figure 2 shows the plot of RMSE for 16 rules. With 16 rules, the training error is 0.213228 and the evaluation error is 0.217820. After training, the SDS is tested: all 245057 data items were fed to the SDS input layer, and the testing RMSE at the output layer was 0.468475. The recognition accuracy of the system was 96.84%. These results confirm the suitability of the SDS structure for skin identification.

Table 2 presents the simulation results of the SDS for different numbers of rules. The results demonstrate that as the number of rules increases, the training, evaluation, and testing errors decrease while the recognition rate rises. Clustering techniques were incorporated into the system design to shorten learning time and improve SDS performance.

In the second stage, the performance of the implemented SDS is compared with that of Neural Network (NN) and Fuzzy Decision Tree (FDT) based identification systems. The RMSE and recognition rates of the NN and FDT systems were computed using the same procedure as for the SDS, and these two indices were used to compare the performances of NN, FDT, and SDS.

Fig. 2 Plot of RMSE of SDS

Table 2 Simulation results of SDS

Number of rules (hidden neurons)  Epochs  RMSE (Train)  RMSE (Evaluation)  RMSE (Test)  Recognition rate (Testing)
5                                 1000    0.3597        0.3661             0.7308       94.3156
8                                 1000    0.3016        0.3057             0.5479       95.6324
16                                1000    0.2132        0.2178             0.4684       96.8435


Table 3 Comparative results of different models

Methods                   Number of hidden neurons  Epochs  RMSE (Train)  RMSE (Evaluation)  RMSE (Test)  Recognition rate
Fuzzy decision tree [18]  –                         –       –             –                  –            94.10
Neural networks           16                        1000    0.43          0.74               0.73         81.24
SDS                       16                        1000    0.21          0.21               0.46         96.84

Table 3 presents the comparative results. As depicted in the table, the SDS outperformed the other two models.

5 Conclusions

In this paper, we designed and implemented a skin detection system (SDS) using a machine learning approach based on a fuzzy neural network to accurately distinguish skin from non-skin. Due to the large amount of data in the domain, we applied a feature extraction technique that substantially reduced the volume of the dataset, and the resulting dataset was used for the design of the model. Fuzzy c-means clustering and gradient descent algorithms were applied in the design phase, and learning was implemented using the 10-fold cross-validation technique. The explored algorithms considerably decrease the training time and enhance system performance. After the design phase, the SDS was trained and tested on the selected domain. The comparative results demonstrate the suitability of the SDS for further exploration. Future work will target the deployment of the implemented SDS in hospital dermatology departments.

References 1. Iraji, A., Saber, M., & Tosinia, A. (2012). Skin color segmentation in YCBCR color space with adaptive fuzzy neural network (Anfis). International Journal of Image, Graphics and Signal Processing, 4(4), 35. 2. Akshay, B., Srivastava, S., & Agarwal, A. (2010). Face detection using fuzzy logic and skin color segmentation in images. In 3rd International Conference on Emerging Trends in Engineering and Technology (ICETET). IEEE. 3. Shamir, L. (2006). Human perception-based color segmentation using fuzzy logic. In International Conference on Image Processing, Computer Vision and Pattern Recognition, IPCV (Vol. 2, pp. 496–505).


4. Borji, A., & Hamidi, M. (2007). Evolving a fuzzy rule base for image segmentation. Proceedings of World Academy of Science, Engineering and Technology, 22, 4–9. 5. Brown, D., Craw, I., & Lewthwaite, J. (2001). A SOM based approach to skin detection with application in real time systems. In BMVC01. 6. Caetano, T., & Barone, D. (2001). A probabilistic model for the human skin-color. In ICIAP01 (pp. 279–283). 7. Bergasa, L., Mazo, M., Gardel, A., Sotelo, M., & Boquete, L. (2000). Unsupervised and adaptive Gaussian skin-color model. Image and Vision Computing, 18(12), 987–1003. 8. Sebe, N., Cohen, T., Huang, T., & Gevers, T. (2004). Skin detection, a Bayesian network approach. In ICPR04. 9. Soriano, M., MartinKauppi, J., Huovinen, S., & Laaksonen, M. (2003). Adaptive skin color modeling using the skin locus for selecting training pixels. Pattern Recognition Letters, 36(3), 681–690. 10. Schwerdt, K., & Crowely, J. (2000). Robust face tracking using color. In AFGR00. 11. Stoerring, M., Koeka, T., Anderson, H., & Granum, E. (2003). Tracking regions of human skin through illumination changes. Pattern Recognition Letters, 24, 11. 12. Iraji, M., & Saber, A. Y. (2011). Skin color segmentation in fuzzy YCBCR color space with the Mamdani inference. American Journal of Scientific Research, 7, 131–137. 13. Kang, S., Kwon, O., & Chien, S. (2011). Preferred skin color reproduction based on Ydependent Gaussian modeling of skin color. Journal of Imaging Science and Technology, 55, 040504. 14. Ng, P., & Chi-Man, P. (2011). Skin color segmentation by texture feature extraction and k-mean clustering. In IEEE Third International Conference on Computational Intelligence, Communication Systems and Networks. 15. Naji, S., Zainuddin, R., & Jalab, H. (2012). Skin segmentation based on multi pixel color clustering models. Elsevier Inc. 16. Kakumanu, P., Makrogiannis, S., & Bourbakis, N. (2007). A survey of skin-color modeling and detection methods. Pattern Recognition Letters, 40, 1106–1122. 17. Juang, C., Shih-Hsuan, C., & Shen-Jie, S. (2007). Fuzzy system learned through fuzzy clustering and support vector machine for human skin color segmentation. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE, 37(6), 1077–1087. 18. Rajen, B. B., Sharma, G., Dhall, A., & Chaudhury, S. (2009). Efficient skin region segmentation using low complexity fuzzy decision tree model. In IEEE-INDICON (pp. 1–4). 19. Dhall, A., Sharma, G., & Bhatt, R. (2009). Adaptive digital makeup. In Proceedings of International Symposium on Visual Computing, ISVC (pp. 728–736). 20. Sigal, L., Sclaroff, S., & Athitsos, V. (2000). Estimation and prediction of evolving color distributions for skin segmentation under varying. In IEEE Conference on Computer Vision and Pattern Recognition. 21. Phung, S., Bouzerdoum, A., & Chai, D. (2003). Skin segmentation using color and edge information. IEEE. 22. Ruiz-del-Solar, J., & Verschae, R. (2003). Robust skin segmentation using neighborhood information. IEEE. 23. Han, J., Award, G., Sutherland, A., & Wu, H. (2006). Automatic skin segmentation for gesture recognition combining region and support vector machine active learning. In Proceedings 7th International Conference on Automatic Face and Gesture Recognition. IEEE. 24. Abdullah-Al-Wadud, M., Shoyaib, M., & Chae, O. (2008). A skin detection approach based on color distance map. In EURASIP, Advances in Signal Processing. 25. Lang, L., & Weiwei, G. (2009). The face detection algorithm combined skin color segmentation and PCA. IEEE. 26. 
Bhoyar, K., & Kakde, O. (2010). Skin color detection model using neural networks and its performance evaluation. JCS, 6(9), 963–968. 27. Chowdhury, A., & Tripathy, S. S. (2014). Human skin detection and face recognition using fuzzy logic and eigenface. In International Conference on Green Computing Communication and Electrical Engineering (ICGCCEE), Coimbatore (pp. 1–4).


28. Ali, S., Mohd, A. M., & Tey, Y. C. (2009). Fuzzy Mamdani inference system skin detection. In Ninth International Conference on Hybrid Intelligent Systems. IEEE. https://doi.org/10.1109/ HIS.2009.224 29. Afia, N., Laiq, H., Alamzeb, M. Z., & Rehanullah, K. (2013). Fuzzy based skin detection and segmentation. International Journal of Computer Science Issues, 10(3), No 1. 30. Aureli, S. F., Rodrigo, V., & Aitor, O. (2007). Fuzzy fusion for skin detection. Fuzzy Sets and Systems, 158(3), 325–336. 31. Pujol, F. A., Espí, R., Mora, H., & Sánchez, J. L. (2008). A fuzzy approach to skin color detection. In A. Gelbukh & E.F. Morales (Eds.), MICAI: advances in artificial intelligence. MICAI 2008, Lecture Notes in Computer Science (Vol. 5317). Springer. 32. Helwan, A., Ozsahin, D. U., Abiyev, R., & John, B. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications (IJACSA), 8, 173–178. 33. Kamil, D., & Idoko, J. B. (2017). Automated classification of fruits: pawpaw fruit as a case study. In International Conference on Man–Machine Interactions (pp. 365–374). Springer. 34. Idoko, J. B., & Kamil, D. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of Conferences (Vol. 9, p. 03002). 35. Helwan, A., John, B. I., & Rahib, H. A. (2017). Machine learning techniques for classification of breast tissue. In 9th International Conference on Theory and Application of Soft Computing, Computing with Words and Perception, ICSCCW 2017. Procedia Computer Science (Vol. 120, pp. 402–410). 36. Rahib, H. A., & Kaynak, O. (2008). Fuzzy wavelet neural networks for identification and control of dynamic plants—A novel structure and a comparative study. IEEE Transactions on Industrial Electronics, 55(8), 3133–3140. 37. Rahib, H. A. (2011). Fuzzy wavelet neural network based on fuzzy clustering and gradient techniques for time series prediction. Neural Computing & Applications, 20(2), 249–259. 38. Rahib, H. A., Kaynak, O., Alshanableh, T., & Mamedov, F. (2011). A type-2 neuro-fuzzy system based on clustering and gradient techniques applied to system identification and channel equalization. Applied Soft Computing Journal, 11(1), 396–1406. 39. Rahib, H. A. (2014). Credit rating using type-2 fuzzy neural networks. Mathematical Problems in Engineering, 2014. 40. Rahib, H. A., Rafik, A., Okyay, K., Burhan, T. I., & Karl, W. B. (2015). Fusion of computational intelligence techniques and their practical applications. Computational Intelligence and Neuroscience, 2015. 41. Quang, H. D., & Jeng-Fung, C. (2013). A neuro-fuzzy approach in the classification of students’ academic performance. Computational Intelligence and Neuroscience, 2013. 42. Song, P., Serdar, I., Kevin, W., & Tipu, Z. A. (2012) Parkinson’s disease tremor classification— A comparison between support vector machines and neural networks. Expert Systems with Applications, 39(12), 10764–10771. 43. Kasabov, N. K. (2002). DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series. IEEE Transactions on Systems, Fuzzy Systems, 10(2), 144154. 44. Jyun-Ting, L., Yung-Chung, C., & Cheng-Yi, H. (2015). The optimization of chiller loading by adaptive neuro-fuzzy inference system and genetic algorithms. Mathematical Problems in Engineering, 2015. 45. Haydee, M., & Junzo, W. (2016). Gaussian-PSO with fuzzy reasoning based on structural learning for training a neural network. Neurocomputing, 172(8), 405–412. 46. 
Rahib, H. A., & Sanan, A. (2016). Diagnosing Parkinson’s diseases using fuzzy neural system. Computational and Mathematical Methods in Medicine, 2016. Article ID 1267919. 47. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089. 48. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2). 49. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.


50. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97. 51. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252. 52. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Re¸sato˘glu, R., & Alaneme, G. (2022, May). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers-Bridge Engineering (pp. 1–8). Thomas Telford Ltd. 53. Abiyev, R., Idoko, J. B., & Arslan, M. (2020, June). Reconstruction of convolutional neural network for sign language recognition. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1–5). IEEE. 54. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690. 55. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing—ICAFS-2018 (Vol. 13, pp. 239–248). Springer International Publishing. 56. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation: Proceedings of the INFUS 2021 Conference, held August 24–26, 2021 (Vol. 2, pp. 273–280). Springer International Publishing. 57. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020, August). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International Conference on Transportation and Development 2020 (pp. 194–203). American Society of Civil Engineers. 58. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022, November). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE. 59. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, p. 9783). 60. Idoko, J. B., Rahib, H. A., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240. 61. Idoko, J. B., Rahib, A., Mohammad, K. S. M., & Hamit, A. (2018). Integrated artificial intelligence algorithm for skin detection. In International Conference Applied Mathematics, Computational Science and Systems Engineering (AMCSE 2017). ITM Web of Conferences. 62. Mohammad, K. S. M., Rahib, A., & Idoko, J. B. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications (IJACSA), 8, 25–31.

Machine Learning Based Cardless ATM Using Voice Recognition Techniques John Bush Idoko, Mansur Mohammed, and Abubakar Usman Mohammed

Abstract Automatic Teller Machines (ATMs) are machines used for banking activities and personal and business transactions. They can be used without the support of banking staff. Due to their accessibility and user-friendliness, ATMs have remained popular among the general public. These days, ATMs can be found in an array of places, such as colleges, markets, service stations, hotels, businesses, restaurants, and entertainment facilities, all of which frequently receive significant customer traffic. Present ATMs require customers to authenticate their identities with a card to access the ATM's services, which brings various problems, such as card expiry, maintenance expenses, illegal access to user accounts, waiting periods before replacement of cards, card destruction, and more. In this research, we propose a voice recognition ATM system using machine learning algorithms. Whereas some earlier research used a linear neural network whose weights were trained with a method other than backpropagation, we use a hybrid method: the mel-frequency cepstral coefficient and a random forest.

Keywords Machine learning · Voice recognition · Cepstral analysis · ATM

1 Introduction

Voice recognition has advanced significantly over time and has long been regarded as one of the most promising technologies in the field of audio signal processing, and many audio algorithms have been proposed [1]. Researchers have historically utilized several categorization and comparison methods to address speech recognition and voice comparison.


Several studies have suggested voice recognition systems as a defense against fraud for a number of years. One line of work proposed methods for identifying speech samples that had been manipulated to obfuscate identity: even when non-electronic deception techniques are used, such as covering the microphone with a cloth or speaking with a different intonation, it is possible to identify the speaker [2–5]. A look into the history of voice recognition reveals that it has been the subject of many research papers. Some methods are based on audio fingerprints. A fingerprint is a content-based signature that condenses an audio recording. This technology has attracted researchers because it handles audio data without considering its format [3]. One such contribution describes a full audio fingerprinting technique for checking audio transmissions; to represent the audio sources and increase the value of the fingerprint descriptors and observers, the system uses a Hidden Markov Model (HMM). Extracting audio features and analyzing signals presents several challenges: noise distortion, dimensionality, discrimination, and algorithm efficiency [5–7]. Nevertheless, accuracy can be improved by using larger datasets, and machine learning voice recognition systems can handle these challenges because of their accuracy when trained with large amounts of data.

1.1 Machine Learning

Over the past couple of decades, machine learning has substantially expanded in power. The improvements are the result of larger datasets, more powerful computers, and algorithms that can forecast outcomes in nonlinear dynamic problem contexts, like voice recognition [8]. Machine learning refers to the ability of a computer program or system to evolve and improve over time. Its most fundamental form uses algorithms to find patterns that are then applied going forward [9–28]: a computer can self-improve over time and resemble human intellect, anticipating the future based on what it has observed and discovered in the past.

Supervised learning: Supervised learning is a rather active process in which humans supply the machine with both input and output. The computer uses algorithms to determine the best route from point A to point B; giving the machine the intended result teaches it how to reach the same result later on. It learns the relationship between the input variables and the corresponding output from its observations.

Unsupervised learning: Unsupervised learning is the process of providing the computer with only the input and letting it create the output based on any patterns it finds. This type of machine learning technique typically has more failures because the software is not given the right response.


Machines can, however, learn based solely on what they observe when using unsupervised learning. Because there is less human interaction, unsupervised learning algorithms are less sophisticated; machines are given data science tasks in unsupervised learning [29, 30].

Reinforcement learning: In reinforcement learning, the program learns from the rewards it receives. Reinforcement learning, sometimes referred to as semi-supervised learning, is the process by which a computer is reminded of what it is doing well so that it keeps performing the same kind of work. It enables neural networks and machine learning algorithms to recognize when they have partially solved a problem correctly, motivating them to try the same pattern or sequence again. The major objective of reinforcement learning is to help the computer program or machine understand the right course of action so that it can repeat it in the future.

1.2 The Purpose of Using Voice Recognition in an ATM and Its Advantages

Voice recognition is useful in many different domains, including homeland security, criminal identification, human–computer interfaces, privacy protection, and more. The speech recognition feature limits who can access the account and prevents fraud attempts such as:

Eavesdropping: A user's PIN can be spied on, and the account easily accessed if the card is obtained fraudulently, which may have serious repercussions.

Spoofing: A hacker may pose as the approved site when a user enters their PIN during the transaction process, prompting them to enter their PIN again as a result of a supposed system fault. When the user follows the instructions, the hacker retains the information and uses it for nefarious purposes later. Because every transaction momentarily assigns a new password, this man-in-the-middle attack is rendered useless [31].

Brute-force attack: An attacker may attempt to decipher the static four-digit PIN by exhaustively trying combinations; voice-based authentication removes this fixed target.

2 Related Works

Two methods, FAST ICA (fast independent component analysis) and MMSE (minimum mean-square error), are examined and combined in [32]. FAST ICA separates the multimodal data (audio, noise, and image signals), while MMSE eliminates the white noise from the audio signal. In the first step, the three signals are produced, forming the multimodal data.


In [32], Tabassum Feroz noted that the separated data are interpreted using FAST ICA in order to obtain accurate SNR (signal-to-noise ratio) comparisons, and that once the mixed data have been separated, the MMSE approach is used to reduce noise by eliminating white noise. The experiment's findings indicate that MMSE is effective for both noise types.

Another advance is the use of technology to recognize non-electronically disguised voices [29]. A problem arises when a criminal is identified from voice samples that have been altered to conceal identity; the goal is to identify a speaker despite non-electronic disguise, such as covering the microphone with a cloth or speaking with a different intonation. In the experiment, the speaker identity was identified and confirmed using both spectrographic and aural methods [33]. In an experiment with ten participants, intraspeaker changes shed light on the kinds of acoustic features that can be used to recognize disguised voices: the nasality, pauses, and emotional intensity of the participants' voices enabled recognition. The approach presumes that it can differentiate between real and disguised voices.

In a unique contribution [3], Eloi Battle defined the audio fingerprint as a content-based compact signature containing data about an audio recording. The method caught the interest of many researchers since it monitors the recorded audio without taking the format into account. The contribution includes a full audio fingerprinting technique for checking audio transmissions; to represent the audio sources and increase the value of the fingerprint descriptors and observers, the system uses a Hidden Markov Model (HMM). Accuracy can be improved further by using larger HMM sets.

The research in [5] showed that one-dimensional signals and higher-dimensional manifolds exhibit geometric structure that may help with the classification and analysis of vast sets of signals, and developed an algorithm that extracts geometric features and maps the resulting geometric object to the domain. Owing to the coronavirus problem and other speech issues, including hidden accent differences, lung sounds were used; this can be extended further to handle voice distortion in greater spatial dimensions.

Moreover, [8] implemented a cutting-edge strategy for song recognition. The authors provide a way of generating fingerprints from song samples utilizing Welch's spectral density method and a Mel filter bank; the generated fingerprints are not affected by background noise or audio compression, so a basic song recognizer was created using a Mel-scale weighted short-time PSD.

The music retrieval work by Zafar Ali in [33] elaborated on how various unstructured data, such as images and video, are collected and analyzed, whereas audio data like speech, music, and sounds have received little attention. Music information retrieval (MIR) caught people's attention, and the authors made use of example-based queries; query by example is the ideal requirement for an MIR system, but it encountered computational and complexity problems.

The approach proposed in [6] created a distinctive fingerprint for audio files using the audio fingerprinting technique.


The fingerprints are created by taking the mean of the spectra after extracting an MFCC spectrum, which is then converted to a binary image and fed into an LSTM for categorization. Furthermore, MFCC and the chromagram can serve as models for each other to generate categorized speech fingerprints.

In [31], Shuyu argues that the present quantity of music data available online has considerably accelerated the creation of new methods for retrieving the data; the majority of the data is provided as text labels and symbolic knowledge rather than audio material. These and other factors motivated the study, whose aim is to improve the effectiveness and efficiency of deformation-robust audio fingerprinting technology. The study compares the Mel-frequency Cepstral Coefficient (MFCC), Chroma Spectrum, Constant Q Spectrum, and Product Spectrum among other spectral features, and the robustness of the audio fingerprints was enhanced by using a Gaussian Mixture Model (GMM). The final comparison included the developed method and Audio DNA, which is comparable except that the similarity metric and acoustic features are very different.

In the study [30], Shefali Arora analyses the impact of noise on the quality of images and signals, examining the role of various filters in removing noise from fingerprints; the proposed framework helps achieve accuracy without tradeoffs. The study conducted by Pei-chun Lin in [7] proposes speech recognition (SR) with a privacy identification information (PII) system: Google's Artificial Intelligence-Yourself (AIY) kit was used with SR-PII added to it, cloud response and speaker response times were recorded, and it was verified that the SR-PII system can secure private information.

3 Proposed System

The proposed system is divided into three phases: 1. pre-processing, 2. feature extraction, and 3. classification. The system is based on voice recognition using machine learning algorithms and consists of a voice dataset plus the preprocessing, feature extraction, and classification phases. The explored voice dataset was downloaded from the kaggle.com website in the WAV file format. The architecture of the proposed system is presented in Fig. 1.

Pre-processing: The pre-processing phase organizes the data, making the recognition task easier. It involves the following:

a. Noise removal

Windowing is a method of analyzing long sound signals by selecting a sufficiently representative segment. This process is used to remove noise in a signal polluted by noise spread across a wide frequency spectrum [6]:

y(n) = x(n) w(n), \quad 0 \le n \le N - 1    (1)

Fig. 1 Architecture of the proposed system

where y(n) is the windowed output signal, x(n) is the input signal to which the window function is applied, and w(n) is the Hamming window.

b. Spectral smoothing of the speech signal (pre-emphasis filter)

Speech signal processing needs a pre-emphasis filter. The filter is based on Eq. (2) [5], which describes the time-domain input/output relationship:

y(n) = x(n) - a x(n - 1)    (2)

where a is the pre-emphasis filter constant.

c. Signal domain transform based on the FFT

The Fourier series can be used to express a function of finite duration. The Fourier transform converts a time series of a bounded time-domain signal into a frequency spectrum; this process is used to convert each frame from the time domain to the frequency domain, using the equation below [29]:

x_n = \sum_{k=0}^{N-1} x_k e^{-2\pi jkn/N}    (3)

where x_k is the signal of a frame and x_n is the nth frequency component produced by the Fourier transform. A sketch of this pre-processing chain is given below.
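The pre-processing chain of Eqs. (1)–(3) can be sketched as follows; the frame length, hop size, and pre-emphasis constant a = 0.97 are common illustrative choices, not values given in the text:

```python
import numpy as np

def preprocess(signal, a=0.97, frame_len=400, hop=160):
    # Eq. (2): pre-emphasis filter y(n) = x(n) - a*x(n-1)
    emphasized = np.append(signal[0], signal[1:] - a * signal[:-1])
    window = np.hamming(frame_len)  # w(n) in Eq. (1)
    spectra = []
    for start in range(0, len(emphasized) - frame_len + 1, hop):
        frame = emphasized[start:start + frame_len] * window  # Eq. (1)
        spectra.append(np.abs(np.fft.rfft(frame)))            # Eq. (3)
    return np.array(spectra)
```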


Feature extraction: Feature extraction is the process of compiling a number of feature vectors into a single, compact representation of a given speech signal. In this research, the explored feature extraction method involves:

a. Mel-Frequency Cepstral Coefficient (MFCC)

MFCC is a method modeled on human hearing: it focuses on the frequency differences that the human ear can detect.

b. Vector Quantization

Quantization is a stage in the digital representation of signals for computer processing. Here it is used to convert the matrix created by MFCC into a one-row matrix.
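A minimal sketch of MFCC extraction followed by the collapse to a one-row feature vector, assuming the librosa library is available; averaging the coefficients over time is our simplification of the vector quantization step:

```python
import librosa

def voice_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)              # read the WAV file
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                         # one row per recording
```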

3.1 System Classifier

Machine learning classifiers, together with the feature extraction techniques, are critical to the overall effectiveness of the voice recognition model. In this research, we considered two classifiers: random forest and neural network.

Random Forest: Random forest is a supervised classification method based on decision trees. Each tree in the ensemble is built by randomly selecting a small group of features at each node to be split on, then finding the best split over these features in the training set. Each tree casts a vote for a given feature vector, and the forest selects the class with the most votes. Random forests are straightforward to build and use for prediction, perform swiftly on huge datasets, estimate missing data easily, and maintain accuracy even when a sizable portion of the data is missing [31, 32].

Neural Network: A Neural Network (NN) is a machine learning technique that operates like the human brain [34]. Features and advantages of neural networks include adaptive learning, parallel operation, recognition/classification, and fast processing. A minimal comparison of the two classifiers is sketched below.
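A hedged sketch of the classifier comparison using scikit-learn stand-ins (the hyperparameters are illustrative assumptions, not the authors' settings):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y):
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    nn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    for name, clf in [("Random forest", rf), ("Neural network", nn)]:
        scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```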

3.2 Simulation and Result

The voice recognition system is designed using the Orange application. Orange is an open-source application that can be used for machine learning, data mining, analysis, and data visualization tasks [35]. It was initially constructed using the Python, Cython, C++, and C programming languages [36]. Figure 2 depicts the operation of the Orange application on the voice WAV files and the proposed classifiers; the resulting performances are displayed in Table 1.


Fig. 2 The orange application structure

Table 1 Test score

Methods         AUC   Accuracy  F1 score  Precision  Recall
Random forest   0.98  0.93      0.93      0.93       0.93
Neural network  0.99  0.96      0.95      0.95       0.95

Table 1 shows the performance of the random forest and neural network. The random forest classifier recorded an AUC of 98% and accuracy, F1 score, precision, and recall of 93%, while the neural network classifier recorded an AUC of 99%, accuracy of 96%, and F1 score, precision, and recall of 95%. On all the featured metrics, the neural network classifier recorded higher values than the random forest classifier and is hence recommended for voice recognition ATMs.

4 Conclusion

This research presents a speech recognition system for ATMs. The explored WAV dataset was acquired from the Kaggle repository. To achieve high performance, we processed the data and extracted the relevant features using the mel-frequency cepstral coefficient.


The extracted features were then fed into the employed classifiers (random forest and neural network) to perform voice recognition. The evaluation results show that the NN classifier achieved the highest level of accuracy.

References 1. Arun Kumar Arigela, D. V. (2015). An intelligence decision making and analysis by machine learning methods and genetic algorithms. International Journal of Technology and Engineering Science, 5170–5179. 2. Ashraf Tahseen Ali, H. S. (2021). Voice recognition system using machine learning techniques. Materials Today:Proceedings. 3. Eloi Battle, J. M. (2003). System analysis and performance tuning for broadcast audio fingerprinting. In 6th international conference on digital audio effects (pp. 1–3). 4. Liu, F., & Ng, G. N. (2006). Artificial ventilation modelling using neuro-fuzzy hybrid sysmem. International Joint Conference on Neural Networks. 5. Jeremy Levy, A. N. (2022). Classification of audio signals using spectrogram surfaces and extrinsic distortion measures. EURASIP Journal on Advances in Signal Processing. 6. K. Banuroopa, D. S. (2021). MFCC based hybrid fingerprinting method for audio classification. International Journal of Nonlinear Analysis and Applications 2125–2136. 7. Pei-chun Lin, B. Y. (2022). Building a speech recognition system with privacy identification Information based on Google voice for social robots. The Journal of supercomputing, 15060– 15088. 8. Salvatore Serrano, M. A. (2022). A new fingerprint defination for effective songs recognition. Pattern Recognition Letters, 135–141. 9. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089. 10. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2). 11. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia computer science, 120, 402–410. 12. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907. 13. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus J Med Sci, 3(2), 90–97. 14. Ma’aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12). 15. Bush, I. J., Abiyev, R., Ma’aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM web of conferences (vol. 16, p. 02004). EDP Sciences. 16. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252. 17. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Re¸sato˘glu, R., & Alaneme, G. (2022). Application of deep learning in structural health management of concrete structures. In Proceedings of the institution of civil engineers-bridge engineering (pp. 1–8). Thomas Telford Ltd. 18. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622 19. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.


20. Dimililer, K., & Bush, I. J. (2017). Automated classification of fruits: pawpaw fruit as a case study. In Man-machine interactions 5: 5th international conference on man-machine interactions, ICMMI 2017 held at Kraków, Poland, October 3–6, 2017 (pp. 365–374). Springer International Publishing. 21. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM web of conferences (vol. 9, p. 03002). EDP Sciences. 22. Abiyev, R., Idoko, J. B., & Arslan, M. (2020). Reconstruction of convolutional neural network for sign language recognition. In 2020 international conference on electrical, communication, and computer engineering (ICECCE) (pp. 1–5). IEEE. 23. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690. 24. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th international conference on theory and application of fuzzy systems and soft computing—ICAFS-2018 13 (pp. 239–248). Springer International Publishing. 25. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and fuzzy techniques for emerging conditions and digital transformation: Proceedings of the INFUS 2021 conference, held August 24–26, 2021 (vol. 2, pp. 273–280). Springer International Publishing. 26. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International conference on transportation and development 2020 (pp. 194–203). American Society of Civil Engineers. 27. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F.A., & Raji, A. R. (2022). IoT based motion detector using Raspberry Pi Gadgetry. In 2022 5th information technology for education and development (ITED) (pp. 1–5). IEEE. 28. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th international conference on theory and application of fuzzy systems and soft computing (ICAFS-2018) (vol. 10, pp. 978– 983). 29. Sanjeev Kumar, D. K. ( 2022). Acoustic featues extraction of non-electronic disguised voice for speaker identification. Evergreen Joint Journal of Novel Carbon Resource Sciences and Green Asia Strategy, 853–860. 30. Shefali Arora, R. M. (2022). An evaluation of denoising techniques and classification of biometric images based on deep learning. Innovation in Multimedia Information Processing and Retrieval. 31. Shuyu, F. (2007). Efficient and robust audio fingerprinting. Wuhan. 32. Tabassum Feroz, U. N. (2021). Suppression of white noise from the mixture of speech and image for quality enhancement. Journal of Mechanics of Contiua and Mathematical Sciences, 67–78. 33. Zafar Ali Khan, S. S. (2022). Modern technological development in music information retrieval. International Journal of Research Publication and Reviews, 2194–2197. 34. Kukreja, H. (2016). An introduction to artificial neural network (vol. 1, no. 5, hlm. 5). 35. Kodati, S., & Vivekanandam, D. R. (2018). Analysis of heart disease using in data mining tools orange and weka (hlm. 7). 36. Pattnaik, P. K., Swetapadma, A., & Sarraf, J. (Ed.) (2018). Expert system techniques in biomedical science practice. IGI Global.

Automated Classification of Cardiac Arrhythmias John Bush Idoko

Abstract Early detection of cardiac arrhythmia is of great importance for proper treatment, both for patients and for the cardiologist. Detection is achieved through an extensive analysis of the electrocardiogram (ECG) signals, a feature extraction method, and machine learning techniques; the extracted features are used for the classification of the different types of arrhythmias. This research presents a sequential feature selection (SFS) method, after which the selected features are fed into three classifiers: the Fuzzy Neural Network (FNN), Naïve Bayes (NB), and Radial Basis Function Network (RBFN). The results demonstrate that the FNN attained the highest accuracy, 99%, compared with the other two machine learning techniques, NB and RBFN.

Keywords Arrhythmia · Electrocardiogram · Sequential feature selection · Fuzzy neural network · Naïve Bayes · Radial basis function network

1 Introduction

An irregular single heartbeat (arrhythmic beat) or an irregular cluster of heartbeats (arrhythmic episode) is referred to as an arrhythmia; in other words, arrhythmias may be the root cause of erratic heartbeats. These cardiac rhythms can be quick or sluggish [1]. Respiratory sinus arrhythmia, a normal periodic change in heart rate that corresponds to breathing action, is one type of arrhythmia that a healthy heart may undergo with little repercussion. However, as shown in [2, 3], arrhythmias might point to a major issue that could cause a stroke or sudden cardiac death. Therefore, automatic arrhythmia identification and categorization is very useful in clinical cardiology, especially when done in real time. The classification is achieved through a thorough examination of the electrocardiogram (ECG) and its extracted features.


The heart's "natural pacemaker" is the sinoatrial (SA) node, which is situated at the top of the right chamber or atrium (RA) and is responsible for starting the electrical signal that drives the heartbeat. The signal travels via the atria, causing them to contract and pump blood to the lower chambers, the ventricles, and continues through the atrioventricular (AV) node. A pacemaker malfunction makes the heart beat at an unnatural rhythm, which has a harmful impact on blood flow throughout the body.

The electrocardiogram (ECG) is used to measure the electrical signals described above; each heartbeat appears as a series of electrical waves characterized by peaks and valleys. An electrocardiogram provides two main categories of information. First, by measuring time intervals on the ECG, one can calculate how long the electrical wave takes to cross the heart and, in turn, determine whether the electrical activity is regular, slow, fast, or irregular. Second, by measuring the amount of electrical activity flowing through the heart muscle, a pediatric cardiologist may be able to determine whether certain areas of the heart are too big or overworked.

An ECG signal has a frequency range of 0.05 to 100 Hz and a dynamic range of 1 to 10 mV. The five peaks and valleys that make up the ECG signal are identified by the letters P, Q, R, S, and T, in that order. Accurate and dependable detection of the T and P waves and the QRS complex is essential for an ECG analysis system to function properly. The P wave represents activation of the upper chambers (the atria), while the QRS complex and T wave represent the excitation of the lower chambers (the ventricles). The most important task in automatic ECG signal processing is QRS complex detection; once the QRS complex has been correctly identified, the ST segment, heart rate, and other factors are analyzed and used. Figure 1 depicts the different components of the ECG signal [1].

Fig. 1 ECG signal with different components


Machine learning approaches can uncover previously unknown regularities and trends in a heterogeneous dataset, with the aim that machines can assist in the typically arduous and error-prone process of learning from empirical data, as well as support people in explaining and codifying their knowledge. Generalization to fresh, unseen data is achieved by applying a wide range of machine learning algorithms to uncover correlations, rules, and patterns in datasets, as shown in [3]. The output of a machine learning algorithm describes the dataset morphology discovered from examples of the provided data; the system's descriptions of the knowledge it has learnt are condensed and admit a variety of representations.

This study is built on an experimental framework using three machine learning techniques, Fuzzy Neural Network (FNN), Naïve Bayes (NB), and Radial Basis Function Network (RBFN), to accurately classify the sixteen classes of arrhythmias depicted in the dataset. A detailed comparison between the techniques demonstrated that the FNN outperformed the other machine learning techniques by a wide margin. The remainder of the paper is organized as follows: Sect. 2 presents the literature review, Sect. 3 the dataset analysis, Sect. 4 the different classification models proposed to classify cardiac arrhythmias, and Sect. 5 the conclusion.

2 Review of Existing Works

Numerous scholars have tackled the issue of automatic arrhythmia detection and categorization [2–4]. While some researchers looked at discriminating between two different forms of arrhythmia, others developed strategies based on the detection and differentiation of a single type of arrhythmia from normal sinus rhythm [2]. The sequential hypothesis testing method [3], time-domain analysis [2], threshold crossing intervals [3], time–frequency analysis [2], artificial neural networks [3], a sequential detection algorithm [4], and fuzzy adaptive resonance theory mapping [3] are a few examples of approaches in this area. As illustrated in [2–4], arrhythmia detection and classification can also be carried out by identifying various heart rhythms and categorizing them into two or three categories of arrhythmia plus normal sinus rhythm. Wavelet analysis [1–3], multiway sequential hypothesis testing [4], non-linear dynamical modeling [2], multifractal analysis [1, 2], complexity measures [3], wavelet analysis combined with radial basis function neural networks [4], and artificial neural networks [4] are a few of the techniques used for this category. Only a few kinds of arrhythmia (atrial tachycardia, atrial fibrillation, ventricular tachycardia, and ventricular fibrillation) are addressed by the treatments mentioned above.

A further field of interest in electrocardiography is beat-by-beat ECG classification, in which each beat is assigned to one of several rhythm types [3, 4]. Methods of this kind classify more arrhythmic beat types and focus on single-beat classification rather than arrhythmic episode detection.


Fuzzy neural networks [3], artificial neural networks [4], a combination of expert approaches [2], time–frequency analysis combined with knowledge-based systems [4], and Hermite functions combined with self-organizing maps [3] are more commonly used in beat categorization techniques. Basically, the majority of studies on the interpretation of ECG signals focus on either identifying different heart rhythms or classifying beats individually and identifying certain arrhythmia types. These techniques use features derived from the ECG signal to detect and categorize arrhythmias. The procedure is inefficient and time-consuming for real-time analysis, and the presence of noise makes feature extraction challenging and occasionally impossible (e.g., for the P wave), so it is frequently not feasible. Since some forms of arrhythmia can still be identified and categorized in such circumstances, the RR-interval signal is an alternative basis for classification.

3 Dataset Analysis

A dataset from the UCI machine learning repository was used in this work [5]. The goal is to detect the presence and type of cardiac arrhythmia and classify each record into one of 16 groups. The dataset contains 452 instances (patient records) described by 279 feature values and identifying 16 classes: normal ECG, ischemic changes, old anterior myocardial infarction, old inferior myocardial infarction, sinus tachycardia, sinus bradycardia, ventricular premature contraction, supraventricular premature contraction, left bundle branch block, right bundle branch block, first-degree atrioventricular block, second-degree atrioventricular block, third-degree atrioventricular block, left ventricular hypertrophy, atrial fibrillation or flutter, and class 16 for the rest.

The first nine features are age, sex, height, weight, average QRS duration in milliseconds (msec), average time between P and Q wave onset (msec), average time between Q wave onset and T wave offset (msec), average time between two consecutive T waves (msec), and average time between two consecutive P waves (msec). Features f10–f14 represent, respectively, the vector angles in degrees in the frontal plane of QRS (f10), T (f11), P (f12), QRST (f13), and J (f14). Feature F15 is the heart rate, i.e., the number of beats per minute. The DI channel is used to measure the following features: F16 is the average width of the Q wave in msec, F17 the average width of the R wave in msec, F18 the average width of the S wave in msec, F19 the average width of the R' wave in msec, F20 the average width of the S' wave in msec, F21 the number of intrinsic deflections, F22 the existence of a diphasic R wave (boolean), F23 the existence of a notched R wave (boolean), and so on. The complete description of the dataset is available at [5].

Roughly 0.33% of the feature values are missing. Additionally, the class distribution is highly imbalanced, with no instances in classes 11, 12, or 13.


The distribution of values for each attribute over the input dataset of the models is used to probabilistically assign a plausible value to instances with missing values, as sketched below.
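A minimal sketch of the probabilistic imputation just described, assuming the UCI file is loaded with pandas and missing entries are marked with '?':

```python
import numpy as np
import pandas as pd

def impute_probabilistic(df, seed=0):
    """Replace each missing entry with a value drawn from the observed
    distribution of its attribute (missing values assumed marked '?')."""
    rng = np.random.default_rng(seed)
    df = df.replace("?", np.nan)
    for col in df.columns:
        missing = df[col].isna()
        if missing.any():
            observed = df.loc[~missing, col].to_numpy()
            df.loc[missing, col] = rng.choice(observed, size=int(missing.sum()))
    return df
```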

3.1 Feature Selection Method

Feature selection is frequently a crucial data processing step before applying a learning algorithm: removing unnecessary and irrelevant data helps the machine learning system perform better. In this study, sequential feature selection (SFS) was used to reduce the dimensionality of the arrhythmia dataset by building the model from only a portion of the measured characteristics (predictor variables). The selection criterion is fitted to various subsets while minimizing the prediction error; the enclosed algorithms search for a subset of predictors, subject to restrictions such as required or excluded features and the size of the subset, that best predicts the measured responses. Since the modeling objective is to find an influential subset, and the original units and meaning of the features are significant, feature selection is preferred to feature transformation. SFS was also chosen as the method of dimension reduction because the dataset contains categorical features for which numerical transformations are inadequate.

Sequential feature selection consists of two parts. The first is the criterion, an objective function to be minimized over all possible feature subsets; the misclassification rate and mean squared error are typical criteria for classification and regression models, respectively. The second is the sequential search algorithm, which adds or removes features from a candidate subset while assessing the criterion. Because it is often impractical to compare the criterion value at all 2^n subsets of an n-feature dataset exhaustively (depending on the size of n and the cost of objective calls), sequential searches proceed in only one direction, always either enlarging or diminishing the candidate set. There are basically two variants of sequential feature selection:

• Sequential forward selection (SFS): features are incrementally added to an empty candidate set until adding further features no longer lowers the criterion.
• Sequential backward selection (SBS): features are gradually eliminated from the complete candidate set until removing further features raises the criterion.

Stepwise regression, a sequential feature selection methodology created especially for least-squares fitting, was a key component of the strategy; the optimizations used by the stepwise and stepwisefit routines apply only to the least-squares criterion. In contrast to generalized sequential feature selection, stepwise regression may remove features that were previously added or add back features that were previously deleted. Sequential feature selection is done using the sequentialfs function from the Statistics and Machine Learning Toolbox, whose input arguments include the predictor and response data as well as a function handle to a file implementing the criterion function.


The caller can specify SFS or SBS, required or excluded features, and the size of the feature subset through optional inputs. To evaluate the criterion at each candidate set, the function calls cvpartition and crossval.
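As an illustration, a minimal MATLAB sketch of this pipeline is given below. The variable names X and y, the naive Bayes criterion function, and the fixed random seed are illustrative assumptions rather than details taken from the original experiments.

% Minimal sketch of sequential forward selection (Statistics and Machine
% Learning Toolbox). X (452 x 279 predictor matrix) and y (class labels)
% are assumed to be loaded already; the names are illustrative.
rng(1);                                  % for reproducibility
c = cvpartition(y, 'KFold', 10);         % tenfold cross-validation

% Criterion: held-out misclassification count of a naive Bayes model.
critfun = @(Xtr, ytr, Xte, yte) ...
    sum(yte ~= predict(fitcnb(Xtr, ytr), Xte));

% 'direction','backward' would give sequential backward selection (SBS).
[tf, history] = sequentialfs(critfun, X, y, 'cv', c, 'direction', 'forward');
selectedColumns = find(tf)               % indices of the retained features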

4 Cardiac Arrhythmia Classification

For the task's implementation, this research employed FNN, Naïve Bayes, and RBFN approaches. The network models under consideration employ the backpropagation learning algorithm. The experimental findings showed that FNN performed significantly better than the Naïve Bayes and RBFN approaches. The proposed models' flowchart is shown in Fig. 2.

Fig. 2 Flow chart of the framework


4.1 Fuzzy Neural Network for Classification of Cardiac Arrhythmia

A fuzzy neural network (FNN) for classifying cardiac arrhythmias is given. The characteristics of cardiac arrhythmia serve as the fuzzy neural network's inputs, while the disease types serve as its outputs. The cardiac arrhythmias can be represented by an if–then rule base using input–output relationships. Fuzzy neural networks (FNN) use a neural network structure to implement fuzzy reasoning, and the design of an FNN incorporates the creation of a suitable if–then rule base in the network's architecture [6]. The difficulty lies in constructing the premise and consequent parts of the fuzzy rules through FNN training. For the design of the FNN in this study, we employ TSK fuzzy rules. The fuzzy rules are represented as:

If $x_1$ is $A_{1j}$ and $x_2$ is $A_{2j}$ and $\ldots$ and $x_m$ is $A_{mj}$ Then $y_j = b_j + \sum_{i=1}^{m} a_{ij} x_i \qquad (1)$

where $A_{ij}$ are input fuzzy sets, $a_{ij}$ and $b_j$ are coefficients used in the linear functions, and $x_i$ and $y_j$ are the input–output variables; $i = 1, \ldots, m$ and $j = 1, \ldots, r$ index the inputs and rules, respectively. Using linear functions to describe a nonlinear system is the difficulty in this case. Figure 3 illustrates the FNN construction using a fuzzy rule base. The first layer distributes the input signals. In the second layer, the membership grades of the incoming input signals are determined as:

$$\mu_{1j}(x_i) = e^{-\frac{(x_i - c_{ij})^2}{\sigma_{ij}^2}}, \quad i = 1, \ldots, m, \; j = 1, \ldots, r \qquad (2)$$

where $c_{ij}$ and $\sigma_{ij}$ are the centers and widths of the Gaussian membership functions, and $\mu_{1j}(x_i)$ are the membership grades. A t-norm operation is used to determine the outputs in the rule layer:

$$\mu_j(x) = \prod_i \mu_{1j}(x_i), \quad i = 1, \ldots, m, \; j = 1, \ldots, r \qquad (3)$$

In the consequent part of the FNN structure, the outputs of the linear functions are determined as:

$$y1_j = b_j + \sum_{i=1}^{m} a_{ij} x_i \qquad (4)$$

Fig. 3 FNN structure

Using the outputs of the rule layer and the outputs of the linear functions, the output of the FNN is determined as:


$$u_k = \frac{\sum_{j=1}^{r} w_{jk}\, y_j}{\sum_{j=1}^{r} \mu_j(x)}, \quad \text{where } y_j = \mu_j(x)\, y1_j \qquad (5)$$

where $u_k$ are the FNN outputs, $k = 1, \ldots, n$. After the FNN output signals are calculated, the training of the network starts. Training consists of finding appropriate values of the membership-function coefficients $c_{ij}(t)$, $\sigma_{ij}(t)$ ($i = 1, \ldots, m$, $j = 1, \ldots, r$) and the linear-function coefficients $w_{jk}(t)$, $a_{ij}(t)$, $b_j(t)$ ($i = 1, \ldots, m$, $j = 1, \ldots, r$, $k = 1, \ldots, n$) of the FNN structure. In this study, fuzzy c-means clustering and gradient descent techniques are used to determine the appropriate values of the FNN parameters [7]. The clustering algorithm is first used to identify the centers of the membership functions, and the widths of the membership functions are then calculated from the centers. Finally, the gradient descent approach is used to find the coefficients of the linear functions.
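A minimal MATLAB sketch of this initialization step is given below (Fuzzy Logic Toolbox). The matrix Xn (normalized inputs), the rule count r, and the width heuristic are illustrative assumptions, not the exact procedure of the study.

% Initialize Gaussian membership functions from fuzzy c-means clusters.
r = 16;                                   % number of rules (hidden neurons)
[centers, U] = fcm(Xn, r);                % centers: r x m cluster centers

% One width per center and feature: a spread proportional to the cluster's
% membership-weighted standard deviation (one simple heuristic of several).
sigma = zeros(size(centers));
for j = 1:r
    w = U(j, :).^2;                       % membership weights of cluster j
    d = Xn - centers(j, :);               % deviations from center j
    sigma(j, :) = sqrt((w * (d.^2)) / sum(w)) + eps;
end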

4.1.1 FNN Simulations

The suggested FNN structure is employed to categorize cardiac arrhythmias. The examined dataset was downloaded from the UCI machine learning repository. The goal is to identify the different kinds and states of cardiac arrhythmia and classify them into one of the 16 groups. The dataset contains 452 instances (patient records) and 279 feature values that separate the 16 types.


Table 1 Simulation results

Number of rules (hidden neurons) | Training error | Evaluation error | Test error | Accuracy
8  | 0.247 | 0.325 | 0.312 | 0.970
12 | 0.183 | 0.211 | 0.186 | 0.985
16 | 0.152 | 0.158 | 0.154 | 0.998

Fig. 4 Plot of RMSE obtained during training (RMSE versus epoch number)

The dataset is pre-processed in the first step: all input values are scaled to the interval [0, 1] in order to increase learning accuracy. RMSE values are employed during simulation to gauge network performance, and tenfold cross-validation has been used. The network was simulated with different numbers of rules (hidden neurons). Eight fuzzy rules were used first: the training and evaluation errors were 0.247 and 0.325, respectively, as shown in Table 1, and testing the model on the entire dataset gave a test error of 0.312 and an accuracy rate of 97%. The number of hidden neurons was then increased, first to 12 and then to 16. The 16 hidden neurons produced the best results: the model's accuracy at 16 rules/hidden neurons was 99.85%, with training, evaluation, and testing RMSE of 0.152, 0.158, and 0.154, respectively. The plot of the RMSE values obtained during the training of the FNN is shown in Fig. 4.
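For illustration, the [0, 1] scaling step can be written in MATLAB as follows; X is an illustrative name for the instance matrix, and implicit expansion requires R2016b or later.

% Min-max scaling of each feature (column) to the interval [0, 1].
Xn = (X - min(X)) ./ max(max(X) - min(X), eps);  % eps guards constant columns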

4.2 Naïve Bayes Based Machine Learning Technique

This study also applied a Naïve Bayes classification algorithm, which is based on the Bayes theorem of posterior probability. The algorithm calculates conditional probabilities for the classes given the instances of heartbeats and selects the class with the highest posterior. The Naïve Bayes classification model presupposes attribute independence. Probabilities for nominal attributes are estimated by traditional counting, while continuous attributes are evaluated under the assumption that each attribute follows a normal distribution within each class. By design, the algorithm simply skips unknown attribute values.
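A minimal MATLAB sketch of such a naive Bayes experiment is given below. The holdout fraction mirrors the 407/45 split described in the next subsection, while the variable names and the assumption of numeric class labels are illustrative.

% Holdout split (about 45 of 452 observations for testing) and Gaussian NB.
cv  = cvpartition(y, 'HoldOut', 45/452);
Xtr = X(training(cv), :);  ytr = y(training(cv));
Xte = X(test(cv), :);      yte = y(test(cv));

mdl  = fitcnb(Xtr, ytr);           % normal distribution for continuous features
pred = predict(mdl, Xte);
acc  = mean(pred == yte)           % recognition rate
C    = confusionmat(yte, pred);    % cf. the confusion matrix in Fig. 5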

4.2.1 NB Simulations

A cross-validation approach (CVA) was employed on the arrhythmia dataset, which has 452 observations/instances, during the Naïve Bayes algorithm's training phase. During CVA execution, the dataset was automatically divided into two parts: a training set with 407 observations and a test set with 45 observations. With the CVA in place, the total accuracy obtained using Naïve Bayes machine learning without the Sequential Feature Selection (SFS) algorithm was 64%, and when SFS was included, the combined algorithms produced a total accuracy of 80%. This means that adding SFS to the Naïve Bayes algorithm improved performance by 25% in relative terms. The tabular experimental results of the Naïve Bayes machine learning technique and the generated confusion matrix are shown in Tables 2 and 3 and Fig. 5, respectively. Table 2 shows that the sequential feature selection method reduced the dimensionality of the arrhythmia dataset to six feature columns (5, 15, 40, 90, 197, and 252) with the corresponding criterion values.

Table 2 Sequential feature selection data analysis

Computed steps | Added columns | Criterion values
Step 1 | 15  | 0.422222
Step 2 | 197 | 0.355556
Step 3 | 5   | 0.333333
Step 4 | 90  | 0.288889
Step 5 | 252 | 0.244444
Step 6 | 40  | 0.222222

Table 3 Training parameters and performance of Naïve Bayes (NB) technique

Learning parameters | NB training with SFS | NB training without SFS
Number of training observations | 407 | 407
Number of testing observations | 45 | 45
Number of connected workers | 2 | 0
Number of feature vectors inputted | 6 | 0
Recognition rate | 80% | 64%


The feature selection process was carried out until the minimal criterion value of 0.222222 was attained. These six automatically produced feature subsets were then passed to the Naïve Bayes learning system for classification. Table 3 clearly shows the effect of the feature selection method (SFS) in the two experiments, NB training with SFS and NB training without SFS. The first experiment was conducted using the Naïve Bayes technique with SFS: SFS reduced the dataset to six feature vectors, which were fed into the NB classifier, yielding a total accuracy of 80%. The second experiment was performed without SFS and obtained a total accuracy of 64%. These two results show the usefulness and importance of feature extraction/selection in machine learning. The confusion matrix of the NB model is shown in Fig. 5. Based on the original arrhythmia dataset, the target would be a 16 × 16 confusion matrix, but the NB model classified the arrhythmia classes into a 13 × 13 confusion matrix because 3 of the original 16 classes had no recorded observations. The diagonal entries of the confusion matrix indicate the correctly classified observations, while the remaining non-zero entries are misclassifications.

Fig. 5 A 13 × 13 confusion matrix of the NB model

24  2  0  0  1  0  0  0  0  2  1  0  0
 0  2  0  0  0  0  0  0  0  0  0  0  1
 0  0  1  0  0  0  0  1  0  0  0  0  0
 0  0  0  1  0  0  0  0  0  0  0  1  0
 0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  2  1  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  1  0  0  0  0
 0  0  0  0  0  0  0  0  0  3  0  0  0
 0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0  0  0  1


Fig. 6 RBFN3 MSE learning curve

4.3 Radial Basis Function Networks (RBFN) Technique

The topology of RBFN networks is quite similar to that of back-propagation networks; the fundamental difference lies in how the weights are calculated. An RBFN has only one hidden layer, with a radial activation function employed at the outputs of its neurons. When input patterns are expanded into the hidden space, the hidden units offer a set of "functions" that serve as a "basis" for the input patterns [8]; these functions are known as radial basis functions. The driving insight behind RBFN and some other neural network classifiers is that patterns transformed into higher-dimensional, nonlinear spaces are more likely to be linearly separable than the low-dimensional vector representations of the same patterns (Cover's theorem on the separability of patterns). K-means clustering methods are used to calculate the output of the neuron units, and a Gaussian function is then applied to generate each unit's final output [8–27]. During the training phase, the hidden layer neurons are frequently distributed randomly throughout some or all of the training patterns' space [28]. The radial basis function (also known as a kernel) is then applied to the distances between each neuron and the training pattern vectors. The radial distance is the argument to the function [29], as illustrated below, which gives rise to the name "radial basis function":

$$\text{weight} = \mathrm{RBF}(\text{distance}) \qquad (6)$$


In RBFN networks, other functions such as the logistic and thin-plate spline can be employed, although the task in this research was carried out using the Gaussian function. The radius of the Gaussian function is typically selected during the training phase, and it determines how strongly the neurons respond to the distance being considered. The best predicted value for a new point is found by summing the output values of the RBFs multiplied by the weights calculated for each neuron, as illustrated in [28]. The following equation relates the output of a Gaussian function to the distance between a neuron center and a data point ($r > 0$):

$$\varphi(r) = e^{-r^2 / (2\sigma^2)} \qquad (7)$$

where $r$ is the Euclidean distance from a neuron center to the training data point, and $\sigma$ regulates the smoothness of the interpolating function [8].
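Equation (7) translates directly into MATLAB as an anonymous function; the value of σ shown is illustrative.

sigma = 1.0;                           % smoothness parameter (illustrative)
phi = @(r) exp(-r.^2 ./ (2*sigma^2));  % Gaussian radial basis of Eq. (7)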

4.3.1 RBFN Simulations

The same dataset used for training the Naïve Bayes machine learning technique is used to train three radial basis function networks (RBFNs) with different numbers of hidden neurons and spread constant values: 407 instances were used for training and 45 observations for testing. The aim is to track how well the networks perform after being trained with various spread constant values. The training parameters/performance of the three RBFNs and the RBFN MSE learning curve are given in Table 4 and Fig. 6. Table 4 presents the simulation results for the different RBFN structures. It can be observed that RBFN3, with 60 hidden neurons and a spread constant of 1.0, achieved the lowest mean square error (MSE) of 0.011, and it reached this MSE in 19 s. Figure 6 depicts the learning curve of the RBFN3 model; a construction sketch for such a network is given after Table 4.

Table 4 RBFNs training parameters and performance

Network parameters | RBFN1 | RBFN2 | RBFN3
Number of training observations | 407 | 407 | 407
Number of testing observations | 45 | 45 | 45
Number of hidden neurons | 25 | 40 | 60
Spread constant | 0.15 | 0.5 | 1.0
Maximum epochs | 100 | 150 | 200
Execution time (secs) | 12 | 16 | 19
Mean square error | 0.019 | 0.014 | 0.011
Networks recognition rate | 92.3% | 94.2% | 95.3%
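The sketch below shows how an RBFN configured like RBFN3 in Table 4 (60 hidden neurons, spread constant 1.0) might be built with MATLAB's newrb from the Deep Learning Toolbox; the variable names and the display frequency are illustrative assumptions.

% newrb expects observations as columns; Xtr, Ttr, and Xte are illustrative.
goal       = 0.0;   % MSE goal
spread     = 1.0;   % spread constant of the Gaussian kernels (cf. RBFN3)
maxNeurons = 60;    % upper bound on hidden neurons (cf. RBFN3)
net = newrb(Xtr', Ttr', goal, spread, maxNeurons, 10);
Yte = net(Xte');    % network outputs for the test set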

Table 5 Comparative experimental results of the three classifiers

Network parameter | FNN | NB | RBFN
Number of training observations | 407 | 407 | 407
Number of testing observations | 45 | 45 | 45
Mean square error | 0.15 | NIL | 0.011
Recognition rate | 99% | 80% | 95.3%

4.4 Experimental Result Comparison of the Proposed Algorithms

The experimental results of the classification methods (FNN, NB, and RBFN) are expressed as the total accuracy of each system, which equals the total percentage of correct classifications. The experiment with the best result from each classification method is used for the comparison; for FNN, for instance, the testing accuracy for 16 rules is compared with the best experimental results of the other two classification methods, NB and RBFN. Across the three methods proposed in this study, the experimental results demonstrated that the RBFN model (RBFN3) tends to learn more rapidly than the FNN and NB algorithms, whose training procedures are more rigorous. Performance-wise, Table 5 summarizes the comparative results of the three classification methods employed. It shows that FNN, with a total accuracy of 99%, outperformed the other classifiers, NB and RBFN, which reached total accuracies of 80% and 95.3%, respectively. Hence, the FNN classifier stands out as an outstanding classification tool in the computer-aided diagnosis of cardiac arrhythmias based on ECG signals.

5 Conclusion and Future Work

This study employed three machine learning techniques, FNN, NB, and RBFN, to classify cardiac arrhythmias, and all three techniques were evaluated. Extensive experiments were conducted on the cardiac arrhythmia dataset of ECG signals to diagnose cardiac arrhythmias automatically. All the proposed techniques demonstrate stable accuracy rates, but FNN, with a total accuracy of 99%, outperformed the other two classifiers, NB and RBFN, which reached total accuracies of 80% and 95.3%, respectively. Given these experimental results, cardiac arrhythmia diagnosis can be carried out with any of the three proposed techniques, but FNN is the strongest recommendation since it outperformed the other two methods. Future contributions to this problem would include rerunning the experiment using advanced machine learning methods that may provide better results, such as deep learning, support vector machines (SVMs), and extreme learning machines (ELMs).


References

1. Sandoe, E., & Sigurd, B. (1991). Arrhythmia–a guide to clinical electrocardiology. Publishing Partners Verlags GmbH.
2. Owis, M. I., Abou-Zied, A. H., Youssef, A. M., & Kadah, Y. M. (2002). Study of features based on nonlinear dynamical modelling in ECG arrhythmia detection and classification. IEEE Transactions on Biomedical Engineering, 49, 733–736.
3. Osowski, S., & Linh, T. H. (2001). ECG beat recognition using fuzzy hybrid neural network. IEEE Transactions on Biomedical Engineering, 48, 1265–1271.
4. Tsipouras, M. G., Fotiadis, D. I., & Sideris, D. (2002). Arrhythmia classification using the RR-interval duration signal. In A. Murray (Ed.), Computers in cardiology (pp. 485–488). IEEE.
5. UCI Machine Learning Repository: http://www.ics.uci.edu/~mlearn/MLRepository.html
6. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12), 25–31. https://doi.org/10.14569/IJACSA.2017.081204
7. Abiyev, R. H., & Helwan, A. (2018). Fuzzy neural networks for identification of breast cancer using images' shape and texture features. Journal of Medical Imaging and Health Informatics, 8(4), 817–825. https://doi.org/10.1166/jmihi.2018.2308
8. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
9. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
10. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410.
11. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
12. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
13. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
14. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences (Vol. 16, p. 02004). EDP Sciences.
15. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
16. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers–Bridge Engineering (pp. 1–8). Thomas Telford Ltd.
17. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622
18. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.
19. Dimililer, K., & Bush, I. J. (2017). Automated classification of fruits: Pawpaw fruit as a case study. In Man-Machine Interactions 5: 5th International Conference on Man-Machine Interactions, ICMMI 2017, Kraków, Poland (pp. 365–374). Springer International Publishing.
20. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of Conferences (Vol. 9, p. 03002). EDP Sciences.
21. Abiyev, R., Idoko, J. B., & Arslan, M. (2020). Reconstruction of convolutional neural network for sign language recognition. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1–5). IEEE.
22. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690.
23. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing—ICAFS-2018 (pp. 239–248). Springer International Publishing.
24. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation: Proceedings of the INFUS 2021 Conference (Vol. 2, pp. 273–280). Springer International Publishing.
25. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International Conference on Transportation and Development 2020 (pp. 194–203). American Society of Civil Engineers.
26. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE.
27. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978–983).
28. Strumiłło, P., & Kamiński, W. (2003). Radial basis function neural networks: Theory and applications. In Neural Networks and Soft Computing (Vol. 19, pp. 107–119). Physica-Verlag Heidelberg.
29. Ng, W., Dorado, A., Yeung, D., Pedrycz, W., & Izquierdo, E. (2007). Image classification with the use of radial basis function neural networks and the minimization of the localized generalization error. Pattern Recognition, 40, 19–32.

A Fuzzy Logic Implemented Classification Indicator for the Diagnosis of Diabetes Mellitus in TRNC

Cemal Kavalcıoğlu

Abstract This study aims to apply fuzzy logic in a decision support system for the diagnosis of diabetes mellitus, allowing laypeople to make an early diagnosis and start treatment right away. Decision support system techniques have been developed to improve decision-makers' efficacy. The application in this study was carried out in the TRNC. The method employs eight variables (Oral Glucose Tolerance Test, Glucose Level, Diastolic BP, Skin Thickness, Serum Insulin, BMI (Mass), Pedigree, and Age) as inputs and one variable (Diabetes Mellitus Stage) as an output. Data collected from 25 randomly chosen participants were used in this investigation. The aim of this research is to create a fuzzy system that categorizes diabetes mellitus as safe, medium, or dangerous. The obtained data were examined using Matlab with approximate fuzzy logic reasoning and the produced Graphical User Interface (GUI). Testing also includes validation and implementation checks before estimating the accuracy. According to the test results, the created diabetes disease diagnosis system substantially satisfies the fundamental expectations. With a 96% accuracy rate, this program can improve the TRNC's service quality while also satisfying people.

Keywords Fuzzy logic · TRNC · Fuzzy inference system · Artificial intelligence · Diabetes complications · Diabetes mellitus · Expert systems

1 Introduction

Artificial intelligence (AI) techniques are widely applied in numerous industries, including healthcare and medicine. Soft computing is one of the multidisciplinary research fields within computer science and artificial intelligence. Numerous soft computing approaches, such as fuzzy logic, genetic algorithms, expert systems, and neural networks, were created to manage problems involving partial truth, ambiguity, and inaccuracy, which also arise in the field of health [4, 5]. An expert system is a type of computer program that mimics how humans solve problems, especially complicated ones, based on their knowledge [6, 7]. Like humans, an expert system first gathers inputs (the problems that need to be solved) and then employs particular procedures to account for and assess the available inputs in order to reach a conclusion. Because a computer, which acts without human disposition or emotions, lacks the competence of an expert, expert systems have been developed to replicate that competence [8–10]. Fuzzy logic is one of the computing paradigms that integrates the language people use to communicate in their reasoning [11]. The end result is an arrangement of the knowledge base and fuzzy reasoning that behaves like a real human expert [12–17]; the findings were then put into practice in the TRNC. The goal of an expert system is frequently to transfer human knowledge to the computer so that it can be utilized to solve problems in the same way that an expert would. To approach human aptitude in a certain topic, an expert system is built for a specific expertise in that field of knowledge. The expert system is utilized to find a workable solution and can, just like a professional, describe the steps taken and explain the results that were attained. Numerous scholars have applied fuzzy logic to problems such as ship navigation [18], integrated assessment of the professional skills of training participant groups [19], the detection of line pathways of an AGV (Automatic Guided Vehicle) [18–20], and problems in the financial field.

In diabetes mellitus, impaired insulin secretion, impaired insulin action, or both contribute to elevated blood sugar levels. Because their bodies are unable to respond to or manufacture the hormone insulin produced by the pancreas, patients with diabetes mellitus have elevated blood sugar levels, which can cause both immediate and long-term problems. A professional assessment of the diabetes diagnosis system is therefore expected to help address current issues, such as the scarcity of physicians with expertise in the diagnosis of diabetes and the lack of equipment in the TRNC. This study's objective covers the application, confirmatory tests, expert system testing, and applications for the diagnosis of diabetes disease. The International Diabetes Federation estimated that in 2019 there were 463 million people with diabetes worldwide between the ages of 20 and 79, as well as 1.1 million children and adolescents under the age of 20 with the disease. Prediabetes (hidden sugar) affects as many individuals as diabetes itself, and if this trend continues, it is predicted that there will be 578 million cases of diabetes worldwide by 2030 and 700 million cases by 2045. Diabetes is extremely prevalent in the TRNC (32%) and increasing quickly; in Northern Cyprus, the prevalence of diabetes is three times higher than the global average. These statistics highlight the scale of the diabetes epidemic in the TRNC. Type 2 diabetes can be prevented or delayed for years; it affects adults who are overweight and have a family history of the disease.
In this study, data on the oral glucose tolerance, blood sugar level, diastolic blood pressure, skin thickness, serum insulin, body mass index (BMI), pedigree, and age of 120 randomly selected subjects were gathered. The aim of this effort is to create a fuzzy system that categorizes diabetes mellitus as safe, medium, or dangerous.

Fig. 1 Block diagram of the complete system of work

2 Methodology

2.1 Database

The Near East University Hospital Ethics Committee approved the collection of data from 120 randomly selected patients who were admitted to the Internal Medicine (Endocrinology and Metabolism) Department over a two-month period, and the research ethics committee approved the participation of all study subjects (Fig. 1). Table 1 shows the information from 25 randomly chosen individuals who were admitted to the internal medicine (endocrinology and metabolism) department.

3 Classification

3.1 Fuzzy Logic

In 1965, Lotfi A. Zadeh introduced fuzzy logic as an extension of classical set theory. This logic was initially based on fuzzy sets and crisp sets; in this view, any many-valued mathematics that makes use of fuzzy sets and extends the built-in logic to all of its operations qualifies as fuzzy logic.

Table 1 Input variables of 25 participants

OGTT | Glucose level | Blood pressure | Skin thickness | Serum insulin | Body mass index | Pedigree | Age
79  | 148 | 72 | 35 | 0   | 33.6 | 0.627 | 50
80  | 85  | 66 | 29 | 0   | 26.6 | 0.351 | 31
84  | 183 | 64 | 0  | 0   | 23.3 | 0.672 | 32
110 | 89  | 66 | 23 | 94  | 28.1 | 0.167 | 21
116 | 137 | 40 | 35 | 168 | 43.1 | 2.288 | 33
120 | 116 | 74 | 0  | 0   | 25.6 | 0.201 | 30
98  | 78  | 50 | 32 | 88  | 31   | 0.248 | 26
94  | 115 | 0  | 0  | 0   | 35.3 | 0.134 | 29
92  | 197 | 70 | 45 | 543 | 30.5 | 0.158 | 53
140 | 125 | 96 | 0  | 0   | 0    | 0.232 | 54
143 | 110 | 92 | 0  | 0   | 37.6 | 0.191 | 30
146 | 168 | 74 | 0  | 0   | 38   | 0.537 | 34
163 | 139 | 80 | 0  | 0   | 27.1 | 1.441 | 57
177 | 189 | 60 | 23 | 846 | 30.1 | 0.398 | 59
87  | 166 | 72 | 19 | 175 | 25.8 | 0.587 | 51
107 | 100 | 0  | 0  | 0   | 30   | 0.484 | 32
100 | 118 | 84 | 47 | 230 | 45.8 | 0.551 | 31
200 | 107 | 74 | 0  | 0   | 29.6 | 0.254 | 31
206 | 103 | 30 | 38 | 83  | 43.3 | 0.183 | 33
223 | 115 | 70 | 30 | 96  | 34.6 | 0.529 | 32
143 | 126 | 88 | 41 | 235 | 39.3 | 0.704 | 27
119 | 99  | 84 | 0  | 0   | 35.4 | 0.388 | 50
196 | 196 | 90 | 0  | 0   | 39.8 | 0.451 | 41
99  | 119 | 80 | 35 | 0   | 29   | 0.263 | 29
126 | 143 | 94 | 33 | 146 | 36.6 | 0.254 | 51

In the fuzzy logic technique, a vector of input data is mapped to a scalar conclusion, which can also be given a linguistic explanation; the mapping is approximate rather than exact, so a fuzzy system cannot be compared with a crisp one point by point. The fuzzy logic algorithm can process both verbal and numerical data, and its many interconnections allow countless sensible outcomes.


3.2 Fuzzy Logic Solution Approach

Fuzzy logic offers a practical approach to problem solving: it makes it easier to govern complex systems without knowing their precise mathematical definitions. A fuzzy logic expression can indicate the degree to which an element belongs to a given set by accepting any real value between 0 and 1. Fuzzy logic is a computing paradigm that offers a formal method for addressing the uncertainty and ambiguity present in human reasoning. Its capacity to convey knowledge in linguistic terms allows systems to be defined using straightforward, approachable rules. The fuzzy set framework has been applied in several ways to simulate the diagnostic process.

3.3 Reasons for Preference of Fuzzy Logic

Fuzzy logic tolerates imprecise data. It is simple to understand and flexible in both concept and application. Fuzzy logic can be used in conjunction with conventional control strategies, and specialists with the right domain understanding can use a fuzzy model to capture nonlinear functions of arbitrary complexity. Fuzzy logic is also based on natural language. These general characteristics distinguish fuzzy systems from conventional control procedures and make implementations simpler.

3.4 Fuzzy Sets

In a classical (crisp) set, membership is all-or-nothing: the characteristic function of the set equals 1 for elements inside the set and 0 for elements outside it. A fuzzy set relaxes this restriction by allowing membership grades anywhere between 0 and 1.
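In symbols (a standard formulation added here for clarity, not quoted from the study), the crisp characteristic function and its fuzzy generalization are:

$$\chi_A(x) = \begin{cases} 1, & x \in A \\ 0, & x \notin A \end{cases} \qquad \mu_{\tilde{A}} : X \to [0, 1]$$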

3.5 MF (Membership Functions)

A membership function describes how each point in the input space is mapped to a membership value between 0 and 1. The input data space is referred to as the universe of discourse, a catchy name for a simple idea. Each membership function carries a descriptive label. As an illustration, this study's input variables each have labeled membership functions covering the eight relevant features: Oral Glucose Tolerance, Glucose Level, Diastolic BP, Skin Thickness, Serum Insulin, BMI (Mass), Pedigree, and Age.


The function type chosen for membership is:

• Gaussian: From the viewpoint of the fuzzy logic literature, Gaussian fuzzy membership functions are very significant since they serve as the primary link between the RBF (Radial Basis Function) neural network systems and fuzzy systems. The fuzzy membership values are calculated by this function using the attributes of the Gaussian membership function. The Gaussian membership function is not defined as a Gaussian probability distribution; for instance, the maximum value of a Gaussian membership function is always 1.
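For reference, the Gaussian membership function commonly takes the form below, with center $c$ and width $\sigma$ (a standard textbook formulation supplied here for clarity):

$$\mu(x) = \exp\!\left(-\frac{(x - c)^2}{2\sigma^2}\right)$$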

3.6 Variables of Linguistic

Linguistic variables are input or output variables whose values are words or sentences from an artificial or natural language rather than purely quantitative values. The linguistic variables in this study include oral glucose tolerance, glucose level, diastolic blood pressure, skin thickness, serum insulin, BMI (mass), pedigree, and age.

3.7 Classification Results

The Fuzzy Inference System (FIS) integrates fuzzification, knowledge bases, inference engines, and defuzzification [21–29]. Fuzzy inference approaches are divided into direct and indirect strategies; the methods of Sugeno and Mamdani are mostly applied as direct strategies, while indirect procedures are less clear-cut. The Mamdani method, which uses common learning estimation, is the simplest basic fuzzy inference method. The model, improved by Mamdani, is used to assess a future benefit when the real benefit is uncertain. It operates on well-defined input data and linguistic phrases and acts as a significant component of this model (Fig. 2).

4 Implementation

4.1 System Design

The fundamental GUI and code structure were created with Matlab software, and the main program was written in Matlab. Using Matlab's fis rule objects, fuzzy if–then rules that tie input membership function conditions to corresponding output membership functions were built. As a result, Matlab was used to write the fuzzy inference engine's logic.

Fig. 2 Fuzzy inference system structure

The Fuzzy Logic Toolbox features in Matlab were also used to generate the graph of the output membership function. The input and output parameters are:

Input parameters:

• Oral Glucose Tolerance Test (OGTT)
• Glucose level
• Diastolic Blood Pressure
• Skin Thickness
• Serum Insulin
• Body mass index
• Pedigree
• Age

Output parameter:

• Diabetes Mellitus (DM)

Proposed Algorithm
Input: the fuzzy sets for C1, C2, C3, C4, C5, C6, C7, and C8.
Output: the fuzzy set for Diabetes Mellitus (Table 2); a construction sketch follows.
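A minimal MATLAB Fuzzy Logic Toolbox sketch of such a Mamdani FIS, built around a single input for brevity, is given below. The universe ranges, the triangular membership-function breakpoints, and the three rules are illustrative assumptions, not the values used in the study; mamfis requires R2018b or later.

% One-input Mamdani FIS relating glucose level to the DM stage.
fis = mamfis('Name', 'DiabetesIndicator');

fis = addInput(fis, [0 250], 'Name', 'GlucoseLevel');
fis = addMF(fis, 'GlucoseLevel', 'trimf', [0 60 110],    'Name', 'low');
fis = addMF(fis, 'GlucoseLevel', 'trimf', [90 125 160],  'Name', 'medium');
fis = addMF(fis, 'GlucoseLevel', 'trimf', [140 200 250], 'Name', 'high');

fis = addOutput(fis, [0 1], 'Name', 'DM');
fis = addMF(fis, 'DM', 'trimf', [0 0.2 0.45],  'Name', 'safe');
fis = addMF(fis, 'DM', 'trimf', [0.3 0.5 0.7], 'Name', 'medium');
fis = addMF(fis, 'DM', 'trimf', [0.55 0.8 1],  'Name', 'dangerous');

% IF-THEN rules in text form (antecedent => consequent), cf. Fig. 3.
fis = addRule(fis, [ ...
    "GlucoseLevel==low => DM=safe"; ...
    "GlucoseLevel==medium => DM=medium"; ...
    "GlucoseLevel==high => DM=dangerous"]);

stage = evalfis(fis, 148)   % crisp DM value for a glucose level of 148

In the full system, the remaining seven inputs (C1 and C3 through C8) would be added in the same way and the rule base of Fig. 3 expanded accordingly.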

5 Experimental Results

This section presents a demonstration of the suggested framework. The projected system's usage is shown in Figs. 3, 4, 5 and 6.


Table 2 Parameters of triangular membership functions

Fuzzy variables | Fuzzy representation | Fuzzy numbers
Oral glucose tolerance test (OGTT) | C1 | Low/medium/high
Glucose level | C2 | Low/medium/high
Diastolic blood pressure | C3 | Low/medium/high
Skin thickness | C4 | Good/average/below
Serum insulin | C5 | Low/medium/high
Body mass index | C6 | Low/medium/high
Pedigree | C7 | Low/medium/high
Age | C8 | Young/medium/old
Output: Diabetes mellitus | DM | Dangerous/medium/safe

5.1 Rules of Fuzzy

The basic structure of the fuzzy rules in this research is "IF a, THEN b", where the antecedent "a" is a condition on the input membership functions and the consequent "b" is an output membership function. The fuzzy rule base is built from a variety of knowledge elements expressed as linguistic IF–THEN statements. Figure 3 shows the rule editor used to define the fuzzy rule-based selection model.

Fig. 3 Rule editor


Fig. 4 Fuzzy rule-based selection process

Fig. 5 Fuzzy rule-based model


Fig. 6 Membership function of output classifier

5.2 GUI System Design

A graphical user interface (GUI) is a set of interactive visual elements in computer software. A GUI displays objects that convey information and represent actions the user can take; as the user interacts with an object, it changes its color, size, or visibility (Figs. 7 and 8).

5.3 Accuracy Checking

The accuracy of a test is a measure of its capacity to distinguish between sick and healthy instances. The confusion matrix has been used to calculate the fuzzy prediction system's accuracy. Accuracy is therefore defined as:

$$\text{Accuracy} = \frac{A + C}{A + B + C + D} \qquad (1)$$

The terms are defined as follows:

• A: sick people correctly diagnosed as ill.
• B: healthy people mistakenly labeled as ill.
• C: healthy people correctly identified as healthy.
• D: ill individuals incorrectly labeled as healthy.


Fig. 7 Graphical user interface to evaluate the participant’s data

Fig. 8 Graphical user interface to give the participant’s stage of diabetes


Table 3 Accuracy checking results

            | Predicted: No | Predicted: Yes
Actual: No  | C = 23 | B = 1
Actual: Yes | D = 1  | A = 25

The results achieved are as follows (Table 3):

$$\text{Accuracy} = \frac{25 + 23}{25 + 1 + 23 + 1} = 96\%$$

With the approval of the Near East University Hospital Ethics Committee, the information was successfully gathered from 120 randomly selected patients who were admitted to the Internal Medicine (Endocrinology and Metabolism) Department over the course of two months in order to evaluate the outcomes from the fuzzy prediction system.

6 Conclusion

Diabetes is a global issue and one of the diseases spreading across the globe most quickly. Diabetes mellitus is an organic condition in which a person's blood sugar level rises because there is insufficient insulin synthesis or because the cells of the body do not respond to the insulin produced. Early detection is an important goal of diabetes research. The rise in population, Western food habits, and a decline in physical activity have all contributed to an increase in the number of people with diabetes in recent years. Generally speaking, there are two forms of diabetes, Type 1 and Type 2; inherited traits have historically been regarded as the cause of Type 1 diabetes. Researchers have applied neural networks, Naïve Bayes, and support vector machines in the detection of diabetes mellitus, but the performance of current systems is unsatisfactory. Here, a quicker and more effective method employing a fuzzy inference system to diagnose diabetes is suggested. The user (perhaps a doctor or nurse) only needs to provide a few physical details. The diagnosis is based on a fuzzy inference technique that checks whether the patient is experiencing symptoms at an early stage; the system assigns a membership value based on the percentage of diabetes, indicating whether a person is in the dangerous, medium, or safe stage. In this implementation, eight variables have been used as inputs and one variable as the output for the early identification of diabetes mellitus. According to the experimental findings, this technique efficiently categorizes the diagnosis of diabetes mellitus disease. In this study, it is proposed that the fuzzy inference system divide the stages of diabetes into safe, medium, and dangerous categories.


Acknowledgements The Near East University Hospital Ethics Committee has approved this work.

References

1. Harliana, P., & Rahim, R. (2017). Comparative analysis of membership function on Mamdani fuzzy inference system for decision making. Journal of Physics: Conference Series, 930(1), 012029.
2. Rahim, R., Zufria, I., Kurniasih, N., Simargolang, M., Hasibuan, A., Sutiksno, D., Nanuru, R., Anamofa, J., Ahmar, A., & Achmad, G. S. (2018). C4.5 classification data mining for inventory control. International Journal of Engineering & Technology, 7(68). https://doi.org/10.14419/ijet.v7i2.3.12618
3. Putera, A., Siahaan, U., & Rahim, R. (2016). Dynamic key matrix of hill cipher using genetic algorithm. International Journal of Security and Its Applications, 10(8), 173–180.
4. Marimin. (2012). Penalaran Fuzzy [Fuzzy reasoning]. Bogor: Departemen Ilmu Komputer, Institut Pertanian Bogor.
5. Siahaan, A. P. U., & Farta Wijaya, R. (2018). Technique for order of preference by similarity to ideal solution (TOPSIS) method for decision support system in top management. International Journal of Engineering & Technology Sciences, 7, 290–293. https://doi.org/10.14419/ijet.v7i3.4.20113
6. Gomide, F. (2003). Fuzzy engineering expert systems with neural network applications. Fuzzy Sets and Systems, 140(2), 397–398.
7. Mesran, M., et al. (2018). Expert system for disease risk based on lifestyle with fuzzy Mamdani. International Journal of Engineering and Technology, 7(2.3), 88–91.
8. Suryanto, T., Rahim, R., & Ahmar, A. S. (2018). Employee recruitment fraud prevention with the implementation of decision support system. Journal of Physics: Conference Series, 1028(1), 012055.
9. Yanie, A., Abdurrozaq Hasibuan, I., Ishak, M., Marsono, S. L., Nurmalini, N., Mesran, M., Nasution, S. D., Rahim, R., Nurdiyanto, H., & Ahmar, A. S. (2018). Web based application for decision support system with ELECTRE method. Journal of Physics: Conference Series, 1028(1), 012054.
10. Siregar, D., Arisandi, D., Usman, A., Irwan, D., & Rahim, R. (2017). Research of simple multi-attribute rating technique for decision support. Journal of Physics: Conference Series, 930(1), 012015.
11. Alesyanti, A., Ramlan, R., Hartono, H., & Rahim, R. (2018). Ethical decision support system based on hermeneutic view focus on social justice. International Journal of Engineering and Technology, 7(2.9), 74–77.
12. Supriyono, H., Sujalwo, S., & Sulistyawati, T. (2015). Sistem Pakar Berbasis Logika Fuzzy Untuk Penentuan Penerima Beasiswa [A fuzzy-logic-based expert system for determining scholarship recipients]. Emitor, 15(1), 22–28.
13. Simanihuruk, T., Hartono, H., Abdullah, D., Erliana, C. I., Napitupulu, D., & Ongko, E. (2018). Hesitant fuzzy linguistic term sets with fuzzy grid partition in determining the best lecturer. International Journal of Engineering & Technology, 7(2.3), 59–62.
14. Nasution, M., Rossanty, Y., Achmad Daengs, G. S., Sahat, S., & Rosmawati, R. (2018). Decision support rating system with analytical hierarchy process method. International Journal of Engineering and Technology, 7(2.3), 105–108.
15. Indahingwati, A., Barid, M., Wajdi, N., Susilo, D. E., Kurniasih, N., & Rahim, R. (2018). Comparison analysis of TOPSIS and fuzzy logic methods on fertilizer selection. International Journal of Engineering and Technology, 7(2.3), 109–114.
16. Rossanty, Y., Hasibuan, D., Napitupulu, J., Nasution, M. D. T. P., & Rahim, R. (2018). Composite performance index as decision support method for multi case problem. International Journal of Engineering and Technology, 7(2.9), 33–36.
17. Sahir, S. H., Rosmawati, R., & Rahim, R. (2018). Fuzzy model tahani as a decision support system for selection computer tablet. International Journal of Engineering and Technology, 7(2.9), 61–65.
18. Perera, L. P., Carvalho, J. P., & Soares, C. G. (2014). Solutions to the failures and limitations of Mamdani fuzzy inference in ship navigation. IEEE Transactions on Vehicular Technology, 63(4), 1539–1554.
19. Petukhov, I., & Steshina, L. (2014). Assessment of vocational aptitude of man-machine systems operators. In Proceedings of the 7th International Conference on Human Systems Interactions (HSI) (pp. 44–48).
20. Nugraha, M. B. (2015). Design and implementation of RFID line-follower robot system with color detection capability using fuzzy logic. In Proceedings of the International Conference on Control, Electronics, Renewable Energy and Communications (ICCEREC) (pp. 75–78).
21. Maltoudoglou, L., Boutalis, Y., & Loukeris, N. (2015). A fuzzy system model for financial assessment of listed companies. In Proceedings of the 6th International Conference on Information, Intelligence, Systems and Applications (IISA) (pp. 1–6).
22. Sugiyono. (2016). Metodologi penelitian kuantitatif, kualitatif, dan R&D [Quantitative, qualitative, and R&D research methodology]. Bandung: Alfabeta.
23. Niswati, Z., Mustika, F. A., & Paramita, A. (2016). A diagnosis of the diabetes mellitus disease with fuzzy inference system Mamdani. In Proceedings of the International Conference on Mathematics, Education, Theory, and Application (ICMETA).
24. Haryanto, T. (2012). Logika fuzzy dan sistem pakar berbasis fuzzy [Fuzzy logic and fuzzy-based expert systems]. Departemen Ilmu Komputer, Institut Pertanian Bogor.
25. Xie, H., Duan, W., Sun, Y., & Du, Y. (2014). Dynamic DEMATEL group decision approach based on intuitionistic fuzzy number. TELKOMNIKA (Telecommunication, Computing, Electronics and Control), 12(4), 1064.
26. Adedeji, B., & Badiru, J. Y. C. (2002). Fuzzy engineering expert systems with neural network applications. Department of Industrial Engineering, University of Tennessee, Knoxville, TN; School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK.
27. Sandya, H. B., Hemanth Kumar, P., Bhudiraja, H., & Rao, S. K. (2013). Fuzzy rule based feature extraction and classification of time series signal. International Journal of Soft Computing and Engineering (IJSCE).
28. Jantzen, J. (2008). Tutorial on fuzzy logic. Technical University of Denmark.
29. Güler, I., & Ubeyli, D. E. (2005). Adaptive neuro-fuzzy inference system for classification of EEG signals using wavelet coefficients. Department of Electronics and Computer Education, Faculty of Technical Education, Gazi University, Ankara, Turkey; Department of Electrical and Electronics Engineering, Faculty of Engineering, TOBB Ekonomi ve Teknoloji Üniversitesi.

Implementation and Evaluation of a Mobile Smart School Management System—NEUKinderApp

John Bush Idoko

Abstract The backlash from the Covid-19 pandemic on lower-tier educational institutions opened the eyes of school proprietors, educators, and parents, and smart mobile devices have consequently gained popularity in school management. These digital devices are part of a new wave of technical tools that allow even young children to access content with amazing ease and engage creatively. This paper presents the design and implementation of a smart school management system called NEUKinderApp (Near East Kindergarten School Mobile App). We employed an adaptive and context-aware user interface to design the NEUKinderApp, covering new student enrolment, teacher-parent communication, course registration, an announcement center, a result portal, and attendance reports. Parental involvement has a favorable impact on students' academic achievement and social skills; excellent parent-teacher communication not only increases parents' reliance on a school but also broadens their understanding of child rearing. As information technology advances quickly, e-communication is already replacing traditional paper communication. To notify parents and lessen conflicts resulting from direct phone calls, inadequate network coverage at the time of phone calls, and so on, mobile app messaging can be a useful convenience. Experimental results depict that NEUKinderApp enhances teacher-parent communication, thereby improving students' performance, and the response time recorded for NEUKinderApp is lower than the response times of other related mobile apps.

Keywords Importance of communication · Smart mobile devices · Early childhood education · Educational apps


1 Introduction

Reformers and decision-makers have viewed technology as a possible agent for change in education for the past century. At one point, the radio, overhead projector, video, pocket calculator, computer, and smartboard were all considered to be cutting-edge new teaching aids. More recently, adaptive tutoring programs, Massive Open Online Courses, and flipped classrooms have all contributed to the creation of options for personalized learning. The ways that schools and teachers connect with parents are already beginning to change profoundly thanks to new mobile communication technology. There is presently a sizable market for mobile communication applications (apps) made to improve communication and coordination between educational institutions, staff members, parents, and students. For instance, Class Dojo, a communication app, has 35 million users and is valued at up to $400 million [1]; it has raised $65 million in venture capital. Another communication tool for educators, Edmodo, claims to have a network of more than 100 million users [2]. Although there is a growing market for mobile communication applications, little is known about who uses them, how they are utilized, and how they affect the quantity and quality of conversation. Mobile apps hold great promise because they can remove many of the barriers that exist with the most popular modes of communication between schools and families: phone numbers quickly go out of date, and notes sent home in backpacks frequently never reach the parents. Many families are unable to take advantage of email contact or online gradebooks due to restrictions on computer access, technological ability, and English language skills. By utilizing the nearly universal availability of smartphones [3], the dependability and practicality of text messages, and the benefits of automatic translation tools, mobile apps seek to address these issues.

Although parents and instructors play important roles in their children's education, conflicting attitudes and ideas have a major negative impact on children's ability to learn [4]. To enhance academic performance, education authorities in many nations encourage parental involvement in school administration and the creation of efficient lines of communication between parents and instructors. Parental engagement has advantages and disadvantages from the perspective of the school; to maximize the advantages, parents and instructors must work together to minimize conflicts and miscommunications. In addition to acting as teacher partners by communicating and collaborating to create a foundation for mutual trust, parents take on a third-party role to minimize unneeded disruptions when conflicts arise [5]. Children's academic performance and future successes are significantly influenced by active parental involvement; to stay informed, parents should communicate with instructors frequently and in a variety of ways [6]. Parental participation has a favorable impact on students' academic achievement and social skills [7]; excellent parent-teacher communication not only increases parents' reliance on a school but also improves their knowledge of childrearing. Parent-teacher disagreements grow more problematic when a family and school interact often. Conflict between parents and teachers can arise from contrasting viewpoints, poor communication, and insufficient information. More opportunities for communication and more efficient communication tools lessen misunderstandings and conflicts between parents and a school; they also foster consensus.

In Taiwan, the traditional paper communication book is the main method for informing parents about what is happening at their child's elementary school. With these resources, teachers are unable to ensure that parents read and comprehend the books' contents. When necessary, teachers call parents' homes on landlines and cellphones. According to the Taiwan National Communication Commission, twenty-seven million people in Taiwan now use mobile phones; teachers should therefore think of smartphones as helpful communication tools. Text messaging on mobile phones is one of the most often used methods of communication among users. Mobile phone messages can be a handy way to inform parents of important information and lessen problems resulting from oversights like missed phone calls and ignored notifications; parents can view the material when they are free and store messages to read again later. As a result, using text messages to convey information between parents and instructors can be a useful communication tool.

Mobile applications have recently taken over many industries, including entertainment, education, health, and business. This is because mobile computing, which supports location independence, can give users a tool whenever and wherever they need it, regardless of user movement [8]. New student enrollment, course registration, teacher-parent contact, and other aspects of the education sector, particularly as they relate to students, play a crucial role. The process for registering for a course may be manual, web-based or online, or mobile-based. Students can use it to enroll in classes, update their profiles, and pay tuition and any other expenses associated with their enrollment. Although most educational institutions are rapidly abandoning the manual method in favor of web-based and online course registration, the use of a mobile application has not been investigated. The demand for adaptable tools to support carefully thought-out blended learning scenarios is growing as more and more teachers in higher education experiment with technology in an effort to improve their traditional methods of instruction [9]. The influence of web-based or online course registration is acknowledged, but the development of mobile communication technology is altering how information technology (IT) is seen. The capabilities of mobile technologies are improving in terms of rich social interactions, context awareness, and internet connectivity [10], and mobile technologies are becoming more embedded, pervasive, and networked [11–30]. Mobile phones appear to be a common possession, especially among students, who carry them nearly everywhere; as a result, they are a very effective way to provide information quickly, conveniently, and on the go [31]. According to a poll of 3,900 Purdue University students reported in [32], 68% of respondents favor mobile apps over web platforms because they are faster, and 70% believe that mobile apps are simpler to use. As a result, this paper attempts to bring course registration to the doorsteps of students by developing a Mobile Application Based Course Registration Platform (MABCRP).
The goal is to take advantage of the opportunities provided by mobile applications while addressing issues inherent in web-based course registration. The created MABCRP acts as an alternative route for enrolling in academic courses on mobile devices. According to user evaluation results, the developed MABCRP has good usability in terms of utility and ease of use; the findings also suggest that it can make it simpler for students to enroll in academic courses regardless of their geographic location.

In this research, we applied an adaptive and context-aware user interface to implement the NEUKinderApp, a smart school management app designed to manage the day-to-day activities of the Near East Kindergarten School. NEUKinderApp consists of several components/sections, such as new applicant enrolment, registration, course selection, an announcement corner, a result checker, and, most importantly, a teacher-parent communication section incorporated to enable uninterrupted communication between teachers and parents of the pupils. The remaining part of the paper is organized as follows: the proposed method is given in Sect. 2, Sect. 3 presents the results obtained from testing the app, and Sect. 4 presents the conclusions of the paper.

2 Proposed Method

The proposed system, NEUKinderApp, is designed around an adaptive and context-aware user interface. Context includes any details that can describe a specific circumstance pertinent to a user, including the user themselves [33]. Utilizing context data when creating mobile applications opens possibilities for an intelligent generation of applications that dynamically adapt to changing contexts and appear highly personalized to the user; such programs provide a distinctive user experience. To offer mobile applications as a service, NEUKinderApp employs a variety of context information, including but not restricted to the following:

Time: NEUKinderApp uses time to implement time-based access control across mobile apps. Applications may place time restrictions on when their features can be used. For instance, businesses might only provide on-site access to confidential information during business hours. To avoid synchronization issues, NEUKinderApp manages time-related limitations using the server time as a reference. Based on current location data, the server applies the proper time transformation according to the user's time zone.

User Profile: Different users may have different levels of access to the same programs. For instance, users may hold different positions and duties inside an organization, which affects their right to access particular information or confidential company data. A user's level of access entitlement to such data may also vary depending on their location or the time of day. NEUKinderApp makes use of such user information to grant the user the appropriate access privileges to an application; for this purpose, it keeps track of user profiles and login information.


User Ratings: With web 2.0 and open ecosystems, users can make comments and discuss their experiences, and user reviews indicate the quality of a service. Using such a function, NEUKinderApp can select between programs that offer comparable services.

Device Profile: NEUKinderApp makes use of the device profile to find the version of a pertinent application that fits the device platform. The device profile data is gathered by NEUKinderApp through communication sessions [34].

Location: Which programs a mobile user can run on their device depends on where they are. A user may not be aware of the pertinent application(s) to run in an area in order to receive the best service that place offers. Using location data, NEUKinderApp delivers the most suitable application to a user's context. Numerous methods exist for the automatic collection of position data, whether outdoors (such as GPS and mobile networks) [35] or indoors (such as received signal strength approaches) [36].
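To make the time-based access control described above concrete, the following minimal Java sketch converts the reference server time (UTC) into the user's time zone before checking a business-hours window. The class and method names are hypothetical and are not taken from the NEUKinderApp source.

```java
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Minimal sketch of time-based access control with the server clock as the
// single reference, as the chapter describes. All names are illustrative.
public class AccessWindow {
    private final LocalTime opensAt;   // e.g., 08:00 business hours
    private final LocalTime closesAt;  // e.g., 17:00

    public AccessWindow(LocalTime opensAt, LocalTime closesAt) {
        this.opensAt = opensAt;
        this.closesAt = closesAt;
    }

    // Server time (UTC) is transformed into the user's time zone, which is
    // derived from the user's current location data.
    public boolean isAccessAllowed(ZonedDateTime serverUtcNow, ZoneId userZone) {
        LocalTime userLocalTime = serverUtcNow.withZoneSameInstant(userZone).toLocalTime();
        return !userLocalTime.isBefore(opensAt) && userLocalTime.isBefore(closesAt);
    }

    public static void main(String[] args) {
        AccessWindow businessHours = new AccessWindow(LocalTime.of(8, 0), LocalTime.of(17, 0));
        ZonedDateTime serverNow = ZonedDateTime.now(ZoneId.of("UTC"));
        System.out.println(businessHours.isAccessAllowed(serverNow, ZoneId.of("Europe/Nicosia")));
    }
}
```

Keeping the server clock as the single reference, as the chapter notes, prevents clients with skewed local clocks from bypassing the window.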

2.1 NEUKinderApp Architecture

The NEUKinderApp was created using Java, SQL, and Android Studio. The app's elements are tiered for ease of understanding; the app consists of Java and XML files. It features a dynamic SQL-based backend that assists the administrator in updating and maintaining backend data, and it is developed following Google standards to ensure maximum accuracy and portability. The application makes use of the MySQL architecture, which outlines the various elements of a MySQL system and how they relate to one another. In essence, the MySQL architecture is a client-server setup: applications that connect to a MySQL database server are the clients, and the MySQL database server acts as the server.

2.2 Material Design and Optimization

Material Design is a visual language that combines the fundamentals of excellent design with cutting-edge technological and scientific advancements. Having a presentable user interface with an aesthetically pleasing design is crucial. Popular design tools and languages that represent component libraries, including Balsamiq [37] and Google's Material Design [38], were consulted to define the set of UI component categories typically found in Android apps. To enhance the user experience and make sure the app is mobile friendly and fully optimized, Material Design has been adopted for the app's components. Because code optimization is crucial to making an app work smoothly and without lagging, NEUKinderApp was created with fully optimized code, and its performance has been optimized in every implementation. Additionally, the code has been carefully designed and modularized for ease of understanding by other developers, and comments are used wherever a line of code has to be described.

Fig. 1 Flowchart of NEUKinderApp

To use the features of NEUKinderApp, a user needs to sign up at the user interface. The user's name, email address, and password are required for registration, and an individual user ID is created from the email address. Once the user has successfully authenticated, NEUKinderApp launches the main process, which coordinates the other tasks, including enrollment, registration, course selection, the announcement corner, and teacher-parent communication. Figures 1, 2, and 3 show a flowchart of the NEUKinderApp, the procedure for creating an account, and the dashboard, respectively.
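The chapter does not specify how the individual user ID is derived from the email address. One plausible scheme, sketched below purely as an assumption, hashes the normalized address so that the ID is stable and unique per email.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch: derive a stable user ID from the registration email.
// The actual app may well use a different scheme (e.g., the email itself).
public class UserIdFactory {
    public static String idFromEmail(String email) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha.digest(email.trim().toLowerCase().getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.substring(0, 16); // short prefix is enough for a demo
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(idFromEmail("parent@example.com"));
    }
}
```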

2.3 Backend Customization

The backend is the directory where the data that appear dynamically on NEUKinderApp are added. First, the database is created, as demonstrated in Fig. 4. A database is a structured collection of data that is typically stored electronically and accessed from a computer system. Where databases are more complicated, formal design and modeling techniques are frequently used in their development. The database management system (DBMS) interacts with applications, end users, and the database itself to collect and analyze data; the DBMS software also includes the primary tools offered to manage the database. NEUKinderApp requires a database for data storage, and after the database is created, a user is created to enable permissions for reading, writing, and so on. Once the database has been created, the server URL is copied into the code in Android Studio so that data can be displayed in the app.

Fig. 2 Account creation process

A Java class named Constant.java is also created in Android Studio; this class holds the SQL database table parameters. Figure 5 shows how to build the APK file after adding the project ID to Android Studio by choosing Build > Build Signed Bundle/APK.
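A minimal sketch of what such a Constant.java might look like is given below. The server URL and the table and column names are illustrative assumptions, not values from the actual app.

```java
// Hypothetical sketch of the Constant.java class mentioned above: a single
// place holding the backend server URL and the SQL table/column parameters
// the app uses when requesting data. All names here are illustrative.
public final class Constant {
    private Constant() {} // no instances; constants only

    // Server URL copied into the code after the database is created
    public static final String SERVER_URL = "https://example-school-server.com/api/";

    // SQL database table parameters referenced when building queries
    public static final String TABLE_ANNOUNCEMENTS = "announcements";
    public static final String COL_ID = "id";
    public static final String COL_TITLE = "title";
    public static final String COL_BODY = "body";
    public static final String COL_POSTED_AT = "posted_at";
}
```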

3 Result Analysis

The performance of NEUKinderApp is evaluated based on runtimes. The evaluation was executed on the two most popular platforms, Android (for Android phone users) and iOS (for iPhone users), in order to identify the best platform for NEUKinderApp and, further, to suggest to potential users the type of smartphone on which to run it. Runtimes act as compatibility layers, protecting applications from underlying platform incompatibilities. Typically, a compilation process converts source code into a binary or intermediate language that the runtime can understand. A small number of technologies, like Titanium, allow the source code to run directly on the runtime. Some solutions, like NeoMAD, transform the source code into native source code for the various platforms without the need for a runtime; the generated source code is then compiled for each platform. Runtimes and source code translators frequently use genuine native UI components, providing a genuine native experience. For each platform, a native implementation was used to run the performance tests. Java is used to create native Android apps, whereas Swift and Objective-C are available for iOS; this paper's native iOS implementation was created using Objective-C.

Fig. 3 NEUKinderApp dashboard

Fig. 4 Creation of SQL database for NEUKinderApp

Fig. 5 Generation of NEUKinderApp.apk file

The NEUKinderApp has been fully optimized to provide the quickest response time. The techniques investigated include, but are not limited to, the following:

- Implementation of a splash screen: The perceived responsiveness of mobile APIs when loading information is a significant component of mobile UI/UX design. One trick that helps improve the user experience by hiding the actual program load time is to show a splash screen when the application launches. Although they lack content, these start-up screens mimic the application's main theme and reassure the user that new content is being loaded. The NEUKinderApp splash screen is shown in Fig. 6.

- Mobile-centric APIs to connect the back end: Many businesses rely on legacy frameworks that were not designed with mobility in mind. Data originating from legacy APIs may not be ready for processing or presentation on a mobile device, which can degrade the user experience; yet many enterprise mobility solutions require the data stored on those legacy systems. MBaaS offerings provide the tools and environment necessary to construct mobile-centric RESTful APIs that are designed to integrate with legacy frameworks, provide faster load times, and produce portable, useful content.

- Removal of unnecessary data and files: Large amounts of information take a significant amount of time to download, so the time a client must wait can be reduced by restricting the payload size transferred from the server to the phone. There are several ways to reduce payloads; eliminating information that the client and phone do not need to see is one method (a sketch of this idea follows at the end of this section). Obvious as it may seem, this is easy to overlook while making improvements. MBaaS stacks today frequently include the Node.js JavaScript runtime, which can reorganize legacy data marshalling to remove unnecessary fields and adapt to device-friendly JSON formats. This reduces the payload and hence shortens the time it takes for a phone to download data.

Fig. 6 NEUKinderApp splash screen

A slow response time or task can hinder adoption or use, reducing the perceived value of the application and any subsequent ROI, while faster response times improve the user experience. A smooth user experience can be achieved by the use of mobile-centric APIs, storage tools, and other good UI/UX techniques.
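The payload-trimming sketch referenced in the last list item is shown below. The chapter names Node.js for this role on the MBaaS layer; the same idea is expressed here in Java for consistency with the rest of the app, and the field names are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Sketch of payload minimization: keep only the fields the mobile client
// actually renders before sending a record over the network.
public class PayloadTrimmer {
    // Fields the mobile client needs (illustrative names)
    private static final Set<String> MOBILE_FIELDS = Set.of("id", "title", "body", "postedAt");

    public static Map<String, Object> trim(Map<String, Object> legacyRecord) {
        Map<String, Object> slim = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : legacyRecord.entrySet()) {
            if (MOBILE_FIELDS.contains(e.getKey())) {
                slim.put(e.getKey(), e.getValue());
            }
        }
        return slim; // smaller payload means a shorter download on the phone
    }

    public static void main(String[] args) {
        Map<String, Object> legacy = new LinkedHashMap<>();
        legacy.put("id", 42);
        legacy.put("title", "PTA meeting");
        legacy.put("body", "Friday, 10:00");
        legacy.put("postedAt", "2023-03-01");
        legacy.put("internalAuditTrail", "...not needed on mobile...");
        System.out.println(trim(legacy));
    }
}
```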

3.1 Explored Devices

Table 1 lists the devices that were investigated for the performance analysis. A high-end and a low-end device were chosen for both Android and iOS. All equipment was upgraded to the most recent operating system offered by the manufacturer and reverted to the default factory settings; none of the devices had been jailbroken or rooted.

Table 1 Explored devices for performance analysis

Devices            Low-end               High-end
Android device     Sony Xperia E3        Motorola Nexus 6
Operating system   Android 4.4.2         Android 6
RAM memory         1 GB                  3 GB
CPU                Quad-core 1.2 GHz     Quad-core 2.7 GHz
iOS device         iPhone 6              iPhone 8 Plus
Operating system   iOS 9                 iOS 11
RAM memory         1 GB                  3 GB
CPU                Dual-core 1.4 GHz     A11 Bionic system-on-chip

3.2 Response Times

Response times are crucial to the user experience. This study tracks the response times for three distinct actions: the application's start-up, the time it takes to load a new page, and the time it takes to go back to the previous page. The duration required to fully launch or start an application (i.e., from touching the icon to seeing the main screen) is known as the launch time or start time. After the application has been opened, it is critical for the user to be able to easily switch between its many pages; slow response times will result in a poor user experience. The application's most popular page was chosen to test page navigation response times. Aside from the homepage, this was the only page that did not require Internet access, so there is no additional communication overhead when opening it.

The iOS device response time was measured using the Time Profiler, whereas the Android device response time was measured using DDMS. Compared to iOS, the Android platform is far more open. As a result, the measurement process can be more precisely managed and fine-tuned on Android than on iOS, where vendors only provide analysis tools that let users see the measured parameters but not the raw data. To determine the response times on Android, the elapsed time between particular timestamps in the console log of a PC connected to the mobile device was calculated; Android logs timestamps whenever an application is launched and when it has finished running. However, a different approach was required for implementations that ran on PhoneGap. Because the application fires while the webview is running, the application-running timestamp for PhoneGap applications was not indicative of the whole launch time: the JavaScript framework and application must still be loaded after the webview has started. An additional log message was therefore added to the JavaScript code so that the complete startup time could be considered, and more log messages were added to all implementations to track the response times of page navigation. All measurements of response time on iOS were made using the Time Profiler function of the Instruments tool, which shows the overall execution time of each application component. To calculate the overall launch time, the startup times of the launch components were summed; the response times for page navigation were handled similarly.
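A minimal Android sketch of this timestamp-logging approach is shown below: one timestamp is logged when the process starts and another when the main screen becomes visible, and the elapsed time is read from the console log. The tag and the point at which markLaunchComplete() is called (for example, the main activity's first onResume()) are illustrative assumptions.

```java
import android.app.Application;
import android.os.SystemClock;
import android.util.Log;

// Sketch of launch-time measurement via log timestamps, as described above.
// Hook markLaunchComplete() from the first screen once it is fully drawn.
public class LaunchTimedApp extends Application {
    private static final String TAG = "LaunchTimer";
    private static long startedAt;

    @Override
    public void onCreate() {
        super.onCreate();
        startedAt = SystemClock.elapsedRealtime();
        Log.d(TAG, "application launched");
    }

    public static void markLaunchComplete() {
        long elapsedMs = SystemClock.elapsedRealtime() - startedAt;
        Log.d(TAG, "main screen visible after " + elapsedMs + " ms");
    }
}
```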


3.3 Comparative Result Analysis

This section lists and discusses the results of the performance measures. All measurements were carried out on a high-end (HE) and a low-end (LE) device for both Android and iOS and were processed with a 95% confidence interval; the average values of the measures are listed in Table 2. There are a few general guidelines regarding an application's response time and the corresponding perceived user experience: the user perceives response times of less than 100 ms as instantaneous; response times of up to one (or a few) seconds are acceptable if they do not happen often; and delays longer than a few seconds considerably worsen the user experience. The response times considered in Table 2 are the launch time, the time to navigate between pages of the application, and the time to return to the previous page (homepage), all in ms. The overhead of cross-platform tools (CPTs) is not considered in this research because it often results in unstable response times.

As stated in Sect. 3.2, the launch time/start time is the time taken for the application to be fully launched. The time to navigate between pages covers navigating to a page, done by tapping a button on the screen, and going back, done merely by using the back navigation item. The time to return to the previous page is the duration it takes for the application to return to the homepage or a previously launched page. In general, the smaller the response time, the better the performance of the application. As depicted in Table 2, across launch time, time to navigate between pages, and time to return to the previous page, the HEs clearly outperformed the LEs on each platform. Considering the same response times, the iOS HE correspondingly outperformed the Android HE. This high performance recorded by iOS may be attributed to factors such as UI components and device parameters/resources that lead to higher device speed. To this end, a high-end iOS device is suggested to potential NEUKinderApp users.

Table 2 Response times in ms

Response times                     Android          iOS
                                   HE      LE       HE      LE
Launch times                       289     457      187     606
Time to navigate btw pages         87      105      24      39
Time to return to previous page    8       18       3       9
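To make the guideline concrete, the small sketch below maps a measured response time onto the perceived-UX bands quoted above and applies it to the launch times in Table 2. The band labels are paraphrases, not terms from the chapter or the measurement tools.

```java
// Tiny sketch of the perceived-UX guideline: <100 ms feels instantaneous,
// up to about a second is acceptable if rare, and longer delays hurt.
public class PerceivedUx {
    public static String rate(long responseTimeMs) {
        if (responseTimeMs < 100) return "instantaneous";
        if (responseTimeMs <= 1000) return "acceptable if infrequent";
        return "noticeably slow";
    }

    public static void main(String[] args) {
        // Launch times from Table 2 (Android HE, Android LE, iOS HE, iOS LE)
        for (long t : new long[]{289, 457, 187, 606}) {
            System.out.println(t + " ms -> " + rate(t));
        }
    }
}
```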


4 Conclusion

In this research, we implemented a mobile smart school management system called NEUKinderApp for Near East Kindergarten School. At the design phase, we employed an adaptive and context-aware user interface so that NEUKinderApp could process new student enrolment, teacher-parent communication, course registration, the announcement center, the result portal, and attendance reports. The teacher-parent communication component/section of NEUKinderApp is incorporated to facilitate cohesive communication between teachers and the parents of the pupils. Parental involvement has a favorable impact on students' academic achievement and social skills; excellent parent-teacher communication not only increases parents' reliance on a school but also broadens their understanding of child rearing. Experimental results depict that NEUKinderApp enhances teacher-parent communication, thereby improving students' performance. To validate the performance of NEUKinderApp, we measured the response times of the app (launch time, time to navigate between pages, and time to return to the previous page) on both high-end (HE) and low-end (LE) Android and iOS devices. The measurements show that the HEs outperformed the LEs on each platform (Android and iOS), and the HEs of iOS correspondingly outperformed the HEs of Android.

References

1. Wan, T. (2023). Now with revenue, ClassDojo raises $35 Million to expand to homes across the world. Retrieved January 20, 2023, from https://www.edsurge.com/news/2019-02-28-nowwith-revenue-classdojoraises-35-million-to-expand-to-homes-across-the-world
2. Edmodo. (2023). About Edmodo. Retrieved January 20, 2023, from http://www.go.edmodo.com/about/
3. Pew Research Center. (2023). Mobile fact sheet. Retrieved January 26, 2023, from https://www.pewresearch.org/internet/factsheet/mobile/
4. Hornby, G. (2011). Parental involvement in childhood education (pp. 1–2). Springer.
5. Lueder, D. C. (1998). Creating partnerships with parents: An educator's guide. Technomic Pub. Co.
6. Ohio State Dept. of Education. (1995). Parents: The key to a child's success, parent: Partners in study skills and planning for graduation: How will you do?
7. Chen, H., Yu, C., & Chang, C. (2007). E-homebook system: A web-based interactive education interface. Computers & Education, 49, 160–175.
8. Fernando, N., Loke, S. W., & Rahayu, W. (2013). Mobile cloud computing: A survey. Future Generation Computer Systems, 29, 84–106.
9. Georgouli, K., Skalkidis, I., & Guerreiro, P. (2008). A framework for adopting LMS to introduce eLearning in a traditional course. International Forum of Educational Technology & Society, 11(2), 227–240.
10. Naismith, L., Lonsdale, P., Vavoula, G., & Sharples, M. (2008). Literature review in mobile technologies and learning. Future Lab Series Report 11, submitted to the University of Birmingham (pp. 1–48). ISBN: 0-9548594-1-3.
11. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.


12. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
13. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410.
14. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
15. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
16. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
17. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences (Vol. 16, p. 02004). EDP Sciences.
18. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
19. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022, May). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers-Bridge Engineering (pp. 1–8). Thomas Telford Ltd.
20. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622
21. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.
22. Dimililer, K., & Bush, I. J. (2017, September). Automated classification of fruits: Pawpaw fruit as a case study. In Man-Machine Interactions 5: 5th International Conference on Man-Machine Interactions, ICMMI 2017, held at Kraków, Poland, October 3–6, 2017 (pp. 365–374). Springer International Publishing.
23. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of Conferences (Vol. 9, p. 03002). EDP Sciences.
24. Abiyev, R., Idoko, J. B., & Arslan, M. (2020, June). Reconstruction of convolutional neural network for sign language recognition. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1–5). IEEE.
25. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690.
26. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing—ICAFS-2018 (Vol. 13, pp. 239–248). Springer International Publishing.
27. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation: Proceedings of the INFUS 2021 Conference, held August 24–26, 2021 (Vol. 2, pp. 273–280). Springer International Publishing.
28. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020, August). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International Conference on Transportation and Development 2020 (pp. 194–203). American Society of Civil Engineers.
29. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022, November). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE.
30. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978–3).


31. Adagunodo, E. R., Awodele, O., & Idowu, S. (2009). SMS user interface result checking system. Issues in Informing Science and Information Technology, 6, 101–112.
32. Bowen, K., & Pistilli, M. D. (2012). Student preferences for mobile app usage (Research Bulletin). EDUCAUSE Center for Applied Research.
33. Dey, A. K. (2001). Understanding and using context. Personal and Ubiquitous Computing, 5, 4–7.
34. Al-Masri, E., & Mahmoud, Q. H. (2010). Mobieureka: An approach for enhancing the discovery of mobile web services. Personal and Ubiquitous Computing, 14, 609–620.
35. Ahn, J., Heo, J., Lim, S., & Kim, W. (2008). A study on the application of patient location data for ubiquitous healthcare system based on LBS. In 10th International Conference on Advanced Communication Technology (Vol. 3, pp. 2140–2143).
36. Diaz, J., Maues, R. de A., Soares, R., Nakamura, E., & Figueiredo, C. (2010). Bluepass: An indoor Bluetooth-based localization system for mobile applications. In IEEE Symposium on Computers and Communications (ISCC) (pp. 778–783).
37. Balsamiq Studios. (2018). Balsamiq. Retrieved March 11, 2023, from https://balsamiq.com/
38. Call-Em-All. (2018). Material-UI. Retrieved March 11, 2023, from https://material-ui-next.com

The Emerging Benefits of Gamification Techniques

John Bush Idoko

Abstract Academic institutes, health institutions, and other companies utilize a variety of methods and processes to improve their services, hence improving the experience of their students, customers, and end users and leading to increased productivity. In recent years, gamification has emerged as a popular method of improvement used in academic institutions. Gamification addresses problems with user engagement, a goal that is largely independent of the nature of the environment, institute, or personnel. It is virtually oriented, and where users of gamification fail to excel in academic or business pursuits, this is largely attributable to poor gamification design. The objectives of this study are to awaken the consciousness of the user to the importance of gamification as a mode of study and possibly to enhance careers in model-driven engineering technologies; this requires in-depth knowledge and an ethical process. The study is not a substitute for companies' engineering policies but rather a conceptualization of ideas that can yield successful model-driven engineers.

Keywords Academic · Design · Education · Gamification · Model-driven

1 Introduction

In the field of education, gamification has become a popular technique in recent times. The idea behind gamification is to utilize game design principles and mechanics in non-game settings, such as education, to enhance student motivation, engagement, and learning results [1]. This research presents an introduction to gamification in education, covering its advantages, difficulties, and recommended strategies for implementation. The concept of gamification is rooted in the idea that humans have an innate desire to compete, collaborate, explore, and achieve [2],


which are also fundamental components of successful game design. By integrating these elements into educational activities, gamification aims to make learning more enjoyable and fulfilling, while also enhancing students' knowledge retention, motivation, and participation [3].

One significant advantage of gamification in education is its ability to make learning more personalized and interactive [4]. By creating customized learning paths and activities tailored to students' interests and learning styles, educators can provide more engaging and relevant learning experiences that capture students' attention and inspire them to learn. Gamification can also foster healthy competition and collaboration among students [5], as they compete with one another to earn points and rewards or collaborate to achieve shared goals. This can help students develop important social and emotional skills, such as teamwork, communication, and problem-solving, which are essential for success in both academic and professional settings. According to Tondello et al. [6], gamification can provide immediate feedback to students, enabling them to monitor their progress and identify areas for improvement. This can help students become more self-directed and autonomous learners, as they can take ownership of their learning and develop strategies to achieve their goals.

The use of gamification in education offers many potential benefits, but some challenges must be addressed to ensure its effectiveness. One major challenge is ensuring that gamification is aligned with the underlying learning objectives and curriculum standards [7], and not used as a substitute for good teaching practices like clear communication, effective feedback, and active learning. Another challenge is finding a balance between extrinsic rewards and intrinsic motivation, so that gamification does not focus solely on rewards while neglecting learning [8]. To overcome this challenge, educators should aim to create a balance between the two and ensure that students are learning the material, not just focused on earning rewards. Finally, implementing gamification can be a complex and time-consuming process that requires careful planning, design, and implementation; educators need a clear understanding of game design principles and techniques, as well as of the interests and needs of their students. Howbeit, according to a FinancesOnline business review, gamification has outstanding benefits [9, 10] and rapid growth across the globe and across organizations, as can be seen in Fig. 1.

1.1 Artificial Intelligence (AI), and Implementation of Gamification

Artificial intelligence (AI) technology can improve gamification in education. AI can personalize the learning experience for students through adaptive learning paths and recommendations based on each student's unique learning style and progress. AI-powered learning platforms can also offer real-time feedback and


Fig. 1 Showing the benefits of gamification

assessment to aid students in tracking their progress and identifying areas of improvement. However, the integration of AI must be done responsibly and transparently, with a focus on improving the learning experience for students and supporting teachers in their instructional approach. With AI, gamification is personalized in terms of data; for instance, an AI-powered educational app monitors the progress of students and offers rewards. This adaptive learning technique changes the behavior of students and makes predictions about their future performance [11–31].

To effectively implement gamification in education, educators should follow several best practices. Firstly, they should identify the learning objectives and curriculum standards they want to achieve, and then design gamification elements that align with these objectives. Secondly, they should create engaging and motivating gamification elements that provide meaningful learning experiences for students. Lastly, educators should provide clear instructions, regular feedback, and support throughout the gamification process to ensure that students understand the relevance of their learning objectives and remain engaged.

Gamification is employed in all sectors of life. The US military introduced gamification for recruits to evaluate their skills. According to Polyanska et al. [32], many companies apply gamification in sales, to create a good work environment, and to fight disinformation. For instance, during the COVID-19 lockdown, several gamified techniques were created in collaboration with the World Health Organization (WHO) to spread authentic information and promote distance learning. Figure 2 shows a gamified technique about social distancing during the COVID-19 pandemic [10].
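As a hedged illustration of this adaptive reward idea, the sketch below awards points and a badge from two simple progress signals. The thresholds and names are assumptions for demonstration only, not a description of any particular app.

```java
// Illustrative sketch: an app tracks a student's progress and awards
// points and a badge accordingly, as described in the paragraph above.
public class RewardEngine {
    public static String awardFor(int lessonsCompleted, int correctStreak) {
        int points = lessonsCompleted * 10 + correctStreak * 5;
        String badge;
        if (points >= 200)      badge = "Gold";
        else if (points >= 100) badge = "Silver";
        else if (points >= 50)  badge = "Bronze";
        else                    badge = "Starter";
        return points + " points, " + badge + " badge";
    }

    public static void main(String[] args) {
        System.out.println(awardFor(12, 7)); // prints: 155 points, Silver badge
    }
}
```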


Fig. 2 Gamified technique for social distancing

It is now well established that gamification and mobile learning are often combined to create interactive and engaging learning experiences. This can involve incorporating games, badges, rewards, progress tracking, competition, and collaboration into mobile learning apps [33]. For instance, a language learning app might use games to reinforce key concepts, reward users for completing tasks, track their progress, and encourage competition and collaboration among users. However, it is crucial to use these technologies responsibly and take into account their potential risks and unintended consequences. The educational environment also uses virtual reality as a basis for gamification. Recently, there has been renewed interest in the Stanford University Virtual Human Interaction Lab, which designed a user-friendly, downloadable Stanford Ocean Acidification Experience (SOAE) for learning. The SOAE uses cutting-edge technology to provide an interactive and exploratory teaching atmosphere about the consequences of carbon emissions on ocean waters [34].

2 Methods of Gamification and Engineering Design

The first stage in gamification techniques is planning: the gamification themes are identified, prioritized, and verified for applicability. During the verification stage, the legal and ethical approval of the games or lectures is considered, as well as the expiration of the selected themes or courses. The next stage is the game instance stage, characterized by the library architecture and its merits and disadvantages. Here the users and user types are analyzed: age, sex, gender, level of education, exposure, race, language, and religion are defined. Next comes the execution of plans and ideas: the model is prepared and theme selection progresses to the engineering designs. The prototype is created and evaluated, and the gamification is simulated. Finally, the prototype is implemented and validated, and the gamification is adopted [35]. The overall setup is depicted in Figs. 3, 4 and 5, respectively, and a compact sketch of the pipeline follows below.

Fig. 3 Gamification design (stages: planning; game instance; user analysis; execution of ideas; validation and implementation)
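A compact sketch of this pipeline, under the assumption that each stage must pass a gate (for example, legal and ethical approval at the planning stage) before the next stage begins, is given below; the stage names follow Fig. 3, but the gate logic is a placeholder.

```java
import java.util.List;

// Sketch of the Fig. 3 design pipeline: ordered stages with a pass/fail gate.
public class GamificationPipeline {
    enum Stage { PLANNING, GAME_INSTANCE, USER_ANALYSIS, EXECUTION, VALIDATION }

    interface Gate { boolean passes(Stage stage); }

    public static boolean run(List<Stage> stages, Gate gate) {
        for (Stage stage : stages) {
            if (!gate.passes(stage)) {
                System.out.println("Stopped at " + stage + "; rework required");
                return false; // e.g., missing legal/ethical approval at PLANNING
            }
            System.out.println(stage + " complete");
        }
        return true; // prototype validated; gamification adopted
    }

    public static void main(String[] args) {
        run(List.of(Stage.values()), stage -> true); // all gates pass in this demo
    }
}
```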


Fig. 4 Game simulation


Fig. 5 Game validation/analysis

3 Conclusion

In overall gamification, the idea, the modification, and the action give feedback to the user. The game theme objectives are handed to the gamification expert, who develops the concept. Experts such as software engineers are incorporated in the modeling of the language to achieve a gameful application. The methodology elaborated here is for educational purposes and should not circumvent industrial gamification engineering protocols. More ethical values are needed for gameful development.

References

1. Abdul Rahman, M. H., Ismail Yusuf Panessai, I., Mohd Noor, N. A. Z., & Mat Salleh, N. S. (2018). Gamification elements and their impacts on teaching and learning—A review. The International Journal of Multimedia & Its Applications, 10(6), 37–46. https://doi.org/10.5121/ijma.2018.10604
2. Shi, L., & Cristea, A. I. (2016). Motivational gamification strategies rooted in self-determination theory for social adaptive e-learning. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9684, 294–300. https://doi.org/10.1007/978-3-319-39583-8_32
3. Giráldez, V. A., Sanmiguel-Rodríguez, A., Álvarez, O. R., & Navarro-Patón, R. (2022). Can gamification influence the academic performance of students? Sustainability, 14(9), 5115. https://doi.org/10.3390/SU14095115
4. Lampropoulos, G., Keramopoulos, E., Diamantaras, K., & Evangelidis, G. (2022). Augmented reality and gamification in education: A systematic literature review of research, applications, and empirical studies. Applied Sciences (Switzerland), 12(13). MDPI. https://doi.org/10.3390/app12136809
5. Dichev, C., & Dicheva, D. (2017). Gamifying education: What is known, what is believed and what remains uncertain: A critical review. International Journal of Educational Technology in Higher Education, 14(1). https://doi.org/10.1186/S41239-017-0042-5
6. Tondello, G. F., Wehbe, R. R., Diamond, L., Busch, M., Marczewski, A., & Nacke, L. E. (2016). The gamification user types hexad scale. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (pp. 229–243). https://doi.org/10.1145/2967934.2968082
7. Landers, R. N., Auer, E. M., Helms, A. B., Marin, S., & Armstrong, M. B. (2019). Gamification of adult learning: Gamifying employee training and development. In The Cambridge Handbook of Technology and Employee Behavior (pp. 271–295). Cambridge University Press. https://doi.org/10.1017/9781108649636.012
8. Medalia, A., & Saperstein, A. (2011). The role of motivation for treatment success. Schizophrenia Bulletin, 37(Suppl 2), S122. https://doi.org/10.1093/SCHBUL/SBR063
9. Gamification as an effective learning tool to increase learner motivation and engagement. (2021). International Journal of Advanced Trends in Computer Science and Engineering, 10(5), 3004–3008. https://doi.org/10.30534/ijatcse/2021/111052021
10. 12 Gamification trends for 2022/2023: Current forecasts you should be thinking about—Financesonline.com. https://financesonline.com/gamification-trends/. Accessed March 6, 2023.
11. Dufort, D., et al. Gamification, artificial intelligence and support to motivation. Actes de conférences. https://hal.science/hal-03598969
12. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
13. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
14. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410.
15. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
16. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
17. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
18. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences (Vol. 16, p. 02004). EDP Sciences.
19. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
20. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022, May). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers-Bridge Engineering (pp. 1–8). Thomas Telford Ltd.


21. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622
22. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.
23. Dimililer, K., & Bush, I. J. (2017). Automated classification of fruits: Pawpaw fruit as a case study. In Man-Machine Interactions 5: 5th International Conference on Man-Machine Interactions, ICMMI 2017, held at Kraków, Poland, October 3–6, 2017 (pp. 365–374). Springer International Publishing.
24. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of Conferences (Vol. 9, p. 03002). EDP Sciences.
25. Abiyev, R., Idoko, J. B., & Arslan, M. (2020). Reconstruction of convolutional neural network for sign language recognition. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1–5). IEEE.
26. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690.
27. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing—ICAFS-2018 (Vol. 13, pp. 239–248). Springer International Publishing.
28. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation: Proceedings of the INFUS 2021 Conference, held August 24–26, 2021 (Vol. 2, pp. 273–280). Springer International Publishing.
29. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International Conference on Transportation and Development 2020 (pp. 194–203). American Society of Civil Engineers.
30. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022, November). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE.
31. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978–3).
32. Polyanska, A., Andriiovych, M., Generowicz, N., Kulczycka, J., & Psyuk, V. (2022). Gamification as an improvement tool for HR management in the energy industry—A case study of the Ukrainian market. Energies, 15(4), 1344. https://doi.org/10.3390/EN15041344
33. Apandi, A. M. (2019). Gamification meets mobile learning (pp. 144–162). https://doi.org/10.4018/978-1-5225-7832-1.CH009
34. The Stanford Ocean Acidification Experience | VHIL. https://stanfordvr.com/soae/. Accessed March 6, 2023.
35. GDF: A Gamification Design Framework powered by Model-Driven Engineering. https://modeling-languages.com/gdf-gamification-design-framework-model-driven-engineering/. Accessed March 4, 2023.

A Comprehensive Review of Virtual E-Learning System Challenges

John Bush Idoko and Joseph Palmer

Abstract The development of advanced smart electronic devices and applications in the twenty-first century has made the World Wide Web a learning platform and an easy channel for disseminating information. This research explores some of the challenges encountered in using virtual e-learning systems (VE-LS) and proposes recommendations to address some of them. VE-LS has been a medium through which tutors pass on knowledge by training students in a non-physical form. This has created possibilities, as it achieves efficiencies that traditional education and tutoring could not. The COVID-19 pandemic disrupted many learning activities, which led academic institutions to merge conventional (face-to-face) and online virtual learning. This process is led by an instructor in an engaging manner to facilitate learning activities and assist with problem solving. However, several technological challenges were encountered, as the majority of users were novices with no idea how to use such systems. In this study, we review some of these challenges and proffer possible solutions to them.

Keywords Virtual e-learning · Covid-19 · Online learning · Framework · Content filtering

1 Introduction

Information retrieval and sharing is a recent trend of daily online activity, for either academic purposes or personal usage. According to postulations in [1], functional learning has been defined as an experience-induced change in attitude that has been characterized in the organism. Research has identified that adaptation and behavioral


change can be induced by patterns of the environment. Given some of the dismay with conventional traditional lectures, people expect more from their education than what textbooks and lectures provide. As noted in [1], "The increase of e-learning systems is likely to be considered based on economic factors, but it is made to grow exponentially by learner's demands for flexibility and more learner-centered learning." This is especially true in the digital era, when so much information is easily accessible online via smart electronic devices.

As e-learning has become more prominent among academic institutions, it has created competition among them. Considering the idea of distributing subject matter between lecturers and students via a platform, experts have seen it necessary to use systems that support the learning process in universities and other institutional settings, helping them grow and remain competitive. There are visibly several issues with the present ways of tutoring in tertiary institutions, including a lack of adequate training and commitment by tutors and unreliable technology systems with which tutors may be unfamiliar. These pull down the effort to monitor students, and the low technical ability of students to handle e-learning systems, which is also often not taken into consideration, hinders motivation to use virtual e-learning systems. The workload of many lecturers in higher institutions also contributes to e-learning being seen as an inefficient medium for mentoring students; some institutions would rather stay with traditional face-to-face learning.

Considerable research has been done on the efficiency and effectiveness of online e-learning, and some of its restrictions, glitches, and challenges, which discourage or prevent e-learning from being used by tutors and students, have been considered in studies by Hazwani and associates in [2] and Irfan and colleagues in [3]. Irfan and associates in [3] stated that online learning is ineffectual and is conducted inappropriately; this is indicated by several factors, to name but a few: unavailability of the internet, tutors' inability to handle online e-learning systems, and minimal cooperation from parents. On the other hand, in [1], Buhari and colleagues disputed this and found that online learning is effective but inefficient. To their knowledge, e-learning is effective in complementing traditional learning as a means of improvising and preventing the spread of COVID-19, hence adhering to the urgency of the pandemic. However, this brings a huge cost burden, as suitable internet packages are needed to enable connectivity with e-learning systems, which may also undermine the outcomes of lessons taught by lecturers or tutors. In addition, Hazwani and colleagues in [2] consider online learning effective because of the existence of various application platforms such as WhatsApp, Zoom, and Google Classroom. Buhari and Roko in [1], on the other hand, agree that internet access and internet packages limit the effectiveness of online learning. According to Ja'ashan in [4], the deficiency in teacher-student interaction, internet connectivity issues, and a lack of technological facilities all pose challenges to the effectiveness of virtual e-learning. According to Hazwani et al. in [2], institutional infrastructure is also a key factor in facilitating efficient online learning: good infrastructure aids easy internet accessibility for students, while poor infrastructure impedes internet availability.


The success of virtual e-learning is not limited to the availability of internet access; the commitment of lecturers and students to open up to accepting new technologies also counts. As stated by Ja'ashan in [4], some of the infrastructure and technology used to develop existing e-learning systems is long outdated, so the environment (modules) users interact with becomes more difficult to use; poor internet can also affect the system, leaving users with little choice but to become discouraged from getting acquainted with it. E-learning systems vary at different points of systems development, and the functionality to be embedded in the system, such as live video streaming, upload and download of lecture material, e-exam systems, assessments, and server-based connectivity, should be given optimum consideration.

Web-based systems have developed over the years to provide relevant information in intelligent ways. With semantic web technology, the WWW architecture plays a relevant role in supporting web content related to formal semantics. Semantic web information is expressed in a well-defined manner, with precise meaning, so that computers and people can work in cooperation; this is mainly to have tasks and functions handled by intelligent machines. The concept was proposed by Tim Berners-Lee, head of the W3C. Giant tech companies such as Google and Microsoft, with some of their intelligent products used for e-learning, such as "Google classroom, Gmail, Duo, Google forms, and Microsoft Team[,] have helped the institutions shift from on-campus to online education" [5]. These online systems have aided institutions in conducting online virtual education; however, they come with a huge cost for maintenance and technical support, and issues related to installation and hosting may require an expert to manage the system. They also carry restrictions on the number of enrolled students and on the accessibility of grade books for assessment records, video, and audio.

Furthermore, individual interaction between students and teachers becomes a prominent issue during online education. Teachers' workload increases as they demonstrate practical subjects using advanced technology, and teachers and students must both learn about modern technologies. As a result of this adaptation, and in addition to the search for new technological resources for virtual online teaching, students and teachers prudently need mentoring and guidance in order to improve their skills and knowledge of online learning methods; otherwise, the environment may be less engaging, confusing, and boring for tutors and students. An institution may also find it costly to maintain an online virtual system and to train tutors [6].

2 Related Works

Virtual e-learning engagement has grown intensively in recent years; this has made many academic institutions merge traditional lectures with online learning in order to uphold and maintain the labor market, putting measures in place to train staff in the use of advanced technology. However, numerous challenges are still faced by


teachers and students. Extensive research has been carried out on how e-learning can be made more interesting and easier for tutors to use. Existing systems such as Moodle and Blackboard, which have been around for decades, have been doing extremely well but still have numerous challenges and limitations. According to Lungu [7], Moodle is free software, a learning management system providing a platform for e-learning; it helps educators considerably in conceptualizing courses, course structures, and curricula, thus facilitating interaction with online students. As a widely used platform, it enables people to create engaging and utterly personal online learning courses for the people who matter most, whether students, team members, partners, affiliates, recruits, or even volunteers. Every coin has two sides, however: even though Moodle is considered one of the most used e-learning platforms for its flexibility and many tools, it lacks certain critical features that an e-learning system sorely needs, such as video, audio, and whiteboard inclusion.

Digital Chalk LMS is another robust, flexible online training software with an excellent support team. It provides a learning experience for both academic and corporate organizations, and it also provides the option to sell online learning courses [7]. It is a time-saving, efficient learning management system (LMS) that enables users to design and launch their course products their way, and it is designed to handle the needs of small organizations through large enterprises, supporting course sales and internal course delivery. Like other LMS applications, Digital Chalk has its own pros and cons. For instance, there are no Blackboard/Moodle-style integrated features (video chat, blogs, forums, etc.), and it is limited to a video size of about 500 MB at a relatively low resolution, which makes viewing difficult for students. Its limited storage space may hinder the upload of the long video lecture content that helps students better understand relevant lessons. More generally, systems that have been around for years have several limitations, especially in their design frameworks, which exclude many useful functions that are simply not considered in some e-learning systems.

Against this background, Ali et al. in [5] identified the difficulty of selecting appropriate learning resources based on users' learning requirements and interests. In particular, those who have yet to familiarize themselves with, or have little knowledge of, advanced internet technological tools in developing countries need a platform that could help them greatly in resolving some of the challenges encountered in an online virtual learning environment. To resolve this issue, the researchers proposed an intelligent mechanism that correctly predicts the appropriate selection of resources a user needs, based on the user's perspective; this enhances the quality of online education and familiarization with the advanced digital online learning environment. With collaborative-based filtering (CBF) and content filtering (CF), the mechanism draws on previous users' experiences and the current user's information, respectively, and it aids learners in identifying important materials and experience in accordance with their knowledge.
Daily virtual assistance for new learners in resolving issues has been another major problem, as stated by the researchers. Hybrid filtering (HF), a combination of CBF and CF, personalizes options and predictions and resolves semantic term-mismatch issues. With semantic-based prediction, the visualized environment is significantly improved based on the relevant selection of personalized preferences, and this approach outperformed existing methods.
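A minimal sketch of hybrid filtering in this spirit is shown below: a weighted blend of a collaborative score (evidence from similar users) and a content score (match between resource metadata and the learner's profile). The weighting and the score sources are illustrative assumptions, not the exact mechanism from [5].

```java
import java.util.Map;

// Sketch of hybrid filtering (HF): blend collaborative and content evidence.
public class HybridRecommender {
    // Weight given to collaborative evidence vs. content match (assumption)
    private static final double ALPHA = 0.6;

    public static double hybridScore(double collaborativeScore, double contentScore) {
        return ALPHA * collaborativeScore + (1 - ALPHA) * contentScore;
    }

    public static void main(String[] args) {
        // Candidate learning resources with (collaborative, content) scores in [0, 1]
        Map<String, double[]> candidates = Map.of(
            "Intro to Java video", new double[]{0.9, 0.7},
            "Advanced SQL notes", new double[]{0.4, 0.8});
        candidates.forEach((name, s) ->
            System.out.printf("%s -> %.2f%n", name, hybridScore(s[0], s[1])));
    }
}
```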

In a quantitative study of the challenges and prospects of using e-learning among EFL students at Bisha University, Ja'ashan [4] administered a questionnaaire categorized into academic, technological, and student challenges, with responses from 36 teaching staff and 261 randomly selected students. Academic challenges recorded a total mean of 3.60 and a standard deviation of 0.91, indicating that academic staff agree they face several challenges while using e-learning to teach. A mean of 3.62, which is high on the scale, reflects a lack of interest in preparing and integrating lecture material online; this clearly dampens students' interest in online learning and limits interaction between students and teaching staff. Technological software applications tend to solve several of the challenges we face, but they also bring difficulties in learning how to use them. Table 1 summarizes the technological challenges reported by the 36 surveyed teaching staff [4].

Jabr and Al-omari [8] proposed a web-services-oriented framework for an e-learning management system with cross-browser client-side scripting support that integrates several databases. The framework speeds up access to required data facilities and enables students to take control of their own learning facilities and experience. Whereas earlier e-learning systems based their development on HTML, a more dynamic means of transforming web content using XML, the extensible markup language accepted as a standard for web page development, can further improve the user experience. The proposed framework increases the efficiency and strength of collaborative learning with regard to reusability, accessibility, and modularization.

Access to e-learning systems should not be restricted to larger towns or to particular periods. Owing to limited internet access in many regions, students living in remote rural areas find it difficult to reach e-learning systems; in the worst cases they can connect, but poor bandwidth and speed undermine the experience.

Table 1 Technology challenges of teaching staff [4]

No | Statement | Mean | Std. deviation
7 | Lack of technology/software required for home access | 3.60 | 1.00
8 | Lack of technical support/advice | 4.02 | 1.04
9 | Lack of necessary adaptive technology | 3.88 | 0.96
10 | Inaccessibility of audio/video material, PDF, PowerPoint, etc. | 3.91 | 1.01
11 | Technological background | 3.60 | 0.77
12 | Lack of training courses provided by institutions | 3.71 | 0.89
13 | The software of e-learning is too complicated to use | 3.45 | 1.01
Total | | 4.64 | 0.65
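As a minimal sketch of how the survey statistics in Table 1 could be computed from raw Likert-scale responses (1-5), the snippet below uses Python's standard library; the response lists are illustrative assumptions, not Ja'ashan's actual data.

```python
# Compute mean and (population) standard deviation for Likert responses.
# The sample responses below are illustrative assumptions only.
from statistics import mean, pstdev

responses = {
    "Lack of technical support/advice": [4, 5, 4, 3, 4, 5, 4, 3],
    "Technological background": [4, 3, 4, 3, 4, 4, 3, 4],
}

for statement, scores in responses.items():
    print(f"{statement}: mean={mean(scores):.2f}, sd={pstdev(scores):.2f}")
```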


Because of these limitations, many students in Gulf countries have little interest in learning new and more beneficial courses such as programming, web development, and design, and this discourages the use, and defeats the purpose, of e-learning platforms. Siddiqui and associates [9] observed that although people make significant use of, and have broad access to, the internet, the confidence to learn new things is often lacking because of gaps in basic education. In many Asian and Gulf countries such as India, Saudi Arabia, and Egypt, whose geographical areas are extremely wide and heavily populated, providing improved basic education for everyone has become a national priority. Siddiqui and Masud [9] presented an interactive system for e-learning. The system includes a dedicated educational satellite responsible for distributing e-learning content to the universities connected to it, thereby improving performance; this is achieved using a satellite that works on spot-beam technology supported by VSAT terminals. This method relays content from key centers to the satellite, which in turn distributes it to university websites that students can access through web and mobile applications. A satellite connection may prove fast enough for daily use, but in many cases its latency affects data transmission. Wired internet such as coaxial cable can offer data speeds of up to 2,000 Mbps, much higher than satellite transmission, and is more reasonable to maintain and afford [9].

3 Challenges Encountered During the Covid-19 Pandemic with E-Learning Systems

The majority of academic institutions moved their lectures online during Covid-19 to prevent the spread of the deadly virus. However, little preparation went into the usage of e-learning systems, so even users who understood the concept could not necessarily operate the environment appropriately. The issues associated with e-learning are mostly linked to inexperienced users and slow internet connectivity: users are not well oriented on how to use the systems, and the design frameworks may not favor the physically challenged. Problems of speed and data transfer, such as uploading hard-copy material and transmitting live video over slow connections, also raise costs, which demotivates the use of e-learning by many students and parents alike. Any successful information system relies on the usage of the system by its users [8]. Siddiqui and Masud [9] restated some of the main thematic issues encountered with e-learning systems. As earlier mentioned by Ali et al. [5], e-learning users may face difficult challenges if they are not familiar with the technology itself. As demand for e-learning courses increases, educational institutions, especially those offering distance learning, should make provision for self-teaching resources or automated virtual assistance for their e-learning systems.


4 Research Methodology

This review follows a qualitative research methodology to attain the objectives of the study and to address the questions raised thoughtfully and specifically. The qualitative approach draws on multiple techniques, including participant observation, individual or focus-group interviews, data analysis, documentation analysis, and more, to understand and describe the meanings, relationships, and patterns in a given dataset [10–30]. A detailed study was made using a catalog of papers, and the best research dealing with the issues raised, together with precise solutions, was presented. Finding the most appropriate papers from the sources is the sole aim of this review; to capture the best possible results, these papers were reviewed and analyzed. Many issues and possible solutions related to e-learning challenges were raised. From the various articles and submissions, however, e-learning challenges do not emerge from a single point of view: many parties are involved, namely the user, the framework used, and the technology used, and, most importantly, the connectivity of the e-learning systems has contributed in one way or another to these challenges. Table 2 summarizes the major challenges of e-learning systems and their corresponding solutions.

Table 2 Summary of challenges of e-learning systems and solutions

Authors | Challenges observed | Methods | Solutions outcome
Ali et al. (2021) [5] | Difficulty in selecting appropriate learning resources | Collaborative-based filtering (CBF) and content filtering (CF) | Visualized environments are improved by selecting personalized preferences
Ja'ashan (2020) [4] | User and technological challenges | Quantitative research method | Proposes training users on e-learning systems and quick virtual response support
Jabr and Al-omari (2010) [8] | Slow web access to data facilities | Web-services-oriented framework | Increased web page load speed
Siddiqui and Masud (2012) [9] | Lack of motivation to learn new things via e-learning | Satellite using spot-beam technology supported by VSAT | Improved relaying of e-learning resources from key learning centers to university websites


5 Discussion

Virtual e-learning systems have been used and implemented by educational institutions, medical bodies, and other private companies to teach students and employees over the past years, and this mode of learning is most prominent in educational institutions. Since the insurgence of the Covid-19 virus, many institutions have had to merge traditional face-to-face tutoring with e-learning systems; this has met enormous challenges across the globe, especially in developing countries where most universities had been accustomed only to face-to-face learning. Virtual e-learning brings many benefits despite its challenges with users, internet connectivity, design, and system deployment. Researchers have identified most of these issues in previous work, and some have helped to solve them and make the system environment more accessible. Given the various issues of e-learning, the design, the management, and the time needed to tutor users on how to use the platform must be taken seriously. As stated by Siddiqui and Masud [9], ninety percent (90%) of users in the Arab region have access to the internet, but distance and quick access to e-learning resources were major challenges; a satellite working on spot-beam technology supported by VSAT has helped distant institutions access e-learning resources easily. This method may look like an improvement, but it comes with its own system and maintenance costs. Coaxial cable is considered faster for internet connectivity, transferring data at speeds of up to 2,000 Mbps, and is a good, reliable connection, but distance limitations will derail its connectivity and in turn slow down many e-learning functions, such as video streaming, uploading and downloading learning resources, e-exams, and other relevant features. As new e-learning users are added every day, more issues will be encountered, because the servers holding learning resources will experience heavy network traffic from millions of smart handheld and laptop devices requesting web page loads.

6 Recommendations

Designing and implementing e-learning without providing aids that consistently teach users how to use the virtual learning system does not help; it discourages the use, and undermines the motive, of e-learning. Under busy schedules, preparing material with little idea of how to work the e-learning system has discouraged its use in many developing countries. Instant virtual support, and a concise system-design framework using newer technology that functions across all web browsers, would immensely improve connectivity, page-load times, and the readability of resources; an intelligent mechanism that correctly predicts the appropriate selection of resources based on the user's perspective is also of the essence.


"Despite the importance of web accessibility in recent years, websites remain partially or completely inaccessible to certain sectors of the population" [31]. Access for users with disabilities has been another demerit of e-learning systems, one that many developers overlook. The W3C's Web Content Accessibility Guidelines (WCAG), the standard for web accessibility, should be taken into consideration while designing a virtual e-learning system, together with advanced accessibility tools; doing so improves accessibility for the physically challenged. Additionally, thousands or millions of devices access virtual e-learning systems, creating dense network traffic that can reduce network speed and slow internet connectivity. The development and implementation of AI in network traffic control has brought immense help in achieving relevant delivery and efficient resource utilization through monitoring, inspecting, and regulating data flows [32]. Powerful smart mobile devices and pervasive ultra-dense radio networks have dramatically extended network scale with explosive volumes of data traffic, putting a major strain on internet management as IoT technology develops and the beyond-fifth-generation (5G) era arrives. The internet's traffic-flow models and service architecture have also undergone considerable change as a result of developments in central cloud and smart edge services. New technologies are needed to manage traffic control in a highly scalable and adaptive way. Access to web pages and servers drives heavy traffic that cannot, in fact, be handled by human capability alone; this affects server page loads, access to learning resources, and connectivity. To cope with traffic control over a dynamic, large-scale topology, AI, which has human-like capabilities in environment perception, data processing, and strategy determination, is currently a promising option. The tremendous increase in smart devices cannot be handled by 4G, hence 5G is essential: its accelerated wireless networks offer ultra-fast, reliable wireless links [33]. This deployment is expected to drive traffic volumes of around 100 exabytes and to affect over 30 billion devices.

7 Conclusion

Virtual e-learning complements face-to-face lectures in the twenty-first century. It has advanced features compared to textbooks and classroom lecturing. Today's learners are more mobile and digitally plugged in, enabling them to interact with courses, services, and resources wherever and whenever it is convenient. Airborne diseases like the coronavirus have destabilized academic activities, especially in developing countries, and have affected the quality of delivery and students' academic performance and output. Challenges with e-learning systems are inevitable; however, they are addressable if the necessary user requirements, framework design, and implementation are taken into consideration. This will significantly strengthen e-learning and help improve the learning process and its results.


References

1. Buhari, B. A., & Roko, A. (2017). An improved e-learning system. Saudi Journal of Engineering and Technology, 2(2), 114–118.
2. Hazwani Mohd, N., Noor Raudhiah Abu, B., & Norziah, O. (2020). E-Pembelajaran Dalam Kalangan Pelajar Di Sebuah Institusi Pengajian Tinggi Selangor [E-learning among students at a higher education institution in Selangor]. Malaysian Online Journal of Education.
3. Irfan, F., & Iman Hermawan Sastra, K. (2020). Teachers' elementary school in online learning of COVID-19 pandemic conditions. Jurnal Iqra', Jakarta.
4. Ja'ashan, M. M. N. H. (2020). The challenges and prospects of using e-learning among EFL students in Bisha University. Arab World English Journal, 11(1), 124–137. https://doi.org/10.24093/awej/vol11no1.11
5. Ali, S., Hafeez, Y., Abbas, M. A., Aqib, M., & Nawaz, A. (2021). Enabling remote learning system for virtual personalized preferences during COVID-19 pandemic. Multimedia Tools and Applications, 80, 33329–33355.
6. Renée, C. (n.d.). Getting to know the Blackboard learning system. Manager: eLearning, 043704(7031), 1.
7. Lungu, M. (2016). Studyportals online course. Retrieved February 3, 2023, from Distance Learning Portal: https://www.distancelearningportal.com
8. Jabr, M. A., & Al-omari, H. K. (2010). E-learning management system using service oriented architecture.
9. Siddiqui, A. T., & Masud, M. (2012). An e-learning system for quality education. International Journal of Computer Science Issues (IJCSI), 9(4), 375.
10. Salihu, A. (2013). Designed and implemented an e-learning system for a vocational study centre, Jedo Computer Institute.
11. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
12. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
13. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410.
14. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
15. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
16. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
17. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences (Vol. 16, p. 02004). EDP Sciences.
18. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
19. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers—Bridge Engineering (pp. 1–8). Thomas Telford Ltd.
20. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622
21. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.


22. Dimililer, K., & Bush, I. J. (2017). Automated classification of fruits: Pawpaw fruit as a case study. In Man-Machine Interactions 5: 5th International Conference on Man-Machine Interactions, ICMMI 2017, Kraków, Poland, October 3–6, 2017 (pp. 365–374). Springer International Publishing.
23. Bush, I. J., & Dimililer, K. (2017). Static and dynamic pedestrian detection algorithm for visual based driver assistive system. In ITM Web of Conferences (Vol. 9, p. 03002). EDP Sciences.
24. Abiyev, R., Idoko, J. B., & Arslan, M. (2020). Reconstruction of convolutional neural network for sign language recognition. In 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (pp. 1–5). IEEE.
25. Abiyev, R., Idoko, J. B., Altıparmak, H., & Tüzünkan, M. (2023). Fetal health state detection using interval type-2 fuzzy neural networks. Diagnostics, 13(10), 1690.
26. Arslan, M., Bush, I. J., & Abiyev, R. H. (2019). Head movement mouse control using convolutional neural network for people with disabilities. In 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing—ICAFS-2018 (Vol. 13, pp. 239–248). Springer International Publishing.
27. Abiyev, R. H., Idoko, J. B., & Dara, R. (2022). Fuzzy neural networks for detection kidney diseases. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation: Proceedings of the INFUS 2021 Conference, August 24–26, 2021 (Vol. 2, pp. 273–280). Springer International Publishing.
28. Uwanuakwa, I. D., Isienyi, U. G., Bush Idoko, J., & Ismael Albrka, S. (2020). Traffic warning system for wildlife road crossing accidents using artificial intelligence. In International Conference on Transportation and Development 2020 (pp. 194–203). American Society of Civil Engineers.
29. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE.
30. Idoko, J. B., Arslan, M., & Abiyev, R. H. (2019). Intensive investigation in differential diagnosis of erythemato-squamous diseases. In Proceedings of the 13th International Conference on Theory and Application of Fuzzy Systems and Soft Computing (ICAFS-2018) (Vol. 10, pp. 978–3).
31. Assareh, A., & Bidokht, M. H. (2011). Barriers to e-teaching and e-learning. Procedia Computer Science, 3, 791–795.
32. Saleh, E. (2020). Using e-learning platform for enhancing teaching and learning in the field of social work at Sultan Qaboos University, Oman. In E-Learning and Digital Education in the Twenty-First Century. IntechOpen.
33. Graydon, M., & Parks, L. (2020). 'Connecting the unconnected': A critical assessment of US satellite Internet services. Media, Culture & Society, 42(2), 260–276.

A Semantic Portal to Improve Search on Rivers State's Independent National Electoral Commission

John Bush Idoko and David Tumuni Ogolo

Abstract Semantic search portals in electoral processes have emerged as a promising approach to enhance the accessibility, efficiency, and transparency of electoral information. This research focuses on applying a semantic search portal within Nigeria's context of the Independent National Electoral Commission (INEC). The objective is to provide an overview of the benefits and implications of implementing a semantic search portal in the electoral domain. The semantic search portal leverages the power of semantic web technologies and ontologies to enable efficient and intelligent information retrieval. The portal facilitates accurate and context-aware search results by representing electoral data and knowledge in a structured and interconnected manner. It allows users, including citizens, researchers, and electoral officials, to access and retrieve relevant information regarding the electoral process, candidates, and policies. Developing and implementing the semantic search portal in collaboration with INEC Nigeria entails several crucial steps. These include the creation of a comprehensive ontology that captures the complexity of the electoral domain, integrating diverse data sources, utilizing natural language processing techniques for query understanding, and incorporating machine learning algorithms for search relevance and personalization. By embracing a semantic search portal, INEC Nigeria can realize numerous benefits, such as improved access to electoral information, enhanced decision-making processes, increased citizen engagement, and greater transparency in the electoral process. The portal can also contribute to data-driven insights, predictive analytics, and policy evaluation, thus empowering stakeholders to make informed decisions and strengthen democratic practices.

Keywords Semantic search portal · Electoral process · INEC Nigeria · Semantic web · Ontologies · Information retrieval · Transparency · Citizen engagement



1 Introduction

In this study, we designed and developed a new ontology model for electoral processes in Rivers State, Nigeria. After successfully testing the framework in a workflow environment, it is clear that there is great potential for further development. This chapter provides a comprehensive overview of the project.

Elections are the formal process through which a population chooses individuals to hold public office; they have been the primary method of practicing representative democracy since the seventeenth century [1]. They play a crucial role in giving citizens a voice in the political process and a fair chance to participate in decision-making. Elections allow citizens to elect representatives democratically by voting for their preferred candidates; the candidate with the most votes is elected, and elections are held at various levels of government and in different organizations. The election process involves nomination, campaign, voting, and counting stages. In Nigeria, election fraud poses a significant challenge to establishing a foundation for democracy; the country needs more effective election management and transparent regulations, and it suffers from manipulative tactics by political elites [2]. To build a solid democratic foundation, free, fair, and credible elections are essential; election regulations must be effective, and political elites should avoid imposing unpopular candidates or engaging in manipulative tactics [3]. The Independent National Electoral Commission (INEC) is responsible for planning, managing, and conducting elections in Nigeria.

The Internet has transformed various aspects of our lives, including banking, social media, e-commerce, education, entertainment, communication, and information sharing. It has become an indispensable tool for individuals and businesses, revolutionizing online sales, entertainment, education, and communication, and it continues to evolve and adapt to new technologies, playing a vital role in our daily lives [4]. The Semantic Web aims to make web content understandable not only to humans but also to machines. Currently, most web data is accessible to humans but not readily processed by machines. Semantic tools and the concept of the Semantic Web enable machines to analyze data at a semantic level, going beyond simple syntax processing [5]. The Semantic Web relies on structured and ordered data presentation, allowing machines to understand and analyze information. Ontologies play a crucial role in encoding concepts, relationships, and restrictions within the semantic model, providing formal descriptions that machines can use to implement automation and reuse information across applications [6]. The Semantic Web enhances the search capabilities of the Web by considering the intent behind user queries, going beyond simple phrase matching. It aims to bring artificial intelligence to the Web, enabling machines to understand the meaning of queries and provide more accurate search results. Additionally, the Semantic Web enables machines to reason automatically, make inferences, and draw conclusions from available data, and it facilitates automated data aggregation and integration from multiple sources, simplifying user access to information [7].


Overall, the Semantic Web represents a significant shift in how we interact with the Web, providing a more intelligent and intuitive user experience. As the technology evolves and improves, it will profoundly impact how we live and work. The layers that Tim Berners-Lee proposed for the Semantic Web are: (1) Unicode and URI; (2) XML, XML Schema, and RDF; (3) RDFS ontology; (4) SPARQL query; (5) logic; (6) proof; (7) trust; and (8) user interface.

This research project integrates several of these concepts: RDF, ontology, SPARQL, and the user interface. RDF is a standard for describing web resources, an ontology formalizes the description of items and their relationships in a semantic model, and SPARQL is a query language used to access and modify RDF data. The project explores how these concepts can be combined within ontologies to improve and automate electoral processes using the Semantic Web. The focus is on creating a formalized ontology of elections using Protégé 5.0 as the implementation tool and description logic as the language of representation.

The ARPA Knowledge Sharing Effort (Neches, 1991) presented a novel approach to developing intelligent artificial-intelligence systems from reusable components. This concept proposes that reusable components form the basis or structure of new systems, into which specific expertise and unique reasoning approaches can be integrated to tackle a particular challenge [8]. An "ontology" is a formal and explicit declaration of the ideas, traits, and connections that make up a domain; it represents the "static" knowledge associated with it, while problem-solving approaches relate to the strategies used to address specific issues within that field [9]. Utilizing knowledge components across various systems and tasks can significantly reduce the duration and expense of constructing new intelligent systems; reusing these components offers large cost and time savings compared to developing them from scratch [10], because acquiring domain knowledge, constructing conceptual models, and formalizing and implementing such knowledge require considerable time and effort [11].

The Independent National Electoral Commission (INEC) faces challenges in defining the functions of each electoral officer, developing efficient processes for conducting elections, and providing easier and quicker access to the information and data required. This is a problem even though significant benefits can be achieved by using Semantic Web technology to create search systems. To address this issue, we investigate the Semantic Web concept to improve the search structure of INEC's website. By implementing the Semantic Web approach, INEC can improve the efficiency and effectiveness of its electoral processes and enhance its ability to provide accurate and timely information to stakeholders.


The specific goals of this research project are as follows: first, to establish a shared comprehension of the structure of election processes; second, to gather and analyze diverse data from multiple sources, including the Independent National Electoral Commission (INEC) website, and consolidate it into a unified system using an ontology based on OWL (the Web Ontology Language); third, to facilitate the reuse of domain knowledge, allowing the integration of existing knowledge components; fourth, to perform queries on the data derived from the OWL ontologies, enabling efficient retrieval of information; and lastly, to provide prompt and accurate responses to user queries.

The research questions used competence criteria specific to Rivers State to query the ontology. These questions aimed to gather relevant information about the election process, such as identifying the key players involved, determining eligibility for executive and senate positions, understanding the positions candidates are running for, identifying political parties, knowing the procedures that take place on election day, and determining who has the right to cast a ballot. Addressing these research questions yields a comprehensive understanding of the electoral landscape in Rivers State.

The study highlights the importance of elections as the core process in contemporary representative democracies for selecting public officeholders. Although election procedures are assumed to be transparent and well understood, this is not always the case; we aim to make the election process easily understandable and accessible to everyone. This study proposes the development of a defined ontology of electoral procedures. By organizing concepts and relationships through domain knowledge, the study contributes to existing knowledge and enables the effective implementation of an electoral ontology framework in Rivers State, Nigeria. The study investigates the current state of electoral ontology, identifies obstacles to and influences on its growth, and evaluates the willingness of election participants to accept and use an election ontology. It also suggests methods for involving stakeholders in creating an election ontology, considering the institutional, legal, and regulatory elements of designing an efficient election procedure. The anticipated results will provide a platform through which INEC members, voters, and political parties can access detailed information about the election process, available positions, and candidates. The study uses fundamental Semantic Web elements to characterize the knowledge domain and provides a user-friendly interface for users to make inquiries and gather information about candidates and electoral processes.

The Semantic Web is a technology that enables machines to comprehend and interpret the meaning of data on the Internet. This research project analyzes the potential of the Semantic Web to enhance the organizational structure of INEC. Owing to time constraints, the study has a limited scope and concentrates on developing ontologies specifically for the Rivers State election domain within INEC. The objective is to explore how the Semantic Web can improve the effectiveness and efficiency of INEC's organizational structure and election procedures in Rivers State. This involves the creation of ontologies, which are structured vocabularies defining the concepts and relationships in a specific domain. While the project does not encompass all aspects of INEC's electoral processes, it aims to better understand how the Semantic Web can support INEC's work and identify areas where ontologies can enhance decision-making within the organization.
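To make the RDF, ontology, and SPARQL concepts from this introduction concrete, the following minimal Python sketch uses the rdflib library (which the system described later also uses) to build a few election triples and query them. The namespace, class, and individual names are illustrative assumptions, not the project's actual ontology.

```python
# Minimal RDF + SPARQL sketch for the election domain using rdflib.
# The namespace and all names below are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/election#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# A candidate contesting for a political position under a party.
g.add((EX.Candidate1, RDF.type, EX.Candidate))
g.add((EX.Candidate1, RDFS.label, Literal("Jane Doe")))
g.add((EX.Candidate1, EX.contestsFor, EX.Governor))
g.add((EX.Candidate1, EX.memberOf, EX.PartyA))

# SPARQL: which position is each candidate contesting for?
query = """
    SELECT ?name ?position WHERE {
        ?c a ex:Candidate ;
           rdfs:label ?name ;
           ex:contestsFor ?position .
    }
"""
for name, position in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(name, position)
```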


2 Literature Review

Regular, fair, and free elections are important institutions of liberal and participatory democracy. The form and structure of elections may vary depending on the preferences of the local polity. Candidates for leadership positions are chosen through democratic voting, which is a necessary component of a functioning democracy. It is important to note the clear distinction between "voting" and "elections": while voting plays a significant role in the election process, it is merely one of many components required for successful elections. The latter encompass many activities before, during, and after voting, including planning, voter registration, constituency delineation, station layout, polling method, vote counting, and results announcement. It is therefore crucial to acknowledge the importance of all the activities involved in elections to ensure a smooth and efficient process.

The 1964 Nigerian elections hold significant historical importance for the country: the violence, manipulation, and fraud during that election ultimately contributed to thirteen years of military rule and the Nigerian civil war. The literature also provides insight into the 1979 elections, which marked Nigeria's reinstatement of civil power. The election held by the Federal Military Government saw the participation of five political parties, and a coalition government was established in light of one party's incapacity to form an independent government [12]. The author emphasizes the importance of power-sharing in politics and criticizes the "winner takes all" approach that has caused problems in the past. The 2023 elections in Nigeria were subject to thorough examination and evaluation of various related matters, presented from an impartial perspective covering the campaigns, political parties, and voting procedures. Although many Nigerians hold divergent views on their country's future, many remain taciturn and ruminate on current affairs; nevertheless, citizens must express their opinions and actively participate in shaping the destiny of their nation [13]. Nigeria's electoral history has been plagued by violence, manipulation, and fraudulent elections, and the current winner-takes-all method of power distribution has proven ineffective. Nigeria must revive its culture of memory and derive valuable lessons from it to prevent the recurrence of past errors [12].

The Independent National Electoral Commission (INEC) is responsible for organizing fair and transparent elections in Nigeria. Comparative analyses of the electoral processes of Nigeria and other West African countries highlight key differences in the organizations responsible for overseeing elections, voting systems, laws concerning campaign financing, and incidents of violence and intimidation before and after elections. Recommended measures to improve Nigeria's electoral process include bolstering the independence and neutrality of INEC, preventing vote tampering and electoral fraud, ensuring the safety and protection of voters, promoting transparency and accountability, and encouraging political tolerance and peaceful forms of political expression. These measures would make Nigeria's electoral process more credible and reliable for all stakeholders [14].


Sir Tim Berners-Lee created the Web, which consists of interconnected hypertext documents distributed across the internet, commonly known as web pages. These pages can be accessed using web browsers and contain various forms of content, such as text, images, audio files, and videos [15]. The Hypertext Transfer Protocol (HTTP) enables the exchange and retrieval of this data, and Uniform Resource Locators (URLs) are used to visit HTML-coded web pages. Web 2.0 represents a more recent version of the Web and has brought new advancements compared to its initial iteration. It facilitated the rise of online social-networking platforms like Facebook and Twitter, leading to new concepts such as blogs, wikis, and other social media websites. Web 2.0 enables easier sharing and rating of information on websites without uploading an entire page. However, the sheer number of interconnected websites presents a challenge in finding specific information using current online applications, which highlights the need to make search engines more intelligent so that they can meet complex search queries; the emergence of Web 2.0 in 2000 was a response to this challenge, aiming to enhance the functionality and capabilities of web applications. The Semantic Web, according to Sir Tim Berners-Lee, "would bring structure to the important content of Web pages, creating a realm where programming experts meandering from page to page can quickly execute difficult tasks for clients" [16]. The Semantic Web is an enhancement of the present Web rather than a replacement, since data can be comprehended once the information has been given an unambiguous meaning [17]. The Semantic Web represents a technological advance that provides computers with a better and more effective method of organizing and drawing conclusions from internet data; at its core it offers several components in layers for developing Semantic Web applications, commonly called the semantic layer cake [18]. The "rules," which give computers and applications the capability to reason about Web content and derive inferences about new information from existing knowledge, are one of the crucial elements of the semantic layer cake. Figure 1 depicts the semantic layer cake.

Fig. 1 An illustration of the semantic layer cake

The MAFALDA framework was introduced to enhance machine learning analysis of IoT data streams. It addresses limitations in IoT scenarios where low-power


micro-devices generate data. The framework provides a semantic description of the physical world, conducts fine-grained event detection, and uses advanced reasoning techniques, achieving effective event recognition with minimal processing power. Through a case study on traffic analysis, MAFALDA demonstrates improved performance compared to traditional machine learning systems. The method utilizes data modeling, ontologies, and annotation properties for semantic-based analysis, enabling informed decision-making and data reuse, and it enhances machine learning precision and effectiveness in IoT applications [19]. Neuro-symbolic AI, which combines machine learning and knowledge representation, has also been explored; the authors propose a framework for reporting Neuro-symbolic AI systems to improve comparability and understanding. The framework includes categories for system information, reusable patterns, and a lifecycle model for auditability, and it aims to foster shared learning, comparability, and machine-readability of descriptions in Neuro-symbolic AI [20]. Establishing semantic mappings between two ontologies is known as the ontology-matching problem. To address this challenge, a system called GLUE was proposed. GLUE utilizes anchor points and instance-based matching to narrow the search space and discover mappings between taxonomies; it extends the concept of anchor points to handle non-equivalent concepts and adopts a multi-level approach. GLUE constructs semantic mappings by employing machine learning techniques and generating a bipartite network, and its performance surpasses that of other ontology-matching algorithms, making it a vital tool in advancing semantic web technologies [21]. Another study examined the relationship between the Semantic Web (SW) and Business Intelligence (BI) in the context of big data. It highlights the benefits of integrating SW and BI, proposes strategies for addressing data limitations using XML and RDF, and examines contextualization approaches in BI. The study also compares two integration approaches for SW and OLAP, multidimensional-model-oriented and OLAP-analysis-oriented; while both have strengths and challenges, scalability and query optimization remain essential considerations. Furthermore, the study underscores the significance of cloud technologies in large-scale data management, since they offer scalability, cost-effectiveness, data sharing, and reliability [22]. Social network platforms can be used for sentiment analysis and knowledge extraction. A framework that utilizes machine learning [23–42] and semantic web techniques to analyze sentiments on Twitter and Facebook has been presented. The framework includes an analytics dashboard, a domain ontology, sentiment analysis, and a social network extractor. It retrieves data from social networks based on user-defined keywords, applies sentiment analysis using SenticNet and visual analysis using SentiBank, and provides graphical visualizations of the results. The framework proved effective during the French presidential election, extracting and analyzing millions of tweets and retweets [43].


The POLARE ontology was introduced to represent relationships between people and organizations in a political system. The ontology defines various types of relations and incorporates the FOAF vocabulary for describing persons. The Shapes Constraint Language is used to specify composition patterns in the knowledge graph, and the authors highlight the importance of data quality and the subjective nature of knowledge derived from the knowledge graph [44]. E-participation is crucial in engaging citizens in dialogue with elected officials through technology. To address the complexity of e-participation, an ontology has been developed encompassing various topics, including the policy life cycle, tools, methodologies, and the actors involved. The ontology is organized hierarchically, with domain ontologies capturing specific aspects of e-participation, and it establishes connections between concepts to facilitate inference and reasoning. By providing a shared understanding, this ontology enables efficient searches across a vast knowledge base; it aids in locating e-participation initiatives, identifying the individuals involved, and determining frequently used tools and technologies [45]. The connection between ontologies and electronic democracy has been explored to better comprehend its dynamics, facilitate application development, and evaluate its effectiveness. A dedicated ontology was constructed to support this endeavor, involving steps such as integrating activities, identifying relevant knowledge sources, categorizing terms, defining classes and subclasses, mapping relationships, and encoding domain-specific knowledge. The ontology was implemented using Protégé-OWL 4 and evaluated by domain experts and potential users; through this evaluation, key elements were identified and formally described within the knowledge domain [46]. An ontology named POWER has been developed to represent the intricate domain of politics. It uses OWL2, DCMI, and SKOS to formalize knowledge about the political landscape, including roles, relationships, and interactions among entities. The ontology is populated with data extracted from specific databases and websites and serialized in RDF/XML format, and it can be expanded and refined by incorporating new files into the triple store. An upcoming online interface will provide access to the datasets through a SPARQL endpoint. The EMPOWERED system is a valuable tool for managing information extraction and RDF statement development using the POWER vocabulary; the process involves a bootstrap phase for semi-automatic instance generation and an enrichment phase that incorporates new data using text-mining techniques and other relevant instruments. This systematic approach ensures the continuous enhancement of the ontology with updated and relevant information [47].

Table 1 compares various features found in semantic portals, highlighting the differences and similarities between portals based on specific criteria. The columns represent different features or aspects of the portals, such as content provision, search capabilities, inference, and personalization. By examining the values in the table, one can gain insight into the strengths and weaknesses of each semantic portal and make informed decisions about which portal best suits a given need.


Table 1 Comparison of various semantic portals

Portal | Content editing/provision | Search | Inference | Personalization
SEAL | Using RDF crawler and the OntoEdit ontology engineering workbench | Ontology-based; ranking based on similarity to query | F-Logic based, during search and navigation | Semantic bookmarks (predefined queries)
KAON | Syndicating from web pages, databases, and RDF files, and using Web forms; does not operate in real time | Ontology-based | KAON Datalog engine; OWL reasoner (KAON DL) | –
OntoWebber | Collecting data from Web pages using wrappers and converting them into RDF | – | An inference engine attached to the navigation module | Customization based on different user types and rules
OntoWeb | Syndication mechanisms and the use of Web forms; does not operate in real time | Term-based and template-based | Based on SEAL (F-Logic) | –
ODESeW | Allows import/export of data; an interface to edit data and an external information gateway | Keyword-based and ontology-based | Prolog inference | Customization based on a user model
OntoWeaver | Through knowledge-acquisition forms (templates) | Search forms | Jess inference engine to provide customization using rules | Customization based on user ontology and customization rules
MuseumFinland | Semi-automatic tool to convert XML data to RDF; Protégé (edit/update) | Combined keyword and multi-faceted search | SWI-Prolog inference engine for navigation and search | Device-based adaptation (i.e., to PCs, mobiles)
REWERSE | Based on SWED-E wrappers (scanning known data sources) | Faceted search | Rule-based reasoner | Displays distances of online users and predefined filters
REASE | Updated by users using Web forms | Ontology-based | Not specified | Collaborative ranking-based ordering during navigation and search
SEMport | Through an RDF file aggregator, Protégé (edit/update), and an editing/provision Web interface; operates in real time | Ontology-based, using transitive and rule-based reasoners | Jena OWL and rule-based reasoners for navigation and search | A user model supporting AH (link sorting, link annotation, adaptive presentation)

The application of semantic web technologies and ontologies in the election domain remains an exciting area of research with the potential for substantial future progress. Research studies have shown that using ontologies and semantic web technologies in the electoral space can lead to advantageous outcomes, improving data precision, transparency, and management. Although these technologies have demonstrated success, there are still challenges and opportunities for further advancements.

3 The Research Methods Used in This Report

In this study, a conceptual framework was created to support the implementation of an election ontology in Nigeria. This information-technology artifact was created using the design-science research methodology. This section describes the research approach used to create the framework artifact, which was later validated, and covers the various options and tactics available to the researcher during the study. After considering numerous options, the decisions taken for this research are shown below; the choices are described in narrative detail in Table 2.

3.1 The Study Philosophy

The two extreme and dominant perspectives are positivism and interpretivism [48], and the main difference between them is easy to see when the two approaches are compared.


Table 2 Various options and tactics used in the research

Research aspect | Research angle of choice
Research paradigms | Design theory with pragmatism
Research approaches | Inductive
Research strategies | Case study
Choices | Qualitative
Time horizon | Cross-sectional
Data-collection tools and techniques | Literature review; observations
Sampling | Convenience sampling; purposive sampling
Data triangulation and evaluation | Descriptive data analysis; content evaluation; review of academic publications

A positivist method solely assesses information objectively [49], whereas an interpretive approach analyzes knowledge from a range of perspectives. Instead of taking a positivist or interpretive stance, this study used a pragmatic approach combined with design-science methods. The first paradigm framing this study is pragmatism, developed by Peirce, James, Mead, and Dewey [50]. The worldview of pragmatism is shaped by actual events, situations, and results rather than by preexisting conditions, and all competing realities and philosophical systems are welcome in the pragmatic philosophy [49]. A pragmatic paradigm is an intellectual foundation within which the research issue is addressed and several approaches are applied to understand the problem. The design-science paradigm is the second paradigm guiding this study, since its primary goal is to construct an artifact in the shape of a conceptual ICT framework (an ontology). As a result, the researcher can select the methodologies, tactics, and procedures that best satisfy the study's goals and objectives [51]. The researcher combined qualitative and quantitative methods in this study, using findings and expert opinions.

3.2 The Research Approach

The semantic framework created in this study has the potential to be used to deploy election ontologies in Nigeria successfully. The study placed strong emphasis on a thorough awareness of the situation being investigated; as a result, an inductive research methodology was used. The study gathered qualitative data that served as the foundation for designing and creating the Nigerian semantic framework.

3.2.1 The Study Strategy

The main objective of this research project is to offer a semantic search framework that can be used to implement electoral ontologies in Nigeria effectively.


This research considered the pros and cons of the existing voting process and an analytical structure for an election ontology, examining today's voting system together with its advantages and drawbacks. The study focused on a present-day occurrence, elections, that did not demand the researcher's control or supervision.

3.2.2 The Choice of Study

A mixed-methods approach was used in this investigation. In addition to aiding the development of hypotheses, guiding future research, and supporting the interpretation of quantitative data, this model helped the researcher compare and validate the data. It also examines problems that arise in the real world and offers deeper insights into them. The purpose of this approach is to reach reliable, evidence-backed findings for a particular phenomenon [52].

3.2.3 The Time Horizon

Due to the researcher’s restricted resources, including time and money, this study is cross-sectional.

3.3 System Analysis, Design, and Architecture

System analysis involves examining a system to see how well it functions, what changes need to be made, and the quality of its output. The primary goals are to gain a thorough understanding of the current system, its advantages and disadvantages, and the justification for restructuring, replacing, or automating it. It is the procedure of gathering and analyzing data, identifying the issues, and breaking a system down into its constituent parts, including risk planning and cost estimation; it is usually organized by the project analyst. The recommended system aims to develop a plan for better amenities: the proposed method can overcome the current setup's shortcomings and reduce the time required for online election services. Despite the lack of compatibility in the current system, the new, semantics-based approach is expected to improve matters. As INEC expands its capacity for community involvement, it is essential to consider using an integrated network infrastructure in public service and providing citizens with the government services stored in these various systems, which requires interoperability. Semantic web technologies offer a foundation for exchanging knowledge and information to streamline business activities.


The manual data-processing approach faces numerous challenges, including the volume and complexity of records, resource-intensive manual collection, daily operational tasks, the risk of information loss, paper consumption, human errors, lack of a centralized information hub, difficulties in content extraction, SEO limitations, and navigation challenges. Ontology and semantic web services address these issues by enabling information interlinking and implementing an online, automated design for improved accuracy. The proposed system for the Independent National Electoral Commission (INEC) implements a semantic electoral ontology. The new system addresses operational difficulties and improves efficiency while ensuring adequate security. The existing manual method must be replaced to avoid problems such as reliance on paper and pen, lack of systematic data organization, information inaccuracies, redundant data, and slow decision-making. The recommended system seeks to eliminate or significantly reduce these issues by introducing an integrated network infrastructure, promoting transparency, and enhancing public services. The new approach emphasizes connected data and semantic web services, aiming for simplicity, reduced manual data access, streamlined processes, improved performance, better customer service, and a user-friendly, interactive interface.

3.3.1 System Architecture

Figure 2 depicts the architecture of the proposed system; this visual representation illustrates the overall structure and components of the system. Semantic Web tools and technologies are applied to develop a semantic search engine for this research project. The system incorporates various technologies, including the Protégé editor, Python, Flask, rdflib, Jena APIs, and SPARQL queries. Data relating to the electoral system, such as voters, candidates, electoral processes, and political positions, is stored in an OWL file created with the Protégé editor.

Fig. 2 Architecture of the proposed system


The system's main component is an internet-based user interface that accesses data stored in RDF format and organized according to the electoral ontology. The data is collected from the websites of the Independent National Electoral Commission. The system retrieves data from the ontology using rdflib and displays it through the web interface. The interface provides a user-friendly experience, allowing users to browse the system and easily search for specific information based on their requirements.
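The following minimal sketch shows how this architecture could be wired together with Flask and rdflib, as described above. The OWL file name, namespace, and route are illustrative assumptions rather than the project's actual code.

```python
# Architecture sketch: a Flask route that loads the Protégé-exported OWL file
# with rdflib and answers a search request via SPARQL. The file name,
# namespace, and endpoint are illustrative assumptions.
from flask import Flask, request, jsonify
from rdflib import Graph, Literal, Namespace, RDFS

EX = Namespace("http://example.org/election#")  # hypothetical namespace
app = Flask(__name__)

g = Graph()
# Hypothetical OWL file; Protégé typically serializes to RDF/XML.
g.parse("election_ontology.owl", format="xml")

@app.route("/search")
def search():
    term = request.args.get("q", "")
    # Find individuals whose rdfs:label contains the search term.
    rows = g.query(
        """
        SELECT ?s ?label WHERE {
            ?s rdfs:label ?label .
            FILTER(CONTAINS(LCASE(STR(?label)), LCASE(?term)))
        }
        """,
        initNs={"rdfs": RDFS},
        initBindings={"term": Literal(term)},
    )
    return jsonify([{"uri": str(s), "label": str(label)} for s, label in rows])

if __name__ == "__main__":
    app.run(debug=True)
```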

3.4 System Specifications (Domain of the System)

This study focused on the Independent National Electoral Commission (INEC) and aimed to collect information from INEC's websites. To achieve this, we developed an electoral ontology that defines the concepts related to elections, such as the electoral process, candidates, and political parties. Information from INEC's web pages was added manually to the electoral ontology using the Protégé Ontology Editor, which allowed us to gather and organize the necessary data for the research. The editor facilitated easy addition and modification of ontology elements and the establishment of connections between them, and because the ontology entries were saved in a database, the stored data was readily accessible for retrieval and analysis.

3.4.1 Storage and Representation of Information

The system focuses on the Independent National Electoral Commission (INEC) and utilizes an ontology to describe its information. The ontology deals explicitly with elections and provides a formal definition of the shared understanding of this domain. The researchers created an electoral ontology file to store and manage the data related to INEC, including attributes, subcategories, and connections. The Protégé Ontology Editor was used for creating and updating the ontology file, enabling easy modification and creation of relationships. The ontology file was stored in a database, facilitating convenient access and analysis of the information. By organizing the data through ontology, the researchers achieved a systematic approach that allowed for thorough analysis and access to valuable insights.

3.5 Ontology Processing

We processed the ontology data using the rdflib model to make communication with end users more accessible. With this model's help, the system can access the ontology using SPARQL queries, a potent tool for exploring and modifying RDF data. The ontology data could be accessed through the rdflib model, and SPARQL queries could be run against it. This enabled the researchers to extract specific data from the


ontology and provide it to end users in a helpful form. The model then returned the queries' responses as RDF data, which could be processed and examined further.
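As a sketch of the kind of query whose response is itself RDF, a SPARQL CONSTRUCT can be run through rdflib as follows; the ex: namespace and the property name are illustrative assumptions, not the chapter's actual vocabulary.

from rdflib import Graph

g = Graph()
g.parse("election.owl", format="xml")  # the electoral ontology (name assumed)

construct = """
PREFIX ex: <http://example.org/election#>
CONSTRUCT { ?c ex:contestsFor ?position }
WHERE     { ?c ex:contestsFor ?position }
"""
# The response is RDF data that can be processed and examined further.
subgraph = g.query(construct).graph
print(subgraph.serialize(format="turtle"))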

3.6 System Design

The search system application developed for this research project consists of three main components: the rdflib model, the interface, and the election ontology. The election ontology serves as the primary data source for the application, containing all the relevant attributes, categories, and connections related to INEC. The rdflib model handles ontology processing and communication with end users; it provides the necessary tools for working with RDF, including parsers and serializers for various RDF formats. The interface, built using HTML/CSS, enables users to interact with the system by making queries to the rdflib model and receiving HTML-formatted responses. The interface does not store any data but is responsible for handling client interactions and forwarding requests to the rdflib model.

3.7 The Election Ontology

An OWL ontology named Election Ontology was created to organize data regarding the Independent National Electoral Commission. The elements that make up the constructed ontology are: classes and class hierarchy, object properties, data properties, and individuals.

3.7.1 Classes and Class Hierarchy

• Accreditation and Voting Procedures at Election: This class describes the procedures used for accrediting and voting in an election.
• Actors: It shows information about the actors involved in an election.
• Candidates: It shows information about all contesting candidates.
• Collation of Election Result and Making of Return: It describes the collation of results and election returns.
• Election Collation Levels: It gives information about the levels at which an election is collated.
• Electoral Officers: It gives information about electoral officers and their roles.
• Geopolitical Zones: It gives details about the zones in Rivers State.
• Political Party: It gives information about political parties and their structure.
• Political Position: It describes the political positions candidates can contest for.
• Procedures: It identifies procedures on how to vote and confirm polling unit location.
• Voters: It gives information about voters in an election.


Fig. 3 The hierarchy of classes from the election ontology shown in Protégé

Figure 3 depicts the hierarchical structure of classes within the election ontology. This hierarchy provides a visual representation of how different classes in the ontology are organized and related to each other.
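Such a hierarchy can be written directly in OWL. The following is a minimal Turtle fragment loaded with rdflib; the subclass relation shown is an illustrative assumption, not necessarily the ontology's actual structure.

from rdflib import Graph

# An invented fragment in the spirit of Fig. 3; the subclass link below
# is an illustrative assumption, not the ontology's actual hierarchy.
ttl = """
@prefix ex:   <http://example.org/election#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Actors            a owl:Class .
ex:ElectoralOfficers a owl:Class ;
    rdfs:subClassOf  ex:Actors .
"""
g = Graph()
g.parse(data=ttl, format="turtle")
print(len(g), "triples loaded")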

3.7.2 Object Properties of the Ontology

Figure 4 lists the object properties utilized in this project. A domain and range are defined for each object property, and some of the properties are inverses of one another. Object properties link two individuals together. The screen capture in Fig. 4 demonstrates the usage of a few of the different object properties employed in the Election ontology.


Fig. 4 The election ontology object properties

3.7.3 Data Type Properties

Data type properties play a crucial role in the ontology: they preserve literal data values, such as the attributes FirstName, LastName, Memid, and address. In contrast, object properties establish relationships between individuals of two classes. Several data type properties are employed in this project; these are depicted in Fig. 5.
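The distinction can be illustrated with a small Turtle fragment parsed by rdflib; the individuals and the isMemberOf property are invented for illustration and are not taken from the chapter's ontology.

from rdflib import Graph

ttl = """
@prefix ex: <http://example.org/election#> .

ex:voter1 ex:FirstName "Ada" ;          # data type property -> literal value
          ex:isMemberOf ex:party1 .     # object property -> another individual
"""
g = Graph()
g.parse(data=ttl, format="turtle")
for s, p, o in g:
    print(s, p, o)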


Fig. 5 The election ontology list of data type properties

3.7.4 Individuals

Individuals are inserted manually using the Protégé ontology editor. However, generating the instances took time, and semantic web expertise is required for instance construction. The figures below show examples of an electoral process. In future work, we will incorporate screen scrapers or instance generation into the web user interface, so that RDF data from the INEC website can be retrieved automatically. Figure 6 displays the instances of the "Collation of Election Results and Making of Returns" concept within the depicted system or domain. These instances represent concrete occurrences of this process in real-world scenarios.


Fig. 6 Instances of collation of election results and making of returns

3.8 How the System Works

The application first creates a persistent database for the ontology triple store, or establishes a connection if it already exists; it utilizes the rdflib BerkeleyDB plugin for permanent storage. The application receives requests from the front end and builds queries based on the received input using Queries.py. These queries are then passed to Executer.py, which fetches the results from the ontology triple store. Three functions, GetClasses, GetClassResult, and GetFreeTextResult, are defined to extract data from the database using SPARQL queries. Each function takes a query parameter, runs the query against the ontology database, extracts and formats the relevant data, and returns the findings.

The Flask application processes requests using a collection of functions. The process function requires two arguments, request and _class. It extracts results from the database by receiving form data from a POST request and passing it to the getClassResult method. It also retrieves a list of classes associated with the _class argument using the getClasses function, and returns a list combining both sets of results. The index.html template serves as the application's home page and is rendered via the index endpoint. When a POST request is made, the endpoint receives the request string and sends it to the getFreeTextResult function, which extracts database results from a free-text query and displays them on the page. Endpoints q1 to q10 are each defined to use the process function to retrieve the necessary results from the database; the results are then passed to the appropriate HTML template for display on the website.
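The workflow above can be condensed into the following sketch. It is a simplified reconstruction based on the description, not the authors' exact source: the store path, the free-text matching query, and the form field name are assumptions.

from flask import Flask, render_template, request
from rdflib import Graph, Literal

app = Flask(__name__)

# Persistent triple store via the rdflib BerkeleyDB plugin, as described
# above (requires the berkeleydb package).
graph = Graph("BerkeleyDB", identifier="election")
graph.open("store", create=True)

def get_free_text_result(text):
    # Simplified stand-in for GetFreeTextResult: match any literal that
    # contains the search text, case-insensitively.
    query = """
    SELECT ?s ?o WHERE {
        ?s ?p ?o .
        FILTER(isLiteral(?o) && CONTAINS(LCASE(STR(?o)), LCASE(STR(?text))))
    }
    """
    rows = graph.query(query, initBindings={"text": Literal(text)})
    return [(str(s), str(o)) for s, o in rows]

@app.route("/", methods=["GET", "POST"])
def index():
    results = []
    if request.method == "POST":
        results = get_free_text_result(request.form["q"])
    return render_template("index.html", results=results)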


4 Evaluation and Testing

A user study was conducted comparing the baseline and proposed portal systems. The study involved different users searching with both systems, and the systems were swapped among users for evaluation. Performance and user feedback were recorded. The baseline INEC website search was tested first through a sample search, to familiarize participants and ensure unbiased results; task order was randomized to balance any potential bias. The experiment assessed the baseline system's effectiveness in providing users with information and recorded the relevant search results. This established a performance baseline for comparison with the semantic search system.

The experiment then continued with a sample search of the proposed system to understand its functionality, using an example query to become familiar with the system. The real semantic search task was then assigned using the proposed system to evaluate its effectiveness in providing the required information. By conducting the sample search and example query beforehand, we familiarized participants with the proposed system and minimized potential bias, ensuring comfort and familiarity with both systems before the actual experiment began.

Table 3 presents the evaluation tasks for the INEC website search system (baseline) and the semantic search system (proposed system). These tasks were designed to assess the performance and effectiveness of each system; by comparing the results, we can evaluate the strengths and weaknesses of the baseline and proposed systems in providing the desired functionalities and meeting user expectations. Figure 7 depicts a comparison between the baseline system and the proposed system across these tasks. Figure 8 illustrates the comparison of the average completion time between the baseline system and the proposed system, which provides insight into the efficiency and speed of both systems in completing tasks.

4.1 Analysis and Evaluation Results of the Proposed and Baseline Systems

To evaluate its effectiveness, an experiment compared the proposed semantic search system to the baseline system. The time taken to complete each search was measured for both systems, indicating their efficiency, and the comparison considered the time, quality, and relevance of the search results. The study revealed that the proposed system outperformed the baseline system by providing users with information more quickly and efficiently. The proposed system obtained a greater amount


Table 3 Tasks for INEC website search system (baseline) and semantic search system (proposed system)

S/N | Task | Baseline system time (sec) | Proposed system time (sec)
1 | Please search for a political party and details about it | 22 | 7
2 | Search for a governorship candidate "SIMINALAYI FUBARA" and details about him | 57 | 12
3 | Search for candidates according to the position they are contesting for "Federal constituency" | 16 | 7
4 | Search for the election collation levels | 18 | 6
5 | Find the candidate for your Senatorial district and Federal constituency | 45 | 14
6 | Assuming you are a presiding electoral officer, check the duties you can perform during an election | 48 | 9
7 | The accreditation and voting procedure provides details about impersonation and underage voting | 74 | 11
8 | Check for what an Actor security personnel can do during the election and details about them | 36 | 19
9 | Check the list of Election collation and return procedures | 34 | 6
Total | | 350 | 91
Average | | 38.9 | 10.1
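The totals and averages in Table 3 follow directly from the nine task times; the following lines reproduce them.

baseline = [22, 57, 16, 18, 45, 48, 74, 36, 34]
proposed = [7, 12, 7, 6, 14, 9, 11, 19, 6]

print(sum(baseline), sum(proposed))             # 350 91
print(round(sum(baseline) / len(baseline), 1))  # 38.9
print(round(sum(proposed) / len(proposed), 1))  # 10.1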

Fig. 7 The baseline in comparison with the proposed system


Fig. 8 The average time of baseline in comparison with the proposed system

of relevant information within a short time. Overall, the results suggest that the proposed semantic search system is superior in meeting users’ information needs. These findings inform recommendations for future development and implementation.

5 Conclusion, Limitation, and Recommendations

The research aimed to create a semantic search portal for INEC (Independent National Electoral Commission) in Rivers State, Nigeria, utilizing the semantic web and ontologies. This approach improves search efficiency, accuracy, and user experience by capturing the semantics and relationships of electoral data. Ontologies enable structured organization, integration of diverse data sources, advanced querying, and reasoning functionalities. The portal enhances user experience with intuitive interfaces and personalized recommendations, and it promotes transparency, access to information, and citizen engagement.

Challenges include ontology development, data quality, and maintenance. To address these challenges, collaboration, data governance, and verification processes are essential. Technical skills and interface design can influence data quality and availability, impacting search effectiveness and user acceptance. The use of the semantic web and ontologies in an election semantic search portal offers advantages but also faces limitations. Developing and maintaining the ontology requires expertise and collaboration, while scalability can be challenging for large-scale elections. Interoperability with existing systems and addressing privacy and security concerns are additional


challenges. Overcoming these limitations requires data governance, user-friendly interfaces, standardization, and robust security measures. Future work on the election semantic search portal holds great potential for advancing the accessibility, efficiency, and transparency of electoral information. Key focus areas include refining and expanding the election ontology, integrating heterogeneous data sources, enhancing natural language processing capabilities, addressing privacy and security concerns, leveraging machine learning and AI techniques, and enabling advanced analytics and predictive modeling. These advancements would improve search relevance, personalize results, ensure data quality, and empower users to make informed decisions within the electoral domain.


Implementation of Semantic Web Service and Integration of e-Government Based Linked Data

John Bush Idoko and Bashir Abdinur Ahmed

Abstract The use of the internet has implicitly augmented the most popular platforms for exchanging information and providing people with a wide range of services. The semantic web and linked data aim to take the web of data to the next level of intelligence, where both humans and machines understand data. E-government applies information and communication technologies in the public sector in a way that is more open, encourages the participation of community members, and serves as an effective administrative framework. The domain knowledge that semantic web services bring to e-government is immense. This research provides a case study developing linked data, with emphasis on data governance in the United Kingdom data portal for central government and, in particular, semantic web services. The framework explains how semantic web services and e-government are related to linked data. Additionally, the research presents a concrete e-government ontology. During implementation, we explored instances of linked datasets of the UK e-government central domain ontologies, and e-government semantic web services via linked data, to construct a high-quality integrated ontology that is easily understandable and effective in acquiring knowledge from various data sets using simple SPARQL queries.

Keywords Semantic web · Linked data · e-Government · Open government data · Ontology · Web services

J. B. Idoko (B)
Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, Nicosia 99138, Turkey
e-mail: [email protected]

B. A. Ahmed
Department of Software Engineering, Near East University, Nicosia 99138, Turkey

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things in Education, Studies in Computational Intelligence 1115, https://doi.org/10.1007/978-3-031-42924-8_13


1 Introduction

The internet has seen significant improvement in the creation, storage, processing, transmission, integration, and use of information and communication technology. The web is becoming more widely recognized as a global information environment comprising linked data and documents. Information is the phenomenon that spreads the fastest in this process, making the world a global village. Semantic web technologies simplify knowledge-intensive applications by enabling machine-understandable data on the web, based on a proper and clear representation of the structure and meaning of the data [1].

An information system is "a collection of interconnected elements that gather, process, maintain, and disseminate information within an organization to facilitate administrative decision-making, communication, and control." In addition to aiding organizational policymaking, supervision, and control [2], it can similarly help managers and staff members evaluate and assess issues, create innovative ideas, and invent new things. It may also focus on how important people, places, and things are communicated within the business or to those outside of it. Information is not simply raw data; rather, it is material "formed into a type that is valuable and helpful to humans." The converse is also true: "Data are streams of raw facts describing events that exist in organizations before they have been turned into a format that everyone can understand." Figure 1 depicts the relationship between data and information. A program known as an "information system" processes and produces information from data (facts). Information processing cycles are referred to as processes. Four functions make up the information processing step: input, process, output, and archiving [2].

The information systems department is the corporate institution in charge of providing information technology services. It oversees the hardware, software, data storage, and networks that make up the company's IT infrastructure. The department is headed by a "chief information officer" and is made up of

Fig. 1 Data and information [2]


professionals such as developers, computer scientists, strategy managers, and information system administrators.

E-government is one of the main information systems technologies in the public sector: it is more open, encourages citizen participation, is less expensive, and provides more services. Although it can be described as a practical administrative framework, it refers to more than just moving manual transactions to a computer environment. One form of e-government, a restructuring model, uses electronic communication to reach out to institutions and citizens and executes procedures that guarantee efficiency and transparency [3]. The state's internal operations have made it possible to realize the principal goals of the new public administration method; thus, e-government is used to provide a service. The method evolved into a change in the state structure rather than remaining merely a method.

Web 2.0, the subsequent stage of the technology, has given rise to numerous concepts, including social media, electronic government, electronic schools, and massive open online courses (MOOCs). Although initially resistant, every field, from states and politicians to the health industry and the education community, had to be repositioned to fit this new reality in the following years. The "Social Network Age" of the internet is currently underway, but the semantic web will soon take its place [1]. The semantic web, currently thought of as the internet's future, is an equation with numerous unanswered questions, just as social networks were at the start of the 2000s. It is impossible to predict when the internet business will adopt this new network architecture, in which artificial intelligence is employed more frequently, or which projects will reach their breaking point. But it is recognized that we are making headway toward this network [4].

The internet enables people to ponder new questions every day. We can see how much more today's systems can be developed as we pose these questions. Google, the search engine that we all use, illustrates this: Google returns 479.00 results when we type in "what is excellent for stomachache." An important point to note from the Google example is the following: just as Web 2.0, of which social media is an important part, drove social transformations through the internet, a similar scenario awaits us in Web 3.0. "Understanding is change," said an Indian philosopher; this motto describes a situation that will be encountered frequently in the future of the internet [5].

The fundamental necessity for semantic web applications is the use of the Resource Description Framework (RDF) for metadata. This can be inferred from RDF's essential role in the "semantic layer" of web standards. In our study, we used a set of appropriate vocabularies to model our system domain and used SPARQL as our data-query language. All applications are investigated to meet these conditions, excluding those that use systematic access to RDF information for performance objectives. The semantic web, regarded as the next evolutionary step of the World Wide Web, is a vision of a worldwide web of linked data. In the semantic web, all data on the web is arranged in a machine-understandable fashion. This allows devices to understand relations among properties and opens up the possibility for humans to discover new areas of the knowledge graph [6]. A modern predecessor of the semantic web is the Google Knowledge Panel (GKP).
It summarizes information


from numerous providers into a compressed and clear information container. Linked data does not just put data on the internet; it creates connections that grant people and devices access to explore the data on the web [7]. By following these connections, we can discover data associated with other linked data. In the hypertext web, the web of documents is built from HTML pages, where links are anchor tags relating hypertext documents; in the web of data, links are URIs relating arbitrary objects described in RDF. A URI identifies each entity or concept. With HTML and RDF alike, similar expectations apply to the growing body of linked data:

1. Allow users to look up those names using an HTTP URI.
2. When someone looks up a URI, use the standards (RDF*, SPARQL) to provide helpful information.
3. Include links to other URIs so that additional URIs can be discovered.

The linked data principles define how to publish RDF data and allow RDF datasets to be linked into a data mesh. The Linking Open Data project (http://linkeddata.org) offers the basics of the linked data available today [8].
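These expectations can be seen in action with a few lines of rdflib; this sketch assumes network access and that DBpedia continues to serve RDF for the URI shown.

from rdflib import Graph, URIRef

g = Graph()
# Point 1: look up an HTTP URI and receive useful RDF back
# (content negotiation returns RDF for this resource).
resource = URIRef("http://dbpedia.org/resource/United_Kingdom")
g.parse(resource)

# Point 3: the returned triples link to further URIs, which a client
# can follow to discover more data.
for _, p, o in g.triples((resource, None, None)):
    if isinstance(o, URIRef):
        print(p, "->", o)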

1.1 The Open Government Data (OGD)

The electronic government framework works to achieve compatibility at the level of information technologies, processes, and services to support e-government at the national, regional, and municipal levels. However, the UK PSC's organization of service activity data does not allow for connections between related service activities. For instance, while A is a requirement for B, there is no connection between the two service activities A and B. Furthermore, there is no connection between service activity A and its English translation. This is because there is no conceptual connection among related service activities. Hence, there is a requirement to convert this instance of Open Government Data (OGD) to a linked data service [6]. OGD initiatives have only lately been implemented; therefore, classification techniques to study them still need to be improved. However, more and more valuable recommendations are being made by diverse stakeholders. The e-governance interest group of the World Wide Web Consortium (W3C) recommends three measures for public governments to publish and distribute their data. Data should be published in three stages: first, as raw data in files in well-known, non-proprietary formats like Comma-Separated Values (CSV) and Extensible Markup Language (XML); second, as web catalogs; and third, as machine-readable data [9].
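To make the stages concrete, the following is a minimal sketch of lifting stage-one raw data (a CSV file) into RDF with rdflib; the services.csv file and the example.org namespace are hypothetical, not part of any government portal.

import csv
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/gov/")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

with open("services.csv", newline="") as f:  # assumed columns: id, name, department
    for row in csv.DictReader(f):
        service = URIRef(EX["service/" + row["id"]])
        g.add((service, RDF.type, EX.ServiceActivity))
        g.add((service, EX.name, Literal(row["name"])))
        g.add((service, EX.department, Literal(row["department"])))

# Serialize the lifted data as Turtle for publication.
g.serialize(destination="services.ttl", format="turtle")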

1.2 Linked Open Data (LOD)

Future OGD initiatives appear to rely heavily on linked open data. Linked data is defined as "data published on the web in such a way that it is machine-readable, has a clear definition, is linked to other external datasets, and can, in turn, be linked


to from other datasets". The concepts and tools of the semantic web are the foundation of the linked data initiative. However, unlike the fully realized semantic web vision, it primarily focuses on publishing structured data in RDF format using URIs, as opposed to concentrating on the ontological level or inferencing [10]. Also, data from diverse and decentralized sources may be linked together through typed links, which promises to usher in the "web of data".

Software architecture and engineering concerns have been relatively understudied in recent research on the implementation of semantic web service technologies, which tends to concentrate on information modeling; the benefits for data retrieval, data extraction, and information integration have all been thoroughly studied. However, the entire life cycle of semantic web data needs to be evaluated in terms of the costs and benefits of application development to promote the widespread use of semantic web technologies. This life cycle entails the initial stage of ontology construction, deciding how to utilize the data, generating new data or improving current data, determining data archiving, disseminating the data, and providing public access [11]. The application may execute creation, refinement, archiving, and publication during runtime. As a result, software engineering and software architecture concepts are present in addition to data modeling concepts. To our knowledge, there is no experimental study depicting the difficulties associated with implementing, enhancing, archiving, and publishing data based on linked data, in contrast to the empirical examinations of the difficulties associated with ontology development projects. Through an empirical analysis of semantic web applications, we can pinpoint the most frequently used shared components as well as the difficulties in putting these components into practice. These elements come together to form a reference architecture for semantic web applications [12]. According to the survey, using semantic web technologies poses difficulties that could have an impact on the entire application. Best practices and standard solutions for semantic web technologies are only beginning to emerge; their absence has made it difficult to develop and deploy applications that make use of semantic web technologies in practical settings.

During the conceptual design of the framework, we considered the information from society and e-government and how they relate to semantic web services and linked data. Here, we present the relationship between the information from society and e-government, the difficulty of defining a government portal, and its characteristics, advantages, objectives, dimensions, and the elements that affect it [13]. The issues that arise in [13] are examined. Historically, societies have changed, and there are more people now than ever; the public administration system has also been altered due to the many needs. The public needs to reorganize its management in the current system's operation by pinpointing the issues encountered; the system will be more effective and practical with the shifting needs of society. These changes are rational and should be understood. Electronic government, often known as e-government, is a notion on the agenda and serves as a tool accepted and utilized in industrialized and developing nations.

The main importance of this work lies in the fact that it is one of the few e-government studies that addresses the topic of semantic search. This represents


a modern revolution in information retrieval systems. The study aims to direct e-government attention toward the third-generation web technologies of linked data and to utilize this technical development in the exploration and retrieval of government information via search engines on the internet. In [14], the researchers identified the characteristics of information retrieval and result display that differentiate Hakka from Google. The study follows a descriptive approach, conducting an analytical study of the search engine field: using four different search terms on both Hakka and Google, the researchers determined the characteristics of information retrieval and result display.

There is currently no global report that evaluates the state of e-government progress in all countries; it is believed that each country has to decide which level and extent of e-government initiatives it pursues. Members of the United Nations measure the performance of e-government against each other rather than on an absolute basis. The UN survey also acts as a resource and development tool for nations to share knowledge, identify strengths and weaknesses in e-government, and tailor their policies and plans in response. This evaluation aims to facilitate discussions between intergovernmental bodies, including the UN General Assembly, the Economic and Social Council, and the high-level political forum, allowing for an in-depth discussion of e-government and the role that ICT plays in development [15].

The requirements imposed on an e-government service by semantic development tools include at least three elements (the integration service, the crawler, and the search service) for enhancing data. Semantic web services can work with both local and online datasets, so management and control depend on the source of the dataset [16–35]. Also, the process of exchanging e-government data in its current form requires a lot of time, labor, and effort. Web services have the advantage of allowing consumers to focus on the services they want to get rather than the data they require. However, the main shortcoming of web service tools is their inability to provide automatic discovery, composition, and invocation, necessitating human work and intervention. These issues with web services are resolved by integrating semantic web technology [36].

The study outlined how much work still has to be done to create workable techniques and patterns for the publication of government data, because the world of linked open data is ready for big e-government portals to adopt these standards on a broad scale. This research outlines the use of semantic web service standards for publishing e-government data, along with some of their advantages. It emphasizes why reliable data publication is so crucial to the e-government data association, as well as how semantic web service standards uniquely enable e-governments to share data meaningfully. There has been a lot of research done on uniform resource identifiers (URIs), provenance and versioning, and semantic web services; in each instance, creating simple, repeatable patterns has taken precedence. The objective of this research is to develop a platform that ensures its clients semantic web services for e-government linked data, specifically to transform traditional e-government data into linked open data, and to deliver a platform that ensures a reliable, effective, and efficient e-government. Among the most significant benefits of the proposed application are high speed, accuracy, and a user-friendly UI that saves


time and resources. This research provides an integrated solution to the security and availability concerns for semantic web services through an authorized e-government linked data framework. As most of the existing e-government data systems are not entirely linked, the proposed application incorporates procedures that coordinate schemes; provides an accessible, adaptable, and effective working unit; and has central storage, backup, and robust testing abilities. In this study, we provide a case study of a semantic web based United Kingdom (UK) open data portal for central government to develop a linked data service. The remaining part of the study is structured as follows: the proposed method is presented in Sect. 2; Sect. 3 presents the results obtained from testing the application; and Sect. 4 presents the conclusions of the paper.

2 Methodology

Here, we present the appropriate system designs and methods that e-governments and organizations use to process linked data. The system development requirements (software and hardware) are also presented in this section. A suite of system methods and modeling techniques is employed to develop a semantic web service prototype.

2.1 System Description

The application of the semantic web and linked data to e-governance is a gray research area yet to be fully explored. E-governance deals with distinctive information systems, people, management structures, services, and the relationships between these units. Government infrastructures incorporate cultural operations across various organizations and authorities. Accordingly, fundamental agreement on the meaning of concepts and procedures is required in advance of ontological configuration. Governments have several different data sources, including:

1. Unstructured data for rules, procedures and concepts, policies, etc.
2. Data relating to facts and figures treated as operational ideas.
3. Structured data for administrative decision making.

One aspect of the UK government's methodology for making data freely usable is the improvement of the linking service. Adopting linked data is believed to be an alternative to transforming the entire data into RDF format and placing it in a single large triple store, equivalent to assembling another large dataset of e-governance [3]. Linked data is a concept describing how to distribute information on the cloud, while the website is used for progressive and persistent publishing of data in a very large and distributed system. The most effective web services are RESTful APIs. As website creators, we: 1. utilize HTTP URIs to identify resources; 2. recognize the difference between resources and their representations; 3. recognize and return expressive responses; and 4. recognize hypermedia links as the mechanism for transitioning system state.

2.1.1 Semantic Web Environment Tools

The semantic web is not separate from the WWW; it is a web extension that transforms current web publications into data by supplementing them with fresh data and metadata. This expansion of web content into data makes it possible for the web to be processed both manually by humans and automatically by machines. The semantic web works well after a valuable website has been created using markup code and some plugins or scripting methods. The term describes the use of specific procedures, tools, and technologies, referred to as three-tier, client/server, and semantic web technologies, for web page and website development [37]. The semantic web also refers to linking data to applications based on information systems, as standardized by the World Wide Web Consortium (W3C). A poorly designed technical environment prevents proper procedures; a technical background is required for development, testing, and production [37]. In e-government approaches, models not only lower the risk of misunderstanding but also lower the cost of modifying the system to accommodate changes in requirements. Figure 2 depicts the explored semantic web environment tools.

Fig. 2 Semantic web environment tools

2.1.2 Semantic Web and Linked Data Principles

Linked data is the foundation of the semantic web, which is described as "an extension of the current web that gives information a clear meaning and allows computers and humans to work better together". Data is designed in a way that allows machines to create relationships among different sources. These relations are


called linked data. Establishing high-quality, shareable, well-modeled metadata can make library resources and studies discoverable and relatable to other relevant data sources. Several open-source technological tools are available for getting started with the semantic web. One of them is Linked Data, World Wide Web Consortium (2015), a brief introduction from the W3C that contains a synopsis of linked data and illustrations of connected data sources. Linking data is a further development of the semantic web into a worldwide dataset. There are four common procedures for developing linked data for the semantic web, which make it possible for both humans and machines to access data from several servers and to more readily understand its semantics. As a result, the semantic web evolves from a collection of connected documents to one made up of linked data, which in turn enables the development of a densely connected network of meaning that can be processed by machines. The four procedures are:

• Use URIs as names for things. This comprises identifying relations among objects; using actual URIs in the data is essential for linking the data to other datasets.
• Use web (HTTP) URIs to allow users to look up these names. Preferably the URI comes from a well-known and moderated vocabulary with a functional community that provides and maintains these standards.
• Use a standard, such as a controlled vocabulary, to provide helpful data when the URI is looked up.
• Add links to other URIs to help individuals and computers discover more.

2.1.3 Architecture of Linked Data

Architectural design is the structure and blueprint of how a system's networked environment works. It constitutes the hardware and software used on each computer in the network [38]. A linked data architecture is depicted in Fig. 3. The primary capability continues to be the collection of data from spreadsheets and other structured sources, such as SPARQL endpoints, and the subsequent visualization of such data within a Drupal-based user interface. The server controls the system logic, the information retrieval logic, and the data storage [38]. This is a representative choice, requiring reduced overhead and more straightforward protection. A three-tier architecture of linked data is demonstrated in Fig. 4. The "presentation tier" is the primary tier and is exposed in the application interface. The second tier, the "application tier," covers the institutional process logic that allows data to be shared and distributed across applications; it is developed in languages such as PHP and RDF. The final tier is the "data tier," which covers the datasets and the "data access" layer, for example, SPARQL.

The three components of every linked data application are the semantic web, related URIs, and the web service application. First, obtaining linked data from data sources is the responsibility of the related URIs. Wrappers are used to convert data into related URIs when the obtained information is not in RDF format. We use the term mashups to describe systems that exclusively consume linked data. Secondly, it is


Fig. 3 Linked data platform architecture

Fig. 4 A three-tier architecture of linked data

up to the linked data consumer to modify the data they have already ingested in order to create new linked data. Lastly, the user interface gives users a mechanism to communicate with the program. The system contains supportive interfaces, such as visualization of the schema and web service APIs, for interaction with the application.

2.1.4 System Development Environment

A development environment collects methods and techniques for implementing, testing, and debugging a system or software. The development environment is a significant concern and needs to be decided by choosing among the many available environmental alternatives and using the one that best suits the client's needs. The proposed application is a complete program developed with OWL and RDF on the front end and a SPARQL server as the back-end tool. For front-end development, we used HTML, CSS, RDF, and JS so that end users can directly display, read, and interact with the system resources. The goal is to ensure that when users load the website, they get information in an easy-to-read and well-formed format. Back-end development tools are responsible for the server-side logic and integrate the work of front-end development. For the back-end implementation, SPARQL is used to query the database. SPARQL (SPARQL Protocol and RDF Query Language) is a semantic query language. We used this RDF query language to express queries over datasets and to process and manage data stored in RDF format. It was standardized by the W3C's RDF Data Access Working Group (DAWG) and is one of the critical tools of semantic web services and linked data. SPARQL has several characteristics that distinguish it from other query languages: a SPARQL query can contain triple patterns, aggregations, graph patterns, and optional parameters. Several programming language implementations exist, and some tools, such as ViziQuer, can connect SPARQL queries to a SPARQL endpoint.
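A minimal sketch of these query features, run with rdflib over a small in-memory graph, follows; the ex: namespace and the sample data are illustrative, not drawn from the system.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/gov/")
g = Graph()
g.add((EX.dept1, RDF.type, EX.Department))
g.add((EX.dept1, EX.name, Literal("Cabinet Office")))
g.add((EX.post1, EX.heldIn, EX.dept1))
g.add((EX.post2, EX.heldIn, EX.dept1))

query = """
PREFIX ex: <http://example.org/gov/>
SELECT ?dept ?name (COUNT(?post) AS ?posts)    # aggregation
WHERE {
  ?dept a ex:Department .                      # triple pattern
  OPTIONAL { ?dept ex:name ?name }             # optional parameter
  OPTIONAL { ?post ex:heldIn ?dept }           # graph pattern
}
GROUP BY ?dept ?name
"""
for row in g.query(query):
    print(row.dept, row.name, row.posts)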

2.1.5 System Requirement Specification

Some hardware and software requirements were considered to achieve a common goal. The hardware requirements for the implementation of the system include:

Client Machine:
Processor: Single Core (1.4 GHz) CPU
Monitor: SVG Color Monitor
Memory: At least 1 GB of RAM

Server Machine:
Processor: Core 2 Duo CPU
Monitor: SVG Color Monitor
Memory: At least 2 GB of RAM
Free space: 10 GB and more

For the software requirements, the website's structure is written in HTML; CSS and JavaScript are utilized for styles and behavior; and RDF and OWL are used for


server-side programming. The following software must be installed on the computer system to implement the proposed system effectively.

Client Machine:
Operating System: Windows 7 and higher
Dependencies: JavaScript
Browser: Microsoft Edge/Internet Explorer, Google Chrome, or Firefox

Server Machine:
Database: SPARQL
Dependencies: Java Development Kit (JDK)
Ontology Model: Protégé 5.5.0 Editor
Web Server: Apache Jena (Fuseki server)
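Once the Fuseki server is running, the triple store can be queried from Python. This is a minimal sketch using the SPARQLWrapper library; the endpoint URL and the dataset name "egov" are assumptions for illustration.

from SPARQLWrapper import SPARQLWrapper, JSON

# Default Fuseki endpoint layout: http://host:3030/<dataset>/sparql
sparql = SPARQLWrapper("http://localhost:3030/egov/sparql")
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for b in results["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])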

2.2 System Analysis

System analysis deals with the analysis of the system domain to fully comprehend the targeted goals. The primary goal of this activity is to gain a thorough understanding of the current system, including its advantages and disadvantages as well as the justifications for restructuring, replacing, or automating it. It is the procedure of gathering and analyzing data, determining the issues, and breaking down a system into its constituent parts, with activities such as risk planning and cost estimation typically organized by the project analyst. The proposed system aims to develop a plan for improved facilities. The proposed method can overcome the limitations of the existing systems and reduce the time involved in e-government web services.

2.2.1 Existing Systems

Most of the existing e-government systems lack interoperability, so the new system is projected to benefit from semantics. Although e-governance leverages the capability of interconnecting the participation of society, the lack of interoperability creates a crucial need to use an integrated network infrastructure in public service, delivering government services stored in these distinct systems to citizens. Semantic web technologies offer the basics for exchanging expertise and information to coordinate business processes. The manual method of data processing reveals several problems, including:

• Records to be kept are often too large, diversified, and complex to be processed manually.
• Manual collection of information requires a lot of workforce, time, transportation, and strength.
• The record maintenance and operations required on a day-to-day basis are extensive.

• Manual calculations are needed.
• Information can be lost when records are stolen, misplaced, or vandalized.
• Consumers face a large volume of paperwork.
• Inaccuracies often ensue from human error in manual record keeping.

The problems that the e-government system faces can be solved by using a semantic web service over linked data. Most of the existing systems have no integration capacity for interlinking government information. The design needs to be online and automated to avoid all these limitations and make service rendering more accurate.

2.3 The Proposed System

The proposed system aims to implement an e-government for linked data services. The solution minimizes manual labor while providing adequate security. The existing method has a number of drawbacks and numerous operational challenges; the suggested approach attempts to eliminate or significantly lessen these challenges. The proposed method delivers public services, increases transparency, and provides an integrated network infrastructure for government. The existing e-government systems observed are fraught with several integral insufficiencies and shortcomings, and a substantial disparity often occurs in the system with less production. During our requirements gathering, we discovered a number of issues with the manual system, some of which include:

• Paper and pencil procedure: The old method is entirely manual; the operations involving the e-government data are not systematic.
• Reliability of information: The old system, especially its manual processes, makes it easy for information to be improperly maintained (mishandled).
• Redundant data: It is easy to find information or details about a specific individual in several places; errors and unreliable, repeated datasets across many data sources lead to unorganized information processing.
• Decision-making: Because the existing system's effectiveness is weak, decision-making is generally prolonged.

The proposed system is straightforward in its semantic web service and linked data. It provides the following aspects:

• Minimal manual data access.
• Less time required for the several processes.
• Better effectiveness and improved service.
• User-friendliness and interaction.


3 System Implementation and Evaluation

In this research, we sought to use semantic web services to implement linked data for e-government. We used the UK government as a case study and developed a new ontology and linked data service model based on the central government's open data portal. The necessary technologies (RDF, OWL, SPARQL, etc.) were made available for the implementation. The web service and linked data are mostly regulated by e-government settings and certain communities, and offer services that enhance people's daily lives. Achieving this requires increasing the volume of semantic web services on the linked data and creating the essential technologies. We present the different steps of the implementation and integration of this model, considering the main units and subunits of this government.

3.1 Brief Description of the United Kingdom (UK) Government

The United Kingdom government, usually known as the British government, is the central executive authority of Great Britain and Northern Ireland. The Prime Minister leads the government and appoints all other ministers. Since 2010, the Conservatives have controlled the government, and each subsequent prime minister has been a member of its leadership team. The cabinet, the highest governing body, includes the prime minister and the senior ministers. The UK's seat of government is similar to that of other governments around the globe. The UK open government data platform provides facilities to all UK citizens, residents, visitors, organizations, and the private sector. The UK government has a national e-government goal of making all governance facilities available to the public and ensuring the effectiveness, consistency, and credibility of these facilities at reasonable cost, meeting people's essential requests. The UK government has undertaken many projects to lead the country towards information systems, prompting numerous industries in the UK to adopt and use e-government. The success of e-government in the UK depends on government support and public acceptance. Adopting e-government in the UK presents many challenges, including administrative issues, technical challenges, infrastructure issues, and trustworthiness and security issues. Apart from these, there are social challenges such as low IT skills, poor usability of government websites, lack of linked data, and the non-existence of machine-readable data. The challenge of building an e-governance portal is how the UK can implement its government portal using linked data, providing data understandable by both humans and machines and giving users helpful access to the data services they desire.

3.1.1 UK Government Linked Data Ontology

The framework for semantic-based services used by the UK central government consists of the UK public body, the cabinet, the government organization, the post, the civil service, and the committee. The ontologies were developed using the Protégé tool. The e-government service domain is the initial input for the semantic integration framework. Several information sources and domain experts were engaged in establishing the domain's procedure; a domain ontology was then created to record the pertinent concepts, actions, jobs, rules, etc. The ontology model is a group of clusters used by the e-governance and represented in their framework. The steps taken to connect the linked data of the UK government are: establishing central government ontology design principles; creating an ontology concentrating on categories related by the formed standards indicated in schemas; and appending classes and properties. By using equivalent classes and sub-class-of to construct sub-class relations, and equivalent properties and sub-property-of to construct sub-property relations, the central government ontology implemented for the linked data of organizations can be mapped. For data interlinking, attributes such as "see also" or "same as" are exploited when creating and using higher ontologies for actual data connections. Discovering data of the same category or entity is addressed by central government ontology mapping, and a procedure for recognizing similar properties is necessary for linking data relationships. Data from each institute must be connected together, since interlinking data is crucial for utilizing linked data. Securing and sustaining linked data is particularly crucial because some data can only be accessed through a SPARQL endpoint package, or the existing data package may not support them, making such data-linking services impossible to establish. By removing institutional boundaries and improving data interlinking, central government ontologies for establishing related data are created from multiple domains and familiar sets; they will cut down on the time and money other institutes need to implement ontology modeling and external relations when building linked data. The UK government central model ontology is shown in Fig. 5, which displays the main ontology model for the domains we implemented for the UK government using the OWL language. The procedure we developed to construct these areas comprises four stages: the first, the ontology extraction stage, separates the notations from the data sources; the second, the ontology design and integration stage, outlines the essential schemas to construct integrated domain ontologies declared in the central government structure, with the linked data obtained from the UK government's open data portal; the third, the ontology validation stage, confirms the notation domains in the knowledge sets; the fourth and last, the ontology implementation stage, expresses the model in an ontology language. The main page of the Protégé Editor with ontology metrics is shown in Fig. 6. The active ontology of the UK central government's many branches was presented using Protégé Editor 5.5.
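To make the mapping step concrete, the sketch below builds a small fragment of such an ontology with the Python rdflib package. The namespace URI and the particular mappings are illustrative assumptions, not the authors' published ontology; the W3C Organization vocabulary is used only as an example of an external schema to map against.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    CG = Namespace("http://example.org/uk-central-government#")  # hypothetical
    ORG = Namespace("http://www.w3.org/ns/org#")                 # W3C org vocabulary

    g = Graph()
    g.bind("cg", CG)

    # Top-level classes named in this chapter's framework.
    for cls in ("PublicBody", "Cabinet", "GovernmentOrganization",
                "Post", "CivilService", "Committee"):
        g.add((CG[cls], RDF.type, OWL.Class))

    # Sub-class-of relations, as in the mapping described above.
    g.add((CG.Cabinet, RDFS.subClassOf, CG.GovernmentOrganization))
    g.add((CG.Committee, RDFS.subClassOf, CG.GovernmentOrganization))

    # Schema-level mapping to an external vocabulary, and instance-level
    # interlinking of the same entity across datasets ("same as").
    g.add((CG.GovernmentOrganization, OWL.equivalentClass, ORG.Organization))
    OTHER = Namespace("http://example.org/other-dataset#")       # hypothetical
    g.add((CG.cabinetOffice, OWL.sameAs, OTHER.CabinetOffice))

    print(g.serialize(format="turtle"))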

Fig. 5 The central model ontology of UK government



Fig. 6 The main page of the Protégé Editor with ontology metrics

Classes can also be changed or added to the tree in this phase, using OWL, the semantic web's common language, to create the domain ontologies. The primary concepts from the design phase serve as the input for this stage. Figure 7 depicts the entities in the ontology's class list.

3.1.2 Class and Property for Integrating Domain Ontology

There are at least six primary classes in the domain ontology, each with subclasses: the cabinet, the government organization, the post, the civil service, the committee, and the UK public body. There are more than 280 vocabularies in the ontology, connected to components through semantic ontological relationships. Synonyms are expressed as semantic relationships in Protégé, for example through the "same individual as" property. Figures 7 and 8 illustrate the classes in the model and the properties in the integrated domain notation, respectively. The graph model of the classes and object properties is shown in Fig. 9. Figure 10 shows the visualization of the central government model using OntoGraf, a plug-in built into the OWL Protégé Editor. The structure of the ontology can be automatically organized using a variety of layouts. Subclass, individual, domain/range object attributes, and equivalence are a few of the relationships offered.


Fig. 7 The ontology list classes entities

Fig. 8 Cabinet committee class annotation and description



Fig. 9 Object properties and their annotations

Fig. 10 Central government graph visualization using OntoGraf

Figures 11 and 12 show the central government graph visualization and the central government hierarchy for the ontology, respectively, using OWLViz. OWLViz is a graphical-representation plug-in installed in the OWL Protégé Editor; it allows the class hierarchies in an OWL model to be viewed and navigated relationally.


Fig. 11 Central government graph visualization using OWLViz

Fig. 12 UK public body hierarchy in the domain ontology

It also supports relating the asserted class hierarchies to the inferred class hierarchies. In order to implement the projected model, the procedure was completed through a wide-ranging inspection of resources for constituent evaluation.


Fig. 13 OpenRefine parsing options for cleaning messy data

3.1.3 Clean Data with OpenRefine

OpenRefine is a very effective tool for data exploration and data classification. By converting the CSV data into the required graph structure, we were able to build entities without having to repeatedly write the base URI. Given that OpenRefine includes an excellent selection of filter and view tools, this is quite helpful if you want to investigate the distribution of values in a numerical column, transform columns of data, or filter between ranges. The result can then be exported as RDF/XML or Turtle, depending on the linked data. Figure 13 shows the OpenRefine parsing options for cleaning messy data. Figures 13 and 14 detail how an open data portal (UK government data) is maintained with OpenRefine, including the number of resources per portal department, agency, and regional government or organization, each with portal definitions for web resources and vocabularies, standardized in structure and delivery according to linked data principles.
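As an illustration of the CSV-to-RDF step (roughly what the OpenRefine RDF export in Fig. 14 performs), the sketch below converts rows of a CSV file into triples under one base URI. The file name, base URI, vocabulary, and column names are our assumptions, not the real portal data.

    import csv
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    BASE = Namespace("http://example.org/gov/department/")   # hypothetical base URI
    SCHEMA = Namespace("http://example.org/gov/schema#")     # hypothetical vocabulary

    g = Graph()
    with open("departments.csv", newline="") as f:           # assumed input file
        for row in csv.DictReader(f):
            dept = BASE[row["id"]]                           # entity URI from a key column
            g.add((dept, RDF.type, SCHEMA.Department))
            g.add((dept, SCHEMA.name, Literal(row["name"])))
            g.add((dept, SCHEMA.parentBody, BASE[row["parent_id"]]))

    # Export as Turtle (OpenRefine can likewise export RDF/XML or Turtle).
    g.serialize(destination="departments.ttl", format="turtle")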

3.1.4 Snapshots of the Proposed System

The following snapshots demonstrate the core pages of the application, with the important characteristics and capabilities a new system must achieve. The first page that welcomes you when you visit the system is the start page, shown in Fig. 15; it aids navigation to the pages of other departments and units through the provided menu. The general structure is described schematically in Figs. 16 and 17.


Fig. 14 OpenRefine export of data in RDF format

Fig. 15 The welcome page of the proposed system

In the central government ontology, there are several triple stores that allow the testing, integration and production of the RDF data used to store interlinked descriptions of entities. Data conversion is the process the central government ontology uses to implement different formats. We can re-use such queries and combine them with other data.

3.2 SPARQL Endpoint for Linked Data

The SPARQL endpoint components are shown in Fig. 18. SPARQL (the SPARQL Protocol and RDF Query Language) is the query language for RDF. A REST-based interface called the SPARQL endpoint assists linked data development tools in maintaining triples.


Fig. 16 UK central government ontology

Fig. 17 Central government definitions

Tools for developing linked data give users access to precise information about the triples they have built and the relationships between them. A local triple store's job is to keep user-built triples and give creators triple files. A SPARQL query can be used to search for data from a particular dataset, as demonstrated in Fig. 19. Such queries can be reused and combined with additional data. The RDF format is available for all LINDAS data.


Fig. 18 SPARQL query section of the proposed system

Fig. 19 Table response of SPARQL query
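For illustration, the kind of SELECT query shown in Figs. 18 and 19 can also be issued programmatically against the endpoint's REST interface. The endpoint URL, prefix, and property names below are placeholders, not the ones actually used by the system.

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX cg:   <http://example.org/uk-central-government#>
        SELECT ?org ?label WHERE {
            ?org a cg:GovernmentOrganization ;
                 rdfs:label ?label .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:   # one table row per binding
        print(row["org"]["value"], row["label"]["value"])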

3.3 System Evaluation and Discussion

Linked data standards are excellent, but are RDF and SPARQL the best tools for many developers? Even though the data is machine-understandable, serialization formats like RDF/XML and Turtle are inaccessible without specialized analysis tools, making them difficult for non-specialists. Presenting data as graphs, rather than the more familiar tree or table patterns, adds a further hurdle. To consume linked data effectively, data users must adopt another approach:


they must build a new rational model of the data, and often start using new or specialist tooling to operate, control and manage it. Experience with open government data shows that qualified programmers will find a good way to exploit the SPARQL endpoint fully. When the British government started publishing data using semantic web services, those already comfortable with these tools adopted them with ease. For novices, it was not easy to know what information existed or how it was modeled, which made it difficult to obtain an overview of the RDF standards and the potential notations of a given source. The data standards themselves thus appeared to hinder access to open administrative data. We proposed a prototype SPARQL endpoint; however, essential features such as combining results by aggregation and calculation still needed to be added to the present SPARQL endpoint and the linked data service application. Online discussions were few and far between, as was user documentation. It was challenging to move development from the services programmers already knew to linked data; the alternative was to take the plunge and invest considerable time, energy and cost in linking data. Web services provided through RESTful APIs have exploded in recent years. These web services are addressed using URI patterns or query requests and return data in plain XML or JSON formats that are simple for servers and applications to process. The UK government has therefore supported work to create a middleware, based on a standard configuration format, that can sit on top of SPARQL endpoints to (a consumption sketch follows the lists below):

• Deliver plain XML, JSON and Turtle formats for linked data.
• Support simple URI patterns for controlling, utilizing, and navigating linked data.
• Support the formation of simple domain-specific APIs by data providers.

In the end, we wrote a standard for the web API functions provided by the controller and supported the creation of native applications. The linked data service offers an underlying tool that enables additional kinds of systematic access to data. All data related to the UK government can likewise be accessed via the web service API. The essential target is to deliver an authorized API implementation using web services in a way that is easy for both the provider and the consumer, demonstrating that publishing in different data formats offers value that offsets the costs. This approach changes how linked data is perceived. Linked data standards, already effective and appropriate, likewise offer the basis for e-governance to create APIs rapidly. As a result of this work, e-governance now perceives web services as the most efficient way to provide systematic access to linked data, in formats already familiar to programmers, such as JSON. The UK government is seriously trying to build a network of e-governance data as comprehensive as the linked data service. The use of linked data standards to publish information has significant benefits for governments: data providers can distribute their data reliably under governance linked data standards, while data users can access management information flexibly and efficiently through the APIs. The adoption of the data service in UK governance has been a balancing act between the institutional proponents of the linked data service and the practical concerns of data users, which include:


• The need for a centralized, one-stop source of official information that is simple to discover and use, as opposed to its current decentralized form.
• Data owners publishing their data online, focused on realizing the immediate and long-term benefits of using linked data standards.
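As a concrete sketch of consuming the RESTful middleware described above: one URI pattern serves several representations, selected here via the Accept header. The host, path, and response envelope are assumptions, not the real UK endpoints.

    import requests

    BASE = "http://example.org/api/government/departments"  # hypothetical URI pattern

    as_json = requests.get(BASE, headers={"Accept": "application/json"}, timeout=10)
    as_ttl = requests.get(BASE, headers={"Accept": "text/turtle"}, timeout=10)

    print(as_json.status_code, as_json.headers.get("Content-Type"))
    for item in as_json.json().get("items", []):   # assumed response envelope
        print(item.get("label"))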

3.3.1 Ontology Metrics

The Protégé Editor is a significant tool that displays statistics for a given ontology in terms of general metrics, class axioms, object property axioms, data property axioms, and annotation axioms. Table 1 summarizes these metrics for our ontology model. The framework for semantic-based services used by the UK central government consists of the UK public body, the cabinet, the government organization, the post, the civil service, and the committee. Establishing central government ontology design principles, concentrating the ontology on categories related by the formed standards indicated in schemas, and appending classes and properties are the steps taken to connect the linked data of the UK government. By using corresponding classes and sub-class-of to construct sub-class relations, and corresponding properties and sub-property-of to construct sub-property relations, the central government ontology implemented for the linked data of organizations can be mapped.

Table 1 Ontology metrics

Metrics                          Count
Axiom                            560
Logical axiom count              170
Declaration axioms count         109
Class count                       44
Object property count             53
Data property count                4
Individual count                  12
Annotation property count          1

Class axioms                     Count
Sub-class-of                      41
Disjoint classes                  10

Object property axioms           Count
Sub-object-property-of            39
Inverse object properties         16
Functional object property         0
Object property domain            31
Object property range             26

Data property axioms             Count
Data property domain               2
Data property range                4

Annotation axioms                Count
Annotation assertion             280
Annotation property domain         1

3.3.2 Loading Dataset Time in Linked Data

Following the advancements in the Apache Jena Fuseki Server, it is possible to run SPARQL queries directly on the server. We usually use localhost and port 3030 to run Apache Jena Fuseki in the web browser, for instance by entering "http://localhost:3030" into the browser's address bar.


Table 2 Loading time of RDF dataset

Categories of RDF data                                Triples   Size (kb)   Time (s)
Main Ontology Government                              726       64.5        8.25
Cabinet Office                                         99       13.5        2.25
Attorney General's Office                              18        2.7        0.35
Department for Business Innovation and Skills          88       13.2        2.1
Department for Communities and Local Government        18        2.6        1.135
Department for Culture, Media and Sport                46        6.5        0.99
Department of Energy and Climate Change                18        2.7        0.35
Department for Environment, Food and Rural Affairs     50        7.2        1.1
Department for Education                               64        9.6        1.30
Department for Transport                               70        9.4        1.35
Department of Health                                   60        9.4        1.25

The Fuseki Server supports SPARQL query and update, and Jena's SPARQL API supports both the OWL and RDFS ontology languages. Considering these queries and languages, Fuseki is an information-distribution server that can use HTTP and SPARQL to display and update Resource Description Framework (RDF) data. A SPARQL query may contain multiple service invocations against the same web resource; in such cases, repeatedly fetching and loading the same resource's triples costs both time and computing resources. Such functionality motivates web publishers to enrich their documents with RDF, since it makes their data directly accessible via SPARQL (without needing to set up an endpoint), while also enabling the direct exploitation of RDF data that is created dynamically (e.g. by RESTful web applications). As depicted in Table 2, the bigger the number of triples and the size of the data, the more time is required for data transfer, and vice versa. In future work, we will study query planning approaches and further optimization techniques aimed at reducing the data transfer time between the server/endpoint and remote sources.
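As an illustration of the setup just described: the query below runs against a local Fuseki instance and includes a SERVICE invocation of a remote endpoint, the pattern whose repeated fetches drive up transfer time. The dataset name "gov" and the remote endpoint URL are our assumptions.

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://localhost:3030/gov/sparql")  # assumed dataset name
    sparql.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?dept ?label WHERE {
            ?dept rdfs:label ?label .
            # hypothetical remote source; each invocation fetches triples anew
            SERVICE <http://example.org/remote/sparql> {
                ?dept rdfs:seeAlso ?more .
            }
        }
        LIMIT 5
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["dept"]["value"], row["label"]["value"])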

4 Conclusion

The main objective of this study is the implementation of a semantic web service and the integration of e-government based on linked data. The semantic web and linked data aim to take the web to the next level: a web of data in which both humans and machines understand data, allowing machines to make giant qualitative leaps in their use of web data. The necessary technological tools (RDF, OWL, SPARQL, etc.) explored here are supported by internationally renowned organizations. The current challenge is to improve the quality of semantics on the linked data and to implement the technologies required for web services and linked data.


These technologies are currently confined primarily to academia, specific communities, and the general public. We appreciate Great Britain's open government data movement; we explored this data to implement the proposed system. The system has been tested and recorded reasonably low response times for data upload, download and transfer. Future work will focus on exploring more optimization approaches to further reduce the system turnaround time from seconds to, possibly, milliseconds.


Application of Zero-Trust Networks in e-Health Internet of Things (IoT) Deployments

Morgan Morgak Gofwen, Bartholomew Idoko, and John Bush Idoko

Abstract The reliance on the capabilities of the internet is evident in the deployment of Internet of Things (IoT) sensors (which have grown into an Internet of Everything, IoE), devices and actuators for everyday applications: from smart homes, smart cities, and healthcare services to agriculture. The interconnection of these sensors, actuators, devices and, in general, things, incorporating artificial intelligence and machine-to-machine communication algorithms, forms the core of the IoT paradigm. Just as the wide use and application of the first version of the internet came with security and privacy concerns, so does the deployment of IoT in any environment. The degree of secrecy required to collect, process, transmit and store data collected from an individual is higher in eHealth IoT deployments because of the sensitivity of such data. This raises the question of how these devices can securely collect personal data and transfer it over an untrusted network while still maintaining the privacy and security of the data. The challenge extends to sensor nodes and other things within an IoT: an individual's privacy must be protected without divulging data to unauthorised users with malicious intent. This study identifies the challenges in the adoption of IoT and analyses various privacy concerns, threats and vulnerabilities in the deployment of IoT in eHealth.

Keywords Internet of Things · Privacy · Security · Identity of things · Zero-trust network

M. M. Gofwen (B) National Space Research and Development Agency, Abuja, Nigeria e-mail: [email protected] B. Idoko Center for Cyber Space Studies, Nasarawa State University, Keffi, Nigeria e-mail: [email protected] J. B. Idoko Applied Artificial Intelligence Research Centre, Department of Computer Engineering, Near East University, Nicosia 99138, Turkey e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things in Education, Studies in Computational Intelligence 1115, https://doi.org/10.1007/978-3-031-42924-8_14



1 Introduction

The age of Information and Communication Technology has paved the way for a number of breakthroughs and innovations in developing, designing and producing devices, things and objects. As the number of these devices continues to grow, there is a need to harness their individual capabilities to obtain the maximum advantage from their connections; this concept is known as the "Internet of Things" (IoT). IoT continues to gain momentum, and its application has widened since the concept was coined by Kevin Ashton. Industries, businesses and homes are increasingly adopting the deployment and application of IoT; this wide application is attributed to the low cost of sensors and the relevant technologies [1]. The application of IoT has made information sharing and machine-to-machine (M2M) communication easier. In healthcare IoT applications, wearable sensors are connected via wireless communication to patients' mobile devices to collect and share data with healthcare practitioners or care providers. Most developed nations have applied IoT to healthcare services to manage and improve patients' wellbeing. The application of IoT in healthcare is mostly referred to as eHealth; it involves using sensors (mostly wearable) with devices (mobile, with an internet connection) to monitor patients' sugar level, insulin level, heartbeat, etc., and has greatly improved the treatment of cardiovascular diseases in elderly patients. The sensors collect data regarding patients' wellbeing and communicate it to a data reservoir usually managed by a hospital or clinic. Alongside these advantages, eHealth IoT raises many concerns for patients, because data is collected and transferred through unsecured communication channels and protocols; security measures must therefore be deployed to ensure data confidentiality, integrity, availability, non-repudiation and preferential anonymity preservation. Privacy preservation has been identified as one of the critical areas in IoT [2] requiring research into efficient authentication methods and well-designed cryptographic mechanisms for securing data shared among things in a healthcare scenario. The zero-trust network, or zero-knowledge-proof, approach works on a "never trust, always verify" principle. The level of trust that devices and things place in each other has created a vulnerable path through which threats and attacks can be launched, compromising the security of sensitive data. Currently, the deployment of IoT devices relies on various trust frameworks to define the trustworthiness of things and humans, in the context of allowing smart objects to take decisions and trust an entity; this has led to a number of so-called fine-grained trust negotiation mechanisms being developed [3]. Healthcare services have improved as a result of IoT applications providing people with special medical needs, and the elderly in 'smart' communities, with enhanced medical emergency response and early detection and monitoring of diseases. Smart and wearable sensors (such as gyroscopes) are carried by patients requiring immediate medical attention to monitor body parameters such as blood pressure, body temperature and breathing, collecting data and communicating it to medical practitioners [4].


The Keep In Touch (KIT) technology [5] was developed to collect and forward data about the wellbeing of chronically ill patients and elderly persons, monitoring disease progression, therapy response and patients' overall health status. KIT relies on the combination of Radio-Frequency Identification (RFID) and Near Field Communication (NFC), a wireless connectivity technology that uses magnetic inductive coupling in the licence-free 13.56 MHz frequency band, together with mobile communication [5]. Another application of IoT in healthcare is to provide autonomy and a free lifestyle to elderly and special-needs persons, by combining Keep In Touch (KIT) with Closed Loop Healthcare Services to enable data sharing with a physician in a service centre. Healthcare services provided under IoT adoption must adhere to certain security requirements (privacy, anonymity and untraceability) for users of the system, as achieved by the PAAS eHealth Network [6]. The PAAS network framework operates on four privacy levels (from level 0, described as the most intuitive privacy level, which must be met by authorised physicians requesting access to patients' data, to level 3, which requires enforcement of non-disclosure of information by all parties and protocols using patients' data) to maintain compliance with the security requirements set. The IoT paradigm makes adoption and deployment in various day-to-day activities possible, including pervasive healthcare provision to patients in hospitals [3]. Other IoT application and deployment areas continue to expand as further academic and industrial research is carried out; they include:

a. Agricultural operations, in the management of livestock, smart farming, and the transportation and distribution of farm products. Deployment of networked devices, things, sensors and actuators will improve crop productivity and pest management and control. Sensors able to read and predict temperature, humidity and environmental conditions can provide farmers with sufficient, well-articulated data to enable informed decisions on which season, soil, equipment, irrigation method and weather conditions are most suitable for which crop, improving the farming methods employed [7].
b. Business operations: IoT can be deployed to help organisations achieve optimal operational functions, for example in the supply and delivery chain and in inventory management, which relies on the RFIDs of things and devices within the network for goods identification, control and asset tracking [4]. RFIDs attached to goods and products can be tracked and identified, and the embedded data read by RFID readers. Deploying sensors, devices and things for logistics monitoring and goods tracking presents a 'smart' business approach, improving the Quality of Service (QoS) of business performance.
c. Smart communities: IoT connects people, devices and things to one another for information generation and sharing within a smart community, neighbourhood-watch applications, and value-added services (utility management, social networking, etc.).


d. Smart homes: IoT sensors are used in cities for data collection, system automation and physical security systems, providing a more comfortable, convenient, serene and secure environment.
e. Autonomous cars: an IoT-based car can be designed around a driverless concept by integrating several devices (car components), sensors, applications and the internet to perform complex, intelligent functions that a conventional car cannot.

Assigning sensors within an eHealth application a Unique Identifier (UI) has been a research challenge in the application of IoT in various environments. An identity management approach that can accommodate sensors with different peculiarities is therefore imperative to this research, as is a zero-trust approach to further enhance the security and privacy of data travelling over the internet and vulnerable transport-layer protocols.
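As a toy illustration of one such identifier scheme (a URN minted from a UUID; RFID object identifiers and IPv6 addresses, discussed later, are alternatives), a node can be given a collision-resistant identity at provisioning time. The registry and field names are our assumptions.

    import uuid

    def mint_sensor_id() -> str:
        """Return a globally unique identifier for a new sensor node."""
        return uuid.uuid4().urn          # e.g. 'urn:uuid:6fa459ea-...'

    # Hypothetical device registry keyed by the minted identifier.
    registry = {mint_sensor_id(): {"type": "heart-rate", "owner": "patient-001"}}
    print(registry)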

2 E-Health Security Challenges

Sensing frameworks in the IoT paradigm have to cope with uncertainty challenges that affect QoS, availability and the performance of nodes in a WSN [8]. A service architecture framework such as a Mobile Ad hoc Network (MANET) consists of an assembly of wireless nodes with arbitrary mobility patterns. Energy efficiency is a key design challenge for the battery-operated MANET nodes; developing energy-efficient approaches to reduce the energy consumption rate of sensor devices has therefore become imperative [9, 10]. Connected devices in an IoT-deployed environment face a major concern: the security of the data, the devices and the communication medium [11]. Communication media (private networks and VPN tunnels), devices, data (personal, industrial or enterprise) and data reservoirs need to be protected against theft, unauthorized access and destruction. Similarly, [12] noted that security and privacy have been the major barriers, de facto, to the full adoption of IoT. Weber and Boban [13] outlined the key security, confidentiality and privacy challenges in the deployment of IoT in healthcare (Fig. 1):

i. Security: securing physical objects in the IoT architecture from arbitrary attacks (e.g. Distributed Denial of Service, DDoS) and protecting objects from malware.
ii. Confidentiality: simpler yet secure methods to exchange confidential and private data.
iii. User privacy: control of and rights over personal data, privacy technologies and privacy-protection approaches (frameworks, algorithms and models).

A number of challenges to data security and privacy preservation exist, and attacks occur as a result of the unethical handling and transmission of data by medical personnel in eHealth IoT applications [14].


Fig. 1 An overview of eHealth security challenges

Security and privacy challenges in the application of eHealth in IoT have led to various proposed approaches claiming to ensure the preservation of privacy, data integrity, confidentiality and availability for both data and resources in the eHealth IoT paradigm, such as the eHealth system by [14] that provides key exchange and personal data transmission through a secure trusted authority. Islam et al. [15] outlined some distinct eHealth IoT requirements which must be identified and analysed for the healthcare domain: confidentiality, integrity, authenticity, availability, data freshness, non-repudiation, authorization, resiliency, fault tolerance and self-healing using artificial intelligence [16–26]. Computational, energy and memory limitations, mobility, scalability, device multiplicity and tamper-resistant packaging were also identified as security challenges in eHealth [15] (Fig. 2).

Fig. 2 eHealth IoT security challenges [15]


The security of IoT has posed critical challenges to its deployment in eHealth. Most of the research issues in this context have been identified in: Identity of Things (IDoT), zero trust, data security in Body Area Networks (BAN), eHealth sensor availability and data non-repudiation, BAN encryption overhead, and preferential anonymity preservation for IDoT.

• Identity of Things (IDoT) for eHealth Sensors: Identity of things, in an electronic sense, is an essential component of the IoT computing paradigm. IDoT is the subject of a number of ongoing research efforts towards a more comprehensive, all-inclusive framework that could cover the diverse entities in IoT, considering the different owners, manufacturers, vendors and differences in object technology (from devices with IP addresses to things such as fridges, farm harvesters, microwaves, sensors, etc.). According to [27], the identity of things is pivotal to the growth and deployment of IoT, so that anything can be identified and associated with an owner. The problem of IDoT has been met with a number of studies proposing various approaches to identity management (IDM). IoT adoption requires a user-centric identity management protocol that supports people, things, devices, etc., and relates objects in an IoT to their owners; such a protocol does not yet exist and thus remains a challenge in the IoT paradigm [28]. Various schemes have been proposed to provide a means of addressing and identifying sensor nodes in a healthcare environment; these are highly dependent on particular access technologies [29], hence the different identifiers rely on addressing protocols that make the connection of sensors challenging. A BAN could therefore comprise sensors with different identity schemes, which makes connecting these sensors and actuators difficult. Some identity-of-things schemes that provide a unique identifier (UI) for nodes are the RFID object identifier, IPv4, IPv6, etc.

• Zero-Trust Principles in eHealth Deployment: IoT has become one of the most discussed technologies in the pool of IT paradigms, and its deployment has continued to grow in recent years, from smart buildings, car-to-car communication and agriculture to healthcare at large, medium and small scale. The massive number of connected devices opens the door to security challenges regarding the data collected by sensors and devices. IoT security issues are distinct from those of the internet, mobile communication networks and traditional computers [30]. According to a report by Palo Alto Networks, perimeter-centric network security (focused on ingress/egress points) relies on the basic assumption that entities within the internal boundary are reliable and can be trusted. The zero-trust security approach, by contrast, makes no concession to the traffic, the request, or the location the request emanates from: it treats all entities (devices, traffic, etc.) as untrusted and hence requires verification at all times [31]. The zero-trust approach to monitoring behaviour inside and outside a network boundary has been defined as a scheme that monitors all incoming requests and labels them untrusted, irrespective of any credit points that would otherwise allow the requestor to be trusted on account of its historical behaviour.


Fig. 3 Zero-trust network principle for eHealth

In their paper, the authors argued that a requestor's previous historical behaviour is not a sufficient credential for trust, as a trusted requestor can misuse privileges it has gained (Fig. 3). The zero-trust security approach discards any assumption of trust by abolishing the notion of trusted zones. Security concerns in eHealth IoT adoption will benefit from the zero-trust approach, given the level of privacy required for things, data and datasets, together with confidentiality, availability, integrity, non-repudiation and preferential anonymity preservation. Zero-trust network principles or zero-knowledge proofs can be applied to IoT security designs; for example, the proposed solution for smart parking [32] ensures proper authentication on top of the popularly adopted elliptic curve cryptography for sensing nodes in an IoT WSN, thereby providing data privacy and security and reducing the computational burden on nodes (a minimal sketch of per-request verification follows the list below). Research continues towards a lasting solution that can be adopted across all IoT application platforms, communication channels and protocols. The privacy and security of collected data are therefore paramount when considering adopting the IoT paradigm. For example, a wearable eHealth sensor that collects a patient's heartbeat, vital signs and heart-rate variation for monitoring disease progression must transmit the collected data securely, ensuring it is not compromised and that the patient's identity is protected in the event of any security breach in the communication channel.

• Data Security in Body Area Networks (BAN): A forecast by Cisco estimated that the share of generated traffic from non-PC devices and from M2M connections would rise to almost 70% and 43%, respectively, by 2026, against 40% and 24% in 2022, as the number of interconnected devices (in smart homes, industry, automotive, healthcare, etc.) continues to rise. According to Friess [33], data integrity, security and privacy framework requirements must take the following IoT security challenges into account:

  – Resource-constrained devices should be supported by symmetric and lightweight solutions.
  – Cryptographic mechanisms and techniques are required to ensure that collected data is protected, stored and shared in a manner such that its content cannot be altered or accessed by unauthorized parties.
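The sketch below is a minimal illustration of the "never trust, always verify" rule discussed above, not any of the cited frameworks: every request is checked against a device registry and a message authentication tag, regardless of where it originates. Keys and identifiers are illustrative.

    import hmac, hashlib

    DEVICE_KEYS = {"sensor-42": b"per-device-secret"}  # provisioned out of band

    def verify_request(device_id: str, payload: bytes, tag: str) -> bool:
        """Reject unknown devices and any payload whose MAC does not verify."""
        key = DEVICE_KEYS.get(device_id)
        if key is None:
            return False                  # unknown device: no implicit trust
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    # Every call is checked -- there is no "trusted internal zone" bypass.
    msg = b'{"heart_rate": 72}'
    tag = hmac.new(DEVICE_KEYS["sensor-42"], msg, hashlib.sha256).hexdigest()
    assert verify_request("sensor-42", msg, tag)
    assert not verify_request("sensor-42", b'{"heart_rate": 99}', tag)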


In their paper 'DTLS based security and two-way authentication for the Internet of Things', Kothmayr et al. [34] presented a Datagram Transport Layer Security (DTLS) protocol approach to ensure the integrity of data communicated over IoT networks. The approach appends a 13-byte DTLS header to all transmitted data; the header carries information such as the content type of the message (which could be data gathered during the handshake process), a checksum and the protocol version. The technique effectively ensures that data integrity is maintained and monitored at both ends of the communication process through header appending; however, the approach relies on trust and hence follows a 'verify and trust' principle. The RERUM security framework proposed by [35] adopts existing cryptographic schemes, such as the Malleable Signature Scheme (MSS) and Compressive Sensing (CS), for authenticity preservation and integrity assurance, pairing integrity with other security properties such as (i) availability, (ii) origin authentication and (iii) external consistency. RERUM ensures the integrity of data collected by sensor devices and introduces a trade-off between data anonymity and data integrity when transmitting data over networks. The VIRTUS architecture [36] is a middleware solution for managing applications in an IoT environment while ensuring data integrity. It employs the OSGi and XMPP open standards; the XMPP protocol handles the internet connection and relies on a secured server that isolates sensitive data from the internet within the private network, thereby treating devices inside the private network as trusted entities. This solution does not provide security while data is transmitted over unsecured networks. An approach that relies on zero-knowledge proofs (ZKP) for the authentication and verification of devices, presented by [37], uses two established protocols for P2P authentication and encryption to ensure data integrity during transmission over untrusted, static IoT networks. The approach achieves zero knowledge through an implementation of Goldreich-Micali-Wigderson (GMW) graph isomorphism and the Diffie-Hellman (DH) key exchange protocol to provide data integrity, authentication and privacy. It improves on the approaches discussed previously in that it involves mutual authentication for devices about to connect, making each device or network function as both prover and verifier. The approach is, however, limited in wide-scale deployment because of the need for credential distribution (e.g. isomorphism graphs) and its lack of public-key cryptosystem flexibility (pre-deployment setup of key directories is therefore necessary).

• eHealth Sensors Availability and Data Non-repudiation: The upholding and non-denial of data sent or received by things in IoT has become necessary owing to the security challenges to which sensitive data is exposed. Non-repudiation prevents the denial of data previously transmitted by sensor nodes and provides proof of behaviour to trusted third parties [38, 39]. The Data Leakage Detection research conducted by Papadimitriou and Garcia-Molina (2011), as cited by [40], proposed a model that enforces data integrity by monitoring data usage and leakage at various stages.


The model can calculate a 'guilt' probability in every data-leakage scenario through a digital-marking and information-tagging technique. Though the model proves effective in enforcing data non-repudiation along communication networks, the same cannot be said at the sensor nodes, as it introduces noise during the data watermarking and tagging process, which interferes with the signal quality of sensor nodes [40]. Another data non-repudiation solution, which can be integrated both in mobile sensors and in standalone IoT sensor nodes by leveraging PUFs as a trust root, was introduced by [41]; the approach uses an on-chip PUF that serves as the sensor identity and provides the secure keys used to ensure non-repudiation of sensed data. The Hybrid Cipher Algorithm accommodates both symmetric and asymmetric cryptography to provide data non-repudiation, as well as data integrity, for IoT devices; though, as stated by [42], the algorithm is not applicable to every IoT deployment. Data collection, transmission and consumption services in IoT require the sustained availability of raw and/or unprocessed M2M data from the collection and processing subsystems or nodes of the IoT data cycle [43]. IoT systems security has eight major standards, as listed by [33]: (i) availability of IoT resources at any time, (ii) access control, (iii) data confidentiality, (iv) privacy protection, (v) data confidentiality, (vi) data integrity, (vii) communication-layer security and (viii) non-repudiation. The availability of IoT resources (sensors, actuators and devices), including data, communication networks and protocols, must be protected against any disruption that would interrupt access, for instance in vehicle-to-vehicle communication.

• BAN Encryption Overhead: The privacy and security of data and resources, given the sensitivity and privacy requirements of sensor nodes in Body Area Networks and Wireless Sensor Networks (WSNs), demand security measures that provide tight protection. This calls for encryption approaches that can cover sensors and devices across the various applications of the IoT paradigm. Traditional security measures are no longer adequate for the growing security needs of sensor networks and IoT nodes; building a stronger, scalable security infrastructure has therefore become imperative to avert security challenges [45]. An encryption architecture proposed by [45] has three major components in its design: mobile and contextual sensors, a communication gateway and a back-end infrastructure. The gateway runs on computational devices that use Linux-based distributed systems for data communication and a Raspberry Pi device [46]; through network interface configuration, Linux networking can apply PKI encryption to incoming traffic and data sets. An experiment by [47] measured the accuracy of security for data communicated over Wireless Sensor Networks (WSN) within an IoT implementation, adopting two popular encryption algorithms: the Elliptic Curve Diffie-Hellman (ECDH) and Integrated Factorization Scheme (IFES) algorithms.


As their experiments showed, the ECDH algorithm provides a high level of security because of the mathematical operations required to implement key generation for the two communicating parties within a network; this makes it difficult for attackers to calculate the key pair required for the communication, even if they monitor the communication channel (a key-agreement sketch follows the list below). The IFES algorithm, on the other hand, operates in the same way as the RSA encryption algorithm: for any communication to take place, the sender must generate a public key to encrypt the data to be transmitted (which is shared with the receiver) and a private key to decrypt data, and the receiver must likewise generate a public and a private key. Despite the level of security these two algorithms provide, they lack overarching applicability to all the kinds of devices in a WSN, such as RFIDs. Similarly, a comparative analysis of cryptography algorithms for use in the IoT paradigm, performed by [48], listed five cryptography libraries used in IoT:

i. WolfSSL is characterised by low runtime memory, and its remarkably small footprint makes it easy to implement in embedded devices and things. WolfSSL houses a good number of algorithms, such as RSA, SHA-1, SHA-2, ECC, BLAKE2, DSS and Poly1305, and features key and certificate generation. The library supports the Online Certificate Status Protocol (OCSP) and Certificate Revocation Lists (CRL), and provides compatibility with OpenSSL.
ii. WiseLib is an algorithm library for sensor networks. It also provides basic documentation tips, design issues and fundamental knowledge in its wiki.
iii. TinyECC is a cryptographic algorithm library based on the ECC algorithm, providing ECDSA, the ECDH key exchange protocol and ECIES public-key encryption.
iv. RelicToolkit is another cryptography algorithm library that could be adopted for IoT security. It incorporates several algorithms (multiple-precision integer arithmetic, bilinear maps and extension fields, ECC, and cryptographic protocols), making it a flexible and efficient meta-toolkit. Its inclusion of multiple-precision integer arithmetic makes deployment feasible on a wide range of devices and things in the IoT world.
v. AvrCryptoLib reduces cryptography runtime for resource-constrained devices (devices with low battery runtime and significantly lower memory capacity). It employs a number of algorithms, including AES, BLAKE, SHA-1, SHA-2 and ARC4, with SHA-1, SHA-256, MD5 and Grøstl for hashing functions.

As pointed out by [48], the low computational power of the embedded devices and microprocessors in some IoT things plays an important role in the usage of internet infrastructure and information exchange in the field of IoT. This underlines the need for algorithms that facilitate secure information exchange for things with low computational capability.
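To make the ECDH exchange discussed above concrete, here is a hedged sketch using the Python cryptography package; it illustrates the key agreement in general, not the experimental setup of [47]. The curve choice and key-derivation label are our assumptions.

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each party generates an ephemeral key pair on the same curve.
    alice_priv = ec.generate_private_key(ec.SECP256R1())
    bob_priv = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its private key with the peer's public key;
    # only public keys ever cross the channel.
    alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
    bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
    assert alice_shared == bob_shared   # an eavesdropper cannot compute this

    # Derive a symmetric session key from the raw shared secret.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"ban-session").derive(alice_shared)
    print(key.hex())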


It is worthwhile to mention that, although a number of novel and existing algorithms (implemented on a few platforms) have been introduced and are being tested for wide use in the IoT environment, there is still more to be done, given the growing privacy and security concerns over the sensitive data collected within the usage and deployment environments of IoT.
Preferential Anonymity Preservation for IDoT: One approach used in balancing the level of aggregation is the k-anonymity privacy-preserving aggregation for published data, which is in wide usage [49, 50], in that it provides a degree of secrecy whereby an individual's identity cannot be recognised within a set of k users, by anonymising the data [51]. According to [50], the design of their approach, EEXCESS, which provides data preferential anonymity and privacy preservation, consists of content anonymity, which guarantees data privacy such that, in the event of an attack, the identity of the data provider cannot be obtained by the attacker; request unlinkability, which maintains user privacy by making it impossible to link two or more independent data items to the originating request; and origin unlinkability, which hides the origin of a request so that it is not revealed by an application-level protocol [51]. Anonymity is extended to the communication process by ensuring that the identities of the various communicating parties in a communication protocol, and the traffic, are concealed. However, [51, 52] opined that the k-anonymity location-based privacy model fails to provide users' preferential query requirements due to its reliance on dynamic user distribution. In their proposed solution for preserving location privacy, Ni et al. [52] argued that their solution displays high scalability with the (s, e)-anonymity model, specifically for location protection, as against other cloaking solutions such as the Casper architecture, which achieves location-privacy-preserving queries by emulating a brute-force approach, resulting in a high workload and poor scalability of deployment. The approaches introduced by the authors discussed above could be adopted in the deployment of eHealth IoT, considering the sensitivity of the data that systems within this environment transmit, which requires a high level of privacy preservation; as shown by the various approaches, data can be transmitted with anonymity to hide the identity of the sender or requestor.
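To make the k-anonymity idea concrete, a toy sketch follows. The quasi-identifier fields, the records and the choice of k are hypothetical, and real schemes such as EEXCESS involve far more machinery than this single check:

```python
# A record release is k-anonymous if every combination of quasi-identifiers
# is shared by at least k records, so no individual stands out.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if each quasi-identifier combination occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

released = [
    {"age_band": "30-39", "postcode": "NE1*", "condition": "asthma"},
    {"age_band": "30-39", "postcode": "NE1*", "condition": "diabetes"},
    {"age_band": "40-49", "postcode": "NE2*", "condition": "asthma"},
    {"age_band": "40-49", "postcode": "NE2*", "condition": "arrhythmia"},
]
# With k = 2, each (age_band, postcode) group hides an individual among two records.
print(is_k_anonymous(released, ("age_band", "postcode"), k=2))  # True
```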

3 Proposed Model
The proposed model targets the IoT paradigm in its eHealth application. The approach used is the Privacy Maintainability (PriMa) lightweight risk assessment, designed to be focused on, and befitting for, things in the IoT paradigm. It was tested using data collected from secondary sources to validate the output of the proposed lightweight model in performing a risk assessment for the deployment of IoT in the healthcare environment. The lightweight risk assessment proposed in the PriMa model considers a number of factors that have been noted as challenges to IoT deployments, such as privacy, preferential anonymity preservation, and the confidentiality, availability and integrity of data and things in the IoT environment of application.


Fig. 4 PriMa proposed model

The proposed lightweight PriMa risk assessment is categorised into four phases (Fig. 4):
Phase 1: Identification
Phase 2: Privacy Evaluation
Phase 3: Risk Evaluation
Phase 4: Risk Analysis
Phase 1: Identification: The first phase identifies the information required to begin assessing the risk associated with the deployment of sensors and other things in an IoT context: asset identification, threat identification and associated vulnerability identification.
Assets Identification: This involves identifying the assets in the eHealth deployment. Asset identification covers tangible to intangible objects, things and nodes. In this model, an asset can be classified as valuable under a number of criteria, which include the sort of data collected and communicated and the privacy protection that must be assigned to the data or data set. This translates to assessing each sensor node, network, communication gateway, protocol and database on the sensitivity of the content held or transmitted (data) and the performance requirements of the asset.


The asset value is a quantitative score based on the trust level assigned to an asset to communicate with other devices within and outside the BAN perimeter, indicating the sensitivity of the asset value with the following dependencies (a scoring sketch follows the list):
i. Service interruption (Availability)
ii. Confidentiality properties
iii. Integrity properties
iv. Privacy maintainability
v. Preferential anonymity preservation
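A minimal sketch of how such a score could be computed is given below. The 0-5 per-dependency scale, the equal weighting and the example ratings are assumptions made for illustration, not values prescribed by the PriMa model:

```python
# Illustrative quantitative asset-value score built from the five
# dependencies listed above.
DEPENDENCIES = ("availability", "confidentiality", "integrity",
                "privacy_maintainability", "anonymity_preservation")

def asset_value(ratings: dict) -> float:
    """Average the assumed 0-5 sensitivity ratings into a single asset value."""
    return sum(ratings[d] for d in DEPENDENCIES) / len(DEPENDENCIES)

# Hypothetical ratings for a continuous glucose monitor.
cgm_ratings = {"availability": 5, "confidentiality": 4, "integrity": 5,
               "privacy_maintainability": 4, "anonymity_preservation": 3}
print(asset_value(cgm_ratings))  # 4.2
```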

Threat Identification: The threats associated with the deployment of sensor nodes are identified. Threat identification involves collecting information about threats reported in the past (threat history), together with information gathered from threat-reporting forums. Adopting [53], the threat model classifies the causes of threats as one, or a combination, of the following: deliberate (D) or accidental (A). Deliberate actions are targeted at gaining unauthorised access to data, while accidental actions are human related, resulting from oversight and negligence that destroys data, sensor nodes and associated valuable resources.
Identification of Vulnerabilities: This stage involves identifying the vulnerabilities [54] of the WSN, sensor nodes and databases, as reported in online databases and drawn from users' experience with deployed sensors. This stage follows a similar procedure to the threat identification stage.
Phase 2: Privacy Evaluation Phase: This phase determines the privacy requirements of each sensor node, device, communication protocol and gateway at an individual level, and provides information as to what aspect of privacy is exposed. It focuses on privacy maintainability; the sensors' privacy requirements (anonymity preservation, which could take the form of hiding location from unauthorised access, or ensuring that an intercepted or stolen data set cannot be tied to a particular individual) are determined with a privacy impact matrix (calculated as the critical outcome resulting from a privacy violation against the sensitivity of the data or sensor nodes), as shown in Fig. 5.

Fig. 5 Quantitative privacy impact matrix


Fig. 6 Quantitative risk impact matrix

Phase 3: Risk Evaluation: The risk evaluation phase calculates the impact that vulnerabilities and threats could have on an asset through a risk matrix. The matrix is assigned quantitative values to measure the consequence a threat or vulnerability could pose against the likelihood of its occurrence, which is described as the profiling of a risk. The consequence of a threat or vulnerability is assessed from the loopholes that sensors, networks and communication protocols or gateways present when deployed, and the likelihood of those threats and vulnerabilities being exploited (using historical data and reports of attacks that actually occurred) (Fig. 6).
Phase 4: Risk Analysis: The risk analysis phase presents the overall assets and their values, the threats to each asset, the vulnerabilities appropriately linked to the asset(s), the CIA (Confidentiality, Integrity and Availability) violation, and the risk score (the mean of the privacy impact and the risk impact). The risk analysis is registered in Table 1. To carry out an extensive risk analysis, the factors of mitigation, reporting and risk treatment were considered.
Mitigation: critically presenting the steps and methods to take to prevent, minimise or avoid the impact of an attack/threat on sensor nodes and other resources in the eHealth deployment. Mitigation of the threats identified in the first phase of the model is presented and analysed in the next section.
Report: Parties involved with the usage, collection, handling, design and manufacture of sensors and other things should be notified of the results of the risk assessment.
Table 1 Risk analysis

Asset | Threat | Vulnerability | CIA violation (Confidentiality, Integrity, Availability) | Risk score (mean of privacy impact and risk impact, (pi + ri)/2)


Risk Treatment: Risks in the deployment of sensors and other things can be owned, rejected or transferred to relevant third parties. Owning the risk requires employing the necessary measures to control the risk and its residual outcome.
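The scoring flow of Phases 2-4 can be sketched as follows. The matrix cell values here are placeholders standing in for Figs. 5 and 6, whose actual values the chapter defines graphically; only the (pi + ri)/2 mean comes from the model itself:

```python
# Sketch of the PriMa scoring flow: look up privacy impact (pi) and risk
# impact (ri) from likelihood x consequence style matrices, then take their
# mean as the risk score.
IMPACT_MATRIX = [  # rows: likelihood 1..5, cols: consequence/sensitivity 1..5
    [1, 2, 3, 4, 5],
    [2, 3, 4, 5, 6],
    [3, 4, 5, 6, 7],
    [4, 5, 6, 7, 8],
    [5, 6, 7, 8, 9],
]

def impact(likelihood: int, consequence: int) -> int:
    """Read one cell of the (placeholder) impact matrix."""
    return IMPACT_MATRIX[likelihood - 1][consequence - 1]

def risk_score(pi: int, ri: int) -> float:
    """Mean of privacy impact and risk impact, as defined by the model."""
    return (pi + ri) / 2

pi = impact(likelihood=4, consequence=5)  # e.g. privacy violation on CGM data
ri = impact(likelihood=4, consequence=5)
print(risk_score(pi, ri))  # 8.0, on the same scale as the scores in Table 6
```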

4 Results
The researchers analyse and present data collected from various secondary and primary sources relating to the study. The proposed lightweight model was tested with the data collected, and the findings are reported accordingly. Data from primary sources were extrapolated to obtain the data needed to carry out this study. The extrapolated data sets are: sensor names; communication methods in eHealth IoT architectures for the deployment of healthcare sensors; vulnerabilities that sensors and communication networks possess; threats to sensors and networks; and a list of potential or reported attacks on sensor resources (data and physical attacks). There is a wide range of sensor and actuator nodes, devices and networks designed and deployed for use in the healthcare industry, all of which serve various purposes in monitoring key vitals, carrying out specific functions at predetermined times (or when a sensor detects an abnormality in body activities) and tracking wellness progress in hospitals, clinics and some care homes. The assets identified at this stage are categorised into:
(i) sensor and actuator nodes/devices and systems (Table 2); and
(ii) communication networks and protocols: the communication protocols and principles that manage the process of data handling in the healthcare application of sensor nodes, identified below (Tables 3 and 4).
Identification of Threats to Assets: To obtain an accurate result in the threat modelling stage, the threats noticeable in the deployment of sensors in healthcare are identified. Sensor nodes, actuators, communication channels and data storage hubs are open to the threats mentioned below, which aim to gain unauthorised privileged access to medical information either at the WBAN, by intercepting data in transit, or at the hub/cloud storage and databases.
Man-in-the-middle Attack: This sort of attack eavesdrops to collect data or information exchanged between physician and patient, and also between nodes communicating with a device. The attacker gains access to such data by using keying materials that reveal the secret key of devices and the communication channel during the key exchange process (the attacker can steal the identity of the two communicating parties). The attack targets the key generation protocol, probing it to obtain the private key of one of the two data exchange parties; this makes the key establishment protocol vulnerable.
WSN Routing: Packets are intercepted to collect or trace data coming from sensor nodes. The attacker transmits inaccurate readings from sensor nodes by intercepting the data, altering it and then rerouting it to the expected destination (server/cloud-based storage). The routing threat is classified into three attacks, described after Table 2.


Table 2 Identification of sensors and actuators in eHealth

Asset | Description | Data source
ADAMM | An intelligent asthma management sensor developed to monitor patients with an asthma condition. The technology employs an algorithm unique to the patient's deemed-normal respiratory state. The ADAMM sensor is designed to provide notifications by vibration and text to a dedicated mobile device via a downloaded dedicated app, and also transmits the data collected at specific times to the ADAMM web portal to allow access | [55]
HELIUS | A consumable pill-like sensor with the ability to track the vital signs of the patient who consumes it. Data collected are transmitted in real time to the medical doctor and to an accompanying application compatible with PCs and smart devices. It also monitors drug intake when the sensor is ingested | [56]
ITBRA | A wearable sensor embedded within a lady's bra to monitor the condition and rhythms of breast tissue and give alerts to possible breast cancer | [57]
QardioCore | A wearable ECG/EKG monitor that uses sensors to measure heart rate and track heart activity in general through a chest patch. Data collected can be accessed via a dedicated (iOS-compatible) mobile app that communicates with the monitor over a Bluetooth connection, and vital signs can be monitored by physicians in real time by accessing data uploaded to the hospital cloud storage | [58]
Google Contact Smart Lenses | This technology provides readings from the tears in a person's eye so as to obtain vital data regarding glucose levels at various stages | [59]
Mobile devices | These devices collect data from sensing nodes, with an inbuilt capability of analysing the data (usually with an accompanying dedicated app) for the users of the sensors, and with the ability to transmit collected data to a central or distributed storage centre/cloud-based storage for other authorised parties to access | -
CGM | Continuous Glucose Monitoring (CGM) is sensor-augmented glucose monitoring that detects low or high glucose levels and delivers the necessary insulin shots to patients | [60]

(i) Selective forwarding: the attacker intercepts the expected packets and forwards only some selected packets to the receiving sensor, thereby excluding certain data sets. (ii) Sinkhole attack: initiated by an attacker who forces other sensor nodes to establish a route through a compromised node; such attacks are hard to detect. (iii) Sybil attack: an attacker uses a compromised sensor node to present fake identities to the other BAN sensors.
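A toy sketch of how neighbouring nodes might flag the selective forwarding behaviour just described follows. The counters, the threshold and the node names are illustrative assumptions rather than a published detector:

```python
# Watchdog idea: neighbours count packets handed to a relay node versus
# packets the relay actually forwards, and flag relays whose delivery
# ratio drops below a threshold.
def flag_selective_forwarders(stats, min_ratio=0.8):
    """stats: {node_id: (packets_received, packets_forwarded)}"""
    suspects = []
    for node, (received, forwarded) in stats.items():
        if received and forwarded / received < min_ratio:
            suspects.append(node)
    return suspects

observed = {"relay-1": (200, 198), "relay-2": (200, 91), "relay-3": (0, 0)}
print(flag_selective_forwarders(observed))  # ['relay-2']
```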


Table 3 Identification of communication networks

Asset | Functional description
WBAN (Wireless Body Area Network) | Uses radio-frequency wireless technology; connects nodes together; transmission range of about 2 m between nodes and the network
WSN | Collects external data
Mobile Network | Transmits data over a network (wired or wireless) to a collection hub
Hospital Network | Data sent from sensor devices to the hospital storage over an untrusted internet connection or networks

Table 4 Identification of communication protocols

Protocol | Functional description
Bluetooth | Uses standards such as IEEE 802.15.1 (Bluetooth, for communication between sensors and the receiver device)
ZigBee | The most commonly used standard for WBAN communication, with the advantage of a data transmission rate of up to 3 Mbps
MICS | Medical Implant Communication Service (MICS), designed specifically for WBAN [61]

Location Discovery Threat: Location tracking is one of the key features of BANs and sensor nodes, enabling quick and adequate response to emergency reports. By default, sensor nodes provide location pinging by means of radio frequency, the global positioning system, ultrasound and various other technologies; this poses a privacy threat to patients, since persons with malicious intent could learn their whereabouts at any time.
Denial of Service (DoS): DoS explores different methods to make devices, sensors and other IoT resources unavailable to authorised users (patients, physicians and clinics). The disruption may be temporary or permanent, and it poses a great risk to patients concerning both their privacy and their health, as such disruptions could cause fatal health outcomes. A DoS threat/attack engages sensors and other things in the eHealth network with so many bogus requests that the system cannot withstand the traffic and eventually shuts down. A DoS could be launched using a reflective method, in which sensor nodes are constantly overworked through Simple Network Management Protocol (SNMP) amplification, since a typical SNMP request generates roughly three times the usual amount of traffic targeted at the sensor nodes.
Network Injection: Open access points with unfiltered network traffic are prone to attacks by an attacker who injects a massive amount of reverse-engineered commands targeting routers, intelligent hubs and switches. Unfiltered networks can include broadcast network traffic (e.g. 802.1D, Routing Information Protocol (RIP) and Hot Standby Router Protocol). This threat could lead to an attack that will break down network communication between the patient's sensor nodes or a linked device and the data centre of the healthcare provider.


Session Hijacking: The cookie used for authenticating a user to a remote server during the authentication process is stolen by an attacker, who assumes the identity of the communicating device or sensor of the attacked victim. Session hijacking usually takes over the TCP session between the device and the (authenticating) server and carries on from there. Data sent from the server to the recipient device is received by the attacker instead, which can cause the loss of relevant data, e.g. when a patient is expected to take the next dose of medication or is required to visit a physician. Such relevant updates and information could be missed, which might cause health complications, along with revealing the patient's health record.
Assets Vulnerability Identification: Vulnerability in the IoT healthcare application is classified into hardware, software and technical vulnerabilities [62]. These are weaknesses found in the system architecture of embedded and non-embedded sensor nodes that are exploited by allowing arbitrary executable commands.
Weak Encryption: Most sensors are vulnerable to all sorts of attacks because their small size cannot accommodate the conventional encryption algorithms that would effectively secure them.
Unpatched Systems: Unpatched software presents a security vulnerability for sensor nodes, allowing attackers to launch various sorts of attack through the window the unpatched software presents.
Protocol-Level Vulnerability: This leaves sensor nodes within the WSN and BAN vulnerable to conventional attacks. This sort of vulnerability is most noticeable in the adoption of ZigBee as a communication protocol in the Body Area Network or Wireless Sensor Network.
Node Capturing: This vulnerability makes possible attacks that target sensor nodes in order to collect information transmitted across the various networks and nodes in the healthcare IoT application. It can also be exploited at the storage infrastructure of the healthcare provider's data collection location.
Physical Security: With the use of signal-detecting tools, an attacker who physically gains access or is in proximity to the sensors and their accompanying devices (PCs or mobile phones) can exploit vulnerabilities of the sensors and the communication protocol (usually spoofing to gain knowledge of the open ports used for data transfer).
Privilege Escalation: This sort of vulnerability results from back doors exploited by users of the system or sensors who have the privilege to use certain aspects or components of the sensor architecture to carry out an attack. Mobile devices and PCs that receive readings from sensors can be the vulnerable part of the system topology, as can the communication networks over the internet and a data reservoir (cloud-based storage or a hospital data store) that does not define the boundary an authorised user has clearance to use.
Privacy Evaluation: The privacy requirements of the assets are presented in Table 5.
Risk Analysis and Evaluation: The risk evaluation provides scores for the risk associated with each asset. Asset risks are presented in the form of likelihood and consequence.


Table 5 Assets privacy requirements

Assets | Privacy needs
ADAMM | Location hiding (the location of the user of this sensor must be kept unknown to unauthorised persons); restricted access (data collected must be accessed only by persons authorised by the patient and physician); anonymity preservation; data freshness
HELIUS | Availability (the sensor must be available at all times to take readings of vitals); integrity (data collected must be received at the other end exactly as it was sent)
ITBRA | Availability; integrity
QardioCore | Confidentiality; integrity; availability; anonymity preservation; data freshness
Google Contact Smart Lenses | Confidentiality; integrity; availability
Mobile devices | Integrity; availability; confidentiality; location hiding
CGM | Availability; integrity; restricted access; confidentiality; location hiding
Communication Networks (WBAN, WSN, Mobile Network, Hospital Network) | Non-repudiation; availability; confidentiality
Protocols (ZigBee, Bluetooth, MICS) | Availability; confidentiality

The vulnerabilities and threats identified above are mapped to the assets to which they most apply, based on historical data and the risk profile. The risk score is the mean value of the privacy impact score and the risk impact score (Table 6).
Mitigation: The following measures must be put in place to prevent, control or manage threats in the eHealth deployment of sensor nodes and actuators for the delivery of healthcare services. Table 7 shows the threats and the countermeasures (mitigation) to be employed.

Table 6 Risk analysis and evaluation

Asset | Threat | Vulnerability | Risk score ((pi + ri)/2)
ADAMM | Location discovery, DoS | Unpatched software | pi = 8, ri = 6: 7
HELIUS | Man-in-the-middle, WSN routing | Node capturing, weak encryption, unpatched software | pi = 8, ri = 8: 8
ITBRA | Man-in-the-middle, DoS, physical tampering | Weak encryption | pi = 5, ri = 3: 4
QardioCore | Man-in-the-middle, location discovery, DoS | Unpatched software, weak encryption | pi = 8, ri = 8: 8
Google Contact Smart Lens | DoS, man-in-the-middle | Protocol-level capturing | pi = 5, ri = 5: 5
Mobile devices | Man-in-the-middle, WSN routing, location discovery | Physical security, unpatched system | pi = 7, ri = 8: 7.5
CGM | DoS, man-in-the-middle, location discovery | Privilege escalation, weak encryption algorithm | pi = 8, ri = 8: 8
Communication Networks (WBAN, WSN, Mobile Network, Hospital Network) | WSN routing, DoS, WSN packet routing, session hijacking, network injection | Privilege escalation, physical security, protocol-level capturing | pi = 6, ri = 5: 5.5
Protocols (ZigBee, Bluetooth, MICS) | WSN routing, DoS, man-in-the-middle | Protocol-level capturing | pi = 6, ri = 6: 6



Table 7 Threats mitigation

Threat | Mitigation
Man-in-the-middle attack | The authentication and verification process between the sensor nodes and the accompanying IP-based device (mobile phone, tablet or PC) should employ a client-server authentication process operated by the clinic or healthcare provider acting as the Certificate Authority (CA) required for initiating any communication
WSN routing | DTLS and IPsec should be implemented for mobile devices, and 6LoWPAN (which needs to adapt 802.15.4 link-layer security) for sensor devices connecting to the web directly, to provide end-to-end data security. Data transmitted from a constrained routing network should be secured using a high-security encryption algorithm and integrity checks (implementing a checksum); see the sketch after this table
Location discovery | Location verification algorithms and technologies should be employed to provide secure estimation and discovery of a patient's location when tracking the sensor nodes or the accompanying IP-based device, protecting against unauthorised access
Denial of Service (DoS) | Security measures such as Intrusion Detection Systems (IDS) and services such as NetFlow, Akamai and Radware to monitor traffic flow across the network should be put in place at both the network and transport layers of the BAN, WAN and relay networks
Session hijacking | SSL/TLS session encryption should be deployed and attached to every communication session initiated. A session logout should be implemented for suspicious IP addresses

Report: After carrying out the risk assessment using the proposed PriMa model, the relevant stakeholders involved with the deployment of these sensors should be informed of the risks associated with them. The stakeholders are the users (patients) of the sensors, the healthcare providers, care homes and the manufacturers of the sensors.
Risk Treatment: The risk involved with the application of these sensors will be managed by the patients, who will be in charge of reporting any loss or damage to the healthcare provider. Software issues regarding security and key privacy violations will be relayed to the manufacturers so that they can design more sophisticated updates to curb any threat or noticeable vulnerability. Issues regarding privacy violation risks will be shared between the healthcare providers and the sensor manufacturers.

5 Conclusion
The research began by identifying the problem of the study, outlining the security and privacy challenges associated with the deployment of sensors, actuators, devices and other things in the context of IoT applications, specifically their application in healthcare services such as monitoring disease progression and taking vital signs readings using smart sensors connected to other things. This has greatly improved healthcare delivery, as the application continues to be refined and used by developers and physicians to give patients better health attention as necessary.


According to a survey carried out by APADMI to investigate the challenges involved with the usage of sensor devices for healthcare service delivery, most people consider privacy such a key component of their identity that they feel unsafe taking risks on actions that could violate that privacy, especially by providing information to persons with malicious intent. The proposed PriMa model is a lightweight approach that weighs the trust assigned to devices and things in the IoT paradigm against the security and privacy issues they present; the approach therefore functions on a zero-trust principle to consider privacy and security concerns that have not been paid due attention in other approaches. Since the model is lightweight, it is open to further research to fine-tune its application. The research also observed that the more attention is given to maintaining the privacy of sensor users, the further sensor performance falls below its optimum. This study investigated issues regarding the Identity of Things and zero-trust network principles and found an enduring problem: the need for a more scalable identity management approach that enables sensors without IP capabilities to be identified precisely, and for a network architecture that enables sensors and other devices within its deployment to operate a two-way authentication process in which the parties must always verify each other before any transmission or exchange of data takes place. The model has not been tested in a real-life scenario by experts in the deployment, design and development of sensor nodes and networks in any IoT application; hence, the validation is somewhat partial. Future work will fully validate the implementation of the PriMa lightweight model so as to test it in the real-world application of IoT in healthcare delivery.

References
1. Mainetti, L., Patrono, L., & Vilei, A. (2011). Evolution of wireless sensor networks towards the internet of things: A survey. In Software, telecommunications and computer networks (SoftCOM) (pp. 1–6).
2. Zhang, Z.-K., Cho, M. C., & Shieh, S. (2015). Emerging security threats and countermeasures in IoT. In Proceedings of the 10th ACM symposium on information, computer and communications security (pp. 1–6).
3. Li, X., Lu, R., Liang, X., & Shen, X. S. (2011). Smart community: An internet of things application. IEEE Communication Magazine, 49(11), 68–75.
4. Asghar, M. H., Mohammadzadeh, N., & Negi, A. (2015). Principle application and vision in internet of things (IoT). In International conference on computing, communication and automation (ICCCA) (pp. 427–431). IEEE.
5. Dohr, A. et al. (2010). The internet of things for ambient assisted living. In Seventh international conference on information technology (ICIT) (pp. 804–809).
6. Guo, L., Zhang, C., Sun, J., & Fang, Y. (2012). PAAS: A privacy-preserving attribute-based authentication system for eHealth networks. In 32nd IEEE international conference on distributed computing systems (pp. 224–233).


7. Nukala, R. et al. (2016). Internet of things: A review from 'Farm to Fork'. In 27th Irish signals and systems conference (ISSC) (pp. 1–6). IEEE.
8. Wan, J., et al. (2014). IoT sensing framework with inter-cloud computing capability in vehicular networking. Electronic Commerce Research, 14(3), 389–416.
9. Yaacoub, E., Abdullah, K., & Adnan, A.-D. (2012). Cooperative wireless sensor networks for green internet of things. In Proceedings of the 8th ACM symposium on QoS and security for wireless and mobile networks (pp. 79–80).
10. Yao, X., Chen, Z., & Tian, Y. (2015). A lightweight attribute-based encryption scheme for the Internet of Things. Future Generation Computer Systems, 49, 104–112.
11. Singh, S., & Singh, N. (2015). Internet of Things (IoT): Security challenges, business opportunities and reference architecture for E-commerce. In International conference on green computing and internet of things (ICGCIoT) (pp. 1577–1581). IEEE.
12. Sharmeta, A. F., Hernández-Ramos, J. L., & Moreno, V. M. (2014). A decentralized approach for security and privacy challenges in the Internet of Things. In IEEE world forum on internet of things (WF-IoT) (pp. 67–72).
13. Weber, M., & Boban, M. (2016). Security challenges of the Internet of Things. In International conference on information and communication technology, electronics and microelectronics (MIPRO) (pp. 638–643). IEEE.
14. Francis, T., Madiajagan, M., & Kumar, V. (2015). Privacy issues and techniques in E-health systems. In Proceedings of the 2015 ACM SIGMIS conference on computers and people research (pp. 113–115). ACM.
15. Islam, R. S. M. et al. (2015). The Internet of Things for health care: A comprehensive survey. IEEE Access (pp. 678–708).
16. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
17. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
18. Helwan, A., Idoko, J. B., & Abiyev, R. H. (2017). Machine learning techniques for classification of breast tissue. Procedia Computer Science, 120, 402–410.
19. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
20. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
21. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
22. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM web of conferences (Vol. 16, p. 02004). EDP Sciences.
23. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
24. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022). Application of deep learning in structural health management of concrete structures. In Proceedings of the institution of civil engineers-bridge engineering (pp. 1–8). Thomas Telford Ltd.
25. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622
26. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.
27. Friese, I., Heuer, J., & Kong, N. (2014). Challenges from identities of things: Introduction of the identities of things discussion group within Kantara initiative. In IEEE world forum on internet of things (WF-IoT).

28. Thuan, D. V., Butkus, P., & Thanh, D. V. (2014). A user centric identity management for Internet of Things. In International conference on IT convergence and security (ICITCS) (pp. 1–4). IEEE.
29. Anggorojati, B., Mahalle, P. N., Prasad, N. R., & Prasad, R. (2013). Identity authentication and capability based access control (IACAC) for the internet. Journal of Cyber Security and Mobility, 1(4), 309–348.
30. Liu, C., Zhang, Y., & Zhang, H. (2013). A novel approach to IoT security based on immunology. In 9th international conference on computational intelligence and security (CIS) (pp. 771–775). IEEE.
31. Iyengar, N. S., & Ganapathy, G. (2015). Trilateral trust based defense mechanism against DDoS attacks in cloud computing environment. Cybernetics and Information Technologies, 15(2), 199–140.
32. Borgia, E. et al. (2016). Special issue on "Internet of Things: Research challenges and solutions". Computer Communications.
33. Friess, P. (2013). Internet of Things: Converging technologies for smart environments and integrated ecosystems. River Publishers.
34. Kothmayr, T., et al. (2013). DTLS based security and two-way authentication for the Internet of Things. Ad Hoc Networks, 11(8), 2710–2723.
35. Pohls, H. C., et al. (2014). RERUM: Building a reliable IoT upon privacy- and security-enabled smart objects. In Workshop on IoT communication and technologies (pp. 122–127).
36. Conzon, D., et al. (2012). The VIRTUS middleware: An XMPP based architecture for secure IoT communication. In 21st international conference on computer communications and networks (ICCCN) (pp. 1–6). IEEE.
37. Flood, P., & Schukat, M. (2014). A zero-knowledge-based approach to security for the Internet of Things. In 10th international conference on digital technologies (DT) (pp. 68–72). IEEE.
38. Ning, H., Liu, H., & Yang, L. T. (2013). Cyberentity security in the Internet of Things. Computer, 4, 46–53.
39. Li, F., Zheng, Z., & Jin, C. (2016). Secure and efficient data transmission in the Internet of Things. Telecommunication Systems, 62(1), 111–122.
40. Ulltveit-Moe, N., et al. (2016). Secure information sharing in an industrial Internet of Things. In N. Walliman (ed.), 2011. Your research project, 3rd edn. Sage.
41. Haider, I., Höber, M., & Rinner, B. (2016). Trusted sensors for participatory sensing and IoT applications based on physically unclonable functions. In Proceedings of the 2nd ACM international workshop on IoT privacy, trust, and security (pp. 14–21). ACM.
42. Xin, M. (2015). A mixed encryption algorithm used in internet of things security transmission system. In International conference on cyber-enabled distributed computing and knowledge discovery (CyberC) (pp. 62–65). IEEE.
43. Datta, S. K., Bonnet, C., Da Costa, F. R. P., & Harri, J. (2016). DataTweet: An architecture enabling data-centric IoT services. In IEEE region 10 symposium (TENSYMP).
44. Hongsong, C., Zhongchuan, F., & Dongyan, Z. (2011). Security and trust research in M2M system. In International conference on vehicular electronics and safety (ICVES) (pp. 286–290). IEEE.
45. Doukas, C., et al. (2012). Enabling data protection through PKI encryption in IoT m-health devices. In 12th international conference on bioinformatics & bioengineering (BIBE) (pp. 25–29). IEEE.
46. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th information technology for education and development (ITED) (pp. 1–5). IEEE.
47. Fisher, R., Lyu, M., Cheng, B., & Hancke, G. (2016). Public key cryptography: Feasible for security in modern personal area sensor networks? In IEEE international conference on industrial technology (ICIT) (pp. 2020–2025).
48. Kumar, U., Borgohain, T., & Sanyal, S. (2015). Comparative analysis of cryptography library in IoT.
49. Perera, C., Ranjan, R., & Wang, L. (2015). End-to-end privacy for open big data markets. IEEE Cloud Computing, 2(4), 44–53.

50. Hasan, O., et al. (2013). A discussion of privacy challenges in user profiling with big data techniques: The EEXCESS use case. In IEEE international congress on big data (pp. 25–30).
51. Chabridon, S., et al. (2014). A survey on addressing privacy together with quality of context for context management in the Internet of Things. Annals of Telecommunications, 69(1–2), 47–62.
52. Ni, W., Gu, M., & Chen, X. (2016). Location privacy-preserving k nearest neighbor query under user's preference. Knowledge-Based Systems, 103, 19–27.
53. ISO/IEC. (2011). Information security risk management, technical report ISO/IEC 27005:2011. International Organization for Standardization.
54. Ayyeka, M. N. (2016). Industrial Internet of Things (IoT): Identifying the vulnerabilities of field devices. [Online] Available at: http://www.wateronline.com/doc/industrial-internet-of-things-iot-identifying-the-vulnerabilities-of-field-devices-0001. Accessed 24 April 2023.
55. Eagledream Technologies. (2016). Health care originals. [Online] Available at: http://healthcareoriginals.com/solution/. Accessed 26 March 2023.
56. Proteus Digital Health. (2016). Proteus digital health. [Online] Available at: http://www.proteus.com/careers/. Accessed 26 March 2023.
57. Cyrcadia Health. (2018). Cyrcadia health. [Online] Available at: http://cyrcadiahealth.com/. Accessed 26 March 2023.
58. Qardio. (2019). QardioArm smart blood pressure monitor. [Online] Available at: https://www.getqardio.com/qardioarm-blood-pressure-monitor-iphone-android/. Accessed 22 Feb 2023.
59. Independent. (2021). Google contact lenses: Tech giant licenses smart contact lens technology to help diabetics and glasses wearers. [Online] Available at: http://www.independent.co.uk/life-style/gadgets-and-tech/google-licenses-smart-contact-lens-technology-to-help-diabetics-and-glasses-wearers-9607368.html. Accessed 27 April 2023.
60. Medtronic. (2021). MiniMed paradigm revel insulin pump. [Online] Available at: http://www.medtronicdiabetes.com/products/minimed-revel-insulin-pump. Accessed 27 April 2023.
61. Javaid, N., et al. (2013). Ubiquitous healthcare in wireless body area networks: A survey.
62. Potoczny-Jones, I. (2015). IoT security & privacy: Reducing vulnerabilities. [Online] Available at: http://www.networkcomputing.com/internet-things/iot-security-privacy-reducing-vulnerabilities/807681850. Accessed 28 April 2023.

IoT Security Based Vulnerability Assessment of E-learning Systems Bartholomew Idoko and John Bush Idoko

Abstract In this paper, we identify the possible threat actors, vulnerabilities and risks that can be associated with e-learning systems. The rising number of cyber security incidents on e-learning platforms has posed a serious challenge to developers, education administrators and students/users. It is on this premise that this systematic vulnerability assessment and threat landscape is based, in order to mitigate the potential risks associated with e-learning systems. We carried out a review of some open-source e-learning systems with the highest security risk. The aim is to evaluate the ratings of the impact of loss and liability, to address the security issues/risks, and to propose protective measures for the e-learning platforms that are vulnerable to potential adversaries (cyber attack).
Keywords Threat · IoT · Security · Vulnerability · Risk · E-learning · Impact

1 Introduction
E-learning is the type of learning that makes use of two components: the computer and the internet. It is a formal electronic system of learning that can be carried out in or out of the classroom. Learning in an IoT-based environment is not strange to the formal training system. The advent of the COVID-19 pandemic in 2020 roused institutions' consciousness to the use of ICT to drive knowledge. However, schools and educational institutions were challenged to provide IoT-secured learning platforms that are convenient and effective for students and school administrators to exploit. E-learning is powered by the use of cyber tools and resources
B. Idoko (B) Center for Cyber Space Studies, Nasarawa State University, Keffi, Nigeria e-mail: [email protected]
J. B. Idoko Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, Nicosia 99138, Turkey e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things in Education, Studies in Computational Intelligence 1115, https://doi.org/10.1007/978-3-031-42924-8_15


such as application software, mobile phones, data, workstations and internet connections, among other electronic devices. Typical e-learning platforms include Coursera, Udemy, edX, Moodle, Codecademy and several open universities across the world [1–4]. It is pertinent to note that the post-pandemic era has witnessed several institutions of learning migrating to e-learning, or exploiting e-learning as an alternative to the conventional classroom method of studying through physical contact with teachers, instructors, lecturers or trainers, and this has impacted positively on all parties and stakeholders. The Ministry of Electronics and Information Technology endorsed the electronic system of learning as an essential tool to drive knowledge [5]. With its several merits and demerits, the Internet of Things (IoT) has enhanced fast access and links to knowledge and information. On the other hand, IoT has created a lot of openings that are potential inherent vulnerabilities to cyber attack in e-learning platforms [7]. The importance of e-learning in the education sector cannot be overemphasized:
Flexibility: An IoT-based learning environment supports every student's needs. Students can access content multiple times until they are satisfied. Secondly, the location of students is irrelevant to the platform, as what matters is the student's ability to connect to the e-learning system.
Open and accessible: The system is usually open and running at all times. This attribute makes access easier for students compared to the classroom method of learning. The issue of availability has been largely erased, as the system remains accessible to students even during examinations; hence it has the advantage of being convenient to access at all times.
Modernized and up-to-date knowledge: The IoT-based learning platform is designed to provide students with up-to-date versions of training and learning materials that meet international standards. This often encourages learners across the globe to exploit the opportunity of e-learning.
Fast transfer of knowledge: Most students have adopted the e-learning system in the post-COVID-19 era as more effective and reliable because it allows for fast transfer of knowledge, such that program schedules are unaltered. One of the characteristics of e-learning is speedy delivery of, and access to, lessons; hence the time required to complete a training module is reduced drastically, by 35%.
Versatility: New knowledge and ideas can be gained through e-learning, as students and teachers have the opportunity to interact with all the resources available online. Therefore, new skills can be gained beyond the ones in the institution's curriculum, and both teachers and students are exposed to developing new skills.


Since e-learning can only operate in the cyber space, the system is prone to attack, as there could be many vulnerabilities, including zero-day vulnerabilities, that put the system at risk. Therefore, it becomes important to secure the content, the services and the information of both the users and the system administrator. Security is the absence of, or resistance to, potential harm (unwanted coercive change) from external forces [6]. Humans and social groupings, objects and institutions, ecosystems, and any other thing or phenomenon sensitive to undesirable change in its surroundings are all potential beneficiaries (referents) of security. This paper emphasizes the vulnerability assessment (ratings) of e-learning platforms as IoT-enabled systems.

2 E-learning Vulnerabilities
The flaws and exposures of e-learning systems that could be exploited by adversaries are shown in Table 1.

Table 1 E-learning vulnerabilities

V1: Injection
V2: Cross-Site Scripting (XSS)
V3: Broken Authentication and Session Management
V4: Insecure Direct Object References
V5: Cross-Site Request Forgery (CSRF)
V6: Security Misconfiguration
V7: Insecure Cryptographic Storage
V8: Failure to Restrict URL Access
V9: Insufficient Transport Layer Protection
V10: Unvalidated Redirects and Forwards

V1—Injection: Adversaries hide under the guise of data interchange to send a malware-infected file, in the form of an SQL, OS or LDAP injection, to the victim system (interpreter) as a program or query. The hackers ensure the interpreter is deceived into accessing the infected program, commands or files for malicious purposes [8] (an illustrative sketch follows at the end of this section).
V2—Cross-Site Scripting (XSS): This is a breach of the cyber security goals of authentication and authorization [9]. An XSS vulnerability exists whenever compromised data is transferred to a web browser without authentication and/or authorization. This enables hackers to execute scripts in the host browser for malicious purposes such as session hijacking, website alteration or social engineering.
V3—Broken Authentication and Session Management: Malicious parties take advantage of errors due to incorrect implementation of software functions and workflow administration to compromise passwords, tokens, keys and identities, as well as exploiting flaws in program and software design, testing and implementation.
V4—Insecure Direct Object References: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a


file, directory or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.
V5—Cross-Site Request Forgery (CSRF): A CSRF flaw allows the host browser to send a forged HTTP request, including pop-ups, cookies and updates, to another flawed web application. The essence is for the hacker to force the host or victim's browser to generate requests that the flawed application treats as legitimate.
V6—Security Misconfiguration: Wrong parameter configuration of systems and software creates room for flaws and backdoors for adversaries to exploit. Reliable security entails deploying well-configured systems with strong security features and settings that are implementable, maintainable and flexible to updates [10–20]. It is not advisable to rely on the default security settings of applications, servers, routers and networks.
V7—Insecure Cryptographic Storage: Hackers often steal or reconfigure vulnerable private credentials for malicious purposes. This flaw, a result of unsecured web applications, places tools for online transactions at severe risk. Hence, sensitive information requires proper protection in the form of encryption or hashing.
V8—Failure to Restrict URL Access: Access control is an important component of checking vulnerability. Most web applications are designed to authenticate URL access rights before granting access to web pages. This protects the web applications and prevents hackers from cloning URLs in order to query the hidden pages.
V9—Insufficient Transport Layer Protection: Flaws associated with the failure of applications to always authenticate, encrypt, and secure the basic security goals of confidentiality, integrity, authenticity, availability, non-repudiation and trust for sensitive data and network traffic can be exploited by attackers. This kind of vulnerability occurs as a result of weak algorithms, expired or fake certificates, or misapplication of the software.
V10—Unvalidated Redirects and Forwards: Hackers exploit the window of redirects and forwards to mislead victims to malicious sites, as well as to access unauthorized sites. Sufficient validation of websites is necessary to ensure that users visit secured sites at all times.
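To make V1 concrete, a short sketch follows, using the sqlite3 module from the Python standard library; the table and the payload are illustrative stand-ins for an e-learning user store rather than any particular platform's schema:

```python
# Building a query by string concatenation lets attacker-supplied input
# rewrite the SQL, while a parameterized query keeps it as data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'student'), ('bob', 'admin')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: the payload turns the WHERE clause into a tautology,
# returning every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(rows))  # 2 -- data leaked

# Safe: the driver binds the payload as a literal value, matching nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0
```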

3 E-learning Vulnerability Assessment
In this section, we analyze vulnerability assessment and ratings as they relate to e-learning platforms [21]. Vulnerability assessment deals with the potential impact of loss from a successful incident, as well as the weakness of the system to an incident. The impact of loss is the degree to which a successful exploit harms the system, considering the indicators of compromise. Vulnerabilities are examined by taking into cognizance the threats and exposures of the platform that could be attacked or affected, or analyzed by considering multiple potential specific sequences of events, that is, a scenario-based approach.


An important aspect of vulnerability assessment is accurately highlighting the ratings for impact of loss and liability. This procedure may differ from one e-learning platform to another. We shall base our analyses on two axes: impact of loss and target attractiveness.
Impact of loss is rated as follows:
Devastating: The system is destroyed or compromised beyond recovery.
Severe: The system is slightly destroyed or compromised.
Noticeable: The system is temporarily shut down or service is denied, with the expectation of short-term recovery and a return to steady uptime. The disruption in this case should not exceed 24 h.
Minor: The impact of the attack on the system is insignificant, with service disruption of less than four hours and no loss of critical information infrastructure.
Target attractiveness is the measure of the criticality of the infrastructure as viewed by the attacker, and can be determined by the function the infrastructure performs in the organization:
Very High: A highly rated infrastructure that is very attractive to potential threat actors, with loose indicators of attack and compromise.
High: A high-profile and highly useful system asset that provides an attractive target, with an exposed attack surface and insufficient defence, such that it could be exploited by an adversary.
Average: An average-profile system that presents a potential target and/or for which the protection availed by the existing protection layer is less than sufficient.
Low: A below-average system that presents a potential target and/or for which the level of resilience and/or protection provided by the existing protection layer is adequate.
The elements of e-learning vulnerability assessment can be categorized into the following important stages [22] (Tables 2, 3, 4, 5 and 6); Table 7 shows the assessment summary score/rating key:
Level of exposure;
Criticality of attack surface;
External impact;
Number of future users;
Potential extent of damage.

Table 2 Assessment of the visibility of the weakness/flaw

Level of exposure | Rating value
Invisible | 0
Very low visibility | 1
Low visibility | 2
Medium visibility | 3
High visibility | 4
Very high visibility | 5

Table 3 Assessment of the importance of the system to the users and state actors

Criticality of attack surface | Rating value
No usefulness | 0
Minor usefulness | 1
Moderate usefulness | 2
Significant usefulness | 3
Highly useful | 4
Critical | 5

Table 4 Assessment of external impact/effect of damage

External impact | Rating value
None | 0
Very low | 1
Low | 2
Medium | 3
High | 4
Very high | 5

Table 5 Assessment of the system users at any given time

Number of future users | Rating value
0 | 0
1–500 | 1
501–10,000 | 2
10,001–20,000 | 3
20,001–100,000 | 4
100,001+ | 5

Table 6 Assessment of the potential impact of damage

Potential for impact of damage | Rating value
0–100 | 0
101–500 | 1
501–5000 | 2
5001–20,000 | 3
20,001–100,000 | 4
100,001+ | 5


Table 7 Assessment summary score/rating key

Summary score (sum of the ratings for visibility, criticality, external impact, number of future users, and potential extent of damage) | Basic target vulnerability assessment rating key
0–2 pts | 1
3–5 pts | 2
6–8 pts | 3
9–11 pts | 4
12–14 pts | 5
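A minimal sketch of the summary scoring implied by Tables 2-7 follows. The example ratings are hypothetical, and note that the published key only covers totals up to 14 points, so higher sums are clamped to the top band here purely by assumption:

```python
# Rate each element on its 0-5 scale, sum the ratings, and map the total
# through the rating key in Table 7.
RATING_KEY = [(0, 2, 1), (3, 5, 2), (6, 8, 3), (9, 11, 4), (12, 14, 5)]

def summary_rating(visibility, criticality, external_impact,
                   future_users, extent_of_damage):
    total = (visibility + criticality + external_impact
             + future_users + extent_of_damage)
    for low, high, rating in RATING_KEY:
        if low <= total <= high:
            return rating
    return 5  # assumption: totals beyond the published 12-14 band rate highest

# e.g. a highly visible, critical platform serving 501-10,000 users
print(summary_rating(visibility=4, criticality=5, external_impact=3,
                     future_users=2, extent_of_damage=3))  # total 17 -> 5
```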

4 Vulnerability Risk Space and Threat Landscape of the E-learning System
The probability of a cyber incident exploiting a weakness to cause damage to the learning system poses a risk [23]. When an attack surface is exploited, there must be an impact. In the context of e-learning IoT, the impact is the compromise of the six cybersecurity goals of availability, integrity, confidentiality, authenticity, non-repudiation and trust. Income as well as time may be lost, as shown in Fig. 1. As shown in Fig. 2 below, the system is at high risk because of its level of exposure to threat actors. The security controls implemented on e-learning platforms do not guarantee total protection: attackers can exploit the flaws or weaknesses in the system to cause harm, with adverse effects on both technical and business operations.

Fig. 1 Vulnerability risk space (security functions such as authentication, identity preservation, non-repudiation, input and output handling, session management, access control and segregation between users, mapped across the application, business and backend layers against confidentiality, integrity and availability)

Fig. 2 Threat landscape (threat agents follow attack vectors to security weaknesses; security controls stand between the flaws and the resulting technical impacts on data and functions, which in turn drive the business impacts)

5 Conclusion
The paper identified some vulnerabilities and threats associated with IoT, using e-learning platforms as a case study. A comprehensive study of the vulnerability assessment was carried out. Although e-learning systems are designed to be secure, several factors can put a system at risk, such as man-in-the-middle attacks (the physical security of the web server), security updates (firewall, patches, antivirus, etc.) and the system configuration, among others. These established factors have created loopholes (vulnerabilities) for potential adversaries. The vulnerability assessment ratings, using elements of the system such as level of visibility, criticality of attack surface and impact factor among other ratings, show a high exposure of the system to cyber threats. Consequently, in developing e-learning systems, all standard procedures for a well-secured system must be followed. Workable security functions such as authentication, access control, session management, encryption and non-repudiation must be implemented. The system administrator should ensure that data transfer between users and content operators takes place over encrypted SSL channels through the web administrator interface. E-learning system security should integrate all the features of cyber security for it to be robust and effective for instructors and students. We recommend that further assessment of e-learning systems should focus on their zero-day vulnerabilities.

References

1. Costinela-Luminiţa, C. (Defta), & Nicoleta-Magdalena, C. (Iacob). (2012). Procedia - Social and Behavioral Sciences, 46, 2297–2301.
2. Defta, L. (2011). Information security in E-learning platforms. In Proceedings of the 3rd World Conference on Educational Sciences, Istanbul, Turkey (pp. 2689–2693).


3. https://leadschool.in/blog/growing-importance-of-e-learning-in-the-21st-century/. Accessed 21 February, 2023.
4. https://www.learnworlds.com/online-learning-platforms/. Accessed 21 February, 2023.
5. Weber, M., & Boban, M. (2016). Security challenges of the Internet of Things. In International Conference on Information and Communication Technology, Electronics and Microelectronics (MIPRO) (pp. 638–643). IEEE.
6. Idoko, B., Idoko, J. B., Kazaure, Y. Z. M., Ibrahim, Y. M., Akinsola, F. A., & Raji, A. R. (2022). IoT based motion detector using Raspberry Pi gadgetry. In 2022 5th Information Technology for Education and Development (ITED) (pp. 1–5). IEEE.
7. Flood, P., & Schukat, M. (2014). A zero-knowledge-based approach to security for the Internet of Things. In 10th International Conference on Digital Technologies (DT) (pp. 68–72). IEEE.
8. 2018 International Conference on Computational and Characterization Techniques in Engineering & Sciences (CCTES). IEEE. https://doi.org/10.1109/CCTES.2018.8674115
9. Cross Site Request Forgery (CSRF). OWASP Foundation. (n.d.). https://owasp.org/www-community/attacks/csrf. Accessed 20 February, 2023.
10. Abiyev, R., Arslan, M., Bush Idoko, J., Sekeroglu, B., & Ilhan, A. (2020). Identification of epileptic EEG signals using convolutional neural networks. Applied Sciences, 10(12), 4089.
11. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
12. Abiyev, R. H., Arslan, M., & Idoko, J. B. (2020). Sign language translation using deep convolutional neural networks. KSII Transactions on Internet & Information Systems, 14(2).
13. Sekeroglu, B., Abiyev, R., Ilhan, A., Arslan, M., & Idoko, J. B. (2021). Systematic literature review on machine learning and student performance prediction: Critical gaps and possible remedies. Applied Sciences, 11(22), 10907.
14. Idoko, J. B., Arslan, M., & Abiyev, R. (2018). Fuzzy neural system application to differential diagnosis of erythemato-squamous diseases. Cyprus Journal of Medical Sciences, 3(2), 90–97.
15. Ma'aitah, M. K. S., Abiyev, R., & Bush, I. J. (2017). Intelligent classification of liver disorder using fuzzy neural system. International Journal of Advanced Computer Science and Applications, 8(12).
16. Bush, I. J., Abiyev, R., Ma'aitah, M. K. S., & Altıparmak, H. (2018). Integrated artificial intelligence algorithm for skin detection. In ITM Web of Conferences (Vol. 16, p. 02004). EDP Sciences.
17. Bush, I. J., Abiyev, R., & Arslan, M. (2019). Impact of machine learning techniques on hand gesture recognition. Journal of Intelligent & Fuzzy Systems, 37(3), 4241–4252.
18. Uwanuakwa, I. D., Idoko, J. B., Mbadike, E., Reşatoğlu, R., & Alaneme, G. (2022, May). Application of deep learning in structural health management of concrete structures. In Proceedings of the Institution of Civil Engineers - Bridge Engineering (pp. 1–8). Thomas Telford Ltd.
19. Helwan, A., Dilber, U. O., Abiyev, R., & Bush, J. (2017). One-year survival prediction of myocardial infarction. International Journal of Advanced Computer Science and Applications, 8(6). https://doi.org/10.14569/IJACSA.2017.080622
20. Bush, I. J., Abiyev, R. H., & Mohammad, K. M. (2017). Intelligent machine learning algorithms for colour segmentation. WSEAS Transactions on Signal Processing, 13, 232–240.
21. Open Vulnerability Assessment Scanner. OpenVAS. (n.d.). https://www.openvas.org/. Accessed 21 February, 2023.
22. E-learning platforms security issues and vulnerability analysis. (n.d.). https://doi.org/10.1109/cctes.2018.8674115
23. Vulnerability Metrics. National Vulnerability Database, NVD. (n.d.). https://nvd.nist.gov/vuln-metrics/cvss

Blockchain Technology, Artificial Intelligence, and Big Data in Education Ramiz Salama and Fadi Al-Turjman

Abstract The way we approach learning and teaching may be completely changed if blockchain technology, artificial intelligence, and big data are integrated into the educational system. While artificial intelligence can tailor and improve the learning experience for each student, blockchain technology can offer safe and open record-keeping for educational institutions, and big data can be utilized to evaluate and enhance many facets of the educational process, including resource allocation and curriculum creation. This study investigates how big data, artificial intelligence, and blockchain are currently being used in education and evaluates their prospective effects on the educational system. To do so, we reviewed prior studies on the subject and examined case studies of effective uses of these technologies in education; to gain further insight into the benefits and challenges of integrating blockchain, artificial intelligence, and big data into education, we also spoke in person with educators and industry leaders. According to our research, the use of big data, AI, and blockchain in education has the potential to increase the effectiveness, equity, and customization of the educational system. There are obstacles to be overcome, too, such as the requirement for the right infrastructure and the potential for data privacy issues. Overall, this study advances knowledge of the possibilities and restrictions of utilizing blockchain, artificial intelligence, and big data in education and offers suggestions for future study and application.

R. Salama (B) Department of Computer Engineering, AI and Robotics Institute, Research Center for AI and IoT, Near East University, Nicosia, Mersin 10, Turkey e-mail: [email protected] F. Al-Turjman Department of Artificial Intelligence and Data Science, AI and Robotics Institute, Near East University, Nicosia, Mersin 10, Turkey Research Center for AI and IoT, Faculty of Engineering, University of Kyrenia, Kyrenia, Mersin 10, Turkey F. Al-Turjman e-mail: [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things in Education, Studies in Computational Intelligence 1115, https://doi.org/10.1007/978-3-031-42924-8_16


Keywords Big data · Artificial intelligence · Blockchain · Record-keeping · Data analytics

1 Introduction

With the advent of new tools and platforms that have the potential to change how we approach learning and teaching, the use of technology in education has progressed quickly in recent years. One area that has attracted a great deal of interest is the incorporation of blockchain technology, artificial intelligence, and big data into the educational system.

Blockchain technology is a decentralized and secure method of storing and transferring data, initially created for the Bitcoin ecosystem. It has many applications in education, including the creation of transparent and secure records for grades, certifications, and transcripts. Artificial intelligence, in turn, has the capacity to optimize and tailor each student's educational experience: by analyzing data on students' learning preferences, progress, and performance, AI can deliver individualized feedback and recommendations, adapt the curriculum to the needs of each student, and pinpoint areas where further support may be required. Big data, the collection and analysis of massive datasets, can also be useful in education by offering insights into various parts of the educational process; it can be used, for instance, to analyze student performance and spot trends and patterns that can guide curriculum development and the distribution of resources.

Together, big data, AI, and blockchain have the potential to enhance the effectiveness, equity, and personalization of the educational system, although obstacles remain, such as the requirement for the right infrastructure and the possibility of data-protection issues [1–6]. In this study, we examine the current state of blockchain, AI, and big data utilization in education and evaluate their possible effects on the educational system. To do so, we review prior research on the subject, analyze case studies of effective uses of these technologies in education, and interview educators and subject-matter experts about the opportunities and challenges of integrating these technologies into education. Our findings advance knowledge of the potential and restrictions of utilizing blockchain, artificial intelligence, and big data in education and offer suggestions for future study and application. A minimal illustration of the record-keeping idea follows.
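The sketch below illustrates the tamper-evident record-keeping idea using a simplified hash-linked ledger rather than any production blockchain; the class, method names, and sample data are hypothetical, and a real system would add consensus, signatures, and distribution across nodes.

```python
# A minimal sketch of tamper-evident credential record-keeping, assuming a
# simplified hash-linked ledger (not a production blockchain). Any later edit
# to a stored grade or credential breaks a hash link and becomes detectable.
import hashlib
import json
import time

def digest(block: dict) -> str:
    # Deterministic SHA-256 fingerprint of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class CredentialLedger:
    def __init__(self):
        self.chain = [{"index": 0, "record": "genesis", "prev": "0" * 64}]

    def add_credential(self, student_id: str, credential: str) -> None:
        # Each new block stores the hash of its predecessor, linking the chain.
        self.chain.append({
            "index": len(self.chain),
            "record": {"student": student_id, "credential": credential,
                       "issued": time.time()},
            "prev": digest(self.chain[-1]),
        })

    def verify(self) -> bool:
        # Recompute every link; an edit to an earlier block breaks a link.
        return all(block["prev"] == digest(self.chain[i])
                   for i, block in enumerate(self.chain[1:]))

ledger = CredentialLedger()
ledger.add_credential("s-1042", "BSc Computer Engineering, 2023")
ledger.add_credential("s-1043", "MSc Artificial Intelligence, 2023")
print(ledger.verify())                                  # True
ledger.chain[1]["record"]["credential"] = "PhD, 2023"   # tamper with a record
print(ledger.verify())                                  # False: tampering detected
```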


2 Extent of Past Work

In recent years, there has been a growing body of research on the application of big data, AI, and blockchain in education. Studies on blockchain have concentrated on the advantages of employing this technology for credentialing and educational record-keeping. For instance, a study by Kim et al. looked at how blockchain can be used to create transparent and safe transcripts, credentials, and grade records in higher education; it found that blockchain has the potential to increase the fairness and efficiency of the credentialing process while lowering the likelihood of fraud and mistakes. Other studies have looked into using blockchain to build decentralized and secure platforms for online learning and to track and confirm the legitimacy of educational resources [7].

The potential of artificial intelligence to tailor and improve the learning experience has also drawn attention in education research. A study by Li et al. examined AI-powered individualized learning platforms in primary and secondary schools and found that these platforms could meet the needs of different learners and had a beneficial influence on student learning outcomes and engagement. Other studies have explored the use of AI for identifying and filling learning gaps and for giving students individualized feedback and guidance.

Regarding big data, research has concentrated on using data analytics to enhance various facets of the educational process. For instance, a study by Sun et al. looked at how big data analytics can be used in higher education to improve curriculum development and resource allocation; it found that, by identifying trends and patterns in student performance and needs, big data has the potential to improve the effectiveness and efficiency of the educational process. Other studies have investigated the use of big data to forecast student performance and retention and to identify best practices in teaching and learning [3].

Overall, the research on the application of big data, AI, and blockchain in education indicates that these technologies can significantly improve the effectiveness, fairness, and personalization of education, although obstacles remain, such as the requirement for the right infrastructure and the possibility of data privacy issues [8–12].

3 Materials and Procedures

To explore the current status of these technologies in education and to estimate their potential impact on the educational system, we conducted a literature review of prior research, reviewed case studies of successful implementations of these technologies in education, and spoke in person with educators and industry leaders about the benefits and challenges of integrating blockchain, artificial intelligence, and big data into education.


For the literature review, we used a variety of online databases, including Google Scholar and the Education Resources Information Center (ERIC), to look for pertinent papers and research published in the previous three years (2020–2022). We searched using the terms "blockchain," "artificial intelligence," "big data," and "education." In all, we examined more than 100 publications and studies and selected a sample of 15 that satisfied our inclusion requirements: a focus on the application of blockchain, artificial intelligence, and big data in education, and a publication date no older than three years. For the case studies, we selected five effective applications of big data, AI, and blockchain in education, chosen for their relevance to the research subject and their clear effect on education; we compiled information on them through web research, including articles, reports, and conversations with the relevant parties. For the interviews, we conducted semi-structured interviews with a sample of 10 educators and subject-matter experts in educational technology. The interviews were held by video conference, recorded, and transcribed for analysis; an interview guide maintained consistency across interviews and covered a range of issues, including the current state of blockchain, AI, and big data in education, the advantages and difficulties of their use, and suggestions for further study and application. To examine the information from the literature review, case studies, and interviews, we used thematic analysis, a technique that involves identifying and coding themes in the data, supported by NVivo, a qualitative data analysis program. The findings of this analysis, presented in the next section, point to educational record-keeping and credentialing as a leading use of blockchain [7], to personalized learning platforms as the principal application of AI, and to data analytics for curriculum design and resource allocation as the main use of big data [3]; the interviews corroborated these themes and underscored the need for further empirical study and attention to ethical considerations [13–24].

4 Results and Discussion

According to the study's findings, big data, artificial intelligence, and blockchain technology have the potential to significantly improve the effectiveness, fairness, and personalization of education.

One key advantage noted in the research and case studies is the potential of these technologies to increase the efficiency of the educational process. For instance, using blockchain for school credentialing and record-keeping can reduce the need for manual procedures and lower the risk of fraud and errors. Likewise, AI-powered individualized learning platforms can improve each student's learning experience, replacing one-size-fits-all methods and boosting instructional effectiveness, while big data analytics can inform curriculum design and resource allocation, resulting in a more effective and efficient use of resources.

A second advantage is the potential of these tools to increase educational equity. Blockchain technology, for instance, offers a transparent and secure way of storing and sharing data, lowering the possibility of bias and discrimination. AI-powered personalized learning systems can meet the needs of diverse learners and help guarantee that all students have access to an equitable educational experience, and big data analytics can be used to identify and rectify inequities in student achievement and resource access, leading to more equitable outcomes.

A third benefit is the capacity of these technologies to tailor the educational experience to each student. AI-powered personalized learning platforms can give students individualized feedback and support, allowing the curriculum to be adjusted to each student's requirements and preferences, while big data analytics can support the construction of individualized learning pathways and interventions, resulting in more targeted and efficient instruction. A toy illustration of such trend analysis appears below.
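To make the trend-analysis point concrete, here is a small sketch with hypothetical data, column names, and a hypothetical mastery threshold: aggregating student results by topic to flag areas where curriculum revision or extra resources may be warranted.

```python
# A toy sketch (hypothetical data and threshold) of the trend analysis the
# chapter describes: aggregate student results by topic and flag weak topics
# that could guide curriculum revision and resource allocation.
import pandas as pd

scores = pd.DataFrame({
    "student": ["a", "a", "b", "b", "c", "c"],
    "topic":   ["algebra", "reading", "algebra", "reading", "algebra", "reading"],
    "score":   [41, 78, 45, 82, 50, 75],
})

by_topic = scores.groupby("topic")["score"].agg(["mean", "std"])
weak = by_topic[by_topic["mean"] < 60]   # assumed mastery threshold of 60
print(weak)  # algebra is flagged -> allocate extra teaching resources there
```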


The study's findings did, however, also highlight several obstacles to using these technologies in education. One is the need for suitable infrastructure to support their deployment: reliable and secure networks, and a competent workforce to administer and maintain the systems. Another is the potential for data privacy issues and the need to ensure that personal data is collected, stored, and used responsibly and ethically. The use of blockchain for educational record-keeping and credentialing is one concrete application that emerged from the literature and case studies: studies have examined the possibility of using blockchain to create secure and transparent records of credentials, grades, and transcripts [7], finding that blockchain offers a decentralized and secure method of data storage and transfer, lowering the risk of errors and fraud while improving the effectiveness and fairness of the credentialing process. The research and case studies also identified both advantages and difficulties in the use of artificial intelligence in education. On the one hand, AI-powered tailored learning systems have been shown to improve students' learning outcomes and engagement: by adapting the curriculum to the individual requirements and preferences of each student and accommodating diverse learners, these platforms can offer individualized feedback and support. On the other hand, obstacles remain, such as the need for proper infrastructure and the possibility of bias in the algorithms employed. As for big data, several studies have focused on using data analytics to enhance different areas of the educational process; this research suggests that using big data to uncover trends and patterns in student performance and needs, and to drive curriculum design and resource allocation, can improve the efficiency and efficacy of education, although infrastructure requirements and data privacy concerns again pose barriers [3]. Interviews with educators and subject-matter experts offered additional perspectives that corroborated these potential advantages and difficulties, while emphasizing the need for more empirical study to evaluate impact and for ethical issues to be weighed in application [7, 25–29]. Overall, the findings indicate that although blockchain, artificial intelligence, and big data have the potential to significantly improve the educational system, certain obstacles remain; the paper therefore recommends further empirical research and careful consideration of ethical and societal consequences in order to address these issues and realize the full potential of these technologies.


5 Conclusion

In conclusion, the adoption of big data, artificial intelligence, and blockchain technology in the classroom has the potential to completely change how we approach teaching and learning. By offering secure and transparent record-keeping, personalizing learning opportunities for students, and informing curriculum development and resource allocation, these technologies can enhance efficiency, fairness, and personalization in the educational system. Their use in education is not without difficulties, however: the right infrastructure is required to facilitate their use, and data privacy issues must be carefully considered. Further empirical study is also required to evaluate how these technologies affect student learning outcomes and the teaching process, and to pinpoint the best ways to use them. Big data, AI, and blockchain technology have the potential to significantly improve education, but it is crucial to weigh the risks and drawbacks of using them; future study and implementation should address these issues so that the potential of these technologies to improve the educational system can be fully realized.

References

1. Pablo, R. G. J., Roberto, D. P., Victor, S. U., Isabel, G. R., Paul, C., & Elizabeth, O. R. (2022). Big data in the healthcare system: A synergy with artificial intelligence and blockchain technology. Journal of Integrative Bioinformatics, 19(1).
2. Park, J. (2022). From Big Data to blockchain: Promises and challenges of an all-encompassing technology in education. Digital Communication and Learning, 383–397.
3. Zhou, M., Matsika, C., Zhou, T. G., & Chawarura, W. I. (2022). Artificial intelligence and blockchain as disruptive technologies in adolescent lives. In Impact and role of digital technologies in adolescent lives (pp. 243–254). IGI Global.
4. Jang, J., & Kyun, S. (2022). An innovative career management platform empowered by AI, big data, and blockchain technologies: Focusing on female engineers. Webology, 19(1), 4317–4334.
5. Bucea-Manea-Țoniș, R., Kuleto, V., Gudei, S. C. D., Lianu, C., Lianu, C., Ilić, M. P., & Păun, D. (2022). Artificial intelligence potential in higher education institutions enhanced learning environment in Romania and Serbia. Sustainability, 14(10), 5842.
6. Rajagopal, B. R., Anjanadevi, B., Tahreem, M., Kumar, S., Debnath, M., & Tongkachok, K. (2022, April). Comparative analysis of blockchain technology and artificial intelligence and its impact on open issues of automation in workplace. In 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE) (pp. 288–292). IEEE.
7. Zhang, H., Lee, S., Lu, Y., Yu, X., & Lu, H. (2023). A survey on Big Data technologies and their applications to the metaverse: Past, current and future. Mathematics, 11(1), 96.
8. Soldatos, J., & Kyriazis, D. (2022). Big Data and artificial intelligence in digital finance: Increasing personalization and trust in digital finance using Big Data and AI.
9. Sharma, P., Shah, J., & Patel, R. (2022). Artificial intelligence framework for MSME sectors with focus on design and manufacturing industries. Materials Today: Proceedings.
10. Ma, Y. (2022, January). Accounting talent training reform in the era of artificial intelligence. In 4th International Seminar on Education Research and Social Science (ISERSS 2021) (pp. 282–286). Atlantis Press.


11. Muheidat, F., Patel, D., Tammisetty, S., Lo'ai, A. T., & Tawalbeh, M. (2022). Emerging concepts using blockchain and Big Data. Procedia Computer Science, 198, 15–22.
12. Sahu, L. K., Vyas, P. K., Soni, V., & Deshpande, A. (2022). Survey of recent studies on healthcare technologies and computational intelligence approaches and their applications. In Computational intelligence and applications for pandemics and healthcare (pp. 282–307). IGI Global.
13. Kuleto, V., Bucea-Manea-Țoniș, R., Bucea-Manea-Țoniș, R., Ilić, M. P., Martins, O. M., Ranković, M., & Coelho, A. S. (2022). The potential of blockchain technology in higher education as perceived by students in Serbia, Romania, and Portugal. Sustainability, 14(2), 749.
14. Kuzior, A., & Sira, M. (2022). A bibliometric analysis of blockchain technology research using VOSviewer. Sustainability, 14(13), 8206.
15. El Samad, M., El Nemar, S., & El-Chaarani, H. (2023). Blockchain and Big Data for a smart healthcare model. In Handbook of research on artificial intelligence and knowledge management in Asia's digital economy (pp. 64–80). IGI Global.
16. Chaka, C. (2023). Fourth industrial revolution—A review of applications, prospects, and challenges for artificial intelligence, robotics and blockchain in higher education. Research and Practice in Technology Enhanced Learning, 18.
17. Zhang, J., & Haleem, S. (2023). Application of blockchain technology in the construction of MOOC digital communication platform. In International conference on smart technologies and systems for internet of things (pp. 564–573). Springer, Singapore.
18. Liu, Y., & Chen, M. (2023). The knowledge structure and development trend in artificial intelligence based on latent feature topic model. IEEE Transactions on Engineering Management.
19. Liu, W. (2023, January). Research on the innovation and application of big data in higher education. In Third International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI 2022) (Vol. 12509, pp. 259–264). SPIE.
20. Pu, S., & Lam, J. S. L. (2023). The benefits of blockchain for digital certificates: A multiple case study analysis. Technology in Society, 72, 102176.
21. Su, N. (2023). Research on Big Data empower epidemic governance based on knowledge map. In Proceedings of the World Conference on Intelligent and 3-D Technologies (WCI3DT 2022) (pp. 557–567). Springer, Singapore.
22. Andronie, M., Lăzăroiu, G., Karabolevski, O. L., Ștefănescu, R., Hurloiu, I., Dijmărescu, A., & Dijmărescu, I. (2023). Remote Big Data management tools, sensing and computing technologies, and visual perception and environment mapping algorithms in the internet of robotic things. Electronics, 12(1), 22.
23. Polas, M. R. H., Ahamed, B., & Rana, M. M. (2023). Artificial intelligence and blockchain technology in the 4.0 IR metaverse era: Implications, opportunities, and future directions. In Strategies and opportunities for technology in the metaverse world (pp. 13–33). IGI Global.
24. Nalbant, K. G., & Aydin, S. (2023). Development and transformation in digital marketing and branding with artificial intelligence and digital technologies dynamics in the metaverse universe. Journal of Metaverse, 3(1), 9–18.
25. Tomar, P., Bhardwaj, H., Sharma, U., Sakalle, A., & Bhardwaj, A. (2023). Transformation of higher education system using blockchain technology. In Applications of blockchain and Big IoT systems (pp. 499–524). Apple Academic Press.
26. Aseeri, M., & Kang, K. (2023). Organisational culture and big data socio-technical systems on strategic decision making: Case of Saudi Arabian higher education. Education and Information Technologies, 1–26.
27. Sasikumar, A., Vairavasundaram, S., Kotecha, K., Indragandhi, V., Ravi, L., Selvachandran, G., & Abraham, A. (2023). Blockchain-based trust mechanism for digital twin empowered Industrial Internet of Things. Future Generation Computer Systems, 141, 16–27.


28. Rao, K. P., & Manvi, S. (2023). Survey on electronic health record management using amalgamation of artificial intelligence and blockchain technologies. Acta Informatica Pragensia, 12(1). 29. Lyytinen, K., Topi, H., & Tang, J. (2023). MaCuDE IS task force phase II report: views of industry leaders on Big Data analytics and AI. Communications of the Association for Information Systems, 52(1), 18.

Sustainable Education Systems with IOT Paradigms Ramiz Salama and Fadi Al-Turjman

Abstract Recent years have seen an increase in interest in the future of cities among academics and practitioners. They conclude that as technology develops, it will affect both infrastructure and architecture, giving rise to the concept of smart cities. This chapter examines the many definitions of "smart cities," which differ depending on the geographic, environmental, economic, and social boundaries of each city, in an effort to provide readers with a thorough understanding of the movement toward smartness. It also presents the dimensions that give the "smart city" concept its depth, together with a few examples of smart city models. Traits of smart cities such as smart administration, smart transportation, smart lifestyle, and the smart human level are described, with detailed illustrations of the elements of each paradigm and how they interact. The chapter gives a broad overview of the characteristics of smart cities, including the smart economy, smart environment, smart government, smart mobility, smart living, and the smart human level, and depicts the key elements of each paradigm. People commonly relocate to cities to satisfy their needs in terms of careers, relationships, and modern life; however, due to urbanization, climate change, and resource depletion, a number of difficulties have emerged in metropolitan regions. Yet, owing to ICT, smart cities give individuals the chance to invent, create, test, and experience new things that improve their quality of life. The quick and seamless integration of the physical and digital worlds is creating the Internet of Things (IoT). IoT has significantly influenced governments and businesses to embark on an evolutionary path toward Industry 4.0, the fourth industrial revolution. Industrial production of the future will be extremely adaptable, integrated with customers, businesses, and suppliers, and, most importantly, sustainable. This chapter examines and evaluates current activities and research linked to smart factories and Industry 4.0, highlights the key traits of such factories with a focus on sustainability, and offers a blueprint for IoT-based smart manufacturing. A strategy for energy management in smart factories based on the IoT paradigm is then presented, together with a directive and its anticipated advantages.

Keywords IoT · Sustainable education systems · Lifelong learning

R. Salama (B) Department of Computer Engineering, Research Center for AI and IoT, AI and Robotics Institute, Near East University, Nicosia, Mersin 10, Turkey e-mail: [email protected]
F. Al-Turjman Artificial Intelligence Engineering Department, AI and Robotics Institute, Near East University, Nicosia, Mersin 10, Turkey; Research Center for AI and IoT, Faculty of Engineering, University of Kyrenia, Kyrenia, Mersin 10, Turkey
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. J. B. Idoko and R. Abiyev (eds.), Machine Learning and the Internet of Things in Education, Studies in Computational Intelligence 1115, https://doi.org/10.1007/978-3-031-42924-8_17

1 Introduction

Urbanization is linked to expansion in the social, economic, and environmental sectors, and the percentage of people who live in cities is rising; since 2007, the urban population has outnumbered the rural one. As a result, communities all across the world are actively looking for the best answers to persistent problems that change over time and space. Societies now need solutions for a number of pressing issues, such as sustainable development, public services, education, energy, and the environment. Cities have evolved into complex social ecologies as a result of these difficulties, making it crucial to preserve sustainability and a high quality of life. A sizable number of future city plans have been created by governments across the world in an effort to handle urban operations and citizen demands more efficiently. These models introduce cities to the idea of "smart cities" and emphasize connectivity, comfort, safety, and attractiveness as key goals. The media promoted the idea of city intelligence for a significant portion of the twentieth century, but the smart city is quickly becoming a reality thanks to advancements in telematics and the sophistication of contemporary technologies. ICTs (information and communication technologies) are crucial for enhancing system automation and equipping people with the knowledge and abilities they need to monitor, comprehend, evaluate, and construct cities. A smart city's development must be led by three principles: sustainability, strengthened linkages between the city and the environment, and the use of a green economy. The development of intelligent infrastructures and the ICTs-human link therefore serves as the cornerstone of a smart city, together with intelligent, context-aware government and an inclusive economy that promotes high employment and social and territorial integration.

The manufacturing industry is moving toward smart factories as a result of changing consumer needs and emerging autonomous technologies like IoT. When physical objects are upgraded with embedded electronics (RFID tags, sensors, etc.) and connected to the Internet, the resulting system is known as the Internet of Things (IoT); IoT thus requires both smart networks and smart objects. The IoT allows physical things to be seamlessly integrated into the information network, after which they can actively take part in business processes and exchange information on their state, their environment, manufacturing processes, maintenance schedules, and other topics [1–4]. At the same time, rising energy prices, growing ecological awareness, and shifting customer preferences toward greener products are pressing decision-makers to prioritize energy-efficient production and green manufacturing. Green products are ones that were created using the least amount of energy feasible, not just ones that consume less energy in use. One application of IoT will be smart meters, which allow real-time data on energy use to be collected (e.g., at machine, production line, and facility level). This knowledge must then be incorporated into production management practices and decisions in order to increase energy efficiency, as sketched below.
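The following is a minimal sketch of that energy-management loop; machine names, baselines, and the flagging threshold are assumptions, and a real deployment would ingest readings from smart meters over a fieldbus or a protocol such as MQTT rather than from an in-memory list.

```python
# A minimal sketch, under assumed names and thresholds, of the loop described
# above: per-machine smart-meter readings are aggregated, and machines drifting
# above their energy baseline are flagged for the production manager.
from collections import defaultdict

BASELINE_KW = {"press-1": 40.0, "cnc-2": 25.0}   # assumed machine baselines

def flag_overconsumers(readings, tolerance=0.10):
    """readings: iterable of (machine_id, kilowatts) samples."""
    totals, counts = defaultdict(float), defaultdict(int)
    for machine, kw in readings:
        totals[machine] += kw
        counts[machine] += 1
    # Flag any machine whose average draw exceeds baseline by > tolerance.
    return [m for m in totals
            if totals[m] / counts[m] > BASELINE_KW[m] * (1 + tolerance)]

samples = [("press-1", 46.1), ("press-1", 45.3), ("cnc-2", 24.8), ("cnc-2", 25.6)]
print(flag_overconsumers(samples))   # -> ['press-1']: schedule an energy audit
```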

2 Scope of Previous Works

Globalization, the fourth industrial revolution (which ushers in a shift to cutting-edge technology in design and production), the global recession, and global mobility are a few of the issues that education systems must confront. To contain the COVID-19 outbreak in the 2020–2021 time frame, governments implemented distance learning, and a number of schools closed during this period, which was marked by fast change, technical advancement, and digitization. Because of this, educational systems must become more diverse and have a long-term plan for social development. To do this, the distinction between inclusive, multicultural education, which aims to foster an appreciation of students' diversity in terms of their family histories, socioeconomic classes, and cultural backgrounds, and special education must be eliminated. By creating a new educational model that includes both philosophies, the contemporary school can live up to its multidisciplinary name. This integrated paradigm, which may be called "inter-cultural, inclusive, sustainable education," puts an emphasis on inclusion, equality, and social justice; its goal is to address the academic and social-emotional needs of every student.

The twenty-first century is a complicated and quickly changing era in terms of politics, society, technology, the economy, and the environment, and future events are hard to predict. Because its methodology, problems, and standards rest on a system of values and ideas from a bygone era, the current educational paradigm is in crisis, and a new paradigm that takes into account the distinctive characteristics of the postmodern era is now crucial. The UN Convention on the Rights of Persons with Disabilities placed emphasis on the idea of inclusive education; more than 80% of its signatories have complied with its provisions, including Israel (which signed the Convention in 2007 and ratified it in 2012). Governments and educational systems have adopted inclusive education as the main strategy for educating all children, regardless of disability or special educational requirements, as part of the worldwide trend toward educational reform, policy change, and evolving techniques. The pedagogical, social, political, and economic foundations of inclusive education address issues of attitude, quality, and fairness in educational practice, policy, and resources.


Conceptually, inclusive education means creating a secure environment where all students may study and take part in extracurricular and social activities. It includes adjustments to structures, relationships, and instructional techniques so that student variability fits into typical educational experiences, in addition to integrating students with special needs or impairments. The use of pedagogical practices that consider diversity is a crucial part of creating inclusive processes in the classroom; these plans must be built on adaptable organizational structures and original techniques for gathering and producing information. The goal is to create communities where students may learn from and alongside one another rather than selecting or ranking one another (Parrilla, 2007). Teachers in the general classroom will need to manage the needs of a greater range of children with varied levels of ability as the inclusion movement continues to blur the distinctions between general education and special education. Excluding children and pupils who are disabled is morally and ethically dubious; to make inclusion a reality, we must work to transform our educational systems. Everyone benefits as a result, and our societies become more robust and democratic.

It is important to stress that there is a growing body of knowledge in this area, as demonstrated by multiple reviews of inclusive education's various components, including theories on its effectiveness. Studies from a wide range of nations with various educational systems and cultural infrastructures follow a similar methodology. As with other effective programs, this effort is ardently committed to upholding human rights, but the reality is quite different: even wealthier nations find it difficult to adopt inclusive education, so inclusive education does not yet benefit all societies and nations, and theories of inclusive education and its effectiveness offer few recommendations for creating more inclusive practices. The three pillars of inclusive education must be involvement, dialogue, and openness. According to the UNESCO report, inclusive education is a process intended to achieve its goal by 2030; the report demonstrates the advances made in inclusive education and the efforts taken to implement them by highlighting breakthroughs on the issues of policy, funding, quality, learning, attitude, and equity. For inclusive education to be successful, each of these problems needs to be resolved [5–9].

To establish and implement inclusive education, a policy must have a vision, goals, and objectives, in addition to political will, infrastructure, dedicated financial resources, legislation, participation from all necessary systems, and public support. The successful application of evidence-based techniques will ensure that the programs are well internalized; the strategy must be well prepared and use resources that bring about changes, which in turn calls for management and supervision support. Access to education is one of our most basic rights; it is essential for long-term growth and stability within nations as well as among them, and a strong education is required to participate in the rapidly globalizing twenty-first-century economy and society. With inclusive education, we have a lot to look forward to, since children who coexist with and embrace variety grow up in societies that are accessible in terms of both physical spaces and learning and information. The complex process that culminates in inclusive education necessitates the participation of experts as well as educational policies, models, and resources.


3 Methodology

This study seeks to clarify the idea of Industry 4.0 by reviewing the literature and existing projects on IoT-based smart factories, providing an architecture for such factories, and explaining their core characteristics with regard to energy management. IoT is the essential technology that makes smart manufacturing, considered sustainable, possible. To support IoT-based energy management in (sustainable) smart factories, this chapter suggests a strategy that integrates energy data into production management: smart meters set up at the machine level gather data on energy use in real time, and this data is analyzed and sent to decision-makers to help them make more energy-efficient production management decisions.

A consultative strategy was used to select the literature, which included searching academic databases (such as Scopus and Google Scholar) and engaging a number of key stakeholders and informants. The latter required a total of five workshops as well as ongoing contact with participants via a web-based communication platform and database created between 2017 and 2019. In all, 100 papers were identified for analysis (see "Relational Approaches to Ontology," "Relational Approaches to Epistemology," and "Relational Approaches to Ethics" below). The works were divided into three categories, ontology, epistemology, and ethics, defined in relation to sustainability as follows:

1. Ontologies are "assumptions (which may be implicit or explicit) regarding what types of entities do or can exist in [reality] and what may be their conditions of existence, relations of reliance, and so on."
2. Epistemologies explain how people acquire knowledge of the world; they lay out the requirements, benchmarks, and techniques for comprehending reality.
3. Ethics concerns "what is morally good and evil, morally right and wrong"; this idea encompasses morality, morals, and cultural norms shaped by social and political life.

These three categories were kept distinct in order to offer a persuasive analysis while acknowledging the connections between the categories and discourses. The categorization system is therefore a fuzzy set: each discourse is listed as a member of a major category while acknowledged to have linkages to the other categories. While discourses could be classified differently, allowing new links to become visible, we categorize them in order to emphasize certain relationships that may be useful in further developing relational approaches to sustainability [9–13]. The functional assemblage we construct is thus one that might be examined in future sustainability studies.


The tripartite categorization provides a helpful framework for collaborative relational approaches to sustainability, drawing on a variety of discourses while recognizing both their similarities and their interrelations, as illustrated below.
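As a small illustration of the fuzzy categorization (the discourse names are from this chapter, but the membership values are invented for the example), each discourse carries a degree of membership in all three categories and is filed under its strongest one without losing its links to the others.

```python
# A toy illustration of the "fuzzy set" categorization described above.
# Membership values are invented for the example; each discourse is filed
# under its strongest category while its other linkages remain visible.

discourses = {
    "process philosophy": {"ontology": 0.9, "epistemology": 0.4, "ethics": 0.2},
    "ecofeminism":        {"ontology": 0.3, "epistemology": 0.4, "ethics": 0.9},
    "systems theory":     {"ontology": 0.4, "epistemology": 0.8, "ethics": 0.3},
}

for name, membership in discourses.items():
    main = max(membership, key=membership.get)          # primary category
    links = [c for c, m in membership.items() if c != main and m >= 0.4]
    print(f"{name}: filed under {main}, linked to {links}")
```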

4 Results and Discussion

4.1 Methods Using Relational Ontology

We found 25 publications pertinent to relational ontology. Their primary areas of study are philosophy, political science, Native American and religious studies, and cultural studies. All relational ontologies start from the idea that "the relationships between things are more basic than the entities themselves": no entity exists prior to its relations. The literature distinguishes two main varieties, differentiated and undifferentiated relational ontologies. In undifferentiated relational ontologies, an entity can be seen as "an evolving expression of a metaphysical source"; ecological holism, for instance, is an undifferentiated relational ontology that blurs the lines between life, matter, and awareness in terms of the most fundamental processes of the universe. Differentiated relational ontologies, by contrast, hold that reality is a dynamic, singular manifestation of complex, relational, and multidimensional beginnings. The former conceptualizes identity and variation in relation to one another, whereas the latter incorporates distinctions into more extensive forms of identification. For the study of sustainability, the distinction between a differentiated and an undifferentiated relational ontology is critical: White et al.'s (2016) in-depth analysis of hybrid theoretical perspectives on society and nature demonstrates the importance of a differentiated relational strategy, so that the interactions between social and ecological systems can be understood without categorizing or conflating them.

Three currents, new materialism, process philosophy, and speculative realism, stand out as the current debates on relational ontology in Western thinking. Speculative realism (SR) is a broad body of thought that opposes the anti-realism of contemporary Enlightenment philosophy and the separation of nature and culture through a variety of arguments. SR's key doctrines include a renewed openness to speculative metaphysics and ontological realism in response to the correlation problem. The correlationist thesis, associated with Kant, contends that knowledge of an object is always a correlation between thinking and being, since an object cannot be understood independently of its relationship to the mind; SR investigates a wide range of answers to this problem in an attempt to describe reality itself. Process philosophy, a predecessor of SR, is known for its distinctive relational ontology. Alfred North Whitehead, the founder of process philosophy, argued in 1929 that each actual entity builds up societies of ever-greater societies while remaining both connected to and distinct from other actual entities.


"The social is a technique of portraying how each entity is formed by and through its surroundings," he asserted. Muraca, Latour, Kaaronen, Ims et al., Stengers, Henning, and Mancilla et al. provide current examples of how process-relational ontologies, grounded in the co-constitution of entities, may alter epistemological and ethical perspectives on human-nature relationships. According to Latour, one of the most well-known authors on process philosophy and ecology, the Earth should be seen as a complex network of living, agential processes [14–18].

New materialism is a broad school of philosophy that examines relational ontology in the context of sustainability. A fundamental tenet of new materialism is the exploration of post-Cartesian ontologies that examine the complex interactions between various nature-cultures, and most new materialists use multi-modal approaches to investigate several socio-ecological system levels (micro, meso, and macro) at once. One of the most well-known new materialists is Jane Bennett: in Vibrant Matter (2010) she develops a "vibrant materialism" that, like Latour's work, grants agency to nonhumans and sees living and nonliving matter as co-generating assemblages.

The discourses of relational ontology, process philosophy, and new materialism are relatively recent additions to Western thought. Over thousands of years, however, most relational ontologies have emerged outside the West. A number of earth-based, indigenous, and religious ontologies are not modern and do not adhere to the Western modern worldview's separation of nature and society; these traditions all demonstrate how entwined, interdependent, and communicative nature-cultures are with one another. Unlike Western environmentalism, they do not view the environment as something "out there" that needs to be conserved. Landscapes are seen as both physical and mental phenomena, bearing the imprints of social and individual biographies, taskscapes, conventions, rituals, and cosmologies. Native Peoples, for instance, have a relational ontology based on kinship: they hold that they share a common ancestry with nature.

4.2 Relationship-Based Epistemologies

In total, 52 works were found to be pertinent to relational approaches to epistemology. They come mainly from sustainability science, feminism, sociology, psychology, the sociology of science, and cognitive science. The pertinent discourses describing relational epistemologies in the identified literature include assemblage theory, actor-network theory (ANT), multi-species ethnography, integral ecology, geo-philosophy, non-philosophy, transdisciplinary (TD) methods, intersectional analysis, systems and complexity theory, and reflexive and diffractive methods. These works agree that the modern Western epistemologies that arose from the scientific revolution and the Enlightenment are largely to blame for the extreme inequities and patterns of exploitation between humans and nonhumans.


People like Isaac Newton, Immanuel Kant, David Hume, John Locke, Francis Bacon, and René Descartes laid their conceptual groundwork (Griffin 2001). These epistemologies hold that (1) causation can only be determined through relationships between the exteriors of objects, (2) no object can be understood apart from its relationship to thought, (3) primary and secondary (sensible) qualities can be studied separately by science, and the former objectively without the latter, (4) nature can be controlled and "her" secrets discovered by instrumental reason and scientific "progress," and (5) the body is a "thing" that can be controlled. These concepts contributed to the emergence of empiricism, which shaped the development of science, technology, and commerce in the modern era. Despite the significant impact these concepts have had on how society has developed, according to Latour (1991) we have never truly been modern: even though it is a common assumption in contemporary society that nature can be comprehended objectively, social interactions and conduct have a large impact on scientific understanding, and researchers and the subjects of their studies have always influenced one another. Reflexive techniques are increasingly being used by academics to examine how the observer shapes knowledge.

In the cognitive science literature, embodied, embedded, extended, and enactive (4E) approaches to cognition describe the complex and dynamic interactions between linked brain-body-environment systems. For instance, Evan Thompson proposes that phenomenological accounts of experience may be incorporated into scientific theories of mind and life in order to close the explanatory gap between consciousness and life. 4E approaches are frequently referred to as 4EA in order to embrace the developing discipline of affect studies, a multidisciplinary body of research that applies relational approaches to emotions and has examined body politics, media ecology, and emotional attachments to places. A review of the pertinent psychological research shows that sociocognitive, identity-based, and value-based strategies are the most effective at bridging the gap between personal and social-ecological transformation. Ecopsychology, a subfield of psychology, examines the structural connections between people's minds and their surroundings; typical research areas include the ecological unconscious, phenomenology, the interconnection of all beings, the transpersonal, and the transcendental.

A review of the relevant social science literature suggests rising interest in relational approaches to knowledge, which give social scientists more ways to investigate how humans and other beings interact. According to assemblage thinking, all living and nonliving entities are made up of both human and nonhuman components, and many techniques for analyzing assemblages have emerged in empirical studies. Actor-network theory (ANT) is one of the relational techniques most frequently employed in the social sciences. Instead of placing humans at the core of agency and responsibility, it sees agency as diffused over a range of actants, none of which can effect change on its own; it examines how a network of interconnected entities and processes, rather than a single person, creates agency. Scholars typically employ multi-species ethnography to examine agency beyond humanist philosophy.


Our analysis shows that relational epistemologies are also being constructed in philosophy to aid thinking at several geo-social scales. According to Esbjörn-Hargens and Zimmerman, Mickey, and Mickey et al., integral ecology denotes integrated approaches to ecology that incorporate the humanities, social sciences, and natural sciences; O'Brien and Hochachka, for instance, use integral theory to develop a multi-disciplinary, multi-perspectival understanding of adaptation to climate change. François Laruelle's non-philosophy and Deleuze and Guattari's geo-philosophy are two further approaches for navigating psychological, social, and environmental ecologies; each offers a way for various forms of knowing, including philosophical, scientific, and theological ones, to inform one another without creating hierarchies. These more contemporary philosophical movements provide methods for thinking ecologically, not just "about ecology" but in terms of a "universal ecology." The best example here is Morton's work (2013, 2016): he describes ecological awareness as self-reinforcing knowledge, similar to meditation, in which one learns to recognize "the mesh" of interconnected events and their essential ties to oneself.

Relational approaches to knowledge are being developed in transdisciplinary fields as well. One of the most popular discourses in these fields is systems theory, which includes general systems theory, cybernetics, and complexity theory. According to Capra and Luisi (2014), systems thinking was created in the 1920s by biologists, Gestalt psychologists, ecologists, and quantum physicists. Its defining characteristics include significant shifts of perspective: from the parts to the whole, from disciplines to multidisciplinarity, from objects to relationships, from measuring to mapping, from quantities to qualities, from structures to processes, from objective to epistemic science, and from Cartesian certainty to approximate knowledge (pp. 80–82). As instances of socially situated epistemic discourses, feminist scholars offer standpoint theory (Harding 1991), situated knowledge, and intersectional analysis (Crenshaw 1989); these discourses give sustainability research a moral perspective and politicize it, and they are used most frequently in research on environmental justice. Feminist scholars have also created diffractive approaches to remedy the inadequacies of reflexive procedures (e.g., Bozalek and Zembylas 2017; Hill 2017): diffractive techniques read the findings of one field through another discipline in order to create new insights into the relationship between differences. Finally, our analysis demonstrates that scientists are pushing harder for empirical techniques that take subjectivity and its influence on scientific practice into account; for instance, according to Manuel-Navarrete, studies on "mind maps" and "mental models" offer generalizable methods for assessing subjectivity and incorporating it into systems research and institutional frameworks.


4.3 Ethical Methods Based on Relationships

A total of 23 papers were deemed pertinent in terms of relational approaches to ethics. They come mainly from the domains of philosophy, religion, and cultural studies. In the research literature, biocentrism, ecocentrism, deep ecology, social ecology, political ecology, environmental and climate justice, and ecofeminism are among the pertinent themes that explain relational approaches to ethics. The previous five discourses are included in the ethical category with one caveat: even when they alter ontological and epistemological understandings, they nevertheless count as normative discourses that have an impact on values, morals, and social norms.

Biocentrism and ecocentrism are two of the recognized key relational approaches in the fields of environmental and climate ethics. According to biocentrism and ecocentrism, biological entities and ecological systems are morally valuable. Both hold that moral issues should not be defined primarily in terms of human concerns, a position known as non-anthropocentrism. The important discourse of deep ecology, developed by the Norwegian ecophilosopher Arne Naess, emphasizes the necessity of a paradigm shift to change the current industrial society into one that is more sustainable. Naess distinguishes between shallow ecology and deep ecology, arguing that the latter considers nature from an ecocentric perspective, drawing inspiration from spiritual, religious, and philosophical traditions, whereas the former does so from an anthropocentric perspective, valuing nature by how useful it is to humans. Although there are various deep ecology theories, Naess' version (ecosophy "T") is influenced by Gandhi's nonviolent conflict resolution, Mahayana Buddhism, and Spinoza. The wellbeing of both humans and nonhumans is taken into account when conflicts of interest arise, and the vitality of higher-order (more complex) systems is prioritized over that of lower-order systems. Because deep ecology offers a largely apolitical perspective on systems transformation, critics contend that it should be integrated with social ecology. One theorist who has successfully combined social ecology and deep ecology is Gary Snyder.

Social ecology, as developed by Bookchin, offers a critical viewpoint on the class-based struggles of oppressed people by taking into account how social hierarchy and power affect the environment. In order to end human domination over one another and the natural world, radical social ecology investigates the material, interpersonal, and spiritual pillars of an ecological society, linking various interconnected socioeconomic issues with ecological issues. Similarly, political ecology investigates unequal power and resource distribution, helping to address the root causes rather than the symptoms of sustainability challenges. Intersectional approaches from social and political ecology are utilized to evaluate the present environmental movement in the study of environmental and climate justice. Environmental justice activists and scholars work to address both social justice problems (particularly racial, gender, and class-based inequality) and ecological challenges (such as air pollution, waste disposal, and access to clean water) by forging alliances with disadvantaged groups.


Ecofeminism is one of the most significant and well-known topics in the literature on social and political ecology. Ecofeminism, which "seeks to comprehend the interconnected root of all domination," connects the exploitation and tyranny of nature to the oppression and domination of women in particular and oppressed people in general. The dualistic frameworks of Western philosophy are associated with the logic of domination, according to Plumwood (1993). She claims that oppositions such as mind and body, civilization and nature, and men and women normalize exploitative and unequal interactions based on the presumed inferiority of the weaker group. The work of other well-known ecofeminists, such as Merchant (1980) and Shiva (1989), demonstrates how advancements in science, technology, and the economy foster concepts of progress that are linked to the control and hegemony of nature and of women. Ruether, an ecofeminist with a strong spiritual grounding, proposes spiritual responses to these concerns. Nonetheless, some ecofeminists (though not all) have problematically embraced gendered representations of nature that do not oppose the dualistic thinking underlying the logic of domination. Since then, the ecofeminist movement has developed into one that is more critical, intersectional, materialist, and posthumanist; notable recent works include those of Alaimo, Braidotti, Zylinska, Haraway, Keller, and Puig de la Bellacasa (2017). Posthuman feminists tend to be much more techno-materialist, rejecting essentialist notions of gender and holding that human-nonhuman connections are materially shaped by sociotechnical systems. Posthumanism explores all sorts of partnerships, including those between biological entities (such as symbionts or holobionts) and cyborgs (or flesh machines), rather than limiting its focus to animal (zoologic) interactions.

5 Conclusions

Through our analysis of the existing bodies of literature that take relational approaches to ontology, epistemology, and ethics relevant to sustainability, we have identified important developments, recurrent themes, and patterns that characterize a relational paradigm (and potentially a shift towards a relational paradigm) in sustainability research. Notwithstanding their variations, all of the aforementioned approaches construct a paradigm that (i) values interactions with entities other than humans, (ii) is based on a relational ontology, and (iii) emphasizes the significance of perceiving human and non-human nature as mutually constitutive. According to our research, relational ontologies can reconcile a variety of dualisms (such as subjectivity and objectivity) that have contributed to the development of the modern worldview, including the separation of nature and society. Although the integrity of each person is respected, it is acknowledged that humans are fundamentally constituted through all forms of relationships (as opposed to undifferentiated relational ontologies).


In this context, systems of knowledge like speculative realism, process philosophy, new materialism, and indigenous and religious wisdom traditions provide particularly deep understandings of relational ontology pertinent to sustainability. Our analysis demonstrates that relational approaches to epistemology take into account the distribution of agency across networks, view objects as assemblages of humans and nonhumans, increasingly emphasize transdisciplinary methods to cross disciplinary boundaries, and employ diffractive methods to accommodate various modes of knowing. Relational approaches to ethics, in turn, take non-anthropocentric points of view, value non-human nature in non-instrumental terms, use intersectional methods to examine the connections between social and ecological issues, and contextualize human-nature interactions in light of unequal power dynamics between assemblages or networks of interest.

In an effort to uncover ways to strengthen ontology, epistemology, and ethics as a tripartite constellation in future sustainability research, practice, and education, this research examined relational approaches to each individually. Researchers and professionals can thus benefit from the findings and the resulting tripartite analytical framework by more methodically recognizing and utilizing the benefits of relational approaches to sustainability. Studies that primarily examine relational perspectives on sustainability are currently few, with exceptions in research on resilience, socio-technical transitions, and sustainability education (such as Taylor and Pacini-Ketchabaw). Notwithstanding these exceptions, very few academics who specialize in sustainability clearly state the interrelated discourses that are discussed in this book. In fact, despite the widespread and expanding academic interest in relationality across other domains, our analysis reveals that relational approaches are neglected within sustainability scholarship. This study therefore asks academics to take into account the discourses highlighted here in their upcoming theory, practice, and educational research on sustainability.

The terminology used to describe the "interior" and "exterior," "personal" and "collective" components of sustainability is dualism-based, but the relational approaches stressed here give a foundation for their integration without presupposing such dualisms. Ives et al. advocated emphasizing the connections between these dimensions rather than discussing each one separately. In light of our findings, we recommend further study of the generative connections between these several discourses and dimensions. It is vital to look in more detail into how relational ontologies, epistemologies, and ethics work together to create a relational approach to sustainability. Intra-action, for example, refers to the "mutual constitution of entangled agencies". In contrast to the conventional notion of "interaction," which assumes that separate individual agencies precede their contact, the concept of intra-action acknowledges that different agencies do not precede, but rather emerge through, their intra-action. We conclude with a call to action for academics and practitioners in sustainability to collaborate on building a research agenda that promotes a relational paradigm in sustainability research, practice, and education, founded on relational ways of being, knowing, and doing.


References

1. Malik, P. K., Singh, R., Gehlot, A., Akram, S. V., & Das, P. K. (2022). Village 4.0: Digitalization of village with smart internet of things technologies. Computers & Industrial Engineering, 165, 107938.
2. Zeeshan, K., Hämäläinen, T., & Neittaanmäki, P. (2022). Internet of things for sustainable smart education: An overview. Sustainability, 14(7), 4293.
3. Voulvoulis, N., Giakoumis, T., Hunt, C., Kioupi, V., Petrou, N., Souliotis, I., & Vaghela, C. J. G. E. C. (2022). Systems thinking as a paradigm shift for sustainability transformation. Global Environmental Change, 75, 102544.
4. Sigahi, T. F., & Sznelwar, L. I. (2022). Exploring applications of complexity theory in engineering education research: A systematic literature review. Journal of Engineering Education, 111(1), 232–260.
5. Alam, A. (2022). Investigating sustainable education and positive psychology interventions in schools towards achievement of sustainable happiness and wellbeing for 21st century pedagogy and curriculum. ECS Transactions, 107(1), 19481.
6. Savelyeva, T., & Park, J. (2022). Blockchain technology for sustainable education. British Journal of Educational Technology, 53(6), 1591–1604.
7. Haleem, A., Javaid, M., Qadri, M. A., & Suman, R. (2022). Understanding the role of digital technologies in education: A review. Sustainable Operations and Computers.
8. Zekhnini, K., Cherrafi, A., Bouhaddou, I., Chaouni Benabdellah, A., & Bag, S. (2022). A model integrating lean and green practices for viable, sustainable, and digital supply chain performance. International Journal of Production Research, 60(21), 6529–6555.
9. Deebak, B. D., Memon, F. H., Cheng, X., Dev, K., Hu, J., Khowaja, S. A., & Choi, K. H. (2022). Seamless privacy-preservation and authentication framework for IoT-enabled smart eHealth systems. Sustainable Cities and Society, 80, 103661.
10. Basharat, S., Hassan, S. A., Mahmood, A., Ding, Z., & Gidlund, M. (2022). Reconfigurable intelligent surface-assisted backscatter communication: A new frontier for enabling 6G IoT networks. IEEE Wireless Communications.
11. Ang, K. L. M., Seng, J. K. P., & Ngharamike, E. (2022). Towards crowdsourcing internet of things (crowd-IoT): Architectures, security and applications. Future Internet, 14(2), 49.
12. Celone, A., Cammarano, A., Caputo, M., & Michelino, F. (2022). Is it possible to improve the international business action towards the sustainable development goals? Critical Perspectives on International Business, 18(4), 488–517.
13. Mogas, J., Palau, R., Fuentes, M., & Cebrián, G. (2022). Smart schools on the way: How school principals from Catalonia approach the future of education within the fourth industrial revolution. Learning Environments Research, 25(3), 875–893.
14. de Oliveira, L. C., Guerino, G. C., de Oliveira, L. C., & Pimentel, A. R. (2022). Information and communication technologies in education 4.0 paradigm: A systematic mapping study. Informatics in Education.
15. Martins, I., Resende, J. S., Sousa, P. R., Silva, S., Antunes, L., & Gama, J. (2022). Host-based IDS: A review and open issues of an anomaly detection system in IoT. Future Generation Computer Systems.
16. Johansson-Fua, S. U. (2022). Wansolwara: Sustainable development, education, and regional collaboration in Oceania. Comparative Education Review, 66(3), 465–483.
17. Liu, Y., Li, D., Du, B., Shu, L., & Han, G. (2022). Rethinking sustainable sensing in agricultural internet of things: From power supply perspective. IEEE Wireless Communications, 29(4), 102–109.
18. Reyers, B., Moore, M. L., Haider, L. J., & Schlüter, M. (2022). The contributions of resilience to reshaping sustainable development. Nature Sustainability, 5(8), 657–664.

Post Covid Era-Smart Class Environment

Kamil Dimililer, Ezekiel Tijesunimi Ogidan, and Oluwaseun Priscilla Olawale

Abstract The incidence of the coronavirus pandemic resulted in the temporary closure of numerous institutions in the year 2020. As a result, regular classroom activities had to be discontinued in almost all physical settings. This forced many institutions to use smart learning systems to transfer educational activities online and inspired innovation in the development of smart classrooms featuring machine learning. While it is obvious that there has been considerable development in this field, the advancements have been met with mixed feelings. In this chapter, the motivation, route, and effects of these advancements will be examined. The rationale behind the differing opinions of smart classrooms will also be discussed.

Keywords IoT · AI · Smart classroom · Student · Instructor · School

1 Introduction

The coronavirus incident resulted in the temporary closure of numerous institutions in the year 2020. The virus's rapid growth forced governments throughout the world to swiftly adopt mandatory emergency lockdowns in an effort to stop the virus's spread.


As a result, regular classroom activities were discontinued in almost all physical settings, and many institutions turned to smart learning systems to transfer educational activities online. According to [1], a classroom that is "smart" comprises automatic voice recognition and computer vision. Smart classrooms are educational environments that utilize the Internet of Things (IoT) and Artificial Intelligence (AI) to enhance the learning experience for students. IoT refers to the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data. AI, on the other hand, refers to the simulation of human intelligence in machines that are programmed to think and act like humans.

In a smart classroom, devices such as smartboards, smart lights, and smart projectors are connected to the internet and can be controlled and monitored remotely. This allows teachers to easily adjust the environment to suit the needs of their lesson and students. For example, smart lights can be dimmed or brightened as needed, and the temperature can be adjusted to ensure a comfortable learning environment. Smart classrooms also make use of AI-powered educational tools, such as virtual and augmented reality technologies, to create interactive and engaging learning experiences. For example, students can use virtual reality [2] headsets to explore historical events, scientific concepts, and other subjects in an immersive and interactive way. In addition, smart classrooms often use AI algorithms to analyze data and provide personalized feedback: AI-powered assessment tools can give students real-time feedback as they take quizzes or complete assignments, helping them identify areas where they need improvement and target their studying accordingly. Together, IoT and AI help to create a dynamic and personalized learning experience that can engage students and enhance their understanding of the material.

Before the COVID-19 pandemic, smart classrooms were already becoming increasingly popular in educational institutions worldwide. For example, a study published in the Journal of Educational Technology Development and Exchange (JETDE) in 2019 found that smart classrooms were positively impacting the teaching and learning process by providing an interactive, multimedia environment for students. With the outbreak of the COVID-19 pandemic, however, the role of smart classrooms became even more critical. With the widespread implementation of remote and online learning, the use of technology in the classroom became a necessity, and smart classrooms provided students with access to high-quality educational resources from their homes, helping to minimize the impact of school closures on their academic progress. Studies of this period highlight the benefits of using technology to deliver online classes, including improved student engagement and increased access to educational resources. The COVID-19 pandemic has thus drawn attention to the value of smart classrooms and their contribution to supporting online and remote learning.


As a result, it is anticipated that educational institutions will continue to invest in equipping their classrooms with cutting-edge technology in the post-pandemic era. In this book chapter, we discuss the difficulties that colleges and universities have faced as a result of the pandemic, such as the need to adjust to new teaching methods; outline the benefits of leveraging educational technology after the pandemic to build more effective and efficient learning settings and to improve learning for both teachers and students, including more individualized education, better teamwork and communication, and support for remote learning; give specific examples of the smart classroom strategies that colleges and universities are using to address these issues, such as interactive whiteboards, VR simulations, and learning management systems; and, finally, discuss potential privacy and security concerns that readers might have about the use of technology in the classroom.
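To make the remote device control described above concrete, the following is a minimal sketch of how a teacher's console might publish commands to connected classroom devices over MQTT, a publish/subscribe protocol commonly used in IoT deployments. The broker address, topic layout, and payload fields are illustrative assumptions, not details of any product discussed in this chapter.

```python
# A minimal sketch of remotely controlling smart-classroom devices over MQTT.
# The broker address, topic convention, and payload fields are illustrative
# assumptions, not the interface of any specific smart-classroom product.
import json
import paho.mqtt.publish as publish

BROKER = "broker.example-school.edu"  # hypothetical campus MQTT broker

def set_device(room: str, device: str, **state) -> None:
    """Publish a desired state (e.g., brightness, temperature) for one device."""
    topic = f"classroom/{room}/{device}/set"  # assumed topic layout
    publish.single(topic, json.dumps(state), hostname=BROKER, qos=1)

# Dim the lights and adjust the thermostat before a video lesson.
set_device("b204", "lights", brightness=30)
set_device("b204", "hvac", target_celsius=22.5)
```

Any messaging protocol could fill this role; MQTT is used here only because its publish/subscribe model maps naturally onto many devices listening for room-level commands.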

2 Educational Sensors

Educational sensors refer to a class of devices and technologies used to gather information about the physical world for educational purposes. Educational sensors are an important tool for enhancing science education [3] and helping students learn about the world around them in a hands-on, engaging way. The data collected by these sensors can be used for a variety of tasks, including scientific experimentation, data analysis [4], and hands-on learning activities. Some of the most common types of educational sensors include:

Light sensors: These sensors detect light and generate an electrical signal proportional to the amount of light they receive. They are used in a variety of educational contexts, including measuring light intensity, studying photosynthesis in plants, and exploring color and spectroscopy.

Temperature sensors: These sensors measure temperature and are commonly used in physics and biology labs to study heat transfer, thermodynamics, and the temperature dependence of chemical reactions.

Sound sensors: These sensors measure sound and are used in music, physics, and engineering classes to study sound waves, acoustics, and the properties of sound.

Motion sensors: These sensors measure movement and are used in physics and engineering classes to study motion, acceleration, and force.

pH sensors: These sensors measure the acidity or basicity (pH) of a solution and are commonly used in biology and chemistry classes to study chemical reactions and solutions.

Pressure sensors: These sensors measure pressure and are used in physics and engineering classes to study fluid mechanics, atmospheric pressure, and the properties of gases.

Humidity sensors: These sensors measure the amount of moisture in the air and are used in meteorology and environmental science classes to study the water cycle, weather patterns, and climate.
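Regardless of the sensor type, the classroom workflow is usually the same: sample a value, timestamp it, and store it for later analysis. The following is a minimal sketch of that loop; read_sensor() is a hypothetical stand-in for a real hardware driver, and the value ranges are simulated purely for illustration.

```python
# A minimal sketch of logging classroom sensor readings to a CSV file.
# read_sensor() simulates a hardware driver; real platforms ship their own.
import csv
import random
import time
from datetime import datetime

def read_sensor() -> dict:
    """Stand-in for a hardware driver: returns one light/temperature sample."""
    return {
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "light_lux": random.uniform(200, 800),       # simulated light level
        "temp_celsius": random.uniform(19.0, 25.0),  # simulated temperature
    }

with open("lab_readings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "light_lux", "temp_celsius"])
    writer.writeheader()
    for _ in range(10):  # ten samples, roughly one per second
        writer.writerow(read_sensor())
        time.sleep(1)
```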


Additionally, a range of educational sensor platforms and kits are available that let students connect to and engage with these sensors in a practical, project-based setting [5]. A popular educational sensor platform is the Vernier SensorDAQ [6], which enables students to gather data from a range of sensors, including those mentioned above, and analyze it in real-time using programs like Graphical Analysis or Logger Pro.

Smart classrooms are technologically enhanced learning spaces that provide an interactive and engaging learning environment for students. There are different types of smart classrooms available that vary based on their features and functionalities. Smart classrooms may be categorized into the following.

3 Ubiquitous Learning

According to [7], ubiquitous learning can be defined as "an everyday learning environment that is supported by mobile and embedded computers and wireless networks in our everyday life". Hwang et al. [8] explained that the aim of ubiquitous learning is to provide learners with content and interaction anytime and anywhere. The learning process involves the integration of real-life experience with virtual information and is customized to fit the learner's needs, interests, and cognitive characteristics. The content objects, activities, and interaction within the system, as well as with other humans (such as instructors and peers), are tailored to the learner's goals of learning, subject-matter competency, technology, and situational context.
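As a rough illustration of this kind of context-driven tailoring, the sketch below selects a learning object based on the learner's current situation. The context fields and selection rules are invented for illustration and are not drawn from [7] or [8].

```python
# A minimal sketch of context-aware content selection in ubiquitous learning.
# The fields and rules are illustrative assumptions, not a published algorithm.
from dataclasses import dataclass

@dataclass
class Context:
    location: str      # e.g., "museum", "lab", "home"
    device: str        # e.g., "phone", "laptop"
    minutes_free: int  # time the learner has available

def select_content(ctx: Context) -> str:
    """Simple rule-based selection of a learning object for the context."""
    if ctx.location == "museum":
        return "audio-guide-exhibit-quiz"    # tie content to the real place
    if ctx.device == "phone" or ctx.minutes_free < 10:
        return "micro-lesson-flashcards"     # short-form for small screens
    return "full-interactive-simulation"

print(select_content(Context(location="home", device="laptop", minutes_free=45)))
```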

4 Learning Management Systems (LMS)

Learning management systems have been in existence for over a decade [9] and their sole aim is to deliver the purpose of learning, most times remotely. They have become even more popular since the onset of the coronavirus pandemic, often referred to as COVID-19. An LMS is a web-based, database-driven [10] software tool that helps teachers deliver, organize, and assess the learning activities of students [11]. According to [12], an LMS is capable of performing the functions carried out in the traditional classroom, including delivering necessary course content and monitoring and evaluating student performance. In other words, we can refer to an LMS as the automated learning platform [13] of the traditional learning method. This is a hot research topic because of its involvement in education generally [10].

Learning management systems are often confused with eLearning platforms. While eLearning platforms offer a wide range of library content, they may be integrated into open-source learning management systems, which usually do not have the same feature. According to [14], eLearning is a virtual space that supports learning without distance being a barrier and is usually built with learning management systems. In this research work, open-source learning management systems were selected based on chosen criteria, the first being that the learning management system must be open source. A good LMS usually has features such as course management, a tracking system, usability, technical support, assessment, communication tools, support for other languages and certification, security, and integration with other systems and with AI; the programming language used for developing the LMS may also be considered. Some examples of LMS are summarized in Table 1.

Table 1 Learning management systems

LMS         Website
Moodle      https://moodle.org/
ATutor      https://atutor.github.io/
TalentLMS   https://www.talentlms.com/
Canvas      https://www.instructure.com/canvas/
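The core LMS functions named above (delivering content, organizing courses, and assessing students) can be pictured with a small data model. This is a minimal sketch under assumed names; it does not reflect the internal schema of Moodle, Canvas, or any other platform in Table 1.

```python
# A minimal sketch of an LMS data model: deliver, organize, and assess.
# Class and field names are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Quiz:
    title: str
    answer_key: dict[int, str]  # question number -> correct choice

@dataclass
class Course:
    name: str
    materials: list[str] = field(default_factory=list)       # delivered content
    grades: dict[str, float] = field(default_factory=dict)   # student -> score

    def assess(self, student: str, quiz: Quiz, answers: dict[int, str]) -> float:
        """Score a submission and record it, as an LMS gradebook would."""
        correct = sum(answers.get(q) == a for q, a in quiz.answer_key.items())
        score = 100.0 * correct / len(quiz.answer_key)
        self.grades[student] = score
        return score

course = Course("Intro to IoT", materials=["week1-slides.pdf"])
quiz = Quiz("Sensor basics", {1: "b", 2: "d"})
print(course.assess("student42", quiz, {1: "b", 2: "a"}))  # -> 50.0
```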

5 Unified Learning

Unified Learning refers to an approach to education that integrates multiple modes of instruction and learning resources into a cohesive and personalized learning experience for students. The goal of this approach is to provide learners with a more holistic and effective education that takes advantage of the benefits of both traditional and digital learning resources. Unified Learning is a relatively new concept that has emerged in response to the changing landscape of education. With the increasing prevalence of digital technologies and the growing importance of online learning, educators and researchers have recognized the need for a more integrated approach to education that can leverage the benefits of both traditional and digital learning resources.

According to a study by [15], Unified Learning can be defined as "an approach to education that seamlessly integrates online and offline resources, creates a personalized learning environment, and promotes active learning and collaborative learning." Several academic sources have explored its potential benefits. Kirsal Ever [16] discovered that students who participated in a Unified Learning environment showed significant improvement in their academic performance compared to those who only received traditional instruction. Similarly, [17] found that unified learning can lead to increased student engagement and motivation. Another important aspect of unified learning is its potential to support personalized learning. According to a study by [18], unified learning can help learners to access a variety of resources and choose the ones that best suit their individual learning needs and preferences. This approach can also help educators to tailor their instruction to the needs of individual students, which can improve learning outcomes and student satisfaction.


Table 2 Unified learning platforms

Unified learning platform   Website
Schoology                   https://www.schoology.com/
Edmodo                      https://new.edmodo.com/
Nearpod                     https://nearpod.com/

Unified Learning is thus an educational strategy that combines many teaching modalities and learning resources into a seamless, tailored learning environment for students. Numerous academic studies support the idea, reporting better academic achievement, greater student engagement and motivation, and more individualized learning experiences (Table 2). Each platform has its own unique features and benefits, so it is important to research and evaluate them carefully to find the one that best meets specific needs and goals.

6 Virtual Classrooms

These classrooms allow students to attend classes and interact with teachers and other students from anywhere, as long as they have an internet connection. Virtual classrooms often use video conferencing tools, such as Zoom or Microsoft Teams, to facilitate communication and collaboration, and they can be integrated with an LMS. Some examples of virtual classrooms are summarized in Table 3. This is not an exhaustive list of virtual classroom platforms, and there are other platforms that are suitable for specific needs. It is important to note that all of the above-mentioned applications may be grouped into the categories described in the following section.

Table 3 Virtual classrooms

Virtual classroom   Website
Blackboard          https://www.blackboard.com/teaching-learning/collaboration-web-conferencing/virtual-classroom
WizIQ               https://www.wiziq.com/virtual-classroom/
BigBlueButton       https://bigbluebutton.org/
Google Classroom    https://classroom.google.com/u/0/h
Zoom                https://support.zoom.us/hc/en-us
Adobe Connect       https://www.adobe.com/products/adobeconnect.html


7 Cloud-Based Smart Classroom

These are accessible in the cloud, built on the cloud computing paradigm [19]. A wide range of packaged services are included with cloud computing to support the diverse demands of a single learner or a community of learners within a large organization. This gives each of them the freedom to create training programs that are tailored to their interests and needs [20].

Free Smart Classrooms: Although these smart classrooms are free, they have a limited feature set, and users must upgrade for a fee to access more sophisticated capabilities.

Open-Source Smart Classrooms: Open-source smart classrooms give educators the freedom to combine the best eLearning technologies and content while continuously customizing any of them to suit particular needs. According to [21], open source is a collection of free applications that are available to everyone and whose features can be changed to suit the needs of the user. Although there is no requirement to pay a license fee, this does not necessarily mean that it is free.

8 Education and the COVID-19 Era

With the onset of the COVID-19 era causing most businesses and organizations to resort to operating remotely, it became necessary for the field of education to evolve and augment its traditional teaching methods. Research and testing were already being done on how to incorporate more technology, especially artificial intelligence, augmented reality, and virtual reality, into the education system to improve the quality of teaching and the efficiency of the education being obtained. One of the better-known examples of this is the use of VR for experiential learning in Malaysia [22], as inspired by Kolb's Experiential Learning Cycle (ELC) [23], a learning model with a focus on allowing students to learn from concrete experiences. There has also been research into the use of machine learning classification for analysis of student motivation [24, 25]. The use of Intelligent Tutoring Systems (ITS) that provide real-time analytics on student learning and behaviour has also been examined with comprehensive experiments [26], as have other studies into the true effects that AI, AR, VR, and IoT could have on education.

However, most of the research in the field of Artificial Intelligence in Education (AIED) started to happen around 2020, right at the onset of the COVID-19 pandemic, as the need for remote work and study caused by lockdowns forced the education sector to adjust accordingly. In fact, there was more research on the topic of AI in education in the year 2020 alone than in the 10 years prior [27]. This stark increase suggests that the pandemic was indeed a catalyst for research into AIED.

With the passing of the COVID-19 era, a number of questions have arisen regarding the effectiveness of the non-traditional methods of teaching adopted during the pandemic, and whether some of these methods should be discarded, maintained, or improved.


At a glance, the presupposition is that online or distance learning is not as effective as traditional methods and should only be used as a last resort, as in the case of the COVID-19 pandemic, or as an auxiliary form of education where the traditional method is unavailable or cannot be obtained. However, this does not take into account the many other factors that affect the effectiveness of these non-traditional methods. Some of these factors include the stress and lack of focus caused by the pandemic and the restrictions of the lockdown, the ineffectiveness of some of the systems used because they were put together in a rush to meet the surge in demand, and the suddenness of the change, which left many students struggling to adjust to the new mode of operation. Apart from the factors caused by the pandemic, there are also factors that generally affect the effectiveness of online learning. Some of these have been highlighted by a measure known as the Test of Online Learning Success (ToOLS): reading and writing skills, independent learning, motivation, and computer literacy [28]. Although this study was carried out with a focus on online learning, these factors also affect the effectiveness of tools such as AR, VR, AI, and IoT. Even with this presupposition, the weight of research still supports the view that online learning, or face-to-face learning supported by AI tools, holds a lot of promise and can improve the effectiveness of teaching and the value of education.
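The machine-learning classification of student motivation mentioned earlier [24, 25] can be pictured with a minimal sketch. The features, labels, and synthetic data below are invented purely for illustration; the cited studies do not publish this model.

```python
# A minimal sketch of classifying student motivation from survey-style
# engagement features. All data here is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: logins/week, avg session minutes, forum posts/week.
X = rng.normal(loc=[5, 30, 2], scale=[2, 10, 1], size=(200, 3))
# Hypothetical label: 1 = "motivated", loosely tied to activity for the demo.
y = (X.sum(axis=1) + rng.normal(0, 5, 200) > 37).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```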

9 Artificial Intelligence in Education

Aiding education with technology has been a huge topic of interest, seeing a lot of study and research over the years. The first real results of this work were Computer-Based Training (CBT) and Computer-Aided Instruction (CAI) [29]. CBT was used to provide a structured and systematic form of education that could be created from a curriculum and reused for different students [28]. CAI, on the other hand, involves technologies that augment an already used education system, such as practice exercises, interactive learning games, or computer-based tests [30]. Helpful and effective as they are, CBT and CAI have a fixed structure and do not take into consideration the individual learning needs of each student [27]. As a result, some students get left behind while others thrive using these systems. Evidence of this can be seen in the fact that some students had a negative attitude towards online learning during the pandemic. The conclusions drawn from a separate study [4] agreed with the factors highlighted by the Test of Online Learning Success (ToOLS) and pointed them out as limitations of CBT and CAI. To tackle this issue, a new class of system known as Intelligent Tutoring Systems (ITS) was developed [30]. These systems represent the incorporation of artificial intelligence into education, with a focus on each student's learning abilities on a case-by-case basis.


9.1 Intelligent Tutoring Systems

Intelligent Tutoring Systems (ITS) are computer-based programs designed to provide personalized instruction and feedback to students. These systems use artificial intelligence and machine learning techniques to model a student's knowledge, skills, and learning preferences in order to provide customized instruction and support. ITS can be used in a variety of educational contexts, including K-12 schools, colleges and universities, and corporate training programs. They are often composed of a number of complex technologies working together, from systems for grade prediction [24] to systems for student mood classification [15].

There has been a significant amount of research on ITS over the past few decades. A meta-analysis of 122 studies on ITS conducted by [26] found that ITS can be effective in improving student learning outcomes, particularly in the areas of math, science, and computer programming. Other studies have shown that ITS can be more effective than traditional classroom instruction or human tutoring in certain contexts [17]. One key advantage of ITS is their ability to provide personalized instruction and feedback. By using data analytics and machine learning algorithms, ITS can adapt to the individual learning needs and preferences of each student, providing targeted instruction and feedback tailored to their specific strengths and weaknesses [31]. Another advantage is immediacy: unlike human tutors, ITS can provide feedback in real time, allowing students to quickly identify and correct their mistakes [29].

However, there are also challenges associated with the development and implementation of ITS. One major challenge is the development of accurate and reliable student models. ITS rely on accurate and detailed models of student knowledge and skills in order to provide personalized instruction and feedback, and the development of these models can be time-consuming and difficult [30]. Another challenge is the high cost of developing and implementing ITS. Developing an effective ITS requires significant investment in technology, software development, and instructional design, and ITS require ongoing maintenance and support in order to remain effective over time [31].
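To give a concrete sense of what such a student model can look like, the sketch below implements Bayesian Knowledge Tracing (BKT), one widely used technique for estimating skill mastery from a stream of right and wrong answers. The works cited above do not specify BKT in particular, and the slip, guess, and learn parameters here are illustrative values.

```python
# A minimal sketch of Bayesian Knowledge Tracing (BKT), a common ITS student
# model. Parameter values are illustrative, not drawn from the cited studies.
def bkt_update(p_mastery: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               learn: float = 0.15) -> float:
    """Return updated P(skill mastered) after observing one answer."""
    if correct:
        # Posterior given a correct answer (true mastery, or a lucky guess).
        num = p_mastery * (1 - slip)
        den = num + (1 - p_mastery) * guess
    else:
        # Posterior given a wrong answer (a slip, or genuine non-mastery).
        num = p_mastery * slip
        den = num + (1 - p_mastery) * (1 - guess)
    posterior = num / den
    # Chance the student learns the skill on this practice opportunity.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior belief that the student has mastered the skill
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(f"observed {'correct' if answer else 'wrong'} -> P(mastery) = {p:.2f}")
```

An ITS built around such a model can then choose the next exercise or hint based on the current mastery estimate, which is what distinguishes it from the fixed sequencing of CBT and CAI.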

9.2 Smart Classrooms: Good or Bad?

The use of smart classrooms and artificial intelligence (AI) in education has become increasingly popular in recent years. There are several potential consequences of using these technologies, both positive and negative.

Learning Outcomes: Smart classrooms and AI in education have led to improved learning outcomes for students. For example, a study conducted by the National Science Foundation found that incorporating AI into a high school biology class led to significant improvements in student learning outcomes [11].


On the other hand, based on our experience as the researchers of this study, smart classrooms and AI have not improved the performance of all students. We see students who fail to pay attention and use the available technologies just to pass their courses; the return of students to face-to-face learning has made this apparent.

Increased Personalization: The use of AI in education has also led to increased personalization of learning experiences. A study conducted by the University of Michigan found that an AI-based tutoring system improved learning outcomes and engagement for students with different learning styles [26].

Privacy and Security Concerns: The use of smart classrooms and AI in education also raises concerns about privacy and security. For example, the use of facial recognition technology in classrooms could potentially violate students' privacy rights [23].

Bias in AI: Another concern is that AI systems may perpetuate existing biases in education. For example, a study by the AI Now Institute found that many AI systems used in education reinforce existing biases against marginalized groups [28].

10 Conclusion

Although smart classrooms are undoubtedly a show of considerable technological advancement, their results have not all been met with positive feedback. The benefits of technology to education through the years have been welcome, but full dependence on, or a complete migration to, such methods has been met with skepticism. The dissatisfaction of students and parents with how the quality of education dropped as a result of dependence on online education during the pandemic is still felt to this day. However, even with this negative feedback, research and studies insist that smart classrooms would yield a net positive for the field of education. Our study of this topic leads us to believe that more research still needs to be done on the pedagogical aspect of smart classrooms, not only on the technological aspect. This would help us better understand the effects of these methods on students' learning and fine-tune these systems to make them more effective. Although the COVID era forced organisations and educational institutions to adopt many of these systems at short notice and without much study of their effectiveness and side effects, it was still helpful in providing first-hand information at a large scale about the effectiveness and limitations of these systems.


References

1. AI Now Institute. (2019). The AI Now Report 2019. Retrieved from https://ainowinstitute.org/AI_Now_2019_Report.pdf
2. Aldheleai, H. F., Ubaidullah, M., & Alammari, A. (2017). Overview of cloud-based learning management systems. International Journal of Computer Application, 41–46. https://doi.org/10.5120/ijca2017913424
3. Andriotis, N. (2018). SaaS and eLearning: Why a SaaS LMS is the best LMS.
4. Azhari, T., & Kurniawati. (2021). Students' perception on online learning during the COVID-19 pandemic. In Proceedings of the International Conference on Social Science, Political Science, and Humanities (ICoSPOLHUM 2020). https://doi.org/10.2991/assehr.k.210125.009
5. Beck, J., Stern, M., & Haugsjaa, E. (1996). Applications of AI in education. XRDS: Crossroads, The ACM Magazine for Students, 3(1), 11–15. https://doi.org/10.1145/332148.332153
6. Bedwell, W. L., & Salas, E. (2010). Computer-based training: Capitalizing on lessons learned. International Journal of Training and Development, 14(3), 239–249. https://doi.org/10.1111/j.1468-2419.2010.00355.x
7. Center for Democracy and Technology. (2019). Facial recognition in schools: A threat to privacy and safety.
8. Cobb, J. (2019). What is an LMS?
9. Dimililer, K. (2017). Use of intelligent student mood classification system (ISMCS) to achieve high quality in education. Quality & Quantity, 52(S1), 651–662. https://doi.org/10.1007/s11135-017-0644-y
10. Flinn Scientific. (2021). Sensors and data collection.
11. Graesser, A. C., Conley, M., Olney, A., & D'Mello, S. (2005). Intelligent tutoring systems. In K. J. Holyoak & R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning (pp. 437–454). Cambridge University Press.
12. Holstein, K., McLaren, B. M., & Aleven, V. (2018). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. Lecture Notes in Computer Science, 154–168. https://doi.org/10.1007/978-3-319-93843-1_12
13. Hwang, G. J., Tsai, C. C., & Yang, S. J. (2008). Criteria, strategies and research issues of context-aware ubiquitous learning. Journal of Educational Technology & Society, 11(2), 81–91.
14. Hussain, A. A., & Dimililer, K. (2021). Student grade prediction using machine learning in IoT era. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 65–81. https://doi.org/10.1007/978-3-030-69431-9_6
15. Kerr, M. S., Rynearson, K., & Kerr, M. C. (2006). Student characteristics for online learning success. The Internet and Higher Education, 9(2), 91–105. https://doi.org/10.1016/j.iheduc.2006.03.002
16. Kirsal Ever, Y., & Dimililer, K. (2017). The effectiveness of a new classification system in higher education as a new e-learning tool. Quality & Quantity, 52(S1), 573–582. https://doi.org/10.1007/s11135-017-0636-y
17. Kulik, C.-L. C., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42–78.
18. Mann, B. L. (2009). Computer-aided instruction. Wiley Encyclopedia of Computer Science and Engineering. https://doi.org/10.1002/9780470050118.ecse935
19. Mcleod, S. (2017). Kolb's learning styles and experiential learning cycle. Simply Psychology.
20. National Science Foundation. (2019). AI-enhanced learning in biology: Improving achievement and motivation for high school students.
21. National Science Teachers Association. (2021). Science and children: Sensors in the science classroom.
22. Ng, L. S., & Neo, M. (2022). Engaging indigenous learners in experiential learning: Virtual reality as a learning strategy for Malaysian Orang Asli students. In EDULEARN Proceedings. https://doi.org/10.21125/edulearn.2022.1965


23. Ogata, H., Matsuka, Y., El-Bishouty, M. M., & Yano, Y. (2009). LORAMS: Linking physical objects and videos for capturing and sharing learning experiences towards ubiquitous learning. International Journal of Mobile Learning and Organisation, 3(4), 337–350.
24. PASCO scientific. (2021). Sensors and data collection.
25. Prahani, B. K., Rizki, I. A., Jatmiko, B., Suprapto, N., & Tan, A. (2022). Artificial intelligence in education research during the last ten years: A review and bibliometric study. International Journal of Emerging Technologies in Learning (iJET), 17(08), 169–188. https://doi.org/10.3991/ijet.v17i08.29833
26. Sekeroglu, B., Dimililer, K., & Tuncal, K. (2019). La inteligencia artificial en educación: Aplicación en la evaluación del desempeño del alumno. Dilemas Contemporáneos: Educación, Política y Valores. https://doi.org/10.46377/dilemas.v28i1.1594
27. Sharma, M., & Srivastav, G. (2020). Study and review of learning management system software. In Innovations in Computer Science and Engineering. Lecture Notes in Networks and Systems (Vol. 103, pp. 373–383). https://doi.org/10.1007/978-981-15-2043-3_42
28. Shi, Y., Xie, W., Xu, G., Shi, R., Chen, E., Mao, Y., & Liu, F. (2003). The smart classroom: Merging technologies for seamless tele-education. IEEE Pervasive Computing, 2, 47–55.
29. VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.
30. VanLehn, K., Lynch, C., Schulze, K., Shapiro, J. A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., & Wintersgill, M. (2005). The Andes physics tutoring system: Lessons learned. International Journal of Artificial Intelligence in Education, 17(3), 301–331.
31. Woolf, B. P. (2009). Building intelligent interactive tutors: Student-centered strategies for revolutionizing e-learning. Morgan Kaufmann.