Expert Clouds and Applications: Proceedings of ICOECA 2023
ISBN 9819917441, 9789819917440

The book features original papers from the International Conference on Expert Clouds and Applications (ICOECA 2023), organized by R. V. College of Engineering, Bengaluru, India.


English · 890 pages · 2023


Table of contents:
Preface
Contents
Editors and Contributors
A Detailed Survey on Network Intrusion Detection in Cloud Using Different Techniques
1 Introduction
2 Literature Review
3 Performance Measures
3.1 Accuracy
3.2 Precision
3.3 Recall
3.4 Chi Square
4 Proposed Methodology
4.1 Data Collection
4.2 Data Preprocessing
4.3 Feature Engineering
4.4 Model Training
4.5 Model Evaluation
4.6 Model Deployment
4.7 Limitations/Challenges
4.8 Common Research Challenges [12, 9, 15]
4.9 Datasets
5 Conclusion
References
Adoption of Network Function Virtualization Technology in Education
1 Introduction
2 Network Function Virtualization in Research Education
2.1 Advantages of Network Function Virtualization
3 Network Function Virtualization Essentials and Considerations
3.1 Efficiency
3.2 Accordance with Information Technology
3.3 Adaptability and Creditability
3.4 Security and Other Threatening Issues
3.5 User’s View
4 Awareness Required and Challenges on Network Function Virtualization
4.1 Establishing a Mobile Virtualization Network
4.2 Establishing a Home Network
4.3 Compatibility with the Existing One
4.4 Virtualization of the Resource
4.5 Sharing of Resources
4.6 Security Issue on Open Source
4.7 Multi-vendor Environment
4.8 Supply Chain
5 Future of Network Function Virtualization
6 Conclusion
References
An Efficient Secure Cloud Image Deduplication with Weighted Min-Hash Algorithm
1 Introduction
2 Related Work
3 Proposed Methodology
3.1 System Setup
3.2 Cloud Deduplication Storage System
4 Results and Discussion
5 Conclusion
References
An Intelligence Security Architecture for Mitigating DDOS Attack in CloudIoT Environment
1 Introduction
2 Review of Literature
3 Proposed Architecture
3.1 Functionality of the Proposed Architecture
4 Security Algorithms
4.1 Algorithm for Secure User and Device Registration
4.2 Algorithm for Key Generation Using ECC
4.3 Significance of the Proposed Security Algorithms
5 Experimental and Result Analysis
5.1 Secure User and Device Registration
5.2 Number of HTTP Application Requests that the Server Has Received
5.3 Application Response Time for HTTP
6 Conclusion
References
Automated Activity Scheduling Using Heuristic Approach
1 Introduction
2 Literature Survey
3 Problem Statement
4 Related Works
5 Existing System
6 Proposed System
6.1 Algorithm Description
6.2 Pseudo Code
7 Result Analysis
7.1 Dashboard
7.2 View All Slots
7.3 Timetable
8 Future Scope
9 Conclusion
References
Automated Helpline Service Using a Two-Tier Ensemble Framework
1 Introduction
2 Literature Survey
3 Proposed System
4 User Classes
4.1 Admin Users
4.2 Application Users
5 Dataset
6 Architecture
6.1 Assistance Chatbot
6.2 Classification Model
6.3 Model Learning
7 Implementation
7.1 Assistance Chatbot
7.2 Query Processing
7.3 Classification Model
7.4 Notifying Department
7.5 Model Learning
8 User Interface
9 Results and Discussions
10 Conclusion
References
Big Data Security: Attack’s Detection Methods Using Digital Forensics
1 Introduction
2 Literature Survey
2.1 Using Analytics with Big Data
2.2 Attacks Against Big Data
2.3 Increase in Attacks on Big Data
3 Threat Detection System
3.1 Finding Attackers with Digital Forensics
3.2 Digital Forensics Tools
4 Conclusion
5 Future Process
References
Decentralized Expert System for Donation Tracking and Transparency
1 Introduction
2 Related Work
3 Proposed Work
3.1 Problem Statement
3.2 Proposed System
3.3 Methodology
4 Results
5 Conclusion
6 Future Enhancement
References
Design of Wireless Cloud-Based Transmission Intelligent Robot for Industrial Pipeline Inspection and Maintenance
1 Introduction
2 Literature Review
3 Feasibility Study
3.1 Technical Feasibility
3.2 Operational Feasibility
3.3 Power Feasibility
3.4 Economic Feasibility
4 Proposed Model and Circuit of the Pipeline360 Robot
4.1 Sensory Module
4.2 Data Transmission Module
4.3 System Control Module
5 Function Tree
6 Working of the Robot
7 Construction of the Robot
7.1 Design Description of Pipeline360
7.2 Calculation for the Spring Selection
8 Conclusions
References
Evaluation of Security of Cloud Deployment Models in Electronic Health Records
1 Introduction
2 Literature Review
3 Security Issues in Cloud Computing
4 Methodology
4.1 Indian Situation and EHR
4.2 EHR System Architecture
4.3 Deployment Models
4.4 Model of Separation
4.5 Model of Availability
4.6 Model of Migration
4.7 Model for Tunnel and Cryptography
5 Proposed Model
5.1 Cross-Authority Access and Privacy Method
5.2 Write-Record Operation
5.3 Record-Reading Procedure
6 Experimental Findings and Analysis
6.1 Security Evaluation
6.2 Performance Analysis
7 Conclusion
References
Leasing in IaaS Cloud Using Queuing Model
1 Introduction
2 Related Work
3 Analytical Modelling
4 Numerical Results
5 Conclusion
References
Resource Allocation Using MISO-NOMA Scheme with Clustering Technique
1 Introduction
2 MISO-NOMA Model
3 Energy Efficient MISO-NOMA Scheme with Clustering Technique
4 Simulation Results
5 Conclusion
References
Resource Request Handling Mechanisms for Effective VM Placements in Cloud Environment
1 Introduction
2 Related Work
3 Resource Request Handling and Scheduling
3.1 Request Handling Components
4 Queuing Model
4.1 Join-Shortest Queue (JSQ) Algorithm
5 Experimental Environment
5.1 Results
6 Conclusion and Future Work
References
Self-adaptive Hadoop Cluster for Data Analysis Applications
1 Introduction
2 Related Works
3 System
3.1 System Description
3.2 Hadoop
3.3 Hadoop Distributed File System (HDFS)
3.4 Hadoop Distributions
3.5 MapReduce
3.6 Architectural Design
4 Implementation
5 Results
6 Evaluation
7 Conclusion
References
Sustainable Cloud Computing in the Supply Chains
1 Introduction
2 Problem Statement
3 Literature Review
4 Analysis of Various Cloud-Based Supply Chain Models
5 Findings
6 Conclusion
References
Task Scheduling in Fog Assisted Cloud Environment Using Hybrid Metaheuristic Algorithm
1 Introduction
2 Related Work
3 System Model
4 The Proposed HPSO_GWO Approach
5 Results Analysis
6 Conclusions and Future Work
References
Trust Model for Cloud Using Weighted KNN Classification for Better User Access Control
1 Introduction
2 Short Literature Study
3 Proposed Methodology
3.1 Identity Management
3.2 Resource Management
3.3 Request Management
3.4 Policy Information Point
3.5 Decision and Policy Enforcement
3.6 Log Access Management (LAM)
3.7 Resource Ranking
3.8 Trust Management Module (TMM)
4 Process for Authorization
4.1 Trust Management Module (TMM)
4.2 Weighted K-Nearest Neighbor Trust Value Prediction
4.3 K-Nearest Neighbor Weighted
5 Results and Discussion
6 Conclusion and Future Work
References
A Case Study of IoT-Based Biometric Cyber Security Systems Focused on the Banking Sector
1 Introduction
2 Literature Review
3 Methodology
3.1 Data Collection
3.2 Data Processing Methods
3.3 Data Analysis and Result
4 Results and Discussion
4.1 Victimized by Banking Fraud
4.2 Awareness Banking Fraud
4.3 Feasibility of Affected by the Banking Fraudulent Activities
5 Conclusion
References
Blockchain Security Through SHA-512 Algorithm Implementation Using Python with NB-IoT Deployment in Food Supply Chain
1 Introduction
1.1 Blockchain Technology with SHA-512
2 Narrowband IoT and Blockchain Technology Are Preparing for the Digitalization of Agriculture
2.1 Using Blockchain for Supply Chain Management
3 Narrow Band-IoT (NB-IoT) in Existing 5G Technology
4 Model of Supply Chain with Blockchain
4.1 Python Proposed Flow Chart
4.2 SHA-512 Algorithm
4.3 HASH Rule to Meet: Hash Must Start with X Values of 0’s
4.4 Status of Flags in Each Entity
5 Implementation of Algorithm SHA-512
5.1 Flag Status Between Manufacturer to the Transport Company
5.2 Flag Status Between the Transport Company to the Retailer Company
5.3 Flag Status of Review of the Retailer Company
6 Conclusion
References
IOT-Based Whip-Smart Trash Bin Using LoRa WAN
1 Introduction
2 Literature Survey
3 System Model
4 Performance Evaluation
4.1 Transmitter
4.2 Receiver
5 Conclusion
References
Monitoring of Wireless Network System-Based Autonomous Farming Using IoT Protocols
1 Introduction to Agriculture and Farming Systems
2 Review of Literature
3 Proposed System
3.1 Overview
3.2 Analysis of the Proposed System
3.3 Proposed Method
3.4 Selection of IoT
3.5 Result of Thingspeak Cloud Server
3.6 Programming of COAP and MQTT Protocol
4 Conclusion
References
A Brief Survey on Enhanced Quality of Service Mechanisms in Wireless Sensor Network for Secure Data Transmission
1 Introduction
2 Literature Survey
3 Proposed Methodologies
4 Conclusion
References
A Mobile Application Model for Differently Abled Using CNN, RNN and NLP
1 Introduction
2 Literature Survey
3 Concepts Used
3.1 Convolutional Neural Network (CNN)
3.2 Recurrent Neural Network (RNN)
4 Implementation Methodology
5 Expected Results
6 Conclusion
References
A Survey Paper: On Path Planning Strategies Based on Classical and Heuristic Methods
1 Introduction
2 Classical Approaches
2.1 Sub-Goal Method
2.2 Potential Field Method
2.3 Cell Decomposition
2.4 Dijkstra’s Algorithm
2.5 Rapidly-Explore Random Tree Cell
2.6 Probabilistic Road Map
3 Heuristic Approach
3.1 Artificial Neural Network
3.2 Ant Colony Optimization
3.3 Genetic Algorithm
3.4 Bug Algorithm
4 Scope
5 Conclusion
References
Actual Problems and Analysis of Anti-avoidance Tax Measures in the EAEU and EU Countries in the Context of Digitalization of the Economy
1 Introduction
2 Materials and Methods
2.1 Descriptive Part
2.2 Research Part
3 Results & Discussion
4 Conclusion
References
An Alternative Approach to Smart Air Conditioner with Multiple Power Sources
1 Introduction
2 Technological Advancement Related to AC
3 Design and Implementation of the Proposed System
4 Discussion
5 Conclusion
References
An Analysis on Daily and Sports Activities’ Recognition Using Data Analytics and Artificial Intelligence Model
1 Introduction
2 Literature Review
3 Challenges in Existing System
4 Methods for Analysis
4.1 Feature Selection Algorithms
5 Machine Learning-Based Approaches
5.1 Logistic Regression
5.2 Naive Bayes
5.3 Support Vector Machine
5.4 Decision Tree
5.5 K-Nearest Neighbor (KNN)
5.6 Random Forest
6 Deep Learning Approach
6.1 Convolutional Neural Network
6.2 Recurrent Neural Network (RNN)
6.3 Deep Belief Networks (DBNs)
6.4 Deep Neural Networks (DNNs)
6.5 Auto-Encoder
6.6 Long Short-Term Memory
7 Discussion and Analysis
8 Conclusion
References
Analysis of the Results of Evaluation in Higher Technological Institutes of Ecuador, Using Data Mining Techniques
1 Introduction
2 Related Works
3 Materials and Methods
3.1 Data Sample
3.2 Data Preparation
3.3 Data Modeling
4 Results
5 Conclusions
References
Appointment Scheduler for VIP Using Python and Twilio
1 Introduction
2 Literature Survey
3 Proposed Work
4 Results
5 Conclusion
References
Artificial Intelligence Application for Healthcare Industry: Cases of Developed and Emerging Markets
1 Introduction
2 How Does AI Help Healthcare Industries
3 AI Healthcare Industry Applications in Different Countries
3.1 Case Study of India
3.2 Case Study of South Korea
4 Results and Discussion: Comparison of AI Healthcare Application Between India and South Korea
5 Conclusion
References
Automated ADR Analysis from Twitter Data Using N-Gram-Based Feature Extraction Methods and Supervised Learning Classification
1 Introduction
2 Literature Review
3 Methodology
3.1 Dataset Collection
3.2 Preprocessing
3.3 Feature Extraction
3.4 Supervised Classification
4 Results and Discussion
4.1 Performance Measures Parameters
4.2 Experimental Setup
5 Conclusion
References
Bayesian Algorithm for Labourer Job Performance at Home in the COVID-19 Pandemic
1 Introduction
2 Literature Review
2.1 Work Performance (WP)
2.2 Research Hypothesis and Research Model
3 Methodology
3.1 Sample Size
3.2 Bayes’ Theorem
3.3 Bayes’ Inference
3.4 Selection of the Model by the Bayesian Model Averaging
4 Results
4.1 Reliability Test
4.2 BIC Algorithm
4.3 Model Evaluation
5 Conclusions
5.1 Implications with Concerns About the COVID-19 Pandemic (CCP)
5.2 Implications for Facilities (FAC)
References
Comparative Analysis of Human Hand Gesture Recognition in Real-Time Healthcare Applications
1 Introduction
2 Related Work/Literature Survey
3 Details of Technology
3.1 Detection
3.2 Tracking
3.3 Recognition
4 Analytical and Experimental Work
4.1 Dataset
4.2 Model Training
4.3 Construction of Model
4.4 Testing Model
4.5 Model Evaluation
4.6 Applications
5 Limitations
6 Conclusion
References
Compression in Dynamic Scene Tracking and Moving Human Detection for Life-Size Telepresence
1 Introduction
2 Related Works
3 Proposed Method
3.1 Real-Time Tracking Method
3.2 Extraction Point-Cloud in Triangulation Process
3.3 Data Transmission and Compression Over Network
4 Implementation
5 Results
6 Conclusion
References
Deep Learning-Based Facial Emotion Analysis
1 Introduction
2 Related Work
3 Proposed Approaches
3.1 Dataset
3.2 Network Architecture
4 Discussions on Outcome
5 Conclusion
References
Electronic Health Records Rundown: A Novel Survey
1 Introduction
2 Literature Survey
3 Discussion
3.1 Why Electronic Health Records?
3.2 Methodology
4 Conclusion
References
ENT Pattern Recognition Using Augmented Bounding Boxes
1 Introduction
2 System Methodology
3 Results and Discussion
4 Conclusion
References
Evaluation of Bone Age by Deep Learning Based on Hand X-Rays
1 Introduction
2 Literature Survey
3 Proposed Work
3.1 VGG19
3.2 ResNet 50
4 Results and Discussion
4.1 VGG19 Results
4.2 ResNet 50
5 Conclusion
References
Image Processing-Based Presentation Control System Using Binary Logic Technique
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 System Architecture
3.2 Binary Algorithm
4 Results
5 Future Scope
6 Conclusion
References
Intelligent Fault Diagnosis in PV System—A Machine Learning Approach
1 Introduction
2 Methodology
2.1 Collection of Dataset
2.2 Training Phase
2.3 Testing Phase
3 Results and Discussion
4 Conclusion
References
Literature Survey on Empirical Analysis on Efficient Secured Data Communication in Cloud Computing Environment
1 Introduction
1.1 Research Objective
2 Literature Review
3 Secured Data Communication in Cloud Environment
3.1 A Compressive Integrity Auditing Protocol for Secure Cloud Storage
3.2 Secure Password-Protected Encryption Key for Deduplicated Cloud Storage Systems
3.3 Light Weight and Privacy-Preserving Delegable Proofs of Storage with Data Dynamic Sin Cloud Storage
4 Performance Analysis of Secured Data Communication Techniques in Cloud
4.1 Impact on Processing Time
4.2 Impact on Data Confidentiality Rate
4.3 Impact on Memory Consumption
5 Discussion and Limitations on Secured Data Communication in Cloud Environment
5.1 Future Direction
6 Conclusion
References
Multi-scale Avatars in a Shared Extended Reality Between AR and VR Users
1 Introduction
2 Related Works
3 Proposed Method
3.1 Multi-scale Avatar
3.2 Tangible Interaction Using ZapBox
3.3 Interaction in VR
4 Test Application: A Networked Maze Game
5 Conclusion
References
Pattern Recognition of Durian Foliar Diseases Using Fractal Dimension and Chaos Game
1 Introduction
1.1 The Durian Plant
2 Literature Survey
3 Proposed Algorithm
3.1 Fractal Dimension and Pattern Recognition of Durian Foliar Disease
4 Implementation
4.1 Chaos Game
5 Proposed Results
6 Conclusion
References
Pressure Ulcer Detection and Prevention Using Neural Networks
1 Introduction
2 Background
2.1 Frontend Module
2.2 Server Module
2.3 Hardware Module
2.4 Prediction Module
2.5 Prevention Module
3 Experiment Results and Discussion
4 Conclusion
References
Recreation of a Sub-pod for a Killed Pod with Optimized Containers in Kubernetes
1 Introduction
2 Related Research
3 Proposed Method
4 Design
4.1 Proposed Algorithm
4.2 To Check the Status of Container in a Pod
5 Evaluation
5.1 Experimental Set Up
6 Conclusions and Future Research
References
Review of Model-Based Techniques in Augmented Reality Occlusion Handling
1 Introduction
2 Related Works
3 The Tracking and Pipeline
4 Model-Based AR Occlusion Handling
5 Conclusion
References
Segmentation and Area Calculation of Brain Tumor Images Using K-Means Clustering and Fuzzy C-Means Clustering
1 Introduction
2 Related Work
3 Methodology
3.1 Proposed Techniques
3.2 Description of Block Diagram
4 Results and Discussion
5 Conclusion
References
Smart Agricultural Field Monitoring System
1 Introduction
2 Related Works
3 Proposed Methodology
3.1 System Architecture
3.2 Module Description
4 Implementation
4.1 Detection of Rainfall
4.2 Detection of Soil Moisture
4.3 Rain Detection
5 Results and Analysis
6 Conclusion and Future Work
References
Smart Vehicle Tracking in Harsh Condition
1 Introduction
2 Literature Review
3 Methodology
4 Result
5 Limitations
6 Future Scope
7 Result and Discussion
References
Storage Automation Using the Interplanetary File System and RFID for Authentication
1 Introduction
2 Literature Survey/Related Works
2.1 RFID
2.2 Cloud
2.3 Block Chain
2.4 IPFS
3 Proposed Methodology
3.1 Introduction
4 Implementation
4.1 Authentication Sector
4.2 Storage Sector
4.3 Web Application Sector
5 Results
6 Conclusion and Future Work
References
Storing and Accessing Medical Information Using Blockchain for Improved Security
1 Introduction
2 Literature Review
3 Proposed Methodology
4 Experimental Setup and Configurations
5 Results and Discussions
6 Conclusion
References
Systematic Literature Review on Object Detection Methods at Construction Sites
1 Introduction
2 Methods Used for Object Detection at Construction Sites
2.1 Object Detection Using Traditional Approach
2.2 Object Detection Using Modern Approach
3 Discussion and Future Scope
4 Conclusion
References
Unique Web-Based Assessment with a Secure Environment
1 Introduction
2 Literature Survey
2.1 SIETTE Model
2.2 CBTS Model
2.3 EMS Model
2.4 OEES Model
3 Proposed System
3.1 Presentation Layer
3.2 Core Module
3.3 Server
3.4 Storage Service
4 Results
4.1 Admin Input
4.2 User Login
5 Conclusion
References
A Draft Architecture for the Development of a Blockchain-Based Survey System
1 Introduction
2 Blockchain and Hyperledger
2.1 Blockchain Technology
2.2 Hyperledger
3 Related Works
4 Proposed Approach
5 Conclusions
References
Additive Congruential Kupyna Koorde Cryptographic Hash for Secured Data Storage in Cloud
1 Introduction
2 Related Works
3 Proposal Methodology
4 Registration and Key Generation
5 Data Encryption
6 Davies–Meyer Kupyna Koorde Distributed Hash-Based Data Storage
7 Decryption
8 Experimental Evaluation
9 Performance Results and Discussion
10 Impact of Data Confidentiality Rate
11 Impact of Data Integrity Rate
11.1 Impact of Computational Cost
11.2 Impact of Storage Cost
12 Conclusion
References
AMQP Protocol-Based Multilevel Security for M-commerce Transactions
1 Introduction
2 Related Work
2.1 Mobility and Fast Processing
2.2 Reachability
2.3 Low Cost of Maintenance
2.4 Characteristics of Wireless and Wired M-commerce
2.5 Issues in Cloud Computing
3 Security Algorithms
3.1 RSA Algorithm
3.2 Shor’s Algorithm
3.3 McEliece Algorithm
4 Methodology
4.1 AMQ Model Architecture
4.2 Message Flow
4.3 Queue Life Cycles
4.4 AMQP Client Architecture
5 Experimental Results
6 Conclusion
References
Decentralised Blockchain-Based Framework for Securing eVoting System
1 Introduction
2 Blockchain
2.1 How Does It Work?
2.2 Block Structure
2.3 Properties of Blockchain
3 Reasons for Using Blockchain Technology
4 Motivation and Related Work
5 Implementation Details
5.1 Design Considerations
5.2 Ethereum
5.3 Smart Contracts
5.4 Working of Blockchain Voting System
5.5 In What Way Identity Verification is Performed?
5.6 Implementation in Ethereum
5.7 Model Process
6 Conclusion
References
Digital Twins—A Futuristic Trend in Data Science, Its Scope, Importance, and Applications
1 Introduction
2 Related Work
3 Digital Twin-Meaning, Characteristics, and Attributes
3.1 Digital Twin-Meaning
3.2 Characteristics of DT
3.3 Attributes of DT
4 Digital Twin Versus Other Technologies
4.1 Simulator Versus Digital Twin
4.2 Digital Twin Versus Digital Shadow
5 Underlying Technologies in DT
6 Features of Digital Twins
7 The Architecture of the Digital Twin
8 Steps Involved in the Creation of the Digital Twins
9 General Working of Digital Twins
10 A Case Study—Digital Twins in Restaurant Management
10.1 Components Required to Implement Digital Twins in Restaurants
10.2 Architecture of the Digital Twins for Restaurant Management
11 Conclusion
References
Loyalty Points Exchange System Using Blockchain
1 Introduction
2 Literature Review
3 Methodology
3.1 Smart Contract
3.2 Metamask Wallet
3.3 Goerli Faucet—ETH Blockchain
4 Results
4.1 Add User
4.2 Inter-Transfer
4.3 Intra-Transfer
4.4 Reliability
5 Conclusion
References
Performance Analysis of Various Machine Learning-Based Algorithms on Cybersecurity Approaches
1 Introduction
2 Background Information
3 Analysis of Various ML Models for Cybersecurity
4 Results and Discussions
5 Conclusion
References
Privacy-Preserving Data Publishing Models, Challenges, Applications, and Issues
1 Introduction
2 Identified Security and Privacy Research Challenges
3 Modeling Adversary's Background Knowledge Privacy-Preserving Data Publishing Model
3.1 Skyline Privacy
3.2 Privacy-MaxEnt
3.3 Skyline (B, t)-Privacy
4 Applications of PPDP
4.1 Cloud PPDP
4.2 E-Health PPDP
4.3 Social Network PPDP
4.4 Agriculture PPDP
4.5 Smart City PPDP
5 Anonymization Methods and Challenges
6 Limitations and Future Directions in PPDP
7 Conclusion
References
Recent Web Application Attacks’ Impacts, and Detection Techniques–A Detailed Survey
1 Introduction
1.1 Web-Based Attacks
1.2 Web Applications Working
1.3 Online Attacks
1.4 Types of Web Attacks
1.5 Protecting Against the Site Attacks
2 Related Study
3 Survey of Existing Works
4 Research Gaps
5 Conclusion and Future Directions
References
Security Concerns in Intelligent Transport Systems
1 Introduction
1.1 ITS Architecture
2 Literature Review
3 Discussion
4 Conclusion and Future Work
References
Verify De-Duplication Using Blockchain on Data with Smart Contract Techniques for Detecting Errors on Cloud
1 Introduction
2 Literature Survey
3 Proposed System
4 Methodologies Used
5 System Architecture
6 Result Analysis
7 Security Analysis
8 Conclusion
References
Author Index

Lecture Notes in Networks and Systems 673

I. Jeena Jacob
Selvanayaki Kolandapalayam Shanmugam
Ivan Izonin
Editors

Expert Clouds and Applications
Proceedings of ICOECA 2023

Lecture Notes in Networks and Systems Volume 673

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

I. Jeena Jacob · Selvanayaki Kolandapalayam Shanmugam · Ivan Izonin, Editors

Expert Clouds and Applications
Proceedings of ICOECA 2023

Editors
I. Jeena Jacob, GITAM University, Bengaluru, India
Selvanayaki Kolandapalayam Shanmugam, Department of Mathematics and Computer Science, Ashland University, Ashland, OH, USA
Ivan Izonin, Department of Artificial Intelligence, Lviv Polytechnic National University, Lviv, Ukraine

ISSN 2367-3370 ISSN 2367-3389 (electronic)
Lecture Notes in Networks and Systems
ISBN 978-981-99-1744-0 ISBN 978-981-99-1745-7 (eBook)
https://doi.org/10.1007/978-981-99-1745-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

We are honored to dedicate the proceedings of ICOECA 2023 to all the participants, organizers and editors of ICOECA 2023.

Preface

It is a great privilege for us to present the proceedings of the First International Conference on Expert Clouds and Applications [ICOECA 2023] to the readers, delegates and authors of the conference event. We greatly hope that all readers will find it useful and resourceful for their future research endeavors.

The International Conference on Expert Clouds and Applications [ICOECA 2023] was held in Bengaluru, India, on 9–10 February 2023, organized by R. V. College of Engineering, Bengaluru, India, with the aim of providing a platform for researchers, academicians and industrialists to discuss state-of-the-art research opportunities, challenges and issues in intelligent computing applications. The revolutionizing scope and rapid development of computing technologies will continue to create new research questions and challenges, which in turn create the need to share new research ideas and encourage significant awareness in this futuristic research domain.

The support received and the research enthusiasm for ICOECA 2023 truly exceeded our expectations, casting a limelight on a new research landscape for intelligent computing applications, and we are delighted to present the proceedings. The response from researchers, from India and overseas alike, was overwhelming: we received 328 manuscripts from prestigious universities and institutions across the globe, of which 64 were shortlisted based on the review outcomes and conference capacity constraints. We would like to express our deep gratitude and commendation to the entire conference review team, who helped us select the high-quality research works included in the ICOECA 2023 proceedings published by Springer. We would also like to extend our appreciation to the organizing committee members for their continual support. Finally, we are pleased to thank Springer for publishing the proceedings of ICOECA 2023 and maximizing the reach of the research manuscripts across the globe.


We wish all the authors and participants of the conference event the best of success in their future research endeavors.

Dr. I. Jeena Jacob
Department of Computer Science and Engineering, GITAM University, Bengaluru, India

Dr. Selvanayaki Kolandapalayam Shanmugam
Associate Professor, Department of Mathematics and Computer Science, Concordia University Chicago, River Forest, IL, USA

Dr. Ivan Izonin
Department of Artificial Intelligence, Lviv Polytechnic National University, Lviv, Ukraine

Contents

A Detailed Survey on Network Intrusion Detection in Cloud Using Different Techniques (p. 1)
A. L. V. N. Manikantha Sudarshan, Majeti SaiRajKumar, M. Rakesh, T. Sathwik, K. Swathi, and G. Raja

Adoption of Network Function Virtualization Technology in Education (p. 19)
S. Jinapriya and B. H. Chandrashekhar

An Efficient Secure Cloud Image Deduplication with Weighted Min-Hash Algorithm (p. 33)
R. Dhivya and N. Shanmugapriya

An Intelligence Security Architecture for Mitigating DDOS Attack in CloudIoT Environment (p. 47)
E. Helen Parimala, K. Sureka, S. Suganya, Y. Sunil Raj, and L. Lucase

Automated Activity Scheduling Using Heuristic Approach (p. 63)
Bhoomil Dayani, Raj Busa, Divyesh Butani, and Nirav Bhatt

Automated Helpline Service Using a Two-Tier Ensemble Framework (p. 77)
K. Sai Jatin, K. S. Sai ShriKrishnaa, Samyukta Shashidharan, Sathvik Bandloor, and K. Saritha

Big Data Security: Attack's Detection Methods Using Digital Forensics (p. 97)
Ch. Charan, A. Pradeepthi, J. Jyotsna, U. Lalith, Radhika Rani Chintala, and Divya Vadlamudi

Decentralized Expert System for Donation Tracking and Transparency (p. 107)
Swati Jadhav, Siddhant Nawale, Vaishnav Loya, Varun Gujarathi, and Siddhesh Wani

Design of Wireless Cloud-Based Transmission Intelligent Robot for Industrial Pipeline Inspection and Maintenance (p. 125)
Joshuva Arockia Dhanraj, Gokula Vishnu Kirti Damodaran, C. R. Balaji, Swati Jha, Ramkumar Venkatasamy, Janaki Raman Srinivasan, and Pavan Kalyan Lingampally

Evaluation of Security of Cloud Deployment Models in Electronic Health Records (p. 143)
Nomula Ashok, Kumbha Prasadarao, and T. Judgi

Leasing in IaaS Cloud Using Queuing Model (p. 159)
Bibhuti Bhusan Dash, Utpal Chandra De, Manoj Ranjan Mishra, Rabinarayan Satapathy, Sibananda Behera, Namita Panda, and Sudhansu Shekhar Patra

Resource Allocation Using MISO-NOMA Scheme with Clustering Technique (p. 169)
Kasula Raghu and Puttha Chandrasekhar Reddy

Resource Request Handling Mechanisms for Effective VM Placements in Cloud Environment (p. 181)
T. Thiruvenkadam, A. Muthusamy, and M. Vijayakumar

Self-adaptive Hadoop Cluster for Data Analysis Applications (p. 195)
Luchmee Devi Reesaul, Aatish Chiniah, and Humaïra Baichoo

Sustainable Cloud Computing in the Supply Chains (p. 211)
Manish Shashi and Puja Shashi

Task Scheduling in Fog Assisted Cloud Environment Using Hybrid Metaheuristic Algorithm (p. 223)
Kaustuva Chandra Dev, Bibhuti Bhusan Dash, Utpal Chandra De, Parthasarathi Pattnayak, Rabinarayan Satapthy, and Sudhansu Shekhar Patra

Trust Model for Cloud Using Weighted KNN Classification for Better User Access Control (p. 237)
Manikandan Rajagopal, S. Ramkumar, R. Gobinath, and J. Thimmiraja

A Case Study of IoT-Based Biometric Cyber Security Systems Focused on the Banking Sector (p. 249)
Sanjoy Krishna Mondol, Weining Tang, and Sakib Al Hasan

Blockchain Security Through SHA-512 Algorithm Implementation Using Python with NB-IoT Deployment in Food Supply Chain (p. 263)
Chand Pasha Mohammed and Shakti Raj Chopra

IOT-Based Whip-Smart Trash Bin Using LoRa WAN (p. 277)
D. Dhinakaran, S. M. Udhaya Sankar, J. Ananya, and S. A. Roshnee

Monitoring of Wireless Network System-Based Autonomous Farming Using IoT Protocols (p. 289)
D. Faridha Banu, N. Kumaresan, K. Geetha devi, S. Priyanka, G. Swarna Shree, A. Roshan, and S. Meivel

A Brief Survey on Enhanced Quality of Service Mechanisms in Wireless Sensor Network for Secure Data Transmission (p. 303)
Pavan Vamsi Mohan Movva and Radhika Rani Chintala

A Mobile Application Model for Differently Abled Using CNN, RNN and NLP (p. 315)
P. Rachana, B. Rajalakshmi, Sweta Leena, and B. Sunandhita

A Survey Paper: On Path Planning Strategies Based on Classical and Heuristic Methods (p. 329)
Tryambak Kumar Ojha and Subir Kumar Das

Actual Problems and Analysis of Anti-avoidance Tax Measures in the EAEU and EU Countries in the Context of Digitalization of the Economy (p. 343)
Irina A. Zhuravleva, Natalia V. Nazarova, and Natalia V. Levoshich

An Alternative Approach to Smart Air Conditioner with Multiple Power Sources (p. 359)
Md. Rawshan Habib, W. M. H. Nimsara Warnasuriya, Md. Mobusshar Islam, Md. Apu Ahmed, Sibaji Roy, Md. Shahnewaz Tanvir, and Md. Rashedul Arefin

An Analysis on Daily and Sports Activities' Recognition Using Data Analytics and Artificial Intelligence Model (p. 371)
S. Maheswari and V. Radha

Analysis of the Results of Evaluation in Higher Technological Institutes of Ecuador, Using Data Mining Techniques (p. 387)
Diego Cale, Verónica Chimbo, María Cristina Moreira, and Yeferson Torres Berru

Appointment Scheduler for VIP Using Python and Twilio (p. 403)
N. Hari Krishna, V. Sai Hari Krishna, Shaik Nazeer Basha, Vasu Deva Polineni, and Akhil Vinjam

Artificial Intelligence Application for Healthcare Industry: Cases of Developed and Emerging Markets (p. 419)
Olga Shvetsova, Mohammed Feroz, Sergey Salkutsan, and Aleksei Efimov

Automated ADR Analysis from Twitter Data Using N-Gram-Based Feature Extraction Methods and Supervised Learning Classification (p. 433)
K. Priya and A. Anbarasi

Bayesian Algorithm for Labourer Job Performance at Home in the COVID-19 Pandemic (p. 447)
Bui Huy Khoi and Nguyen Thi Ngan

Comparative Analysis of Human Hand Gesture Recognition in Real-Time Healthcare Applications (p. 461)
Archita Dhande, Shamla Mantri, and Himangi Pande

Compression in Dynamic Scene Tracking and Moving Human Detection for Life-Size Telepresence (p. 477)
Fazliaty Edora Fadzli and Ajune Wanis Ismail

Deep Learning-Based Facial Emotion Analysis (p. 491)
M. Mohamed Iqbal, M. M. Venkata Chalapathi, S. Aarif Ahamed, and S. Durai

Electronic Health Records Rundown: A Novel Survey (p. 501)
A. Sree Padmapriya, B. Sabiha Sulthana, A. Tejaswini, P. Snehitha, K. B. V. Brahma Rao, and Dinesh Kumar Anguraj

ENT Pattern Recognition Using Augmented Bounding Boxes (p. 511)
P. Radha, V. Neethidevan, and S. Kruthika

Evaluation of Bone Age by Deep Learning Based on Hand X-Rays (p. 523)
R. G. V. Prasanna, Mahammad Firose Shaik, L. V. Sastry, Ch. Gopi Sahithi, J. Jagadeesh, and Inakoti Ramesh Raja

Image Processing-Based Presentation Control System Using Binary Logic Technique (p. 535)
Sheela Chinchmalatpure, Harshal Ingale, Rushikesh Jadhao, Ojasvi Ghule, and Madhura Ingole

Intelligent Fault Diagnosis in PV System—A Machine Learning Approach (p. 547)
R. Priyadarshini, P. S. Manoharan, and M. Niveditha

Literature Survey on Empirical Analysis on Efficient Secured Data Communication in Cloud Computing Environment (p. 559)
P. V. Shakira

Multi-scale Avatars in a Shared Extended Reality Between AR and VR Users (p. 573)
Shafina Abd Karim Ishigaki and Ajune Wanis Ismail

Pattern Recognition of Durian Foliar Diseases Using Fractal Dimension and Chaos Game (p. 589)
Mia Torres-Dela Cruz, V. Murugananthan, R. Srinivasan, M. Kavitha, and R. Kavitha

Pressure Ulcer Detection and Prevention Using Neural Networks (p. 605)
A. Durga Bhavani, S Likith, Khushwinder Singh, and A Nitya Dyuthi

Recreation of a Sub-pod for a Killed Pod with Optimized Containers in Kubernetes (p. 619)
Indrani Vasireddy, Rajeev Wankar, and Raghavendra Rao Chillarige

Review of Model-Based Techniques in Augmented Reality Occlusion Handling (p. 629)
Muhammad Anwar Ahmad, Norhaida Mohd Suaib, and Ajune Wanis Ismail

Segmentation and Area Calculation of Brain Tumor Images Using K-Means Clustering and Fuzzy C-Means Clustering (p. 643)
P. Likhitha Saveri, Sandeep Kumar, and Manisha Bharti

Smart Agricultural Field Monitoring System (p. 655)
N. Sabiyath Fatima, N. Noor Alleema, V. Muthupriya, S. Revathi, Geriga Akanksha, and K. Balaji

Smart Vehicle Tracking in Harsh Condition (p. 669)
Rakhi Bharadwaj, Pritam Shinde, Prasad Shelke, Nikhil Shinde, and Aditya Shirsath

Storage Automation Using the Interplanetary File System and RFID for Authentication (p. 683)
Paul John, Anirudh Manoj, Prathik Arun, Shanoo Raghav, and K. Saritha

Storing and Accessing Medical Information Using Blockchain for Improved Security (p. 697)
G. Manonmani and K. Ponmozhi

Systematic Literature Review on Object Detection Methods at Construction Sites (p. 709)
M. N. Shrigandhi and S. R. Gengaje

Unique Web-Based Assessment with a Secure Environment (p. 725)
S. K. Sharmila, M. Nikhila, J. Lakshmi Prasanna, M. Lavanya, and S. Bhavana

A Draft Architecture for the Development of a Blockchain-Based Survey System (p. 737)
Turgut Yıldız and Ahmet Sayar

Additive Congruential Kupyna Koorde Cryptographic Hash for Secured Data Storage in Cloud (p. 747)
P. V. Shakira and Laxmi Raja

AMQP Protocol-Based Multilevel Security for M-commerce Transactions (p. 765)
Ramana Solleti, N. Bhaskar, and M. V. Ramana Murthy

Decentralised Blockchain-Based Framework for Securing eVoting System (p. 781)
Devarsh Patel, Krushang Patel, Pranay Patel, Mrugendrasinh Rahevar, Martin Parmar, and Ritesh Patel

Digital Twins—A Futuristic Trend in Data Science, Its Scope, Importance, and Applications (p. 801)
M. T. Vasumathi, Aurangjeb Khan, Manju Sadasivan, and Umadevi Ramamoorthy

Loyalty Points Exchange System Using Blockchain (p. 819)
Swati Jadhav, Shruti Singh, Akash Sinha, Vishal Sirvi, and Shreyansh Srivastava

Performance Analysis of Various Machine Learning-Based Algorithms on Cybersecurity Approaches (p. 833)
Boggarapu Srinivasulu and S. L. Aruna Rao

Privacy-Preserving Data Publishing Models, Challenges, Applications, and Issues (p. 845)
J. Jayapradha and M. Prakash

Recent Web Application Attacks' Impacts, and Detection Techniques–A Detailed Survey (p. 863)
B. Hariharan and S. Sathya Priya

Security Concerns in Intelligent Transport Systems (p. 873)
Jasmeet Kour, Pooja Sharma, and Anuj Mahajan

Verify De-Duplication Using Blockchain on Data with Smart Contract Techniques for Detecting Errors on Cloud (p. 885)
Vishal Satish Walunj, Praveen Gupta, and Thaksen J. Parvat

Author Index (p. 897)

Editors and Contributors

About the Editors

I. Jeena Jacob is working as Professor in the Computer Science and Engineering department at GITAM University, Bangalore, India. She actively participates in the development of the research field by conducting international conferences, workshops, and seminars. She has published many articles in refereed journals and has guest edited an issue of the International Journal of Mobile Learning and Organisation. Her research interests include mobile learning and computing.

Selvanayaki Kolandapalayam Shanmugam is currently working as Associate Professor in the Department of Mathematics and Computer Science, Ashland University, Ashland, OH 44805. She has more than 18 years of experience lecturing on theoretical subjects and conducting experimental and instructional procedures for laboratory subjects. She has presented numerous research articles at national and international conferences and in journals. Her research interests include image processing, video processing, soft computing techniques, intelligent computing, web application development, object-oriented programming languages such as C++ and Java, scripting languages such as VBScript and JavaScript, data science, algorithms, data warehousing and data mining, neural networks, genetic algorithms, software engineering, software project management, software quality assurance, enterprise resource planning, information systems, and database management systems.

Ivan Izonin (IEEE Member) received the M.Sc. degree in computer science in 2011, the M.Sc. degree in economic cybernetics in 2012, and the Ph.D. degree in artificial intelligence in 2016. He is currently Associate Professor at the Department of Artificial Intelligence, Lviv Polytechnic National University, Ukraine. His main research interests include computational intelligence, high-speed neural-like systems, non-iterative machine learning algorithms, and ensemble learning.


Contributors

S. Aarif Ahamed, Department of Computer Science and Engineering, Presidency University, Bangalore, Karnataka, India
Shafina Abd Karim Ishigaki, Mixed and Virtual Reality Research Lab, ViCubeLab, Faculty of Computing, Universiti Teknologi Malaysia, Johor, Malaysia
Muhammad Anwar Ahmad, ViCubeLab, Faculty of Computing, Universiti Teknologi Malaysia, Johor, Malaysia
Geriga Akanksha, Department of Computer Science and Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India
J. Ananya, Department of Information Technology, Velammal Institute of Technology, Chennai, India
A. Anbarasi, Department of Computer Science, Govt. Arts and Science College for Women, Puliyakulam, Coimbatore, Tamil Nadu, India
Dinesh Kumar Anguraj, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India
Md. Apu Ahmed, Chemnitz University of Technology, Chemnitz, Germany
Md. Rashedul Arefin, Ahsanullah University of Science & Technology, Dhaka, Bangladesh
Joshuva Arockia Dhanraj, Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
Prathik Arun, Department of Computer Science and Engineering, PES University, Bangalore, India
Nomula Ashok, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
Humaïra Baichoo, CITS, University of Mauritius, Reduit, Mauritius
C. R. Balaji, Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
K. Balaji, Department of Computer Science and Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India
Sathvik Bandloor, Computer Science and Engineering, PES University, Bangalore, Karnataka, India
Shaik Nazeer Basha, Department of CSE, KKR & KSR Institute of Technology and Sciences, Guntur, Andhra Pradesh, India


Sibananda Behera, School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, India
Yeferson Torres Berru, Instituto Superior Tecnológico Sudamericano, Loja, Ecuador; Universidad Internacional del Ecuador, Loja, Ecuador
Rakhi Bharadwaj, Department of Computer Science and Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Manisha Bharti, Department of Electronics and Communication Engineering, National Institute of Technology, Delhi, India
N. Bhaskar, Department of CSE, CBIT, Hyderabad, Telangana, India
Nirav Bhatt, Smt. Kundanben Dinsha Patel Department of Information Technology, CSPIT, Charotar University of Science and Technology, Anand, Gujarat, India
S. Bhavana, Department of Information Technology, Vignan's Nirula Institute of Technology and Science for Women, Pedapalakaluru, Guntur, Andhra Pradesh, India
A. Durga Bhavani, Department of Computer Science and Engineering, BMS Institute of Technology & Management, Bangalore, India
K. B. V. Brahma Rao, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India
Raj Busa, Smt. Kundanben Dinsha Patel Department of Information Technology, CSPIT, Charotar University of Science and Technology, Anand, Gujarat, India
Divyesh Butani, Smt. Kundanben Dinsha Patel Department of Information Technology, CSPIT, Charotar University of Science and Technology, Anand, Gujarat, India
Diego Cale, Instituto Superior Universitario Tecnológico del Azuay, Cuenca, Ecuador
B. H. Chandrashekhar, RV College of Engineering, Bengaluru, Karnataka, India
Ch. Charan, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
Raghavendra Rao Chillarige, School of Computer and Information Sciences, University of Hyderabad, Hyderabad, Telangana, India
Verónica Chimbo, Instituto Superior Universitario Tecnológico del Azuay, Cuenca, Ecuador
Sheela Chinchmalatpure, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Aatish Chiniah, Department of Digital Technologies, University of Mauritius, Reduit, Mauritius


Radhika Rani Chintala, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
Shakti Raj Chopra, Lovely Professional University, Phagwara, Punjab, India
Mia Torres-Dela Cruz, Systems Technology Institute, Baguio City, Philippines
Gokula Vishnu Kirti Damodaran, Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
Subir Kumar Das, Department of CSE, Supreme Knowledge Foundation, Mankundu, West Bengal, India
Bibhuti Bhusan Dash, School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India
Bhoomil Dayani, Smt. Kundanben Dinsha Patel Department of Information Technology, CSPIT, Charotar University of Science and Technology, Anand, Gujarat, India
Utpal Chandra De, School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India
Kaustuva Chandra Dev, Trident Academy of Creative Technology, Bhubaneswar, India
Archita Dhande, MIT World Peace University, Pune, India
D. Dhinakaran, Department of Information Technology, Velammal Institute of Technology, Chennai, India
R. Dhivya, Department of Computer Science, Dr. SNS Rajalakshmi College of Arts and Science, Coimbatore, India
S. Durai, Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
Aleksei Efimov, Peter the Great St. Petersburg Polytechnic University, St Petersburg, Russia
Fazliaty Edora Fadzli, Mixed and Virtual Reality Research Lab, ViCubeLab, Faculty of Computing, Universiti Teknologi Malaysia, Skudai, Johor, Malaysia
D. Faridha Banu, Department of ECE, Sri Eshwar College of Engineering, Coimbatore, India
N. Sabiyath Fatima, Department of Computer Science and Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India
Mohammed Feroz, Vignan's Foundation for Science, Technology and Research, Guntur, AP, India


K. Geetha devi, Department of ECE, Sri Eshwar College of Engineering, Coimbatore, India
S. R. Gengaje, Walchand Institute of Technology, Solapur, India
Ojasvi Ghule, Vishwakarma Institute of Technology, Pune, Maharashtra, India
R. Gobinath, Department of Computer Science, School of Sciences, CHRIST (Deemed to be University), Bangalore, India
Varun Gujarathi, Vishwakarma Institute of Technology, Pune, India
Praveen Gupta, Computer Engineering Department, Chhatrapati Shivaji Maharaj University, New Panvel Mumbai, India
Md. Rawshan Habib, Murdoch University, Murdoch, Australia
B. Hariharan, Department of Computer Science and Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
Sakib Al Hasan, School of Information Engineering, Huzhou Normal University, Huzhou, China
Harshal Ingale, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Madhura Ingole, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Ajune Wanis Ismail, Mixed and Virtual Reality Research Lab, ViCubeLab, Faculty of Computing, Universiti Teknologi Malaysia, Johor, Malaysia
Rushikesh Jadhao, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Swati Jadhav, Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, India
J. Jagadeesh, EIE Department, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India
J. Jayapradha, Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
Swati Jha, Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
S. Jinapriya, Isteer Technologies, Bengaluru, Karnataka, India
Paul John, Department of Computer Science and Engineering, PES University, Bangalore, India
T. Judgi, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
J. Jyotsna, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India


M. Kavitha, Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R & D Institute of Science and Technology, Chennai, Tamil Nadu, India
R. Kavitha, Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R & D Institute of Science and Technology, Chennai, Tamil Nadu, India
Aurangjeb Khan, CMR University, Bengaluru, India
Bui Huy Khoi, Industrial University of Ho Chi Minh City, Ho Chi Minh City, Vietnam
Jasmeet Kour, School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, Jammu and Kashmir, India
N. Hari Krishna, Department of CSE, KKR & KSR Institute of Technology and Sciences, Guntur, Andhra Pradesh, India
V. Sai Hari Krishna, Department of CSE, KKR & KSR Institute of Technology and Sciences, Guntur, Andhra Pradesh, India
S. Kruthika, Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, India
Sandeep Kumar, Department of Electronics and Communication Engineering, National Institute of Technology, Delhi, India
N. Kumaresan, Department of ECE, Anna University Regional Campus, Coimbatore, India
J. Lakshmi Prasanna, Department of Information Technology, Vignan's Nirula Institute of Technology and Science for Women, Pedapalakaluru, Guntur, Andhra Pradesh, India
U. Lalith, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
M. Lavanya, Department of Information Technology, Vignan's Nirula Institute of Technology and Science for Women, Pedapalakaluru, Guntur, Andhra Pradesh, India
Sweta Leena, Department of Computer Science and Engineering, New Horizon College of Engineering, Bangalore, India
Natalia V. Levoshich, Financial University Under the Government of the Russian Federation, Moscow, Russia
S Likith, Department of Computer Science and Engineering, BMS Institute of Technology & Management, Bangalore, India
Pavan Kalyan Lingampally, Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India


Vaishnav Loya Vishwakarma Institute of Technology, Pune, India
L. Lucase St. Xaviers Catholic College of Engineering, Nagercoil, Tamil Nadu, India
Anuj Mahajan School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, Jammu and Kashmir, India
S. Maheswari Department of Computer Science, Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore, India
A. L. V. N. Manikantha Sudarshan Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
P. S. Manoharan Thiagarajar College of Engineering, Madurai, Tamil Nadu, India
Anirudh Manoj Department of Computer Science and Engineering, PES University, Bangalore, India
G. Manonmani Department of Information Technology, Hajee Karutha Rowther Howdia College, Tamil Nadu, Uthamapalayam, India
Shamla Mantri MIT World Peace University, Pune, India
S. Meivel Department of ECE, M. Kumarasamy College of Engineering, Karur, India
Manoj Ranjan Mishra School of Computer Applications, KIIT Deemed to Be University, Bhubaneswar, India
Md. Mobusshar Islam American International University—Bangladesh, Dhaka, Bangladesh
M. Mohamed Iqbal School of Computer Science and Engineering, VIT-AP University, Amaravati, Andhra Pradesh, India
Chand Pasha Mohammed Lovely Professional University, Phagwara, Punjab, India
Sanjoy Krishna Mondol School of Information Engineering, Huzhou University, Huzhou, China
María Cristina Moreira Instituto Superior Tecnológico Sudamericano, Loja, Ecuador
Pavan Vamsi Mohan Movva Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India
V. Murugananthan School of Computing, Asia Pacific University of Technology & Innovation (APU), Taman Teknologi Malaysia, Kuala Lumpur, Malaysia
V. Muthupriya Department of Computer Science and Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India


A. Muthusamy Assistant Professor and Head, Department of Computer Science, Faculty of Science & Humanities, SRM Institute of Science and Technology, Tiruchirappalli, Tamil Nadu, India
Siddhant Nawale Vishwakarma Institute of Technology, Pune, India
Natalia V. Nazarova Financial University Under the Government of the Russian Federation, Moscow, Russia
V. Neethidevan Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, India
Nguyen Thi Ngan Industrial University of Ho Chi Minh City, Ho Chi Minh City, Vietnam
M. Nikhila Department of Information Technology, Vignan’s Nirula Institute of Technology and Science for Women, Pedapalakaluru, Guntur, Andhra Pradesh, India
W. M. H. Nimsara Warnasuriya Edith Cowan University, Joondalup, Australia
A Nitya Dyuthi Department of Computer Science and Engineering, BMS Institute of Technology & Management, Bangalore, India
M. Niveditha Thiagarajar College of Engineering, Madurai, Tamil Nadu, India
N. Noor Alleema Department of Information Technology, Veltech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
Tryambak Kumar Ojha Department of CSE, Supreme Knowledge Foundation, Mankundu, West Bengal, India
Namita Panda School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, India
Himangi Pande MIT World Peace University, Pune, India
E. Helen Parimala SSM College of Arts and Science, Dindigul, Tamil Nadu, India
Martin Parmar Faculty of Technology and Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology, Changa, Anand, Gujarat, India
Thaksen J. Parvat Vidyavardhini’s College of Engineering & Technology, Palghar, MS, India
Devarsh Patel Faculty of Technology and Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology, Changa, Anand, Gujarat, India
Krushang Patel Faculty of Technology and Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology, Changa, Anand, Gujarat, India


Pranay Patel Faculty of Technology and Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology, Changa, Anand, Gujarat, India
Ritesh Patel Faculty of Technology and Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology, Changa, Anand, Gujarat, India
Sudhansu Shekhar Patra School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India
Parthasarathi Pattnayak School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India
Vasu Deva Polineni Department of CSE, KKR & KSR Institute of Technology and Sciences, Guntur, Andhra Pradesh, India
K. Ponmozhi Department of Computer Applications, Kalasalingam Academy of Research and Education, Tamil Nadu, Krishnakoil, India
A. Pradeepthi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
M. Prakash Department of Data Science and Business Systems, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
Kumbha Prasadarao Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
R. G. V. Prasanna EIE Department, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India
K. Priya Department of Computer Science, AVP College of Arts and Science, Tirupur, Tamil Nadu, India
R. Priyadarshini Thiagarajar College of Engineering, Madurai, Tamil Nadu, India
S. Priyanka Department of ECE, Sri Eshwar College of Engineering, Coimbatore, India
P. Rachana Department of Computer Science and Engineering, New Horizon College of Engineering, Bangalore, India
P. Radha Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, India
V. Radha Department of Computer Science, Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore, India
Shanoo Raghav Department of Computer Science and Engineering, PES University, Bangalore, India
Kasula Raghu Research Scholar, Department of ECE, JNTUH, Hyderabad, India


Mrugendrasinh Rahevar Faculty of Technology and Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology, Changa, Anand, Gujarat, India
G. Raja Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
Inakoti Ramesh Raja ECE Department, Aditya College of Engineering and Technology, Surampalem, Kakinada, AP, India
Laxmi Raja Karpagam Academy of Higher Education, Tamil Nadu, Coimbatore, India
Manikandan Rajagopal Lean Operations and Systems, School of Business and Management, CHRIST (Deemed to be University), Bangalore, India
B. Rajalakshmi Department of Computer Science and Engineering, New Horizon College of Engineering, Bangalore, India
M. Rakesh Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
Umadevi Ramamoorthy CMR University, Bengaluru, India
M. V. Ramana Murthy Department of Statistics, Osmania University, Osmania, India
S. Ramkumar Department of Computer Science, School of Sciences, CHRIST (Deemed to be University), Bangalore, India
S. L. Aruna Rao Department of IT, BVRIT HYDERABAD College of Engineering for Women, Hyderabad-90, India
Puttha Chandrasekhar Reddy Research Scholar, Department of ECE, JNTUH, Hyderabad, India
Luchmee Devi Reesaul Department of Digital Technologies, University of Mauritius, Reduit, Mauritius
S. Revathi Department of Computer Science and Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India
A. Roshan Department of ECE, Sri Eshwar College of Engineering, Coimbatore, India
S. A. Roshnee Department of Information Technology, Velammal Institute of Technology, Chennai, India
Sibaji Roy Chemnitz University of Technology, Chemnitz, Germany
B. Sabiha Sulthana Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India
Manju Sadasivan CMR University, Bengaluru, India


Ch. Gopi Sahithi EIE Department, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India
K. Sai Jatin Computer Science and Engineering, PES University, Bangalore, Karnataka, India
K. S. Sai ShriKrishnaa Computer Science and Engineering, PES University, Bangalore, Karnataka, India
Majeti SaiRajKumar Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
Sergey Salkutsan Peter the Great St. Petersburg Polytechnic University, St Petersburg, Russia
K. Saritha Department of Computer Science and Engineering, PES University, Bangalore, Karnataka, India
L. V. Sastry EIE Department, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India
Rabinarayan Satapathy Faculty of Emerging Technologies, Sri Sri University, Cuttack, India
T. Sathwik Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
S. Sathya Priya Department of Computer Science and Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
P. Likhitha Saveri Department of Electronics and Communication Engineering, National Institute of Technology, Delhi, India
Ahmet Sayar Department of Computer Engineering, Kocaeli University, Kocaeli, Türkiye
Md. Shahnewaz Tanvir South Dakota School of Mines and Technology, Rapid City, USA
Mahammad Firose Shaik EIE Department, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India
P. V. Shakira Karpagam Academy of Higher Education, Coimbatore, Tamil Nadu, India
N. Shanmugapriya Department of Computer Applications, Dr. SNS Rajalakshmi College of Arts and Science, Coimbatore, India
Pooja Sharma School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, Jammu and Kashmir, India


S. K. Sharmila Department of Information Technology, Vignan’s Nirula Institute of Technology and Science for Women, Pedapalakaluru, Guntur, Andhra Pradesh, India
Manish Shashi Walden University, Minneapolis, MN, USA
Puja Shashi Oxford College of Engineering, Bangalore, India
Samyukta Shashidharan Computer Science and Engineering, PES University, Bangalore, Karnataka, India
Prasad Shelke Department of Computer Science and Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Nikhil Shinde Department of Computer Science and Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Pritam Shinde Department of Computer Science and Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, India
Aditya Shirsath Department of Computer Science and Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, India
M. N. Shrigandhi Walchand Institute of Technology, Solapur, India
Olga Shvetsova Department of Industrial Management, Korea University of Technology and Education, Cheonan, South Korea
Khushwinder Singh Department of Computer Science and Engineering, BMS Institute of Technology & Management, Bangalore, India
Shruti Singh Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, India
Akash Sinha Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, India
Vishal Sirvi Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, India
P. Snehitha Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India
Ramana Solleti Department of Computer Science, Bhavan’s Vivekananda College, Sainikpuri, Secunderabad, Telangana, India
A. Sree Padmapriya Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India
Janaki Raman Srinivasan Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India


R. Srinivasan Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R & D, Institute of Science and Technology, Chennai, Tamil Nadu, India
Boggarapu Srinivasulu Department of IT, BVRIT HYDERABAD College of Engineering for Women, Hyderabad-90, India
Shreyansh Srivastava Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, India
Norhaida Mohd Suaib UTM Big Data Center, Ibnu Sina Institute of Scientific and Industrial Research, Universiti Teknologi Malaysia, Johor, Malaysia
S. Suganya SSM Institute of Engineering and Technology, Dindigul, Tamil Nadu, India
B. Sunandhita Department of Computer Science and Engineering, New Horizon College of Engineering, Bangalore, India
Y. Sunil Raj St. Joseph’s College (Autonomous), Trichy, Tamil Nadu, India
K. Sureka SSM Institute of Engineering and Technology, Dindigul, Tamil Nadu, India
G. Swarna Shree Department of ECE, Sri Eshwar College of Engineering, Coimbatore, India
K. Swathi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
Weining Tang School of Information Engineering, Huzhou University, Huzhou, China
A. Tejaswini Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India
J. Thimmiraja Department of Information Technology, Dr. Mahalingam College of Engineering and Technology, Pollachi, Tamil Nadu, India
T. Thiruvenkadam Associate Professor, School of Computer Science and IT, Jain (Deemed-to-be University), Bengaluru, Karnataka, India
S. M. Udhaya Sankar Department of Information Technology, Velammal Institute of Technology, Chennai, India
Divya Vadlamudi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
Indrani Vasireddy School of Computer and Information Sciences, University of Hyderabad, Hyderabad, Telangana, India
M. T. Vasumathi CMR University, Bengaluru, India


M. M. Venkata Chalapathi School of Computer Science and Engineering, VITAP University, Amaravati, Andhra Pradesh, India
Ramkumar Venkatasamy Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
M. Vijayakumar Professor, School of Computer Science, VET Institute of Arts and Science College, Tindal, Erode, Tamil Nadu, India
Akhil Vinjam Department of CSE, KKR & KSR Institute of Technology and Sciences, Guntur, Andhra Pradesh, India
Vishal Satish Walunj Department of Computer Engineering, Chhatrapati Shivaji Maharaj University, New Panvel Mumbai, India
Siddhesh Wani Vishwakarma Institute of Technology, Pune, India
Rajeev Wankar School of Computer and Information Sciences, University of Hyderabad, Hyderabad, Telangana, India
Turgut Yıldız Department of Computer Engineering, Kocaeli University, Kocaeli, Türkiye
Irina A. Zhuravleva Financial University Under the Government of the Russian Federation, Moscow, Russia

A Detailed Survey on Network Intrusion Detection in Cloud Using Different Techniques A. L. V. N. Manikantha Sudarshan, Majeti SaiRajKumar, M. Rakesh, T. Sathwik, K. Swathi, and G. Raja

Abstract An intrusion detection system (IDS) is a form of computer security created to monitor the flow of network data and identify unusual or potentially malicious activity. IDSs can use machine learning algorithms such as ANNs or SVMs to analyze data and identify potential threats. These systems can improve network security by alerting administrators to potential threats in real time. However, IDSs also have limitations, such as being affected by the quality of the dataset used to train the model, potentially providing limited context or explanation for detected anomalies, and being limited to analyzing network traffic data rather than other types of data. This study investigates different IDS-related datasets, identifies which of the algorithms used gives high accuracy, and explores different types of IDS and their limitations.

Keywords Cloud computing · Machine learning · Deep learning · Big data · Intrusion detection system

1 Introduction

Nowadays, cloud computing has become widespread because it allows users to access different types of services from anywhere over the Internet. It refers to the ability to access and use computing resources, such as servers, databases, networking, software, analytics, and intelligence, over the Internet on an as-needed basis. This service allows businesses to rent access to these resources rather than purchasing and maintaining them on their own, typically through a subscription model.

A. L. V. N. Manikantha Sudarshan (B) · M. SaiRajKumar · M. Rakesh · T. Sathwik · K. Swathi · G. Raja
Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
e-mail: [email protected]
M. SaiRajKumar e-mail: [email protected]
K. Swathi e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_1


By using cloud computing, businesses can take advantage of computing resources without incurring the costs and maintenance responsibilities of owning and operating them. According to a study by the Pew Institute, 71% of industry experts and technology decision makers believed that Internet-based applications would be the main form of work for most people by 2020. However, the transition from traditional computing to cloud computing has also introduced new security challenges due to the distributed nature of cloud computing. These challenges include various attacks, such as DoS and DDoS.

A. Usage of Cloud Computing

The introduction of the modern cloud can be traced back to 2002, when Amazon Web Services (AWS) launched its public cloud. At that time, there were few other competitors in the field, and the full potential of the cloud, including its ability to scale and adjust resources as needed (elasticity), had not yet been fully realized or demonstrated. Despite this, the cloud provided a quick fix to the management and technical challenges faced by small and medium-sized businesses and organizations. It allowed these entities to rent computing resources from trusted providers rather than maintaining their own servers, and thus avoid the costs and responsibilities of in-house maintenance. In the following decade, the use of the cloud grew as more websites and workflows began to utilize it, and it evolved through two distinct generations [1, 14, 23].

B. Evolution of Clouds

During the first generation of cloud development, the cloud was traditionally defined as a centralized architecture in data centers that provided large amounts of computing and storage. This model allowed application owners to use a two-tiered architecture in which the cloud vendor hosted the backend, and users delivered requests from mobile and web applications to the cloud. The popularity of this approach during the first generation of cloud development helped establish the cloud as a widely used solution for hosting and managing applications. During the second generation of cloud development, the range of services offered and the competition among providers increased significantly. Resource usage on the cloud became more transparent and more trusted. Besides the pay-as-you-go pricing model, another model, spot bidding for resources, was also put in place. Real-time streaming services began computing data on the cloud. The use of microservices for cloud application development, supported by the introduction of container services on the cloud in 2014, increased with the adoption of DevOps. The concept of hybrid clouds, which combine public and private clouds, also emerged [1, 3].

C. Major Issues in Cloud Computing [4]

1. Privacy: There is a risk that the host company may access and potentially alter user data without permission, whether intentionally or not.


2. Compliance: In order to comply with the various regulations related to data and hosting, such as the Federal Information Security Management Act (FISMA) and the Health Insurance Portability and Accountability Act (HIPAA), users might need to adopt particular deployment approaches, which can be costly.
3. Security: Cloud-based services depend on third parties for storage and security. While it may be tempting to assume that a company operating this cloud-based technology will adequately defend and protect user information, especially when its services are offered at low cost or free, there is a risk that the company may share user data with others. This poses a real risk to the security of the cloud.
4. Sustainability: This issue centers on lessening the negative environmental effects of cloud computing, particularly the impact of servers on the environment. To address it, countries with favorable conditions are seeking to attract cloud computing data centers due to their access to natural cooling and renewable electricity sources.

D. Attacks on Cloud: Cloud computing systems, which rely on remote servers to store and process data and applications, are vulnerable to various kinds of cyberattacks. These attacks can include DoS attacks that disrupt the system’s availability, data breaches that allow unauthorized access to sensitive data, malware infections that compromise security, insider attacks from employees or contractors, and misconfigured systems that are more vulnerable to attack. To protect against these threats, organizations using cloud computing should implement robust security measures, such as regular updates, user authentication, and data encryption.

E. Methods to Prevent Attacks on Cloud: There are several steps organizations can take to prevent attacks on their cloud computing systems, including implementing strong security measures, monitoring activity, using encryption, training employees, choosing reputable cloud providers, using security tools, and conducting regular security assessments. By taking these precautions, organizations can significantly reduce the risk of an attack on their cloud computing systems.

F. Cloud Network Security: Cloud network security is critically important for securing applications, databases, and IT resources within enterprise cloud environments, in addition to the traffic between cloud deployments and the enterprise’s internal network and on-premises data centers. It is a fundamental aspect of cloud security and is necessary for securing the resources and information deployed in the cloud.

G. Importance of Cloud Network Security: To ensure the protection of their resources as they adopt cloud-based architecture, companies must implement security measures that align with corporate guidelines. Perimeter-based defenses are not effective for protecting cloud-based infrastructure, and the security tools offered by most cloud vendors do not meet the security needs of enterprises. Cloud network security solutions bridge this basic security gap in the cloud by allowing organizations to maintain the same level of visibility and protection as in their own environments, even as network boundaries become more diffuse.


This is crucial for organizations to fulfill their responsibilities under the cloud shared-responsibility model and to ensure corporate cybersecurity and regulatory compliance.

H. Use of Big Data in Cloud Security: Big data gathered from sources such as networks, computers, sensors, and cloud systems provides system administrators and analysts with detailed insight into vulnerabilities and cyber threats. This enables them to create better security frameworks and solutions to mitigate these threats. As a result, big data analytics is increasingly used to improve the effectiveness of cybersecurity solutions. Data warehousing and analytics tools can be used to store, organize, and evaluate abundant data from sources such as network traffic logs, system logs, and security event logs, to identify trends and patterns that may signal a security threat or vulnerability. Artificial intelligence and machine learning algorithms can be trained on datasets of benign and malicious activity to learn the characteristics of each and to detect and classify new activity based on those characteristics, improving the efficiency and effectiveness of cloud security. However, it is important to carefully design and implement big data-based security systems so that they scale and perform efficiently as the volume and complexity of data grow [5, 6, 3].

I. IDS: An IDS is a security tool specifically designed to monitor network activity for signs of security threats or policy violations. It continuously analyzes network traffic and system logs to identify suspicious behavior or anomalies that may indicate an attempted intrusion or attack.

• Types of IDS: There are two primary categories of IDS: those that operate on a network and those that are host-based. Network-based IDSs monitor and analyze network traffic, while host-based IDSs monitor activity on individual host systems. Both types can be used to detect a broad range of security threats, including malware infections, unauthorized access or activity, and network intrusions [7].

J. Implementation of IDS Using Different Systems: There are many approaches and technologies that can be used to implement an IDS, including rule-based systems, signature-based systems, and machine learning-based systems; a small signature-matching sketch follows this paragraph. Choosing the right IDS for a particular environment depends on the specific security needs and constraints of the organization. There are several benefits to using an IDS, including the ability to detect attacks and intrusions that might otherwise go unnoticed, the ability to alert administrators to potential threats in real time, and the ability to gather data on security breaches for use in improving security measures. However, IDS also has some limitations. For example, it may generate a high number of false positives, or alerts for non-threatening activity. It may also be vulnerable to evasion techniques, such as packet fragmentation, that make it challenging for the IDS to accurately detect an attack. Overall, IDS is an important tool for enhancing network security and for detecting and responding to potential threats. However, it should be used as part of a comprehensive security strategy that includes other measures, such as firewalls and access controls, to protect against unauthorized access and attacks [4].
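As a toy illustration of the signature-based approach mentioned above, the sketch below scans a raw payload for known byte patterns. The signature names and patterns are hypothetical; a real IDS such as Snort uses a far richer rule language.

```python
# Minimal signature-based check (illustrative only; signatures are invented).
SIGNATURES = {
    "sql_injection": b"' OR 1=1 --",
    "path_traversal": b"../../etc/passwd",
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of all known signatures found in a raw payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    pkt = b"GET /index.php?id=' OR 1=1 -- HTTP/1.1"
    print(match_signatures(pkt))  # ['sql_injection']
```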


K. Role of Machine Learning in IDS: Machine learning is a subdivision of AI that uses algorithms to automatically learn and improve from data without being explicitly programmed. In the context of IDS, machine learning algorithms can be trained on large datasets of benign and malicious activity to learn the characteristics of each. This allows the IDS to accurately detect and classify new activity based on patterns learned from the training data. There are several approaches to using machine learning in IDS, including [8, 9, 10]:

L. Supervised Learning: In this approach, the IDS is trained on a labeled dataset where the correct classification (i.e., benign or malicious) is known for each sample. The IDS uses this training data to learn the characteristics of each class and can then classify new data based on those characteristics [9, 10].
1. Regression: An analytical method that enables understanding of how a dependent variable is related to one or more independent variables [11].
2. Classification: In classification, a machine learning method is used to predict a categorical response variable on the basis of many independent variables, allowing data points to be assigned to predefined categories or classes. There are multiple classification algorithms, including support vector machines, logistic regression, random forests, and decision trees; the appropriate algorithm is chosen on the basis of the characteristics of the data and the classification goal [11].
3. Feature Selection: The objective of feature selection is to determine which features have the most significant effect on the target outcome and to eliminate or diminish the impact of irrelevant or redundant features, which can also enhance computational efficiency and reduce overfitting. Techniques that can be used for feature selection include statistical methods, wrapper methods, and embedded methods [4].

M. Unsupervised Learning: In this approach, the IDS is not given labeled data and must learn to classify data based on patterns and characteristics that it discovers on its own. This can be useful for detecting novel or previously unknown threats [8, 9].
1. Clustering: A machine learning approach used to group data points into clusters based on their similarities. It is an unsupervised algorithm, meaning it does not require labeled data [12, 5].
2. K-means Clustering: This method divides a dataset into k groups based on the distance between the data points and the cluster centroids. It begins by randomly selecting k initial centroids and then iteratively reassigns the data points to the closest centroid. This process continues until the clusters are stable and the centroids no longer move. K-means clustering is fast and simple, but it may not perform well on non-linearly distributed data or in the presence of outliers. The algorithm is commonly used in fields such as biology, finance, and marketing and is useful for exploring the structure of a dataset and identifying patterns and relationships in the data [12, 5]. A minimal example follows.
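The following sketch clusters a handful of invented flow records with scikit-learn (assumed installed); the feature names and values are made up for the example.

```python
# Illustrative k-means clustering of toy flow records.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy flow features: [duration_s, bytes_sent, packets]
X = np.array([
    [0.2, 300, 4], [0.3, 350, 5], [0.25, 320, 4],          # normal-looking flows
    [90.0, 8_000_000, 60_000], [85.0, 7_500_000, 58_000],  # heavy flows
])

X_scaled = StandardScaler().fit_transform(X)  # scale so no feature dominates
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
print(km.labels_)  # e.g. [0 0 0 1 1]: the heavy flows fall in their own cluster
```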

N. Hierarchical Clustering: A machine learning technique used to group data points into clusters based on their similarities. It is an unsupervised algorithm, meaning it does not require labeled records. The dendrogram, a tree-like structure, presents the data systematically based on the similarities between the data points. It is commonly used in fields such as biology, finance, and marketing [4].

O. Association: Association is a machine learning method for finding connections between items in a dataset. Market basket analysis frequently employs it to identify items that are often bought together. There are various types of association rules. The Apriori algorithm is typically used to identify association rules by iteratively generating and pruning candidate rules based on support and confidence thresholds [13].

P. Principal Component Analysis (PCA): PCA is a widely used technique in machine learning and data analysis for reducing the complexity of high-dimensional data. It transforms a dataset with many features (dimensions) into a new dataset with a smaller number of features that still captures the most important information from the original dataset. While PCA can be effective in simplifying the data, it is important to keep in mind that it may also result in some loss of information from the original dataset [14].
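A minimal PCA sketch, assuming scikit-learn is available; the synthetic 4-feature matrix stands in for real traffic features.

```python
# Project 4-dimensional records onto 2 principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))        # stand-in for 4 traffic features

pca = PCA(n_components=2).fit(X)
X_reduced = pca.transform(X)         # shape (100, 2)
print(pca.explained_variance_ratio_) # share of variance each component keeps
```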

Q. Semi-supervised Learning (S-sl): In semi-supervised learning, the IDS is given a combination of labeled and unlabeled data, allowing it to learn from both known and unknown samples. This can be useful for improving the accuracy of the IDS in situations where labeled data is scarce [11, 8].

R. Role of Deep Learning in IDS: Deep learning is a type of machine learning that uses artificial neural networks (ANNs) with multiple layers to learn from and improve upon a dataset without the need for explicit programming. In the context of IDS, these algorithms can be trained on large datasets of both normal and malicious activity to learn complex patterns and features that may be difficult for traditional machine learning algorithms to identify [15]. There are various approaches to using deep learning in IDS, such as convolutional neural networks (CNNs) for analyzing and classifying images, recurrent neural networks (RNNs) for analyzing sequential data like log files or network traffic, and autoencoders for identifying unusual or anomalous activity by comparing reconstructed data to the original input data. While these algorithms can greatly enhance the accuracy and efficiency of IDS by learning complex patterns and features, they may also be computationally intensive and require specialized hardware. It is essential to carefully design and validate deep learning-based IDS systems to ensure their effectiveness and lack of bias [16, 17].
1. Convolutional Neural Networks: A deep learning algorithm employed in image categorization and recognition tasks. CNNs are based on the structure of the visual cortex and excel at processing and analyzing visual data. They have multiple layers of artificial neurons, each layer performing a specific task: the input layer receives raw image data, the convolutional layers apply filters to detect features such as edges, corners, and patterns, and the fully connected layers classify the image based on the detected features [16, 17].
2. Long Short-Term Memory (LSTM): LSTM is a category of RNN used for predicting sequences of data. LSTMs excel at modeling long-term dependencies in data and are therefore effective for tasks like language translation and speech recognition. They are commonly used in natural language processing, speech recognition, and time-series forecasting and are powerful tools for making predictions, identifying patterns, and understanding relationships in data [16, 17].

S. Role of Big Data in IDS: Big data refers to datasets that are so large and complex that conventional data processing tools and methods cannot analyze them efficiently. In the context of IDS, big data can refer to the large volumes of network traffic, log files, and other data sources that need to be analyzed in order to detect and prevent cyber threats. There are several ways to use big data in IDS, including data warehousing, stream processing, and Hadoop. Data warehousing involves storing and organizing large datasets for fast and efficient querying and analysis, while stream processing involves real-time analytics to process and analyze data as it is generated. Big data analytics can be an effective tool for improving the accuracy and efficiency of IDS, but it is important to carefully design and implement big data-based IDS systems so that they scale and perform efficiently as data volume and complexity increase [11, 14, 3].
1. MapReduce: A framework for processing huge datasets in parallel across a cluster of computers. It uses map and reduce functions to divide the work into smaller chunks and process the data in parallel.
2. Hadoop: Open-source software that stores and processes large datasets in a distributed environment. It is based on MapReduce and can scale from a single server to thousands of machines [18].
3. Spark: An open-source distributed processing framework for large-scale data that is faster and more flexible than MapReduce. It can handle a variety of data types, including streaming data, and supports machine learning algorithms. A small sketch of the map/reduce pattern follows.
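The following sketch illustrates the map/reduce pattern on a toy log using PySpark RDDs; it assumes a local Spark installation, and the two-field log format is invented for the example.

```python
# Count event types in a log with PySpark (MapReduce pattern).
from pyspark import SparkContext

sc = SparkContext("local[*]", "ids-log-count")
lines = sc.parallelize([
    "ALLOW 10.0.0.1", "DENY 10.0.0.9", "DENY 10.0.0.9", "ALLOW 10.0.0.2",
])
# Map each line to (event_type, 1), then reduce by key to sum the counts.
counts = lines.map(lambda l: (l.split()[0], 1)).reduceByKey(lambda a, b: a + b)
print(dict(counts.collect()))  # {'ALLOW': 2, 'DENY': 2}
sc.stop()
```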

2 Literature Review

Table 1 lists the details gathered from the surveyed references.


Table 1 References (Name: Overview)

Swathi et al. [12]: This paper studies the performance of an FkNN classifier for a NIDS in a cloud environment. The CICIDS2017 dataset, which has 78 features, is used for the evaluation. A framework is introduced for executing the FkNN classifier and is compared with a traditional model on two criteria: performance measures (accuracy) and computational time. The results show high accuracy and low computational time.
Peng et al. [5]: This paper proposes a new method called PMBKM to further improve efficiency. The algorithm preprocesses the data by converting strings to numerical values and normalizing the data, reduces the dimensionality of the processed records using PCA, and then applies mini-batch k-means for clustering. Experiments showed that PMBKM was more effective than existing models and suitable for use in an IDS in a big data environment.

Othman et al. [11]: This paper introduces a Spark-Chi-SVM model for IDS. The Chi-Square selector is used for feature selection, and an intrusion detection model is built using an SVM classifier on a big data platform. The model is trained and tested on an existing dataset. The training time of Spark-Chi-SVM is 10.79 s and its prediction time is 1.21 s, versus 38.91 s training time and 0.20 s prediction time for a plain SVM classifier. The results show that the model performs better, reduces training time, and is good at handling large amounts of data.

Nassif et al. [14]: This method combines IG and PCA with a classifier based on SVM, IBK, and MLP. Existing datasets were used to check the performance of this hybrid method. A comparative analysis shows that it achieves a better classification detection rate, false alarm rate, and accuracy than many existing approaches. The accuracy and detection rate on NSL-KDD with IG-PCA are 98.24 and 98.2; on ISCX-2012 with IG-PCA they are 99.011 and 99.1.

Peng et al. [6]: This paper addresses IDS using fog computing, which reduces unnecessary communication between the cloud center and users. A system based on a decision tree is proposed and tested not only on 10% of the dataset but on the full dataset, with effective results; the detection time of each procedure is also compared. In terms of accuracy the decision tree may not be the best choice, but the computation time taken by the system is acceptable.

Natesan et al. [18]: This paper proposes an algorithm for effective classification and feature selection in IDS. These approaches have been shown to be faster and more scalable than traditional sequential methods on large datasets, and they can minimize memory requirements and computational complexity for IDS. A parallel variant of the binary bat algorithm was developed and has a significant impact on reducing the memory requirements and other complexities of the classifier. A Naïve Bayes model was used to speed up the attack detection rate.


Ali et al. [1]: This paper introduces a new IDS for cloud computing environments, where traditional network- and host-based IDSs may not be effective. The IDS employs machine learning and artificial intelligence algorithms for effective intrusion detection. The fast learning network (FLN) algorithm is used to increase the speed and accuracy of the IDS and to lower false alarm rates. However, both the current IDS and the FLN algorithm have limitations, including the unbalanced dataset of the IDS and randomly selected parameters that do not provide optimal solutions for the system.

Ali et al. [8]: IDSs that depend on machine learning algorithms have shown better performance than traditional IDSs. These systems are necessary to protect networks from online threats that can compromise their availability and integrity. An IDS is a powerful software or hardware tool that monitors computer networks to detect intrusions.

Ali et al. [9]: This paper proposes hybrid models that combine ML algorithms with optimization algorithms. These hybrid models have demonstrated improved IDS performance compared with single-algorithm approaches and are extensively used to improve IDS performance. The PSO metaheuristic was used to optimize an extreme learning machine (ELM) model on the NSL-KDD dataset; the resulting PSO-ELM model showed improved testing accuracy compared with a basic model.

Singh et al. [2]: This paper puts forward a cloud IDS utilizing ensemble learning techniques and a voting mechanism. The model was tested using the well-known CICIDS 2017 dataset and the CloudSim simulator, achieving a remarkable accuracy of 97.24%.

3 Performance Measures

In assessing the performance of ML models, accuracy, precision, and recall are key metrics to consider. These metrics are frequently used in classification tasks, where the aim is to predict the label or class of an input.

3.1 Accuracy

Accuracy is the percentage of correct predictions made by a model, calculated by dividing the number of correct predictions by the total number of predictions. For example, if a model makes 100 predictions and 70 of them are correct, its accuracy is 70% [12].

$$\text{Accuracy} = \frac{\text{True Positive} + \text{True Negative}}{\text{True Positive} + \text{True Negative} + \text{False Positive} + \text{False Negative}} \tag{1}$$


3.2 Precision

Precision is the proportion of positive predictions that are actually correct, derived by dividing the number of true positive predictions by the total number of positive predictions made [12].

$$\text{Precision} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Positive}} \tag{2}$$

3.3 Recall

Recall is the percentage of actual positive cases that are correctly predicted, calculated by dividing the number of true positive predictions by the total number of positive occurrences in the dataset [12].

$$\text{Recall} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Negative}} \tag{3}$$

It is essential to recognize that accuracy, precision, and recall may not always be in alignment. For instance, a model that has a high accuracy rate may not necessarily have high precision or recall, and the opposite can also be true. When deciding which metric to optimize for, it is crucial to consider the specific needs of your application.
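Equations (1)-(3) can be computed directly, for example with scikit-learn; the labels below are hypothetical (1 = attack, 0 = benign).

```python
# Accuracy, precision, and recall on toy IDS labels.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(accuracy_score(y_true, y_pred))   # (TP+TN)/total -> 0.75
print(precision_score(y_true, y_pred))  # TP/(TP+FP)    -> 0.75
print(recall_score(y_true, y_pred))     # TP/(TP+FN)    -> 0.75
```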

3.4 Chi Square The chi-square test is a statistical method used to compare observed and expected frequencies in a sample and determine whether there is a significant difference between them. In order to perform a chi-square test, you must have both observed frequencies (the actual number of times each category appears in the sample) and expected frequencies (the number of times each category would be expected to appear if the null hypothesis were true) [19].
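In practice, the chi-square statistic is often used to score features against the class label, as in the following sketch (scikit-learn assumed; chi2 requires non-negative feature values, and the toy counts are invented).

```python
# Chi-square feature scoring with scikit-learn.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

X = np.array([[5, 100, 1], [3, 120, 0], [50, 10, 9], [60, 8, 8]])  # toy counts
y = np.array([0, 0, 1, 1])                                          # 1 = attack

selector = SelectKBest(chi2, k=2).fit(X, y)
print(selector.scores_)                    # chi-square statistic per feature
print(selector.get_support(indices=True))  # indices of the 2 best features
```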

4 Proposed Methodology See Fig. 1.


Fig. 1 General framework

4.1 Data Collection

The first step is to collect and prepare the dataset, as shown in Fig. 1, for training and testing with the machine learning algorithm. This may involve collecting data from various sources, such as network traffic logs, system logs, and security event logs.

4.2 Data Preprocessing

The collected data is usually preprocessed to remove noise and inconsistencies and to prepare it for analysis. This may involve cleaning the data, normalizing it, and transforming it into the format required by the machine learning algorithm; a short sketch follows the list.

1. Imputing missing values
2. Identifying and removing outliers
3. Scaling features
4. Selecting features


5. Transforming data.
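A compact sketch of steps 1-3 with pandas and scikit-learn; the column names and outlier threshold are hypothetical.

```python
# Toy preprocessing pipeline: impute, drop an outlier, scale.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({"duration": [0.2, None, 90.0], "bytes": [300, 350, 8_000_000]})

# 1. Impute the missing duration with the column median.
df["duration"] = SimpleImputer(strategy="median").fit_transform(df[["duration"]]).ravel()
# 2. Drop an obvious outlier (threshold invented for the example).
df = df[df["bytes"] < 1_000_000].copy()
# 3. Scale both features into [0, 1].
df[["duration", "bytes"]] = MinMaxScaler().fit_transform(df[["duration", "bytes"]])
print(df)
```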

4.3 Feature Engineering

In this step, the data is transformed into a set of features that the machine learning algorithm will use to learn the characteristics of each class (i.e., benign or malicious activity).

4.4 Model Training The machine learning algorithm is then trained on the preprocessed and transformed data using a suitable training algorithm.

4.5 Model Evaluation

After the model is trained, it is typically evaluated on a separate dataset to assess its performance.
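A minimal end-to-end sketch of the training and evaluation steps, with random data standing in for a preprocessed IDS dataset (scikit-learn assumed; the random forest is one possible choice, not the method prescribed by the surveyed papers).

```python
# Train on one split, evaluate on a held-out split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # 10 engineered features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic benign/attack label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # precision/recall per class
```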

4.6 Model Deployment

If the model performs well on the test dataset, it is deployed in a production environment to detect and classify new activity.

4.7 Limitations/Challenges 1. Anomaly detection-based IDS: Anomaly detection can be compromised by poor data quality, incorrect assumptions about the patterns or distribution of the data, a lack of context for the detected anomalies, and an inability to identify completely new or unknown anomalies. Additionally, some algorithms may only consider one feature at a time, which can restrict their ability to detect anomalies involving multiple variables. False positives, where normal behavior is mistakenly identified as anomalous, may also occur, leading to unnecessary alerts and confusion. Finally, anomaly detection using IDS is limited to analyzing network traffic data [11, 13].


Table 2 Types of alerts and their description [20]

S. No | Alert type     | Description
1     | False Negative | A false negative alert signifies that the IDS has failed to identify a genuine threat
2     | True Negative  | A true negative alert indicates that the IDS has correctly identified normal behavior and not raised an alert
3     | True Positive  | A true positive alert signifies that the IDS has correctly identified a genuine threat
4     | False Positive | A false positive alert is generated when the IDS mistakes normal behavior for a threat

2. ANN-based IDS: The accuracy of ANN-based IDSs can be compromised by poor-quality data, which can lead to poor performance or to false positives and negatives [9]. One limitation of IDSs that use ANNs as the underlying machine learning algorithm is the lack of context or explanation for the anomalies they detect, which can lead to unnecessary alerts and confusion for network administrators [16, 18].
3. SVM-based IDS: The accuracy of SVM-based IDSs depends on the quality of the dataset chosen to train the model. If the dataset is noisy or contains errors, the performance of the IDS may be negatively affected, potentially resulting in false positives or negatives [11]. One limitation is the lack of context or explanation for detected anomalies, which can make it difficult to determine the appropriate actions to take. Additionally, SVM-based IDSs may only be able to identify anomalies similar to those seen in the past [8, 10]. This can lead to unnecessary alerts being generated, causing confusion and frustration for network administrators. The types of alerts and their descriptions are listed in Table 2.

4.8 Common Research Challenges [12, 9, 15]

In the realm of intrusion detection, researchers have been working toward improving the accuracy and efficiency of models for many years. Despite the advancements, however, some research difficulties persist. One significant challenge is the imbalance in the data distribution of intrusion detection datasets: these datasets often have a disproportionate number of normal instances compared with abnormal instances, making models less effective at detecting intrusions. Attempts have been made to resolve this issue through techniques such as oversampling, undersampling, and cost-sensitive learning, but the problem of class imbalance still prevails.


Another persistent issue is the ever-evolving nature of intrusions. Intrusion techniques and methods are continually changing, making it imperative to regularly update intrusion detection models to keep up. This requires constant monitoring and updating, which can be both time-consuming and resource-intensive. Additionally, the high dimensionality of intrusion detection data remains a persistent research problem. Intrusion detection datasets often include a large number of features, many of which may be irrelevant or redundant, leading to overfitting and reduced accuracy of the models. Techniques such as feature selection and dimensionality reduction have been used to tackle this issue, but finding the optimal set of features remains a challenge. Finally, the interpretability of intrusion detection models remains a significant research challenge. These models are often complex and difficult to understand, making it challenging for practitioners to comprehend why a particular intrusion was detected. Efforts have been made to develop more interpretable models, such as decision trees and rule-based systems, but interpretability remains a crucial area of research. In conclusion, despite progress in the field of intrusion detection, there are still persistent research problems that need to be addressed. Addressing these challenges will require continued innovation and collaboration between researchers and practitioners.

4.9 Datasets

1. NSL-KDD dataset [4]: The NSL-KDD dataset helps to differentiate various IDS models. It is an improved version of the original KDD’99 dataset, whose statistical examination revealed a high percentage of redundant records. The dataset contains 43 features, of which 41 relate to the traffic input and the remaining two represent labels. DoS attacks aim to disrupt traffic flow to and from the target system, probe or surveillance attacks seek to gather information from a network, and U2R and R2L attacks aim to gain local access to a remote machine. The attributes are listed in Fig. 2.
2. KDD Cup 99 dataset [7, 4]: The aim of this dataset is to support building an intrusion detector, a predictive model designed to identify and distinguish between malicious connections and legitimate connections. The dataset has been modified from the existing work and contains a label for each connection indicating whether it is a normal connection or an attack of a given type (shown in Figs. 3 and 4).
3. CICDS-2020 dataset [15, 22]: The CICDS-2020 dataset consists of network traffic data generated by simulating various cyberattacks on a computer system. CIC and CSE created this dataset for the research and development of IDSs, making it very useful for researchers and developers working on them.
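As a sketch of working with such datasets, the following assumes an NSL-KDD-style CSV with no header row in which the 42nd column holds the class label; the file path and column positions should be adjusted to the actual file layout.

```python
# Load an NSL-KDD style file and inspect its class distribution.
import pandas as pd

df = pd.read_csv("KDDTrain+.txt", header=None)  # NSL-KDD ships without a header
X = df.iloc[:, :41]              # 41 traffic features (assumed layout)
y = df.iloc[:, 41]               # attack/normal label (assumed position)
print(y.value_counts().head())   # shows how skewed the classes are
```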


Fig. 2 Attributes of dataset [4]

Fig. 3 Feature names and its types [7]

Fig. 4 Description of features [7]


4. CSE-CIC-IDS2018 [21]: The CSE-CIC-IDS2018 dataset is a collection of network traffic data generated through the simulation of various kinds of cyber assaults on a computer system. CIC and CSE created this dataset for the research and development of intrusion detection systems (IDSs).
5. UNSW dataset [7]: The UNSW-NB15 dataset is a collection of network traffic data created by simulating various cyberattacks on a computer system. It was developed by the University of New South Wales (UNSW) and is intended for use in the research and development of intrusion detection systems (IDSs). It includes a variety of attack types, including DoS, DDoS, and web attacks. Researchers and developers working on IDSs may find the UNSW-NB15 dataset a useful resource, as it covers a realistic and diverse range of attack types and scenarios.

5 Conclusion

In this paper, a survey has been conducted on different types of intrusions and their detection methods across different types of cloud systems. To sum up, the incorporation of machine learning, artificial intelligence, and big data into intrusion detection systems has significantly enhanced their performance and efficiency. By utilizing the vast data available, these technologies can identify patterns and deviations that could potentially signify an intrusion attempt. This enables more accurate and timely detection, leading to improved protection for organizations. Moreover, the use of these technologies enables ongoing learning and development, ensuring that the intrusion detection system can adapt and evolve as new threats arise. Ultimately, integrating machine learning, artificial intelligence, and big data into intrusion detection systems improves the performance of the IDS model and addresses issues faced by IDSs, such as computational time and accuracy. As future work, real-time network data can be collected and an IDS implemented based on machine learning and big data techniques.

References

1. Ali MH, Zolkipli MF (2018) Intrusion-detection system based on fast learning network in cloud computing. Adv Sci Lett 24(10):7360–7363
2. Singh P, Ranga V (2021) Attack and intrusion detection in cloud computing using an ensemble learning approach. Int J Inf Technol 13:565–571
3. El-Seoud S, El-Sofany H, Abdelfattah M, Mohamed R (2017) Big data and cloud computing: trends and challenges. Int J Interact Mobile Technol (iJIM) 11:34. https://doi.org/10.3991/ijim.v11i2.6561
4. Khraisat A et al (2019) Survey of intrusion detection systems: techniques, datasets and challenges. Cybersecurity 2(1):1–22
5. Peng K, Leung VC, Huang Q (2018) Clustering approach based on mini batch kmeans for intrusion detection system over big data. IEEE Access 6:11897–11906
6. Peng K, Leung V, Zheng L, Wang S, Huang C, Lin T (2018) Intrusion detection system based on decision tree over big data in fog environment. Wirel Commun Mobile Comput
7. Al-Daweri MS et al An analysis of the KDD99 and UNSW-NB15 datasets for the intrusion detection system
8. Ali MH, Jaber MM (2021) Comparison between extreme learning machine and fast learning network based on intrusion detection system (No. 5103). EasyChair
9. Ali MH, Fadlizolkipi M, Firdaus A, Khidzir NZ (2020) A hybrid particle swarm optimization-extreme learning machine approach for intrusion detection system. In: 2018 IEEE student conference on research and development (SCOReD), 2018, pp 1–4. Symmetry 12.10 (2020):1666
10. Sultana N, Chilamkurti N, Peng W, Alhadad R (2019) Survey on SDN based network intrusion detection system using machine learning approaches. Peer-to-Peer Netw Appl
11. Othman SM, Ba-Alwi FM, Alsohybe NT, Al-Hashida AY (2018) Intrusion detection model using machine learning algorithm on big data environment. J Big Data 5(1):1–12
12. Krishna KV, Swathi K, Rao BB (2020) A novel framework for NIDS through fast kNN classifier on CICIDS2017 dataset. Int J Recent Technol Eng (IJRTE) 8(5)
13. Hajisalem V, Babaie S (2018) A hybrid intrusion detection system based on ABC-AFS algorithm for misuse and anomaly detection. Comput Netw 136:37–50
14. Salo F, Nassif AB, Essex A (2019) Dimensionality reduction with IG-PCA and ensemble classifier for network intrusion detection. Comput Netw 148:164–175
15. Idrissi I, Azizi M, Moussaoui O (2020) IoT security with deep learning-based intrusion detection systems: a systematic literature review. In: 2020 fourth international conference on intelligent computing in data sciences (ICDS). IEEE
16. Alom MZ et al (2019) A state-of-the-art survey on deep learning theory and architectures. Electronics 8(3):292
17. Pouyanfar S et al (2018) A survey on deep learning: algorithms, techniques, and applications. ACM Comput Surv (CSUR) 51(5):1–36
18. Natesan P, Rajalaxmi RR, Gowrison G, Balasubramanie P (2017) Hadoop based parallel binary bat algorithm for network intrusion detection. Int J Parallel Prog 45(5):1194–1213
19. Satorra A, Bentler PM (2010) Ensuring positiveness of the scaled difference chi-square test statistic. Psychometrika 75(2):243–248
20. Shittu R et al (2015) Intrusion alert prioritisation and attack detection using post-correlation analysis. Comput Secur 50:1–15
21. Bhuvaneswari Amma NG, Selvakumar S (2021) A statistical class center based triangle area vector method for detection of denial of service attacks. Cluster Comput 24(1):393–415
22. Ali MH, Al Mohammed BAD, Ismail A, Zolkipli MF (2018) A new intrusion detection system based on fast learning network and particle swarm optimization. IEEE Access 6:20255–20261
23. Brao B, Swathi K (2016) Variance-index based feature selection algorithm for network intrusion detection. IOSR J Comput Eng 18:1–11. https://doi.org/10.9790/0661-1804050111

Adoption of Network Function Virtualization Technology in Education S. Jinapriya and B. H. Chandrashekhar

Abstract Network function virtualization is a method of reducing cost by moving network operations to cloud-deployed services. It separates functionalities such as firewalls or encryption from dedicated hardware and moves them to virtual servers. Network function virtualization has attracted keen attention and uptake from both industry and academic institutions as a transformation in telecommunications. This technology allows flexible provisioning and deployment and provides a centralized system of virtual private network functions. When developing a Web-based application or game-based software, we must ensure that the application is safe and secure with all encryption standards. That safety can be provided by network function virtualization standards and concepts. At the same time, network function virtualization presents both benefits and significant challenges with appropriate solutions; hence, it paves the way for future research in these domains. Keywords Network · Industries and academic institutions · Game-based software · Encryption · Firewall · Virtual private network

1 Introduction Network function virtualization (NFV) is an innovative networking concept designed to address the persistent need of network providers to reconstruct the present conventional network. It is designed and framed to improve the network architectural pattern. It adopts virtualization to reconstruct the present conventional network, thereby enhancing network operation. Industries and companies have mainly relied on network operators and on their physical devices for each functionality that is part of a service. S. Jinapriya (B) Isteer Technologies, Bengaluru, Karnataka, India e-mail: [email protected] B. H. Chandrashekhar RV College of Engineering, Bengaluru, Karnataka, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_2


Fig. 1 Traditional CPE vs. CPE with NFV implementation. Source ResearchGate.net

Adding to these advantages, service components have a keen reflection on the network topology and on the localization of service components. Traditional CPE versus CPE with NFV implementation is shown in Fig. 1. The implementation has its origin in the OpenFlow model, where all switches and routers are interlinked to the central hub of the controller. Each switch has a flow table, and rules and instructions in the flow table can be assigned, modified, and removed remotely by the controller, which monitors and controls them. The main challenge here is that this concept opens a large attack surface: hackers can flood the controller with packets and messages. This cannot be controlled or handled by any operator and reduces the performance of the network due to lack of memory. Figure 2 is a representation of the SDN network.
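To make the flow-table idea above concrete, the hedged sketch below models an OpenFlow-style match-action lookup in plain Python. The rule fields, action strings, and class names are illustrative assumptions, not a real controller API; unmatched packets are punted to the controller, which is exactly the overload point the preceding paragraph warns about.

```python
# Minimal sketch of an OpenFlow-style match-action flow table
# (illustrative only; field names and actions are simplified assumptions).
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict          # e.g., {"dst_ip": "10.0.0.2", "dst_port": 80}
    action: str          # e.g., "forward:2" or "drop"
    priority: int = 0

@dataclass
class Switch:
    flow_table: list = field(default_factory=list)

    def install_rule(self, rule: FlowRule) -> None:
        # The controller remotely adds, modifies, or removes rules.
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: r.priority, reverse=True)

    def handle_packet(self, packet: dict) -> str:
        # The first (highest-priority) matching rule decides the action;
        # unmatched packets are sent to the controller, which is the
        # overload point exploited by flooding attacks.
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send_to_controller"

switch = Switch()
switch.install_rule(FlowRule({"dst_ip": "10.0.0.2"}, "forward:2", priority=10))
print(switch.handle_packet({"dst_ip": "10.0.0.2", "dst_port": 80}))  # forward:2
print(switch.handle_packet({"dst_ip": "10.0.0.9"}))  # send_to_controller
```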

2 Network Function Virtualization in Research Education For better research, the key aspects are the quality of the research and the essential strategies adopted. To serve network subscribers, corporate organizations and information technology executives started incorporating network function virtualization technology in an effective manner. This technology opens up all sectors, such as the business, philosophical, economic, social, technical, and security domains, which has made it so powerful and adaptable. Network function virtualization technology has a great impact on quantitative approaches, where it is very beneficial in obtaining empirical data through several methods such as assessment, iteration, and prediction over a span of time. Researchers were able to answer questions such as why, when, how, and what


Fig. 2 SDN network. Source [19]

while pursuing their research [20]. The classic network approach versus network function virtualization is shown in Fig. 3.

2.1 Advantages of Network Function Virtualization

1. Reduction of the cost needed for a hardware setup and a high reduction of power consumption.
2. By using network function virtualization technology, services can be automated, and flexibility is improved by reducing the time taken to deploy new services.
3. Network function virtualization opens a wide path for innovation due to the increased speed in deploying services. It mainly applies to software-based development.
4. A wide range of services can be covered and catered for, since there is no need to install software remotely alongside hardware components.
5. Openness to ecosystems, encouraging the small players in the market.
6. Managing workload and load balance, along with a reduction of energy consumption across several servers and storage systems.

There is a close connection between the cloud and network function virtualization. Network function virtualization is not restricted to the boundaries of services in the telecommunication industry. Many information technology applications run on servers in the cloud. However, most of the network function virtualization use cases in practice have their roots in the telecom industry, whose requirements are higher than those of information technology applications.


Fig. 3 Classic network approach vs. network function virtualization. Source Blue Planet

Hence, the performance is higher, and load balance is managed. The mapping of cloud computing service models to NFV is shown in Fig. 4. A comparison of NFV in telecommunication networks and cloud computing is listed in Table 1.


Fig. 4 Cloud computing service models mapping with NFV. Source Research Gate

Table 1 Comparison of NFV in telecommunication networks and cloud computing [16]

Issue          | Network function virtualization (telecom networks)    | Cloud computing
Approach       | Service/function abstraction                          | Computing abstraction
Formalization  | ETSI NFV Industry Standard Group                      | DMTF Cloud Management Working Group
Latency        | Expectations for low latency                          | Some latency is acceptable
Infrastructure | Heterogeneous transport (optical, Ethernet, wireless) | Homogeneous transport (Ethernet)
Protocol       | Multiple control protocols (e.g., OpenFlow, SNMP)     | OpenFlow
Reliability    | Strict "five nines" availability requirements         | Less strict reliability requirements
Regulation     | Strict requirements, e.g., NEBS                       | Still diverse and changing

3 Network Function Virtualization Essentials and Considerations 3.1 Efficiency While implementing network functions through virtualization technologies, the major question that arises concerns the efficiency, performance, and speed of general-purpose servers. Hence, everything depends on the design strategies and principles that have been adopted. A proper decision on a design principle will be the foundation of efficiency.


3.2 Accordance with Information Technology The architecture of network function virtualization technology should be accessible in the right locations at the correct time to instantiate dynamic networks and interconnections. Enhancing this flexibility will raise the necessity of adopting both virtual and physical appliances, but there will be differences in cost and in the viewpoint of customers.

3.3 Adaptability and Creditability This is one of the most important features; network operators and providers offer a vast range of services such as video and voice calls. They have to cater for services such as load balancing based on customer requirements [13–16]. Accordingly, the scale and range of customers increase, and this will be a key indicator of failure-free services.

3.4 Security and Other Threatening Issues On deploying network function virtualization services, network operators and subscribers need a guarantee that the security of the network is not affected. In that perspective, network function virtualization could bring more security benefits along with new security concerns [17]. By implementing appropriate firewalls, the network domains and network services can be protected.

3.5 User's View The user is a regular customer who uses the Internet and Internet technology. The user aims to utilize the services provided by the network operator based on network function virtualization technology. The only concern is that the user must be aware of the functionality and its effective usage for a better experience.


4 Awareness Required and Challenges on Network Function Virtualization 4.1 Establishing a Mobile Virtualization Network In the present world of Internet and mobile technology, the mobile network lags behind in handling the signaling protocols. Especially in India, when a specific function is lagging, the network operators and subscribers should take the initiative and provide sufficient services to the customers by installing equipment or by replacing old transport-layer protocols with new centralized gateways such as the 4G evolved packet core (EPC). These long-distance communication channels or tunnels are very effective in establishing communication, but they have to be maintained by the network operators, which is very costly. To address these issues and conflicts of data communication, the cloud EPC is an effective solution which caters for all of these. It meets current market requirements and needs, and it takes on a new dimension of network function which is bright, intelligent, effective, and flexible [11]. The EPC cloud is shown in Fig. 5.

Fig. 5 Evolved packet core (EPC) cloud. Source sites.google.com


4.2 Establishing a Home Network In today's educational system, learning is hybrid, with learners having different options, especially post-COVID. They can learn either through classroom teaching or through online learning. To make this blended learning more efficient and effective, a home network is very much essential. Customer premise equipment, also known as CPE, supported by network backend systems, is the service offered by network providers for home services. It includes residential gateways, which are the part of the customer premise equipment used to access the Internet, as well as set-top boxes and other multimedia services. The idea of virtualizing our homes through this network is a genuine revolution and a very new concept. It has enormous advantages for both customers and network providers/operators.

• Mainly, it reduces cost and operating expense, since constant maintenance and monitoring of customer premise equipment devices are avoided, as is the frequent device updating that forces users to constantly call support centers [18].
• Secondly, it offers greater performance quality by providing practically unlimited storage capacity. It also provides full access, and shared network access is enabled for all devices such as smartphones, tablets, and personal computers [2].
• It also offers a dynamic working environment by providing quality management and controlled access, which enhances programmability for end users through application programming interfaces.
• Customer premise equipment offers many flexible services which can be used effectively, reducing the dependency on dedicated functions.

However, in spite of all the advantages, there lies a big question: "Can this be achieved?" First, network operators have to come up with better services and functionality for the customers. At the same time, this awareness and knowledge have to be instilled in users (in other terms, parents). This job has to be initiated by educators and the teaching community. If this is fruitful, the space for learning and teaching will be the best place. The home network is shown in Fig. 6.

4.3 Compatibility with the Existing One Rather than transforming a complete setup, it is ideal to revise the existing concept and bring in the new methodology. NFV, or network function virtualization, implementation must align well with network operators' reuse of existing old equipment such as operations support systems (OSSs), business support systems (BSSs), network management systems, and other network systems, where the legacy


Fig. 6 Home network. Source [10]

of the existing functions does not disappear. In simpler words, this must be a hybrid functioning system composed of the old physical network and virtual network applications and systems. The blended mode is shown in Fig. 7.

Fig. 7 Blended mode. Source [12], ResearchGate.net


4.4 Virtualization of the Resource The hardware-based virtualization layer provides unique computing resources and cloud-native virtualized network functions to all service software. If the layer has any fault or defect, the functions above it can be compromised by attacks or suffer the consequences.

4.5 Sharing of Resources A single server can run multiple hosts or tenants on shared virtual resources; for example, virtual machines or containers might run distributed applications across the server. This blurs the boundaries between tenants and widens the exposure to risks, attacks, and hacking.

4.6 Security Issue on Open Source When this virtual network function is used by several educators and learners, there will be increasing demand for the concept and methodology, and the use of open-source software will be in high demand. This leads to new security challenges, where the network providers need to take a consistent approach by designing flawless applications and providing proper service to the customers.

4.7 Multi-vendor Environment In such an environment, it is difficult to coordinate security policies and determine responsibility for security problems, and it requires more effective network security monitoring capabilities.

4.8 Supply Chain The supply chain introduces risks such as malicious software and hardware, counterfeit components, and poor designs, manufacturing processes, and maintenance procedures. This may result in negative consequences, such as data and intellectual property theft, loss of confidence in the integrity of the 5G network, or exploitation to cause system and network failure [3–9]. Figure 8 illustrates the security issues.


Fig. 8 Security issues. Source ResearchGate.net

5 Future of Network Function Virtualization The future of network function virtualization lies in the placement of virtual appliances, where network providers should operate effectively so that the network is used most efficiently and least expensively. It depends on how the network topology is defined and where appliances are placed between two nodes or end-points. Implementing lightweight virtual machines enhances this functionality. For example, ClickOS, a tiny Xen-based virtual machine, can be instantiated within 30 ms with a very limited memory footprint of 5 MB [1]. Outsourcing the network using the end-to-end principle of the initial network architecture also has an impact on the future of network function virtualization. It does not ideally modify the packets; rather, it scales from networks of 1,000 hosts to beyond 100,000 hosts. This widens the reach of the technology and its benefits too. Overall, network function virtualization is an emerging and expanding revolution among networks. It offers many opportunities and is an enabling technology which could radically revolutionize the way networks operate. Through the networking research community, education and technology introduce network function virtualization and drive its widespread and successful adoption.

6 Conclusion Network function virtualization is driven by the next-generation network, which reinforces this innovative technology. Network operators or service providers


transform the existing network infrastructure into network function virtualization to meet and cater for the needs of the customers for a benefit. The physical network consumes high power to operate the physical connections and devices, and this leads to high operating cost and adverse effects on the environment. Organizations in the information technology, telecommunication, and data center business terrains need field-proven adoption strategies for network function virtualization technology. These strategies may enable the seamless transition of their networks from traditional networks into a network function virtualization infrastructure capable of provisioning next-generation network functions and services cost-effectively. Network function virtualization technology may enable end-to-end integration of heterogeneous legacy appliances, software functions, and services presented at the optimal time with minimal time to market and provided to their network users on a "software as a service" basis. The documentation of network function virtualization technology adoption strategies may allow seamless adoption of this technology by lagging organizations with no access to field-proven strategies.

References

1. Aggarwal V, Gopalakrishnan V, Jana R, Ramakrishnan KK, Vaishampayan VA (2013) Optimizing cloud resources for delivering IPTV services through virtualization. IEEE Trans Multim 15(4):789–801
2. Bhaumik S, Chandrabose SP, Jataprolu MK, Kumar G, Muralidhar A, Polakos P, Srinivasan V, Woo T (2012) CloudIQ: a framework for processing base stations in a data center. In: Proceedings of MOBICOM 2012, August, pp 125–136
3. China Mobile Research Institute (2011) C-RAN: the road towards green RAN. China Mobile White Paper, October
4. Chiosi M et al (2012) Network functions virtualisation: an introduction, benefits, enablers, challenges & call for action. ETSI White Paper, October
5. Greenberg A, Hamilton J, Maltz DA, Patel P (2009) The cost of a cloud: research problems in data center networks. ACM SIGCOMM Comput Commun Rev 39(1):68–73
6. Hwang J, Ramakrishnan KK, Wood T (2014) NetVM: high performance and flexible networking using virtualization on commodity platforms. In: Proceedings of NSDI 2014, April, pp 445–458
7. Jin X, Li LE, Vanbever L, Rexford J (2013) SoftCell: scalable and flexible cellular core network architecture. In: Proceedings of CoNEXT 2013, December, pp 163–174
8. Manzalini A, Minerva R, Callegati F, Cerroni W, Campi A (2013) Clouds of virtual machines in edge networks. IEEE Commun Mag 51(7):63–70
9. Martins J, Ahmed M, Raiciu C, Olteanu V, Honda M, Bifulco R, Huici F (2014) ClickOS and the art of network function virtualization. In: Proceedings of NSDI 2014, April, pp 459–473
10. Sherry J, Hasan S, Scott C, Krishnamurthy A, Ratnasamy S, Sekar V (2012) Making middleboxes someone else's problem: network processing as a cloud service. In: Proceedings of SIGCOMM 2012, August, pp 13–24
11. Sivaraman V, Moors T, Gharakheili HH, Ong D, Matthews J, Russell C (2013) Virtualizing the access network via open APIs. In: Proceedings of CoNEXT 2013, December, pp 31–42
12. The European Telecommunications Standards Institute (2013) Network Functions Virtualisation (NFV); architectural framework. GS NFV 002 (V1.1.1), October


13. The European Telecommunications Standards Institute (2013) Network Functions Virtualisation (NFV); use cases. GS NFV 001 (V1.1.1), October
14. Wang G, Ng TSE (2010) The impact of virtualization on network performance of Amazon EC2 data center. In: Proceedings of INFOCOM 2010, March, pp 1163–1171
15. Wang Y, Keller E, Biskeborn B, van der Merwe J, Rexford J (2008) Virtual routers on the move: live router migration as a network management primitive. In: Proceedings of SIGCOMM 2008, August, pp 231–242
16. https://www.researchgate.net/publication/281524200_Network_Function_Virtualization_State-of-the-Art_and_Research_Challenges. Accessed 25 Jan 2023
17. https://www-users.cselabs.umn.edu/classes/Fall-2017/csci8211/Papers/NFV%20Challengesn-Opportunities.pdf. Accessed 25 Jan 2023
18. https://tec.gov.in/pdf/Studypaper/Network_Function_Virtualization%20.pdf. Accessed 25 Jan 2023
19. https://www.researchgate.net/publication/337055946_Security_of_Software_Defined_Networks_SDN_Against_Flow_Table_Overloading_Attack. Accessed 25 Jan 2023
20. https://scholarworks.waldenu.edu/cgi/viewcontent.cgi?article=12295&context=dissertations. Accessed 25 Jan 2023

An Efficient Secure Cloud Image Deduplication with Weighted Min-Hash Algorithm R. Dhivya and N. Shanmugapriya

Abstract Cloud storage services have emerged as a competitive alternative to other methods of data storage, but their operators must balance privacy with optimization. Clients would like to utilize end-to-end data encryption to protect data security; however, effective encryption nullifies the value of established storage-optimization methods like data deduplication. To save cost and storage space, image deduplication in cloud computing is proposed. To protect the confidentiality of the image, the notion of a hybrid secure approved deduplication algorithm with a weighted min-hash pre-processing method is presented in this paper. In the deduplication system, the image is encrypted or decrypted using a weighted min-hash encryption key that is produced by computing the hash value of the image content. Identical image copies produce, in the same way, the ciphertext used to verify a duplicate image copy. This system has undergone security analysis to ensure its safety. Keywords Cloud · Image deduplication · Ciphertext · Hash value

1 Introduction For a variety of clients—from small businesses to private users—cloud solutions offer a realistic, cost-effective substitute for traditional in-house IT systems. Moving to the cloud, however, requires clients to outsource their data because, rather than being saved in a certain location on a specific disc, the data could end up practically anywhere the cloud service provider has his storage facilities. As a result, it is not R. Dhivya (B) Department of Computer Science, Dr. SNS Rajalakshmi College of Arts and Science, Coimbatore 641049, India e-mail: [email protected] N. Shanmugapriya Department of Computer Applications, Dr. SNS Rajalakshmi College of Arts and Science, Coimbatore 641049, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_3


possible for customers to encrypt the data before outsourcing it to the cloud and maintain the key locally [1]. Clients often need to process their data, and cloud providers need to utilize storage-optimization techniques to free up space in their storage. Having investigated several storage-optimization methods, data deduplication emerged as a promising contender for a secure storage-optimization strategy. For some datasets, data deduplication is an optimization technique that significantly reduces the amount of data (by over 99 percent for common backup cases, for example [2]). Since multi-user cloud storage services that store, among other things, well-known songs, photographs, and movies seem to be a good fit for highly efficient deduplication, we investigate how encryption and data deduplication can be combined to create a storage-efficient and secure storage service. Data deduplication is a cloud storage-optimization method that aims to do just what its name implies: remove duplicate data. Data deduplication's basic tenet is straightforward: if a dataset contains some (part of) data in many copies, it is simple to save only one copy of this duplicate data and so conserve storage space. The deduplication ratio, or DR, which compares the size of the original dataset to the size after deduplication, is a common way to express the efficacy of deduplication. Figure 1 shows how the deduplication process works. Deduplication is a data reduction technique which requires less disc space and transmission power. A file's or block's secure hash-based fingerprint is generated, and duplicates are identified by comparing their fingerprints. Deduplication, which removes redundant copies from the cloud and replaces them with links to the retained copies [3], is currently a frequently used technique for reducing data redundancy in cloud storage.
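As a hedged illustration of the fingerprint-based deduplication and the deduplication ratio just described, the following minimal Python sketch stores one copy per SHA-256 fingerprint and computes DR. The file names and contents are toy assumptions, not a real dataset.

```python
# Minimal sketch: hash-fingerprint deduplication and the deduplication
# ratio (DR). Contents are illustrative byte strings.
import hashlib

files = {
    "a.bin": b"X" * 1024,
    "b.bin": b"Y" * 1024,
    "c.bin": b"X" * 1024,   # duplicate of a.bin
}

store = {}  # fingerprint -> the single stored copy
index = {}  # file name  -> fingerprint (the "link" to the copy)

for name, data in files.items():
    fp = hashlib.sha256(data).hexdigest()  # secure hash-based fingerprint
    store.setdefault(fp, data)             # keep only one copy per fingerprint
    index[name] = fp

original = sum(len(d) for d in files.values())
deduped = sum(len(d) for d in store.values())
print(f"DR = {original / deduped:.1f}:1")  # 3072 / 2048 -> 1.5:1
```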

Fig. 1 Data deduplication method illustration. Various files are represented by objects. In the sample provided, the deduplication ratio (DR) is 3:1


Various technologies are developing, and deduplication now confronts many difficulties. For instance, deduplication is opposed by users' frequent desire to encrypt data before uploading it. The convergent encryption (CE) algorithm, which uses the hash value of the image as the encryption key, is presented as a solution to this issue. Message-locked encryption, which derives the encryption parameters from the plaintext to guarantee that the same ciphertext is generated from the same plaintext, was proposed by Bellare et al. [4] as a result of this. It is often used in encrypted deduplication. Traditional deduplication techniques, however, are unable to produce a good deduplication effect for multimedia data such as images. Since images are frequently more intuitive and colorful than words when conveying information, the number of images stored on cloud servers will grow alarmingly quickly [5, 6]. Rather than focusing on an image's precise characteristics, users typically focus on its content. Since the present deduplication techniques for ordinary files are so exact and rigid in their definitions of repetition, they cannot be used for deduplication of images [7, 8]. In addition, the security of the image during deduplication must be guaranteed. Currently, content-based repeated-image detection technology is primarily used in deduplication. The secure and precise deduplication of images is a topic that many academics have studied. Hashing can accurately reflect an image's content and is frequently used in duplicate image identification, and the weighted min-hash algorithm has improved performance in terms of accuracy and speed. A current major issue is the exponential growth of digital data in cloud storage systems; a significant amount of duplicate data puts the storage systems under additional strain. Deduplication is a useful method that has gained popularity in massive storage systems: it minimizes storage cost, improves storage utilization, and gets rid of redundant data. A major open issue is that data deduplication in the cloud does not fully address comparable files with minor modifications. Deduplication storage solutions constantly struggle to provide the throughputs and capacities required to move backup data within backup and recovery timeframes, since data in data centers is expanding quickly. By uniformly distributing cloud resources among all of the servers in the cloud, load balancing technology also reduces the stress on the server. Therefore, to increase storage efficiency, a constrained appropriate-workload algorithm is used to ensure that data chunks are transferred to the proper target storage nodes. A hybrid secure approved deduplication algorithm with a weighted min-hash pre-processing mechanism is presented in this study. Meanwhile, the proposed research puts forward an improved equally spread current execution load balancing technique to reduce load imbalance in cloud storage and schedule workloads so as to maximize storage efficiency.
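The convergent-encryption idea referenced above can be sketched as follows: the key is derived from the content itself, so identical plaintexts deterministically yield identical ciphertexts and tags. This is a minimal illustration of message-locked encryption using the Python cryptography package, not the paper's exact HSAD routine; note that deterministic encryption of this kind remains vulnerable to brute-force guessing of predictable content.

```python
# Minimal convergent-encryption (message-locked) sketch: K = H(content),
# so identical images encrypt to identical ciphertexts and can be
# deduplicated. Illustration only, not a production-grade scheme.
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def convergent_encrypt(content: bytes) -> tuple[bytes, bytes, str]:
    key = hashlib.sha256(content).digest()        # key derived from content
    nonce = hashlib.sha256(key).digest()[:16]     # deterministic CTR nonce
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = enc.update(content) + enc.finalize()
    tag = hashlib.sha256(ciphertext).hexdigest()  # duplicate-check tag
    return key, ciphertext, tag

_, c1, t1 = convergent_encrypt(b"same image bytes")
_, c2, t2 = convergent_encrypt(b"same image bytes")
assert c1 == c2 and t1 == t2  # identical copies -> identical ciphertext/tag
```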

2 Related Work An attribute-based storage system with secure deduplication was reported by Cui et al. [9] for a hybrid cloud environment, where a private cloud handles duplicate detection and a public cloud controls storage. Their approach has two benefits over earlier data deduplication solutions. First, rather than distributing decryption keys, it may be used


to set access controls and exchange data with users in a private manner. Second, it defines a stronger security notion than existing systems, achieving the standard notion of semantic security for data confidentiality. Additionally, they proposed a way to convert ciphertexts with the same plaintext but under different access policies into ciphertexts under a single access policy without exposing the underlying plaintext. According to Yan et al. [10], one of the most crucial cloud computing services is cloud storage, which enables customers to grow their storage without upgrading their equipment and to get around the bottleneck of constrained resources. However, encrypted data may waste a lot of cloud storage and make it more difficult for authorized users to share data, and managing and storing encrypted data with deduplication remains difficult. Traditional deduplication strategies always concentrate on particular application scenarios in which data owners or cloud servers have complete control over the deduplication; they are unable to adapt to different expectations from data owners based on the degree of data sensitivity. The authors suggested a heterogeneous data storage management framework that flexibly provides access control and deduplication management at the same time across various cloud service providers (CSPs). They assessed its performance with security analysis, comparison, and implementation. A brand-new technique for extracting local invariant features from high dynamic range photos was proposed by Zhuang and Liang [11]. First, their study establishes a thorough theoretical framework for HDR imaging and mapping by introducing the perceptual model of the human eye's perception of scene irradiance based on the multi-exposure fusion HDR picture computation model. Next, a novel LIFT extraction technique is suggested, which carries out LIFT detection on the scene reflection layer and LIFT description on the scene illumination layer. Finally, studies have demonstrated that this approach can raise the proportion of wide-baseline machine-vision images whose feature points are correctly matched. A novel block design-based key agreement protocol with support for many participants was presented by Shen et al. [12]; based on the block design's structure, it can dynamically grow the number of participants in a cloud environment. The authors provided a general formula for generating the common conference key for numerous participants based on the suggested group data sharing paradigm. Note that the suggested protocol's computational complexity rises linearly with the number of participants, and the communication complexity is significantly decreased thanks to the (v, k + 1, 1)-block design. A group-based nuclear norm and learning graph (GNNLG) model was proposed by Yan et al. [13], who showed that group-based image restoration techniques are more effective at gathering similarities among patches. Within a searching window, they located and aggregated the most comparable patches for each patch, exploiting the grouped patches' inherent low-rank property. In order to generate the graph Laplacian matrix, which represents the topological structure of the image, they further examined the manifold learning approach and developed an efficient optimal learning strategy. The denoised depth image could then be subjected to the smoothing priors in further detail. The alternating direction


method of multipliers (ADMM) is suggested to solve their GNNLG in order to obtain fast speed and high convergence. Li et al. [14] proposed a novel framework for distributed edge-facilitated deduplication (EF-dedup). The authors maximized space efficiency by exploiting the high spatial and temporal correlation of edge data at the network edge. Edge nodes can successfully suppress duplicate edge data by collaboratively using the dispersed computational power accessible at the edge, consuming significantly less storage and WAN traffic. Non-trivial network cost may prevent some edge nodes with highly linked data from constantly being in the same edge cloud. An effective secure deduplication technique that allows user-defined access control was suggested by Yang et al. [15]. Specifically, their system can effectively reduce duplicates without infringing cloud users' security and privacy by permitting only the cloud service provider to authorize data access on their behalf. According to a thorough security investigation, their authorized secure deduplication method achieves data confidentiality and tag consistency while fending off brute-force attacks. Additionally, thorough simulations show that their approach performs better than competing schemes in terms of computational, communication, and storage overheads as well as deduplication efficiency. Focusing on the re-encryption deduplication storage system, Yuan et al. [16] demonstrated that the recently developed lightweight rekeying-aware encrypted deduplication scheme (REED) is susceptible to a type of attack that they referred to as the stub-reserved attack. The authors also suggested a safe data deduplication method based on the convergent all-or-nothing transform (CAONT), using randomly selected bits from the Bloom filter, together with effective re-encryption. Their system can withstand the stub-reserved attack and ensure the privacy of data owners' sensitive data, thanks to the inherent property of the one-way hash function. Additionally, data owners are only required to re-encrypt a small portion of the package through the CAONT rather than the complete package, thus lowering the system's computational overhead.

3 Proposed Methodology The proposed hybrid secure approved deduplication algorithm with a weighted min-hash pre-processing mechanism is evaluated through experiments conducted on a real-time image database in a cloud environment. The proposed algorithm processes the image min-hashing scheme, which is intended to classify identical compressed and encrypted images in order to perform deduplication on them. This results in a secure deduplication strategy without additional computational overhead for image encryption, hashing, and deduplication. The overall proposed flow diagram is described in Fig. 2. Real-time images are used as inputs in the MATLAB simulation in Fig. 2. The initial action taken by the owner of an image is to upload and download that image to cloud storage.


Fig. 2 Proposed algorithm flow diagram (the owner's and user's image upload/download paths pass through the cloud deduplication storage system, which performs the image data hash (min-hash), key generation, and image encryption)
User

deduplication storage system processes the images using a deduplication method with a minimum hash with key encryption procedure. The user of the image can interface with the cloud storage server in this final phase and use the tag to see if there are any duplicate copies of the image stored there.

3.1 System Setup Consider a deduplication cloud system in this system design that consists of an owner, an user, and a cloud service provider. It is believed that the owner of the image has encrypted it before uploading it to the cloud storage server. Considering that some authentication and key-issuing mechanisms are properly used to authorize users and the owner of the image. The encrypted image could then be accessed by approved image users after being uploaded to the cloud server. In additional specifics, a cloud storage server receives a request from an approved image user, and the server checks the ownership documentation. Owner: The owner of the image is the entity that uploads it to the cloud service for storage, sharing, and future access. The owner must encrypt the image before uploading it to the cloud in order to secure the image’s content. Only the initial owner of an image might store it in the cloud in a client-side image deduplication system. The owner will be informed by the storage server that this image is a duplicate if it is not the initial one. Therefore, the cloud storage only has one duplicate of the image.

An Efficient Secure Cloud Image Deduplication with Weighted …

39

User: By presenting ownership documentation to the deduplication cloud system, an entity with access rights to the same image is known as an image user. Additionally, the term “user” covers acquaintances of the owner who have shared the image resource on the cloud. Cloud deduplication: For owners and users, a deduplication cloud storage server offers image storage. Using cloud storage, virtual machine images can be imported into a cloud image library from on-premises sites or copied from the cloud to onpremises locations. Additionally, the cloud storage server will do duplicate image processing before users upload their photographs. If the cloud storage server already has an image with the same content, the users will not be able to upload it again. Instead, they will be granted access to the image by providing evidence of ownership.

3.2 Cloud Deduplication Storage System The following three entities are included in the cloud deduplication system model: Image set: In image set, the labeled faces in the wild image database [17] could be applied to proposed algorithm of various test requirements. A cloud storage server’s image set is a collection of images. Using the bilinear interpolation technique, all input images I are converted to row × column for the purpose of generating fixedlength hashes for images of different dimensions; the image is then subjected to Gaussian low pass filtering in order to lessen the impact of noise interference on the calculated hash value. Group members upload and distribute images. A cryptographic hash value is generated for each image by the weighted min-hash pre-processing algorithm. The cryptographic hash value results are always just 32 characters long, since MD5 checksum was used. User Group: These users can communicate and share data with one another. Additionally, members belonging to the same group share a cipher key that is used to encrypt the image’s hash. The key generation is modeled with cipher key method with 32 bit length. The fingerprint of the image is the hash value of the image that has been encrypted using the key. An initial hash is performed on the input image to produce a set of binary codes. The next step is to check every bucket in the related hash table that is within a short Hamming distance of each hash code, treating the images that contain the key values in those buckets as candidates for results. One of the most important steps is the indexing of the hash table of the aforementioned system because it ensures low memory usage and acceptable deduplication speed. Combining many effective strategies, such as multiple-table indexing, weighed min-hash search, and Hamming distance-based ranking, will yield the needed performance. Assume that hc = [h1 (i), …, hB (i)] ∈ {−1, 1}M to construct a set of B binary hash codes for each key image i. The simplest hash function is random projection; however, there are other more that can be employed as well. Then, by evenly dividing

40

R. Dhivya and N. Shanmugapriya

the code hc into L subcodes of length M/L, which is often 32 in practice, to construct L hash tables. Then, based on hashes of length M/L, a set of hash tables called {T l , l = 1, …, L} can be constructed using these subcodes. Each bucket in these hash tables houses crucial frames from the database’s images. Figure 3 illustrates a sample images for processing a deduplication, and Fig. 4 shows the proposed weighted min-hash algorithm results. The client communicates the cloud storage it uses the tag to see whether there are any duplicate copies of the image already saved there before uploading it. To determine whether the image is duplicate, I = H(C) will be computed for it. The weighted min-hash image ciphertext will be sent to the cloud storage if this is the first time an image has been uploaded. The owner of the photograph could also change the properties to regulate access rights. If a duplicate copy is discovered on the server, the client will be prompted to prove his ownership. If he succeeds, he will then be given a pointer that enables him to view the picture. To confirm ownership of the image copy (IC), the image user must send and run a verification method in full. The user wants to upload a new image into cloud server of “A.jpg”, if the image is not present in the server, then the image is successfully uploaded. After that, the another user wants to upload a same image with different name called “B.jpg”, then the deduplication process is check the verification weighted min-hash algorithm to detect a duplicate image. The weighted min-hash method only computes hash values for special subelements, i.e., “active indices”. The algorithm is as follows: The first step is to extract a list of min-hashes from each storage image and input query image. A weighted min-hash is often a single number with the property that two sets with weight1 and weight2 have the same value of min-hash with a probability equal to their similarity Simm (weight1 , weight2 ). The min-hashes are organized into n-tuples

Fig. 3 Input sample images

An Efficient Secure Cloud Image Deduplication with Weighted …

41

Fig. 4 Result of proposed weighted min-hash algorithm

known as sketches for effective cloud deduplication. Then, a hash table is effectively used to find duplicate sketches of images. Images that have at least h drawings that are exactly the same are deemed to be potential near duplicates, and their similarity is then calculated using all of the available min-hashes. Figures 5 and 6 show the deduplication results. Cloud Storage Server: Clients use this server to outsource and store images. Assuming a user desires to download an image I. The user sends the cloud storage server a request initially, along with the picture names. The cloud storage server will decide whether the user is eligible for download authorization after receiving the request and the image name. The cloud server gives the user the ciphertext CM and cipher key CK if the test is successful. The user decrypts and obtains the K key that is locally stored. The cloud storage server will transmit the appropriate key if the user’s characteristics and owner preferences line up. The original photos could be recovered by the user using the convergent encryption key. If unsuccessful, the cloud storage server will notify the user with an abort signal to explain the unsuccessful download.

4 Results and Discussion The results have been estimated using the proposed hybrid secure approved deduplication algorithm with a weighted min-hash pre-processing (HSAD) method with

42

Fig. 5 Input “A.jpg” image

Fig. 6 Result of cloud deduplication process

R. Dhivya and N. Shanmugapriya

An Efficient Secure Cloud Image Deduplication with Weighted …

43

Table 1 Comparison of the proposed HSAD and existing techniques’ encryption time performance Image file size (MB)

ECC

DH

5

14.47

10.22

MECC 6.21

Proposed HSAD 5.92

10

22.56

19.64

10.04

9.52

15

28.00

28.32

15.54

14.86

Table 2 Comparison of the proposed HSAD and existing techniques’ decryption time performance Image file size (MB)

ECC

DH

5

13.50

12.21

MECC 5.28

Proposed HSAD 4.86

10

22.66

18.11

10.22

9.96

15

29.43

26.43

16.74

15.01

modified elliptic curve cryptography (MECC), Diffie-Hellman (DH), and elliptic curve cryptography security algorithm techniques [18]. On a PC running Windows 10 with MATLAB R2018b simulations, the findings were implemented using an Intel I5-6500U series processor running at 3.21 GHz and 8 GB of main memory. The performance comparison of the proposed HSAD with the widely used MECC, DH, and ECC algorithms regarding Enct and Dect is explained in Tables 1 and 2. The comparison is done with the uploaded file size as the focal point. Enct and Dect are considered as the encryption and decryption time seconds (sec). The difference between the starting time and finishing time is used to calculate the encryption and decryption times. It is assessed as Enct = Ence − Encs

(1)

Dect = Dece − Decs

(2)

where Enc_t and Dec_t are the encryption and decryption times, Enc_e and Dec_e are the encryption and decryption ending times, and Enc_s and Dec_s are the encryption and decryption starting times. Figures 7 and 8 show the comparison of encryption and decryption computational times (in seconds).
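Equations (1) and (2) amount to wall-clock differences around the operation being measured; a minimal sketch of that measurement is shown below, where the encrypt step is a stand-in placeholder rather than the actual HSAD routine, which is not given in the paper.

```python
# How Eqs. (1) and (2) are measured: record start and end timestamps
# around the operation. The "encryption" here is a dummy placeholder.
import time

def timed(op, *args):
    start = time.perf_counter()   # Enc_s / Dec_s
    result = op(*args)
    end = time.perf_counter()     # Enc_e / Dec_e
    return result, end - start    # Enc_t / Dec_t

ciphertext, enc_t = timed(lambda data: bytes(b ^ 0x5A for b in data),
                          b"\x00" * 5 * 1024 * 1024)  # 5 MB dummy payload
print(f"encryption time: {enc_t:.3f} s")
```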

5 Conclusion The goal of this paper was to demonstrate a novel secure image deduplication scheme. The suggested scheme consists of three parts: the user group, the weighted min-hash pre-processing algorithm, and verification. While supporting image deduplication, key encryption has been employed to safeguard the secrecy of sensitive image content. By using a feature-based encryption approach to share images with friends through defined access privileges, the owner of the image


Fig. 7 Comparison of encryption time

Fig. 8 Comparison of decryption time

might download the ciphertext again and decrypt the image using the secret key. By providing ownership documentation, a user who owns an identical image copy may be granted access to the ciphertext and allowed to destroy the duplicate copy.

References

1. Harnik D, Pinkas B, Shulman-Peleg A (2010) Side channels in cloud services: deduplication in cloud storage. IEEE Secur Privacy 8(6):40–47
2. Fu Y, Xiao N, Jiang H, Hu G, Chen W (2019) Application-aware big data deduplication in cloud environment. IEEE Trans Cloud Comput 7(4):921–934
3. Jin K, Miller EL (2009) The effectiveness of deduplication on virtual machine disk images. In: Proc SYSTOR, Israeli Exp Syst Conf, New York, NY, USA, pp 1–12
4. Bellare M, Keelveedhi S, Ristenpart T (2013) Message-locked encryption and secure deduplication. In: Proceedings of annual international conference on the theory and applications of cryptographic techniques (EUROCRYPT), pp 296–312
5. Zhang L, Ma J (2009) Image annotation by incorporating word correlations into multi-class SVM. Soft Comput 917–927
6. Yuan H, Chen X, Jiang T, Zhang X, Yan Z, Xiang Y (2018) DedupDUM: secure and scalable data deduplication with dynamic user management. Inf Sci 456:159–173
7. Yan C, Gong B, Wei Y, Gao Y (2021) Deep multi-view enhancement hashing for image retrieval. IEEE Trans Pattern Anal Mach Intell 43(4):1445–1451
8. Yan C, Shao B, Zhao H, Ning R, Zhang Y, Xu F (2020) 3D room layout estimation from a single RGB image. IEEE Trans Multim 22(11):3014–3024
9. Cui H, Deng RH, Li Y, Wu G (2017) Attribute-based storage supporting secure deduplication of encrypted data in cloud. IEEE Trans Big Data 5(3):330–342
10. Yan Z, Zhang L, Ding W, Zheng Q (2019) Heterogeneous data storage management with deduplication in cloud computing. IEEE Trans Big Data 5(3):393–407
11. Zhuang Y, Liang L (2019) A novel local invariant feature extraction method for high-dynamic range images. In: Proceedings of the 2nd international conference on safety produce informatization (IICSPI), pp 307–310
12. Shen J, Zhou T, He D, Zhang Y, Sun X, Xiang Y (2019) Block design-based key agreement for group data sharing in cloud computing. IEEE Trans Depend Secure Comput 16(6):996–1010
13. Yan C, Li Z, Zhang Y, Liu Y, Ji X, Zhang Y (2020) Depth image denoising using nuclear norm and learning graph model. ACM Trans Multimed Comput Commun Appl 16(4):1–17
14. Li S, Lan T, Balasubramanian B, Won Lee H, Ra M-R, Krishna Panta R (2022) Pushing collaborative data deduplication to the network edge: an optimization framework and system design. IEEE Trans Netw Sci Eng 9(4)
15. Yang X, Lu R, Shao J, Tang X, Ghorbani AA (2022) Achieving efficient secure deduplication with user-defined access control in cloud. IEEE Trans Depend Secure Comput 19(1)
16. Yuan H, Chen X, Li J, Jiang T, Wang J, Deng RH (2022) Secure cloud data deduplication with efficient re-encryption. IEEE Trans Serv Comput 15(1)
17. Huang GB, Ramesh M, Berg T, Learned-Miller E (2007) Labeled faces in the wild: a database for studying face recognition in unconstrained environments. University of Massachusetts, Amherst, Technical Report 07-49, October
18. Shynu PG, Nadesh RK, Menon VG, Venu P, Abbasi M, Khosravi MR (2020) A secure data deduplication system for integrated cloud-edge networks. J Cloud Comput: Adv Syst Appl

An Intelligence Security Architecture for Mitigating DDOS Attack in CloudIoT Environment E. Helen Parimala, K. Sureka, S. Suganya, Y. Sunil Raj, and L. Lucase

Abstract In the field of information technology, IoT and Cloud play a crucial role. IoT aims to connect disparate objects so that smart services and applications can be accessed from anywhere. Even though IoT and Cloud have been developed independently, combining these two technologies results in a renaissance in the construction of smart environments and future networks. CloudIoT is the name of this latest evolution. The major concern with CloudIoT is security. Researchers from around the world work to incorporate innovative CloudIoT services to satisfy users' needs, but so far, no well-known structure has been established. DDoS is one of the destructive attacks that can potentially impair CloudIoT data transfer. Therefore, a comprehensive resolution to this problem is required. To safeguard both a network of clients and a network of providers against DDoS attacks, a special security technique is suggested. Keywords CloudIoT · Smart mitigating service firewall · Distributed denial of service

1 Introduction The terms “Internet” and “Things” are used to characterize the IoT, a brand-new paradigm in the area of computer technology. The online world is a repository of computer networks that are linked together and act as the global system for users E. H. Parimala (B) SSM College of Arts and Science, Dindigul, Tamil Nadu, India e-mail: [email protected] K. Sureka · S. Suganya SSM Institute of Engineering and Technology, Dindigul, Tamil Nadu, India Y. Sunil Raj St. Joseph’s College (Autonomous), Trichy, Tamil Nadu, India L. Lucase St. Xaviers Catholic College of Engineering, Chunkankadai, Nagercoil 629003, Tamil Nadu, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_4


all over the world [1]; it uses the Internet Protocol suite (TCP/IP). This network connects a number of public, private, business, educational, and government institutions worldwide, and it is connected through a variety of wireless, electronic, and optical networking technologies [2]. For communication purposes, cloud computing technology is integrated with a number of IoT devices and provides resources on demand [3]. The following needs can be met using this technology: (1) lower prices for facilities, (2) high performance, (3) use of maximum computational power, (4) open device accessibility, and (5) versatility. Although the IoT and the Cloud are distinct approaches, combining them results in a resurgence in the creation of smart environments and future networks. CloudIoT is the name of this most recent development. IoT and Cloud computing are two distinct innovations that are used in various aspects of our daily life [4]. Because their use and adoption are becoming more pervasive, they are said to be the most significant components of the future Internet. However, Cloud users still face a number of privacy and security issues, such as identity management and the tendency for data transmitted by IoT devices to be lost before reaching the authentic node. The rising number of reported DDoS-related incidents is a significant and fatal threat [5]. In order to launch a DDoS assault, an attacker must seize control of a network of online machines [6]. Malware infects data systems and other devices, such as Internet of Things devices, each one becoming a robot or a zombie [7]. Once a botnet, also known as a group of bots, is created, the attacker can control it remotely [8]. Each bot will respond by sending requests to the target when the attacker targets the IP address of a victim, possibly forcing the targeted server or network to exceed capacity and resulting in a denial of service to regular traffic [9]. Once such a fleet is created, the assailant can control it by remotely directing each bot and providing it with updated instructions [10]. The privacy and authenticity of users and data could be seriously compromised by any CloudIoT information leak [11]. To meet the requirements of CloudIoT users, researchers around the world work to integrate intelligent CloudIoT services [12]. However, no notable architecture has yet been verified [13]. As a result, it is critical to develop an architecture that combines intelligent CloudIoT activities and operations to enable secure navigation and to secure activities at any time and from any location [14]. Security considerations like authenticity, confidentiality, integrity, and privacy present the most significant obstacles in putting this scenario into action [15]. There are a variety of attacks that can harm network services and resources, according to network security experts. As a result, the provider increases the user's required infrastructure in response to the high demand; this procedure is reflected in the user's bills, making the Cloud too expensive. There must be a technical solution to this issue [16]. To stop DDoS attacks on a CloudIoT provider's network and CloudIoT users' networks, a novel security method is proposed. This proactive model provides continuous monitoring of the remaining packets utilising different security layers, in addition to hiding the locations of protected servers and categorising users into four groups based on two verification checks, and verifies the legitimacy of users prior to their initial access to the network.
In order to create a secure and convenient cloud environment


and speed up reaction time for legitimate users, the method entails initially confirming the user's validity and then monitoring their behaviour utilising a variety of methods and components in cloud customers' networks. The primary focus of this paper is on safeguarding legitimate CloudIoT users' networks, as well as those of CloudIoT service providers, from distributed denial of service (DDoS) attacks and on maintaining uninterrupted CloudIoT services. It can be seen as a contribution to the fight against this kind of threat to the CloudIoT environment. Security risks can be mitigated with a smart mitigating firewall and the adoption of ECC. Keeping the discussed factors in mind, the smart mitigating service firewall architecture for DDoS attacks proposed in this paper provides complete security to CloudIoT users and providers through a variety of measures against DDoS attacks.
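Since the introduction credits ECC adoption with mitigating security risks, and Sect. 3 relies on ECDSA certificates for client authentication, the following hedged sketch shows the underlying sign/verify primitive with the Python cryptography package. It illustrates the primitive only, not the paper's full registration protocol; the message content is an assumed example.

```python
# Minimal ECDSA sign/verify sketch over the NIST P-256 curve.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # client key pair
public_key = private_key.public_key()

message = b"device registration request"  # assumed payload
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# The verifier checks the signature against the certified public key;
# verify() raises InvalidSignature if the message was tampered with.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```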

2 Review of Literature The major security issues and threats are reviewed in order to construct a secured architecture for integrating IoT and Cloud. The literature review carried out focuses on the following points to define the problem statement of the proposed research work. The author [17] provided a comprehensive explanation of the evolution and variations of hacker attacks. The methods for prevention and mitigation were categorised and examined in light of how they work. Secure overlay services, honeypots, load balancing, filters, and awareness-based prevention systems are just a few of the preventive methods' guiding principles. Communication difficulties, data management, the scalability of massive amounts of data, real-time data processing, security, privacy, interoperability, and a lack of standardization are just a few of the key issues that the author has highlighted [18]. The author [19] has proposed a mitigation technique: when an attack occurs, the architecture issues an alert signal along with the packet, and the detection model is updated when a new type of attack occurs. The function executes two things: classification of countermeasures and formation of logs. In the event of an attack alert, a countermeasure based on dropping the attacked packets was proposed in 2018. The log database holds the information pertaining to the attacked packet. This system has one disadvantage: at certain times, communication overhead could occur, necessitating adjustments to the cloud service architecture [20]. The author [21] makes the case for an end-to-end security architecture that contributes to the idea of a dynamic composite trust model. Here, the policy monitoring and enforcement (PME) component monitors each service individually during runtime to look for suspicious behaviour. Aspect-oriented programming (AOP) provides the model for safeguarding against malicious activity with tamper-proof and non-bypassable protection. Based on the attacks happening in software-defined networks (SDNs), the author [22] presented a review based on the DDoS attack scenario, and the paper made a


broad classification into intrinsic factors (depending on innate qualities) and motivational factors (depending on outside variables). Based on the logic in circuits and defence functions such as detection, mitigation, and both detection and mitigation combined, the solutions are classified in this paper. The author [23] proposed a secure centralised alert and control system that receives control from the server and has a data support mechanism with high effectiveness and flexibility; the same token is applied, with the distribution verified, among the central data scheme. The author [24] proposed MIoT (the Medical Internet of Things). Nowadays, MIoT protects the lives of people who are at risk and ensures their wellbeing; a patient's various health parameters can be monitored remotely, in real time, from a medical data centre. The author [25] proposed a framework for resolving a number of cloud computing security and ethical issues by employing authentication. In the cloud, secure data processing and storage operations can be carried out using this strategy. Symmetric cryptography is utilised to convert the data into cipher text stored in the cloud, for use in computations comparable to those performed on the plain text; the data can then be decrypted using the same approach and the appropriate parameters for cloud data retrieval. One of the goals of the author's [26] use of text-driven CAPTCHA, a challenge-response program, is to reduce DDoS attacks on the IaaS Cloud service; it reduces traffic and determines whether the packets originate from humans or computers. A lot of effort has gone into providing new and more reliable CAPTCHA formats. A cloud-based DoS attack detection model (CDOSD) is suggested by the author [27] that combines a decision tree classifier with the basic binary version of artificial bee colony optimization (BABCO) and a key feature extraction technique. Due to their dynamically varying nature, DoS attacks are a serious threat that the firewall cannot prevent, resulting in users' inability to access cloud services. Due to its rapid learning rate in comparison with other classification methods, the decision tree classifier is widely used. Additionally, DoS attacks on the cloud host can be detected with high accuracy and low false-positive rates using CDOSD. According to this assessment of the literature, there is no comprehensive framework that combines cloud computing and the Internet of Things to provide intelligent distributed denial of service attack mitigation. Implementing this scenario presents significant difficulties due to security requirements like legitimacy, anonymity, consistency, and secrecy. By using ECC, security concerns can be reduced. This paper therefore proposes and evaluates a protected framework for combating DDoS attacks by merging the Internet of Things and the Cloud.

3 Proposed Architecture The major goals of the suggested architecture (Fig. 1) are to create a trusted architectural style to reduce DDoS attacks when integrating IoT and Cloud, to create a security paradigm, an


Fig. 1 Smart mitigating service firewall architecture

edge security mechanism, and security algorithms for the configuration mode, and to reduce the effect of DDoS in a stratified way. In addition to hiding the locations of the protected servers and splitting users into four categories according to two verification tests, the study validates the genuineness of users at the start of their internet connectivity and then provides surveillance for the remaining transmissions using further layers of protection. In order to create a secure and convenient cloud environment and shorten the response time for legitimate users, it requires initially confirming the user's legitimacy and then monitoring their activity using a variety of techniques and components in cloud customers' networks. To detect malicious activity from users who pass the Jigsaw Image Puzzle Test and the Dynamic Captcha with Equal Probability Test (such users are humans, not botnets), the SMS FIREWALL DDoS uses an intrusion detection and prevention device, a reverse proxy (RP) server, and a traffic monitoring system to examine the contents of the received packets. The two scenarios below show the method for adding a packet's IP address to the White List or the Black List; a sketch of the rule is given at the end of this section. 1. When a user fails both the Dynamic Captcha with Equal Probability Test and the Jigsaw Image Puzzle Test, the user is deemed a malevolent user, and their IP address is added to the Black List. 2. Upon passing both checks, the user's IP address is added to the White List. If the user passes the first test but fails the Jigsaw Image Puzzle Test on the second trial, the architecture credits the first attempt and transfers that user's IP address to a Suspicious List. The reverse proxy has a crucial function in securely retrieving the requested options from the cloud database for the clients. To admit legitimate user and device details, the reverse proxy performs client and device authentication, and the authenticated consumer and gadget receive secure service from the cloud database, with the certificate registry, through an ECDSA certificate granted to the client. The ECC


secure algorithm is used for this purpose, and based on the ECDSA certificate, cloud security services are provided to all legitimate users. Through this entire process, an end-to-end security architecture is implemented.
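The list-assignment rule above is small enough to state directly. The following is a minimal illustrative sketch in Python; the function and list names are our own, not taken from the implementation:

def categorize_user(passed_captcha: bool, passed_puzzle: bool) -> str:
    # Assign a user's IP to a firewall list from the outcomes of the two
    # verification checks: the Dynamic Captcha with Equal Probability Test,
    # then the Jigsaw Image Puzzle Test.
    if passed_captcha and passed_puzzle:
        return "white"       # legitimate human user
    if passed_captcha and not passed_puzzle:
        return "suspicious"  # credited for the first test; re-verified later
    return "black"           # failed human verification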

3.1 Functionality of the Proposed Architecture Three-way handshake, secrecy, and integrity are all assured via secure client and gadget identity verification. When a customer uses the CloudIoT-connected smart service, the encrypted US_id and DV_id are validated against the user information and gadget details provided in the issued X.509 certificate. The user and device credentials are validated against the US_id and DV_id. If the customer's credentials and the customer's gadget do not match the US_id and DV_id, the user's username and password are verified against the credentials already saved in the mitigation system (Fig. 2) database server. The customer and the gadget are authenticated to utilize the CloudIoT features if they match; otherwise, if any of the aforementioned checks fails, the authentication procedure is terminated and the user or the device is blocked from utilising the service. The service requested by the user is sent securely from the cloud to the requesting user. The user's service request is authenticated along with the device used by the user. The gadget and identity authentication using the certificate enables the authorized user to avail the requested service. The suggested architecture thus provides end-to-end confidentiality, overcomes DDoS attacks, and offers secure services from the CloudIoT to the user. Because elliptic curve cryptography is being used,

Fig. 2 Functional description flow diagram of the SMS DDoS mitigating service


it will be considerably more difficult for any hacker to break or steal the information during data transfer. The SMS firewall maintains four different lists: (1) White List, (2) Black List, (3) Suspicious List, and (4) Malicious List. The SMS firewall malware detection service, regarded as the first verification phase, verifies user authentication through the graphical Turing test. After the first verification phase, the user is free to move on to the second verification phase. The client puzzle server makes it possible for users to randomly receive jigsaw puzzle images. As in the previous phase, users are allowed three attempts. Once a user passes the test, the client puzzle server notifies the SMS firewall. When a lawful user logs into a cloud account, the IDPS system takes into account the user details kept on the suspicious and malicious lists sent by the SMS firewall. The IDPS in the suggested architecture uses the binary firefly optimisation technique. Thus, the cloud server only accepts and authorizes legitimate users, with the IDPS serving as a filter separating malicious from valid users. The white list category of the reverse proxy contains the list of authorized users. The genuine users filtered out onto the IDPS system's list will also have the chance to use cloud services. End-to-end security architecture is utilized throughout the entire procedure.
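As a rough illustration of the certificate-based check described above, the sketch below reads US_id and DV_id from a client certificate's extensions and compares them with the values held in the mitigation database. It uses Python's cryptography package; the private OIDs are placeholders invented for this example, not identifiers used by the authors:

from cryptography import x509

# Hypothetical private OIDs assumed to carry US_id and DV_id.
US_ID_OID = x509.ObjectIdentifier("1.3.6.1.4.1.99999.1")
DV_ID_OID = x509.ObjectIdentifier("1.3.6.1.4.1.99999.2")

def verify_client(pem_bytes: bytes, stored_us_id: bytes, stored_dv_id: bytes) -> bool:
    # Compare the ids carried in the client's X.509 certificate against
    # the values stored in the mitigation system database server.
    cert = x509.load_pem_x509_certificate(pem_bytes)
    us_id = cert.extensions.get_extension_for_oid(US_ID_OID).value.value
    dv_id = cert.extensions.get_extension_for_oid(DV_ID_OID).value.value
    return us_id == stored_us_id and dv_id == stored_dv_id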

4 Security Algorithms This study presents the security algorithms created for the suggested secured architecture for mitigating DDoS attacks while fusing the Internet of Things and cloud computing. All tasks, including data processing and data transactions between users, the services environment, and the CloudIoT, are secured by the various levels of authentication and appropriate cryptographic algorithms. The algorithm creates self-signed ECC certificates using the secure socket layer (SSL). The commands used are furnished along with the pseudocode. The certificate generated using the following algorithm is a sample which may be used as a model for generating certificates with the credentials at each process and level of security in the proposed architecture.

4.1 Algorithm for Secure User and Device Registration

Input: Username, Password, Mobile number, Device IMEI number

Function X509_Certificate RegisterUserAndDevice(username, passwordEncodedBase64, mobilenumber, imei) {
    /* validate or register the user */
    string passwordDecoded = DecodeBase64(passwordEncodedBase64);
    int US_id = Check_User(username, passwordDecoded);
    /* validate or register the device */
    int DV_id = Check_Device(mobilenumber, imei);
    /* get or generate a new certificate for client authentication; the new
       certificate is generated using the username and password, and US_id
       and DV_id are added in the X509 extension part */
    X509_Certificate certificate = GetUser_Certificate(US_id, DV_id);
    if (certificate == null)
        certificate = CreateUser_Certificate(US_id, DV_id, username, passwordEncodedBase64);
    /* a custom certificate validator is used based on the following
       parameters (US_id, DV_id, username, passwordEncodedBase64) */
    MapOneToOneCertificateAuthentication(US_id, DV_id, username, passwordEncodedBase64);
    AuthenticatedUser(US_id, certificate);
    return certificate;
}

4.2 Algorithm for Key Generation Using ECC

Sample self-signed certificate generated by the algorithm:

CN = Helen/emailAddress = [email protected]
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:df:48:f7:e4:a6:93:b8:e8:1a:ed:f0:04:0f:73:
0d:75:eb:6e:8a:f3:91:f4:f7:96:7b:c4:bd:5b:64:
f6:18:2d:cf:f0:93:c5:28:5f:8a:d0:11:62:de:79:
d4:e5:46:ad:83:f4:a3:e7:ed:e1:f5:2c:0f:03:26:
d0:8e:96:49:31
ASN1 OID: prime256v1
Attributes: {Credentials needed for the identification of user/device/service}
Signature Algorithm: ecdsa-with-SHA256
60:25:03:31:00:be:8f:1f:1b:1e:13:85:0c:47:79:7e:c8:05:
fd:fa:c8:7b:b5:eb:cd:76:16:40:a9:04:32:48:61:5b:96:a8:
fe:02:20:29:00:e5:14:3b:cf:06:c5:58:4a:62:c5:65:2e:75:
16:4d:70:06:20:ab:7d:72:15:f6:1c:f7:22:50:5f:62:9c

The algorithm is developed for secure user and device registration. To access the CloudIoT-enabled services, the user has to register themselves and their device in the CloudIoT integrated environment. The user enters credentials such as name, gender, date of birth, mobile number, and email id. Once the details are received, the username and password are generated by the user. The system automatically extracts the cell phone number and the IMEI of the device used for registration. Upon successful OTP verification on the same device for device validation, a user id (US_id) and a device id (DV_id) are generated. The US_id is created by encoding the login and password, just as the IMEI number and registered cell phone number are encoded to create the DV_id. Using US_id and DV_id, a new certificate is created. The created certificate is kept on the smart mitigating database server, where it is checked using the parameters in a unique certificate validation process. The user and device are successfully registered if the validation is successful.
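For illustration, a self-signed prime256v1 certificate of the kind sampled above, signed with ecdsa-with-SHA256 and carrying US_id and DV_id in an extension, could be produced with Python's cryptography package roughly as follows; the extension OID is a placeholder, and CreateUser_Certificate in the pseudocode is assumed to do something analogous:

from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def create_user_certificate(us_id: str, dv_id: str, common_name: str):
    key = ec.generate_private_key(ec.SECP256R1())        # prime256v1
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
    ids = x509.UnrecognizedExtension(                    # placeholder OID
        x509.ObjectIdentifier("1.3.6.1.4.1.99999.3"),
        f"{us_id}:{dv_id}".encode())
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                               # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=365))
        .add_extension(ids, critical=False)
        .sign(key, hashes.SHA256())                      # ecdsa-with-SHA256
    )
    return key, cert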

4.3 Significance of the Proposed Security Algorithms The security requirements of the proposed architecture are achieved using the security algorithms put forth. The various security requirements are confidentiality, privacy, integrity, and non-repudiation. Mutual authentication: Mutual authentication is ensured by the secure user and user device registration and authentication algorithms. The registered user and device may avail the registered service with mutual authentication between the user, the user's device, and the service. Privacy: Privacy is realized with message encryption and by generating a self-signed certificate containing the user's credentials. User data in alerts, messages, and mails is sent securely only to the appropriate CloudIoT client on the registered user's device. Confidentiality: Confidentiality is achieved with secure service and CloudIoT device registration. Certificates are generated with credentials such as US_id and DV_id and authenticated against credentials stored in SM_DB. Through the secure authentication of services, permission must be acquired to post the data inferred by the CloudIoT devices of the service environment. So, there is no possibility of unauthorized service providers or devices posting data for data aggregation in the SM_DB of the SMS_Firewall smart mitigating service. Similarly, unauthorized access to data by an unauthorized user or user device is prevented. Integrity: The Elliptic Curve Digital Signature Algorithm (ECDSA), together with elliptic curve cryptography (ECC) for encryption and decryption and for signing messages by generating a self-signed certificate over SSL, ensures the integrity of sensitive data such as US_id, DV_id, username, password, date of birth, gender, IMEI number, and MAC id of the user's devices. Only the appropriate user with the registered device may change the user's information. So, there is no possibility of unauthorized changes to the user and user device credentials stored in the SM_DB of the SMS_Firewall.
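A brief sketch of the ECDSA signing and verification that underpins this integrity guarantee, again using Python's cryptography package (the message content is illustrative):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"US_id|DV_id|record"   # example record to protect

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
try:
    private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("integrity verified")
except InvalidSignature:
    print("record was tampered with")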

5 Experimental and Result Analysis The main focus of the experimental study is to test the functionality of the proposed architecture in tune with the algorithms devised. The performance of the proposed SMS_Firewall is validated through the OPNET simulation tool, and the results are recorded and tabulated. The time taken for user and user device registration, SMS_Firewall performance,


user and device authentication, and service authentication is measured for different numbers of parallel requests. The memory consumed by the above-listed processes is also computed for different numbers of parallel requests. The performance tests are carried out through the OPNET simulation tool, and the simulated results obtained are tabulated and presented graphically.

5.1 Secure User and Device Registration The user and device registration is carried out securely using the security algorithm proposed for this purpose. The registration is done through a user interface. User credentials such as name, age, mobile number, and email id are fed into the mitigating system database server. Device credentials such as the IP address and MAC id are automatically extracted from the device used by the CloudIoT client. US_id and DV_id are generated by the mitigating system database server, and a self-signed certificate is created. Using the certificate, the user and device information is validated, and the user and device are registered securely. The registered information is encrypted using ECC with the key generated for ECDSA. The service registration is likewise carried out securely using the corresponding security algorithm and a user interface to register the service in the CloudIoT database; the interface is depicted in Fig. 3. The IP address and MAC id of the user are automatically extracted for the service registration process. Sv_id and CD_id are generated by the mitigating system database server, and a self-signed certificate is created. Using the certificate, the service is validated and registered successfully. Among the five thousand parallel requests, 500 requests were chosen and analysed at intervals of 50 requests. The response time taken for the secured registration is graphically represented in Fig. 4, and the response times of user and device registration are recorded in Table 1.

5.2 Number of HTTP Application Requests that the Server Has Received Figure 5 shows the average volume of HTTP application request traffic received by the server from the users in the network system, in bytes per second. The best-effort scenario shows the ideal traffic volume for average HTTP requests received by the server (150 to 350 bytes/second), where no attacks were launched against the implemented system. In contrast, the largest volume of average HTTP requests, 300 to 550 bytes/second, occurred in the scenario with active attacks and no active firewall policy to handle the increased volume of


Fig. 3 User and device registration

Fig. 4 Response time of user and device registration

Table 1 Response time of user and device registration

No. of parallel requests    Response time (s) for SMS_Firewall user and device registration
50                          0.56
100                         0.81
150                         1.04
200                         1.55
250                         3.44
300                         3.45
350                         4.69
400                         5.88
450                         7.69
500                         15.12


Fig. 5 Average HTTP application request traffic received by the server from users, in bytes/second

HTTP requests. However, with the implementation of the firewall policy, the volume of HTTP requests received by the server was reduced to between 250 and 380 bytes/second, which is close to the optimum level of received traffic.

5.3 Application Response Time for HTTP As illustrated in Fig. 6, the average response time for HTTP applications launched between system users and the HTTP server represents the degree of effectiveness of the DDoS prevention mechanism in place. The quickest response time on average is 382.620 s. The overall performance analysis confirms that the secure transmission between the user and the service provider takes place in relatively little time. It also demonstrates how effective the suggested solution is at processing.

6 Conclusion This research work proposes an architecture that mitigates distributed denial of service attacks in an integrated IoT and Cloud environment. Cloud computing and the IoT are


Fig. 6 Average response time in seconds for HTTP applications between users and the HTTP server

essential components of informatics. Despite the fact that the Cloud and IoT are independent technologies, combining them leads to a revival in the development of future networks and smart environments. CloudIoT is the name of this latest evolution. Security is one of the major issues with CloudIoT, and the difficulties of integrating the Cloud with IoT form a major bottleneck. However, no notable architecture has been confirmed to date. As a result, it is essential to create an architecture that integrates CloudIoT smart services and apps to enable secure, anytime, anywhere access to smart services. In order to prevent the consequences of DDoS assaults, the suggested SMS FIREWALL DDoS architecture can be installed in the CloudIoT user's network. It can also be deployed on the CloudIoT provider's side to protect it from DDoS attacks. This indicates that the suggested solution is regarded as anti-DDoS from both the CloudIoT user's and the CloudIoT provider's perspectives. The findings demonstrated how well the suggested framework speeds up response times for authorized users. The activities of the suggested design are completed in 382.620 s for 5000 service accesses or requests. Additionally, the analysis demonstrates the correctness and efficiency of the SMS DDoS firewall in defending the customer's network and cloud while offering new users the services they need with a quick reaction time.


References
1. Handbook of research on cloud and fog computing infrastructures for data science (2018). IGI Global, pp 108–123
2. Liu X, Yang X, Lu Y (2018) To filter or to authorize: network layer DoS defense against multimillion-node botnets. In: ACM SIGCOMM computer communication review. ACM
3. Azarmi M, Bhargava B (2017) End-to-end policy monitoring and enforcement for service-oriented architecture. https://doi.org/10.1109/CLOUD.2017.17
4. Kalkan K, Gür G, Alagoz F (2017) Defense mechanisms against DDoS attacks in SDN environment. IEEE Commun Mag 55:175–179
5. Rastegar F, Dehghan M, Fooladi T (2019) Online virtual machine assignment using multi-armed bandit in cloud computing. IEEE. https://doi.org/10.1109/COMITCon.2019.8862268
6. Rathi A, Parmar N (2015) Secure cloud data computing with third party auditor control. In: Advances in intelligent systems and computing, vol 2, p 328. Springer International Publishing, Switzerland. https://doi.org/10.1007/978-3-319-12012-6_17
7. Raoof Wani A, Rana Q, Pandey N (2017) Cloud security architecture based on user authentication and symmetric key cryptographic techniques. In: 6th international conference on reliability, Infocom technologies and optimization (ICRITO) (trends and future directions). 978-1-5090-3012-5/17
8. Sahi A, Lai D, Li Y, Diykh M (2017) An efficient DDoS TCP flood attack detection and prevention system in a cloud environment. IEEE Access 5:6036–6048
9. Sandar V, Shenai S (2016) Economic denial of sustainability (EDoS) in cloud services using HTTP and XML based DDoS attacks. Int J Comput Appl 11–16
10. Kumar Seth J, Chandra S (2018) An effective DOS attack detection model in cloud using artificial bee colony optimization. Springer. https://doi.org/10.1007/s13319-018-0195-6
11. Li S, Kwang K, Choo R, Sun Q, Buchanan WJ, Cao J (2019) IoT forensics: Amazon Echo as a use case, pp 6487–6497
12. Suciu G, Vulpe A, Halunga S, Fratu O, Todoran G (2013) Smart cities built on resilient cloud computing and secure internet of things. In: 19th international conference on control systems and computer science (ICCSCS), pp 513–518
13. Sharma S, Gupta A, Agrawal S (2016) An intrusion detection system for detecting denial-of-service attack in cloud using artificial bee colony. Springer
14. Siegel JE, Kumar S, Sarma SE (2018) The future internet of things: secure, efficient and model-based, pp 2386–2398
15. Sivakumar S, Anuratha V, Gunasekaran S (2017) Survey on integration of cloud computing and internet of things using application perspective. Int J Emerg Res Manag Technol 6(4). ISSN: 2278-9359
16. Haller S. Internet of things: an integral part of the future internet. In: SAP presentation. http://Services.future-internet.eu/images/1/16/A4-Things-Haller.pdf
17. Subashini S, Kavitha V (2011) A survey on security issues in service delivery models of cloud computing. J Netw Comput Appl 34:1–11
18. Sun W, Cai Z, Li Y, Liu F, Fang S, Wang G (2018) Security and privacy in the medical internet of things: a review. Hindawi Secur Commun Netw 9. Article ID 5978636. https://doi.org/10.1155/2018/5978636
19. Surbiryala J, Li C, Rong C (2017) A framework for improving security in cloud computing. In: 2nd IEEE international conference on cloud computing and big data analysis. 978-1-5090-4499
20. Uddin M (2013) Intrusion detection system to detect DDoS attack in Gnutella hybrid P2P network. Indian J Sci Technol 4045–4057
21. Velliangiri S, Premalatha J (2017) Intrusion detection of distributed denial of service attack in cloud. Springer. https://doi.org/10.1007/s10586-017-1149-0
22. Venkatesh A, Eastaff MS (2018) A study of data storage security issues in cloud computing. Int J Sci Res Comput Sci Eng Inf Technol (IJSRCSEIT) 3(1). ISSN: 2456-3307


23. Aliaa Walid AM, Salama M (2017) MalNoD: malicious node discovery in internet-of-things through fingerprints. In: European conference on electrical engineering and computer science (ECEECS). IEEE. https://doi.org/10.1109/EECS.2017.58
24. Wang H, Jia Q, Fleck D, Powell W, Li F, Stavrou A (2014) A moving target DDoS defense mechanism. IEEE
25. Wang B, Zheng Y, Lou W, Hou T (2015) DDoS attack protection in the era of cloud computing and software-defined networking. Comput Netw 81:308–319. https://doi.org/10.1016/j.comnet
26. Yan Q, Yu F, Gong Q, Li J (2016) Software-defined networking (SDN) and distributed denial of service (DDoS) attacks in cloud computing environments: a survey, some research issues, and challenges. IEEE Commun Surv Tutor 18(1):602–622
27. Yihunie F, Odeh A, Abdelfattah E (2018) Analysis of ping of death DoS and DDoS attacks. https://www.researchgate.net/publication/325705681

Automated Activity Scheduling Using Heuristic Approach Bhoomil Dayani, Raj Busa, Divyesh Butani, and Nirav Bhatt

Abstract The intricacy of classroom scheduling arises from moving and organizing classrooms based on audience capacity, the available facilities, lecture times, and many other factors. This paper suggests a heuristic timetable optimization method to increase lesson-planning productivity. This work aims to find an optimal solution to the timetabling problem, which is a highly constrained NP-hard problem. The need for this sort of timetabling software arises because manually designing a timetable takes too much time and effort, and if an overlap occurs in the timetable, it is redesigned using hit-and-miss methods, which have very high time costs. In this work, we therefore develop software that generates a timetable automatically based on the provided information. The expected main input is data about teachers, classes, and subjects, along with the maximum weekly workload of a teacher, from which a valid timetable is generated. The main constraints that this software should satisfy are that a teacher should not have a lecture in more than one class in the same time slot, and a class should not have more than one lecture in a given time slot. The solution obtained from this project has to satisfy the above-mentioned constraints. Keywords Timetable scheduling · Genetic algorithm · Constraints · Optimization · Heuristic

B. Dayani · R. Busa · D. Butani · N. Bhatt (B) Smt. Kundanben Dinsha Patel Department of Information Technology, CSPIT, Charotar University of Science and Technology, Anand, Gujarat, India e-mail: [email protected] B. Dayani e-mail: [email protected] R. Busa e-mail: [email protected] D. Butani e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_5


1 Introduction A related proposal is made in "Time Table Scheduling using Genetic Artificial Immune Network." Scheduling is one of the crucial duties that arise in everyday life. Numerous scheduling problems exist, including those involving employees, production, education, and others. Due to the various constraints that must be met to find a workable solution, organizing educational schedules can be challenging. According to Jonas and Rasmus [8], the complexity of the problem is what leads to the adoption of so many different algorithms. Almost every school has unique restrictions and requirements that must be met. Scheduling is the phase of production control that assigns a priority rating to each work item and then arranges for its timely and orderly release to the plant, according to Jha [19]. Heuristic algorithms have been utilized in research with varying degrees of effectiveness [6]. Following an examination of our institute's scheduling system, we attempted to resolve it using a genetic algorithm. In educational institutions, finding a workable lecture/tutorial schedule for a department can be a difficult issue that arises frequently [7]. This work employs a special algorithm for this purpose and suggests using a timetable object in our approach to creating timetables. Even though most faculty organization tasks are now automated, creating lecture schedules is still typically done manually, which requires a lot of time and effort. Timetables are also utilized extensively in schools, colleges, and other teaching settings; in these scenarios, precisely designed timetables are reused for an entire generation without any alterations, which is dull. Another difficulty arises when there aren't enough employees, which forces schedule adjustments or the quick filling of vacant seats. Institutions must plan their courses to fit the requirements of the current time frame and the facilities at their disposal. However, they must also adjust their schedule to accommodate both new course additions and the newly enrolled students in new batches. This can lead to rescheduling the entire timetable for all batches as soon as feasible before the batch courses begin. Another issue comes up when establishing exam schedules: when many batches are scheduled to take tests on the same day, scheduling must be done carefully to account for any issues with the facilities that are available to hold these exams concurrently [20].

2 Literature Survey A literature survey evaluates the data in the literature about a proposed piece of work. A literature review, which summarizes all prior research on a topic and establishes the framework for ongoing research, can be an essential component of a research endeavor. It is the most crucial section of the report, since it directs the research most appropriately and helps set a goal for the analysis. The issue of time scheduling has been addressed via evolutionary methods. Methods like genetic algorithms and evolutionary algorithms have been employed with varying


degrees of effectiveness. In this paper, we have examined the issue of scheduling an instructional timetable using a genetic algorithm. We also used an artificial genetic immune network and a memetic hybrid algorithm to tackle the issue, and we compared the outcomes with those of the genetic algorithm [1]. The results show that GAIN can reach a feasible solution faster than GA [3]. Academics frequently struggle with figuring out a feasible study plan for a university's major departments. One study provides an evolutionary algorithm (EA) method for resolving the robust timetable problem at an institution, addressing chromosomal representation issues; using timelines, heuristics, and context-based reasoning, solutions may be obtained in appropriate computing time [2]. To increase cohesiveness, a clever genetic alteration plan has been implemented. Using actual data from a top university, the comprehensive curriculum plan described in that paper is accepted, assessed, and discussed. An automated timetabling system employs Chowdhary's [3] efficient timing algorithm, which can manage both strong and weak constraints, so that each teacher and student can review their schedule once a specific semester is planned, without making any arrangements themselves. Based on the teachers' schedules, the availability and capacity of resources, and other rules applicable to different classes, semesters, teachers, and grade levels, the timetable generation system develops a timetable for each class and teacher. Nanda [4] proposes a general remedy for the timetabling issue. Most of the previously suggested heuristic programs approached the problem from the viewpoint of students; this technique instead works from the perspective of the subject, i.e., the instructor's availability at a specific time. The planning strategy given in that study is adaptable, with the main goal of addressing academic conflicts and teacher-related concerns, even though all potential barriers (e.g., teacher availability) are dealt with firmly. Elkhyari [5] presented algorithmic solutions to address the scheduling issue while accommodating teacher availability. This technique, which employs a heuristic methodology, resolves the school timetabling problem: it initially builds a temporary timetable using title sequences that are produced at random, and subjects are moved to a Clash data structure if a teacher is assigned more subjects than are permitted.

3 Problem Statement The challenge of timetable creation can be modeled as a constraint satisfaction problem with several uncertain factors. These problems must be modeled in a way that the scheduling algorithm can understand. Planning entails setting up several sensible constraints on which activities can take place at once. For instance, two courses taught by the same faculty member must not be scheduled at the same time at a tertiary school, to better manage classrooms. Likewise, two disciplines that the same set of students must take should not clash.
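The two hard constraints can be checked mechanically. Below is a small sketch in Python, with an assumed tuple layout for lecture assignments, that returns True when a candidate timetable violates either rule:

def violates_hard_constraints(timetable):
    # timetable: iterable of (day, slot, class_name, teacher, subject)
    seen_teacher, seen_class = set(), set()
    for day, slot, class_name, teacher, _subject in timetable:
        if (day, slot, teacher) in seen_teacher:
            return True   # teacher double-booked in one slot
        if (day, slot, class_name) in seen_class:
            return True   # class double-booked in one slot
        seen_teacher.add((day, slot, teacher))
        seen_class.add((day, slot, class_name))
    return False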


4 Related Works The timetable planner assists students in creating their schedules based on the courses they choose to enroll in by accurately recommending the best potential combinations for that particular semester [9]. A review of university scheduling by Burke et al. [10] focuses on meta-heuristic, multi-criteria, case-based reasoning, hyper-heuristic, and self-adaptive techniques. The goal is to improve real-world timetabling systems by using parameters, multi-criteria approaches, and other techniques. Such an approach does not rely on domain knowledge and represents a more adaptable method for finding the best solution to more challenging problems. New problems are solved using the knowledge-based paradigm of case-based reasoning. To overcome the issue and fine-tune the parameter settings, the single-stage simulated annealing (SA) design method is reliable and efficient [11]. Based on a provided set of operating parameters for timetable construction utilizing an evolutionary strategy, tabu search (TS) accelerates the solution-finding process [12]. Simulated annealing depends on a cooling schedule, and a sufficiently long tabu list is needed for tabu search. For the different search techniques, such as tabu search, simulated annealing, scatter search, and genetic algorithms, Nanda et al. [13] use a heuristic approach to try to obtain a decent approximate solution. Memetic and genetic algorithms, as well as more recent heuristics, aid in the exploration of neighborhood solutions [14]. The runtime of the novel greedy heuristic algorithm is greatly shortened, since it models the restrictions based on the objective function for all of a student's curricula and transforms heuristics using relational calculus [15]. It offers an all-encompassing solution to conflicts between lectures and teacher-related topics. A sound set of solutions is produced by a heuristic strategy that does not break hard constraints. The evolutionary squeaky wheel optimization process follows a cycle of analysis, selection, mutation, prioritization, and construction, and ends when the stop condition is met [16]. Distributed algorithms are frequently utilized to provide agents for improved resource allocation and to discover a solid solution to this problem. Agent technology is a significant area of study that aids in improving user profiles and learning more about applications in the field [17]. The main focus of distributed architecture is on multi-agent-system-based techniques that improve the ability to schedule each department and avoid resource conflicts [18].

5 Existing System The construction of a timetable is tedious and time-consuming. Since there are no timetable generators in active use, this is currently done manually. The main issue while inputting the timetable is slot collisions. As a result, even previously developed software does not


follow the standards. The current system is therefore a time-consuming, tedious process that requires manual labor and offers little flexibility. The platform used to develop this software is a web application. The programming language used to implement it is JavaScript: React.js is used for the UI, while the backend is implemented using Node.js. To make it a multi-user app, a login facility is provided, and a MongoDB database is used locally to store each user's data. The project initially ran on local servers, but we have since deployed it using Heroku and moved the database from a local instance to MongoDB Atlas. There are five major steps involved in the proposed work: • Step 1: We designed our algorithm for this problem by analyzing the problem deeply and designing the UI for the project. • Step 2: We improved the efficiency of our algorithm, calculated its time complexity, and proved its correctness. • Step 3: We implemented the algorithm on the backend and obtained the desired results using dummy inputs. • Step 4: The backend was implemented, and the remaining APIs were developed to interact with the database and frontend. • Step 5: The frontend was developed, and the integration of the backend with our UI was done in this step. In contrast, the existing process of creating a schedule is done entirely manually while considering all potential limitations, both large and small.

6 Proposed System The current system generates a timetable manually, which requires a great deal of time. The creation of a semester schedule is one of the top duties at the beginning of each academic year. This may seem simple, but creating a schedule that takes into account everyone's availability across all semesters can be a hard job. The manual process of creating timetables is in most cases tedious and time-consuming for faculty members. The final system should be able to construct timetables entirely automatically, which will save an institute's administration a great deal of time and work. It will provide a schedule that can be used for all semesters as well. The timetable should be planned following the university's set schedule for each course and the workload of the faculty members who will teach the corresponding disciplines. This also emphasizes the efficient use of resources, such as academic personnel, labs, and rooms. These inputs will be used to build potential timetables for the working weekdays for teaching faculty, integrating all resources as efficiently as possible while taking the limits into account. The flowchart of the proposed system is shown in Fig. 1. The home page contains the Login and Registration pages, where a teacher fills in some basic information and creates a teacher account in the application. You will be taken to the homepage, where you


Fig. 1 Flow of proposed system

will find several sections and a menu bar with choices for adding classes, subjects, teachers, and slots to the currently selected page. After selecting a section and filling out its details, click the "add" button to add the information to the database. Clicking the home button takes you to the appropriate page, and the database holds all information at all times. You can check out all the information on all the


professors, classes, times, and subjects; then, once your task is complete, you can exit the app. The application connects to a website through which we can create, add, update, or delete teacher information. We can use the website to store teacher information and thus create our timetable. You will be taken to a page with sections for the first, second, and third years; after selecting one, you will be taken to a page with a faculty name and a button that, when clicked, displays a timetable. Other details about the professors are to be added when you click on "Add Faculty, Classroom, and Times," including Teacher Name, Subject, Lecture Timing, Slots, Semester, Credit, and Generate.

6.1 Algorithm Description As the timetable-generation problem is an NP-hard problem, it is difficult to obtain an optimal solution. The algorithmic approach used in this project is a heuristic approach. All the hard constraints are handled using constraint-based programming. The algorithm has two functions: a main function that generates the timetable, and a supporting function, day, used in the main function. The features and constraints implemented can be found here. Our algorithm takes multiple inputs, which are listed below (example values follow this description): instances: data structure which stores info about the provided slots to be organized, i.e., [[Ti, Ci, Si, LTi, Li], …, [Tn, Cn, Sn, LTn, Ln]]. Here T = teacher, C = class, S = subject, LT = number of lectures, L = labs; ins keeps a record of the lectures assigned and is updated in the generating function according to the number of slots given. givenSlots: data structure which stores info about the given slots on each day, i.e., [2, 3, 3, 5]. classes: data structure which stores info about the classes, i.e., ["A", "B"]. teachers: data structure which stores info about the teachers, i.e., ["T1", "T2"]. Our algorithm generates the timetable section-wise, meaning it handles the timetable of one section at a time and checks the provided slots. If it finds a slot related to the


class, it adds that instance to the section-instances data structure. This is done in the first half. In the second half, a section is again selected, and iterating through the slots of the given day and the section's instances, a slot in the timetable is assigned. At the end of the algorithm, we get an array containing nested arrays, where each nested array represents the timetable of one section. Concrete example inputs are shown below.
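For concreteness, the inputs described above might look as follows; the values are illustrative only:

# Each instance: [teacher, class, subject, lectures_per_week, labs]
instances = [
    ["T1", "A", "Maths",   3, 0],
    ["T2", "A", "Physics", 2, 1],
    ["T1", "B", "Maths",   3, 0],
]
givenSlots = [2, 3, 3, 5]       # lecture slots available on each working day
classes  = ["A", "B"]
teachers = ["T1", "T2"]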

6.2 Pseudo Code Input: In this algorithm, we give the following input to get the desired results. instances: data structure which stores info about the provided slots to be organized, i.e., [[Ti, Ci, Si, LTi, Li], …, [Tn, Cn, Sn, LTn, Ln]]. Here T = teacher, C = class, S = subject, LT = noOfLectures, L = labs; ins keeps a record of lectures assigned and is updated in the generating function according to the number of slots given. givenSlots: the data structure that stores info about the given slots on each day, i.e., [2, 3, 3, 5]. classes: data structure that stores info about the classes, i.e., ["A", "B"]. teachers: data structure that stores info about the teachers, i.e., ["T1", "T2"]. Variables used in the algorithm (a sketch of the procedure follows this list): • sectionInstances: data structure to store info about each section • TT: data structure initialized with all given slots set to 0; the variable containing info about a lecture replaces the zero once that lecture is decided and given a specific slot • teacherTT: stores info about each teacher and the slots in which he or she is assigned a lecture • numOfDays: stores the total number of working days • Flags and counters to keep track of clashes:

• regenerateTimeTableCountSec: counts how many times the timetable of a section is regenerated • regenerateTimeTableFlagSec: flag to check whether a clash occurs • regenerateTimeTableListSec: keeps a record of the input that causes the clash • timeTableNotPossibleCount: keeps count of how many times timetable generation fails on a specific input • impossible: indicates that it is impossible to generate a timetable with the given data
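A simplified sketch of the two-pass procedure of Sect. 6.1, written in Python over the inputs and variables above; the deployed system additionally maintains the regeneration flags and counters listed here, which are omitted for brevity:

import random

def generate_timetable(instances, given_slots, classes, teachers):
    # Pass 1: collect the instances belonging to each section (class).
    section_instances = {c: [list(i) for i in instances if i[1] == c]
                         for c in classes}
    teacher_tt = {t: set() for t in teachers}   # (day, slot) pairs taken
    timetable = {c: [] for c in classes}

    # Pass 2: walk the given slots day by day and assign lectures,
    # skipping any assignment that would double-book a teacher.
    for c in classes:
        pending = section_instances[c]
        for day, n_slots in enumerate(given_slots):
            for slot in range(n_slots):
                random.shuffle(pending)
                for ins in pending:
                    teacher, _cls, subject, remaining, _labs = ins
                    if remaining > 0 and (day, slot) not in teacher_tt[teacher]:
                        teacher_tt[teacher].add((day, slot))
                        timetable[c].append((day, slot, teacher, subject))
                        ins[3] -= 1
                        break
    return timetable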

7 Result Analysis 7.1 Dashboard This page is shown after a user successfully logs in to the application. It consists of a header and a navigation panel. • Home Button: leads to the dashboard whenever someone presses it. • Logout Button: logs the user out of the application and takes him/her to the welcome screen. In the navigation panel, there are five buttons, which are described below. • Classes: has a submenu showing the Add and All Classes buttons. By clicking the Add button, a new page is opened where the user can add a new class. By clicking on the All Classes button, all the classes are shown on the UI. • Subjects: has a submenu showing the Add and All Subjects buttons. By clicking the Add button, a new page is opened where the user can add a new subject. By clicking on the All Subjects button, all the subjects are shown on the UI. • Teachers: has a submenu showing the Add and All Teachers buttons. By clicking the Add button, a new page is opened where you can add a new teacher. By clicking on the All Teachers button, all the teachers are shown on the UI. • Slots: has a submenu showing the Add and All Slots buttons. By clicking the Add button, a new page is opened where you can add a new slot. By clicking on the All Slots button, all the slots are shown on the UI. • Generate: sends a request to the backend to run the algorithm and shows the returned output. The Dashboard page interface is shown in Fig. 2.

7.2 View All Slots This shows all the slots added by a user. You can reach this page by clicking on Slots in the navigation panel on the left and selecting All Slots from the submenu. It has only one button, named Remove, which removes an instance of a slot. The View Slots/Lectures page interface is shown in Fig. 3.


Fig. 2 Dashboard page interface

Fig. 3 View slots/lectures page interface

7.3 Timetable By clicking on the Generate button in the navigation panel on the left, you will see the generated timetable. The output is shown in tables on this page. Each table represents the timetable of one class, each row represents a working day, and each column represents a time slot. In each cell, we show the assigned teacher's name and the assigned subject's name. Figures 4 and 5 show the output page.


Fig. 4 Output page UI (5IT2)

Fig. 5 Output page UI (5IT1)

8 Future Scope We have presented a model for the timetabling problem in this paper. Since the scheduling problem is treated as an optimization problem, it cannot be solved with a fixed objective function. Following a detailed literature review, a heuristic approach to this NP-hard problem was therefore selected to develop the university's course schedules. This timetabling project seeks to generate near-optimal timetables. It is easy to understand, involves less paperwork, is effective, and is automated, which is useful for the faculty's administrators. However, the proposed system can only construct timetables based on a few strict constraints, it provides near-optimal rather than best solutions, and the underlying NP-hard problem entails a long execution time.


9 Conclusion Managing a large faculty and giving out assignments on time is a physically challenging undertaking. Our suggested system will aid in resolving this problem: with it, we can create a schedule for any number of courses and semesters. With the help of this program, users can generate flexible timetables using a variety of tools, more effectively and freely. The system generates different timetables for each class, section, and lab. If another timetable is required, it can be created using a mix of different slots. The project minimizes the time spent producing a timetable and removes the pain of entering one manually. There won't be any scheduling clashes because of how the project is designed.

References 1. Shehadeh KS, Cohn AEM, Jiang R (2021) Using stochastic programming to solve an outpatient appointment scheduling problem with random service and arrival times. Nav Res Logist 68(1):89–111 2. Jian, Srinivasan D, Seow TH, Xu X. Automated time table generation using multiple context reasoning for university modules. In: 2002 IEEE conference 3. Chowdhary A, Kakde P, Dhoke S, Ingle S, Rushiya R, Gawande D (2014) Timetable generation system. Int J Comput Sci Mob Comput 3(2):410–414 4. Nanda A, Pai MP, Gole A (2012) An algorithm to automatically generate schedule for school lectures using a heuristic approach. Int J Mach Learn Comput 2(4):492 5. Elkhyari A, Guéret C, Jussien N (2003) Solving dynamic timetabling problems as dynamic resource constrained project scheduling problems using new constraint programming tools. In: Practice and theory of automated timetabling IV. Springer-Verlag, pp 39–59 6. Boomija MD, Ambika R, Sandhiya J, Jayashree P (2019) Smart and dynamic timetable generator. Int J Res Appl Sci Eng Technol 7. Abhinaya V, Sahithi K, Akaanksha K (2019) Online application of automatic timetable generator. Int Res J Eng Technol 8. Fredrikson R, Dahl J (2016) A comparative study between a simulated annealing and a genetic algorithm for solving a university timetabling problem 9. Schaerf A (1999) A survey of automated timetabling. Artif Intell Rev 13(2):87–127 10. Sani HM, Yabo MM (2016) Solving timetabling problems using genetic algorithm technique. Int J Comput Appl 134(15) 11. Burke EK, Petrovic S (2002) Recent research directions in automated timetabling. Eur J Oper Res 140(2):266–280 12. Zhang D, Guo S, Zhang W, Yan S (2014) A novel greedy heuristic algorithm for university course timetabling problem. In: Proceeding of the 11th world congress on intelligent control and automation. IEEE, pp 5303–5308 13. Rohit PS (2013) A probability-based object-oriented expert system for generating time-table. Int J Res Comput Appl Inf Technol 1(1):52–58 14. Asmuni H, Burke EK, Garibaldi JM (2005) Fuzzy multiple heuristic ordering for course timetabling. In: The proceedings of the 5th United Kingdom workshop on computational intelligence (UKCI05). London, UK, pp 302–309 15. Hsu C-M, Chao H-M (2009) A two-stage heuristic based class-course-faculty assigning model for increasing department-education performance. In: 2009 international conference on new trends in information and service science. IEEE, pp 256–263


16. Deris S, Hashim M, Zaiton S (2009) Solving university course timetable problem using hybrid particle swarm optimization, pp 93–99 17. Burke EK, Silva J, Soubeiga E (2005) Multi-objective hyper-heuristic approaches for space allocation and timetabling. In: Metaheuristics: progress as real problem solvers. Springer, Boston, MA, pp. 129–158 18. Soria-Alcaraz JA et al (2016) Iterated local search using an add and delete hyper-heuristic for university course timetabling. Appl Soft Comput 40:581–593 19. Jha SK (2014) Exam timetabling problem using genetic algorithm. Int J Res Eng Technol 3(5):649–654 20. Bhatt N, Bhatt N, Prajapati P (2017) Deep learning: a new perspective. Int J Eng Technol Manag Appl Sci (IJLTEMAS) 6(6):136–140

Automated Helpline Service Using a Two-Tier Ensemble Framework K. Sai Jatin, K. S. Sai ShriKrishnaa, Samyukta Shashidharan, Sathvik Bandloor, and K. Saritha

Abstract For many years, emergency calls have been handled by humans. Under supervision, a group of people would undergo training for a minimum of three months before they were allowed to become call operators. But this procedure is quite tedious and time-consuming. A person typically calls such an operator to get some kind of immediate assistance when they find themselves in an emergency. However, based on the caller's emergency information, the operator must decide which department's services must be sent out. Additionally, all incoming calls are handled in a queue-like manner. Therefore, if the operator spends a significant amount of time talking to a caller, the callers in the queue behind them may receive assistance very late. The aim of this project is to develop an application that simulates an automated call operator. The front-end interface for distressed users to notify the system of their emergency is a chatbot. The respective departments are then alerted based on the emergency label into which the request is classified, which is carried out using a two-tier ensemble framework with a final accuracy of 94.7%. Keywords Chatbot · Ensemble framework · Machine learning models · Decision trees · Support vector classifiers · Random forest · Hadoop file system

K. Sai Jatin · K. S. Sai ShriKrishnaa · S. Shashidharan (B) · S. Bandloor Computer Science and Engineering, PES University, Bangalore, Karnataka, India e-mail: [email protected] K. Sai Jatin e-mail: [email protected] K. S. Sai ShriKrishnaa e-mail: [email protected] S. Bandloor e-mail: [email protected] K. Saritha Department of Computer Science, PES University, Bangalore, Karnataka, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_6


1 Introduction A chatbot is often described as one of the most advanced and promising expressions of interaction between humans and machines. These digital assistants streamline interactions between people and services, enhancing customer experience. The COVID-19 pandemic has driven a substantial increase in the use of chatbots to support and complement traditional health care systems [1]. However, despite an increase in their use, evidence to support the development and deployment of chatbots in public health remains limited. Statistics show that around 240 million 911 calls are made in the US in an average year, i.e., over 600,000 calls per day. Even in Uttar Pradesh, the count per day reached around 18,000 as of 2021. In the current system of emergency handling, the call operator who handles a particular emergency call is responsible for determining the emergency department that needs to be contacted to dispatch assistance. This is done by analyzing the caller's responses to a predefined set of questions. Initial help may be provided, if necessary, to keep the caller calm until help arrives. But this is quite a long process. It may result in many emergency callers being put on hold for a long time, sometimes getting help only once it is too late. Also, many surveys reveal that the training process for such control room staff is very time- and labor-intensive, often going on for at least three whole months. This project aims at automating the emergency handling process carried out by call operators, with a chatbot interface for people who need any kind of assistance during emergencies. The chatbot interacts with the application users to collect more information about the situation they are in. This data is then used to categorize the emergency into a particular label (Fire, EMS, or Police) and then alert the necessary authorities, such as the Fire, Medical, or Police Departments. For example, when it receives an emergency "Help! I'm surrounded by Fire!", it will alert the Fire department and notify the user, along with providing some suggestions on what needs to be done until help arrives at the user's location. The main goal of this project is to carry out faster emergency handling, so that many users can be tended to on time more efficiently.

2 Literature Survey With respect to chatbots [2] and the classification of text [3], many different approaches have been tried and tested in the past. A web-based chatbot [4, 5] that handles voice input uses a client–server approach for its functioning. The server generates responses to the provided queries in two categories: retrieval of data and information output. A black-box approach is used, which separates the client from the server logic and in turn increases flexibility. To improve the information output of the system, it uses a combination of new and intelligent algorithms at the backend. If a response can be provided, it is recorded to improve the system's response-generation capabilities in the future.


An AI medical chatbot [6], developed during the COVID-19 pandemic, aimed at helping doctors tend to sick people with the help of AIML. It identified the infection severity of a user by analyzing their symptoms: a predefined questionnaire asked users to check the visible symptoms they had. Each symptom was assigned a severity score, from which the overall infection severity of the person was computed. If the value was greater than a preset threshold, the user was referred to a live doctor; otherwise, suggestions for keeping the symptoms at bay were provided. This chatbot reduced the overhead of medical consultations by acting as an intermediary between users and doctors. It could also provide suggestions and preventive measures for COVID-related symptoms, and it was available 24/7. As for limitations, the questionnaire covered only a fixed set of symptoms, while each wave of the coronavirus saw people suffering from varying symptoms; furthermore, the severity score for each symptom was fixed even though it could change over time, which was not taken into consideration.

A novel two-tier ensemble framework [7] made up of various deep learning models was developed using a meta-learning approach. This framework was tested on six publicly available benchmark datasets and showed a significant increase in performance. For each dataset considered, a different combination of deep learning models was used to increase classification performance; the models included CNN, RNN, LSTM, and GRU [8–10]. The outputs produced by the models in the first tier were passed as input to the models in the second tier, whose outputs were then combined into a final result using a meta-learning approach. The proposed framework significantly increased the classification accuracy of the base models used and outperformed basic ensemble framework implementations as well.

NLP sees widespread use in understanding and processing user input and producing a meaningful result as output. The chatbot architecture in [11] integrates a computational model and a language model. It retrieves basic information from the user, such as their qualifications, interests, and hobbies, which the chatbot uses to answer questions from a question set present in its knowledge base. The system uses a filtering process that reduces the search space, making it more efficient. NLTK (Natural Language Toolkit) is a set of programs and libraries for symbolic and statistical NLP in English, written in Python. Even if a student has not framed a question properly, the system can pick out important keywords from it and answer accordingly.

3 Proposed System

The goal of the system proposed in this paper is to speed up the emergency handling process and eliminate the need for human intervention by simulating an automated call operator. Consequently, many emergencies can be attended to promptly, saving lives. The chatbot, which acts as the call operator, forms the frontend. It interacts with the people using the application to learn more about the situation they are in.


A classifier, an ensemble model [12, 13], is located at the backend; it is synonymous with the call operator's brain. The data that the chatbot collects is sent to the backend, where it is assigned a specific label (Fire, EMS, or Police). SMS messages are then sent to the appropriate department based on the assigned label. Until help arrives, the chatbot assists the user by answering any questions they may have. The department receives the user's location and any other pertinent information. Because there is not enough data to train the model, a model learning phase is added: correctly classified emergency-department pairs are saved in the database and used when the model is retrained, improving it overall. For the implementation of this project, it is assumed that the user can communicate an emergency via text or speech. Additionally, given the current complexity of the system, all emergencies reported to the application are regarded as genuine.

4 User Classes

The application is accessible to two primary user classes: administrators and application users.

4.1 Admin Users

The developers of the application make up this group. They manage the chatbot's knowledge base and other stored files, integrate the chatbot into the system, manage the chatbot's data flow with the backend classification model, and handle GPS location tracking.

4.2 Application Users

These are the non-technical users who need emergency assistance. They can use either text or speech to communicate with the chatbot, receive emergency-related suggestions, and ask the chatbot supported questions. They only have access to the chat screen, through which they interact with the application.


5 Dataset

Data is the most crucial component of any machine learning model [14]. Early in the machine learning process, it is necessary to have accurate, complete, and relevant data: the algorithm can only learn features and discover the relationships necessary to make precise future predictions if it is provided with sufficient training data. High-quality training data is therefore the most significant aspect of both machine learning and artificial intelligence, and the system is more likely to perform better if more data is available. From a human-resource perspective, one must also be sure that development team members can understand the data and manipulate it into a form compatible with the classifiers used to build the ensemble model at the backend.

The dataset used to train the ensemble model at the backend is taken from Kaggle's [15] large repository of datasets. It has nine unique columns consisting of String, DateTime, and Integer attributes, but this application focuses only on a few of them. As it is a publicly available dataset, the number of records is limited: only 464,465 records, including duplicate and null entries. This is because accessing larger data sources requires special permissions from the concerned authorities. Keeping this in mind, a model learning phase is introduced to overcome this limitation, which is covered in later sections. Figure 1 shows a snapshot of the dataset considered for the project. All reported emergencies are classified into three labels, EMS, Police, and Fire, as depicted in Fig. 2, making this dataset well suited for our application.
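As an illustration, a minimal pandas sketch of this loading and cleaning step is given below. The column names ('title', 'desc') and the label-extraction rule are assumptions based on the public Kaggle dataset, not details confirmed by the paper.

    import pandas as pd

    # Load the Emergency-911 Calls dataset and drop null and duplicate entries.
    df = pd.read_csv("911.csv")
    df = df.dropna(subset=["title", "desc"]).drop_duplicates()
    # The department label is assumed to be the prefix of the 'title' column,
    # e.g. "EMS: BACK PAINS/INJURY" -> "EMS".
    df["label"] = df["title"].str.split(":").str[0]
    print(df["label"].value_counts())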

Fig. 1 Snapshot of ‘Emergency-911 Calls’ dataset considered for training the classification models


Fig. 2 Different classification labels present in the dataset

6 Architecture

Figure 3 shows the architecture of the application. The user's emergency is sent as input to the saved state of the classification model, and its classification is used to contact the necessary department via SMS. During the model learning phase, the correctness of the classification is verified and, based on the result, the record is either ignored or added to the database, which benefits the system's performance over time. Additionally, the chatbot responds to user inquiries. The system's architecture is broken down into three main parts, outlined below.

Fig. 3 Architecture of the application


6.1 Assistance Chatbot

Being the application's frontend, this is where users interact with the system. The chatbot collects information about the user's emergency, which, after processing, is sent as input to the saved state of the classification model. After the emergency has been classified, the chatbot also offers general advice and answers questions users may have about the emergency. Additionally, location tracking is enabled to monitor the user's location and transmit it to the appropriate emergency department.

6.2 Classification Model

This is the ensemble framework model used to classify emergencies into the three predefined labels (Fire, EMS, or Police); it takes the input dataset for training from the database. The Hadoop file system is used as the database because it is scalable, distributed storage that can process large amounts of data in parallel. This makes the system faster and more efficient, since it takes less time to load the dataset and store it back. Once an acceptable accuracy level is reached initially, the state of the model is saved. User emergencies sent to the model for categorization are run as input on this saved state; based on the label assigned, the respective department is contacted via SMS.

The ensemble framework is designed as two tiers. Evaluations by different researchers reveal that multi-tier ensemble frameworks can significantly increase the accuracy of classifiers on public benchmark datasets: combining the outputs of various classifiers can reduce generalization error and deal with the high variance of individual classifiers. The first tier of the framework consists of decision tree, random forest, and support vector classifier (SVC) models, while the second tier is a logistic regression model used as the final estimator, as shown in Fig. 4. The outputs produced by the first-tier models are aggregated using stacking and fed as inputs to the second-tier model. The classification produced by tier 2 is taken as the final predicted department label.

Although deep learning is getting a lot of recognition in text classification today, two main factors limit its use here: data availability and computing power. As the data available for this project is very limited in comparison to the requirements for training deep learning models, the focus was shifted to more commonly used machine learning models for text classification. Multiple models were run on a subset of the chosen dataset, and their performance was measured in terms of both accuracy and the time taken to train the model and make predictions. A comparison is shown in Table 1. From Table 1, it can be seen that the K-nearest neighbor algorithm had the highest observed accuracy of 94.73%, followed by random forest, decision trees, and SVC at 94.72%. However, the K-nearest neighbor algorithm took significantly more time to execute.


Fig. 4 Architecture of the two-tier ensemble framework for the classification model

As the size of the dataset is expected to increase in the future due to the introduction of the model learning phase, the use of K-nearest neighbors could lead to latency issues, which is not a desirable property of an emergency handling system. Keeping this in mind, SVC, random forest, and decision trees were chosen for constructing tier 1 of the ensemble framework. Figure 5 provides a graphical representation of the observations recorded in Table 1. As stated previously, K-nearest neighbor has the highest recorded accuracy at 94.73%, but with an execution time of 2.48 s it is the slowest among all the models considered.

Table 1 Comparison of performance of various models run on a subset of the chosen dataset

Model                               Accuracy (%)   Exec time (s)
Support Vector Classifier (SVC)     94.72          2.1
Random Forest (RF)                  94.72          2.29
Decision Trees (DT)                 94.72          1.9
Logistic Regression Model (LR)      94.69          2.2
Multinomial Naïve Bayes (MNB)       93.2           2.0
K-nearest Neighbor (KNN)            94.73          2.48
Stochastic Gradient Descent (SGD)   93             2.03


Fig. 5 The first graph is a plot of model accuracies, while the second graph shows their execution times on the input data. Despite having the highest accuracy, KNN required significantly more time than the other models. Latency issues would arise as the dataset’s size increased in the future; thus, it was not selected

6.3 Model Learning

The model learning phase was added to overcome the issue of insufficient data for training the classification model. It is based on the premise that two models making the same prediction are much less likely to be wrong than one model: if both models predict the same label, there is a greater likelihood that it is the correct classification, and keeping track of such correctly classified outputs helps increase the dataset size. As the number of models increases, the probability that the prediction is incorrect decreases, but latency increases in turn. For this reason, only the naïve Bayes algorithm is used in addition to the ensemble classification model. The naïve Bayes algorithm is also run on the user-provided emergency query, classifying it into a particular department label. The prediction result of the ensemble


classification model is then compared with this classification. If they are identical, the emergency and its department label are added to the dataset and stored back in the Hadoop file system for later retraining. The model thus learns over time and becomes more accurate, making it better at classifying new emergencies.

7 Implementation

7.1 Assistance Chatbot

The chatbot is the component of the application with which users interact; it is implemented using React Native and JSON. The chat screen appears after the user has finished logging in. At the backend, a file contains all login and registration information for users. Once the session has begun, the user is free to write down their emergency. If the user's emergency is recorded as audio, it is converted into text before being sent; this is accomplished using the speech option on keyboards. The user's emergency is then sent to the system's backend. Python Flask is used to establish connectivity between the frontend and the backend; it is a lightweight web framework that lets developers create web applications without worrying about low-level details like protocol and thread management. At the backend, the input is sent to the classification model, and the prediction label is sent back to the frontend.

The chatbot assists the user until the concerned departments arrive by allowing them to query it about any concerns they might have in such a situation. A built-in knowledge base and the Levenshtein library are used to respond to the user's query. The Levenshtein distance, a metric for comparing any two sequences, serves as the foundation of this library. The Levenshtein distance between the emergency query and each knowledge-base entry for the predicted department label is calculated, and the entry with the minimum distance is returned to the user as the response. There is no limit on the number of queries the user may ask per session.

The user's GPS location is made available at the backend, where it is sent to the emergency department in an SMS along with the user's information. GPS receivers provide location as a combination of latitude, longitude, and altitude. Using Google Maps, URLs that include the user's location are generated; clicking one opens Google Maps, where the location can be tracked. Instead of reading raw latitude and longitude values to determine the location, the departments can use these links to reach emergency locations quickly and easily. A sample conversation from a test user's session is depicted in Fig. 6. After the emergency is entered, it is sent to the backend for classification, and the department being contacted based on the prediction is displayed on the screen. The user is informed that their current location is being recorded, and the URL is shared.


In addition, the knowledge base provides the best possible answers to a series of questions posed by the user.
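A minimal sketch of this backend flow is shown below, assuming a hypothetical /query endpoint and a toy two-entry knowledge base; the real knowledge-base contents and route names are not specified in the paper.

    import Levenshtein
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    # Hypothetical knowledge base: per-department (question, answer) pairs.
    KNOWLEDGE_BASE = {
        "Fire": [("what should i do during a fire", "Stay low and move to the nearest exit.")],
        "EMS": [("how do i treat a burn", "Cool the burn under cool running water.")],
    }

    @app.route("/query", methods=["POST"])
    def answer_query():
        label = request.json["label"]    # department predicted for this session
        query = request.json["query"].lower()
        # Return the entry with the minimum Levenshtein distance to the query.
        question, answer = min(
            KNOWLEDGE_BASE[label],
            key=lambda entry: Levenshtein.distance(entry[0], query),
        )
        return jsonify({"answer": answer})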

Fig. 6 Sample conversation between Chatbot and application user reporting an emergency


7.2 Query Processing

The query provided by the user ultimately arrives as text, even if it was given through speech. But machine learning models cannot understand textual data directly; they only accept numbers, so the text needs to be converted into numerical values. The emergency query is first processed to remove any special characters and then broken down into words. These words are converted into vectors using CountVectorizer, a tool provided by the scikit-learn library in Python. It transforms a given text into a vector based on the frequency of each word occurring in the entire text. The output is a sparse matrix with the frequency of each unique word present in the query. This sparse matrix is then provided as input to the models at the backend. This process of converting sentences or phrases into a format that models can process is known as feature extraction and is an important step in any machine learning pipeline.

In the sparse matrix, each unique word is represented by a column, and each text input from the document (the set of queries) is a row. The value of each cell is simply the count of that word in that particular text input. This can be visualized with the help of Fig. 7: two queries are provided as input to the CountVectorizer object, resulting in the matrix shown by the first print statement. A '0' means the word is not present in that particular text input, while a '1' indicates otherwise. One problem with this dense layout is wasted space: only the fields holding a nonzero count contribute as input to the model, but most of the values are expected to be '0'. An approach to overcome this is to represent only the significant fields as a combination of row and column indices, both zero-indexed. This saves space and can be understood by analyzing the output of the second print statement.
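The behavior shown in Fig. 7 can be reproduced with a short scikit-learn snippet; the two sample queries below are illustrative, not the ones used in the paper.

    from sklearn.feature_extraction.text import CountVectorizer

    queries = ["help my house is on fire", "there is a fire near my house"]
    vectorizer = CountVectorizer()
    matrix = vectorizer.fit_transform(queries)  # SciPy sparse matrix
    print(vectorizer.get_feature_names_out())   # unique words = columns
    print(matrix.toarray())  # dense view: rows are queries, cells are word counts
    print(matrix)            # compact (row, column) -> count representation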

Fig. 7 Processing of two sample queries using CountVectorizer


7.3 Classification Model

Initially, any null values in the dataset loaded from the Hadoop file system are removed, and the data is converted into the required format. It is then divided into two sets so that the same file can be used for both training and testing. Both the naïve Bayes model and the ensemble classification model are trained using the training set, and the testing set is used to calculate the models' accuracy scores. The training set is made up of 70% of the samples chosen at random, and the testing set of the remaining 30%.

The random forest, decision tree, and SVC models are used to construct the model's first tier. Before being sent to the SVC model, the data is first scaled as a performance optimization. The default implementation uses Gini impurity as the splitting criterion for both decision trees and random forest, while the square of the hinge loss is used by the SVC. The outputs generated by tier 1 are pipelined, with the help of stacking, to the logistic regression model, which forms the second tier. The logistic regression model uses the lbfgs solver. For each department label (EMS, Police, and Fire), the second tier generates probabilities, and the label with the highest probability is taken as the prediction of the ensemble model. The ensemble framework is constructed under the assumption that each model has the same weight.

An optimization implemented here is the use of a saved state of the model: the model's state is saved after training, and all emergency queries sent for classification are run as input on this saved state. Based on the prediction, the respective department is contacted through SMS. This removes the overhead of training the model for every emergency query received. The model is retrained to update its saved state at fixed time intervals, ensuring that it is trained with the new data records added to the dataset during previous model learning phases.
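A minimal scikit-learn sketch of the two-tier model described above follows. It assumes X and y are the vectorized queries and department labels from the previous step, and it uses LinearSVC (whose default loss is the squared hinge) as the SVC; the paper does not name the exact classes used, so these are assumptions.

    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC
    from sklearn.tree import DecisionTreeClassifier

    # 70/30 random split, as in the paper.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    tier1 = [
        ("dt", DecisionTreeClassifier()),   # Gini impurity by default
        ("rf", RandomForestClassifier()),   # Gini impurity by default
        # Scale the data before the SVC (with_mean=False keeps the matrix sparse).
        ("svc", make_pipeline(StandardScaler(with_mean=False), LinearSVC())),
    ]
    ensemble = StackingClassifier(
        estimators=tier1,
        final_estimator=LogisticRegression(solver="lbfgs"),  # tier 2
    )
    ensemble.fit(X_train, y_train)
    print("accuracy:", ensemble.score(X_test, y_test))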

7.4 Notifying Department

At the backend, each emergency department is linked to a specific phone number. The team members' contact numbers were used as test values for this project's demonstration, because building applications that use actual department numbers is not encouraged without obtaining special permissions first. After the ensemble classification model categorizes the emergency into a predicted label, the associated department is contacted via SMS using the stored contact information. Twilio provides developer-friendly APIs, scalability, and built-in software for compliance, routing, and advanced use cases, including an API for sending one-way SMS to a particular number. From the backend,


Fig. 8 Sample SMS received by the department

messages containing the application user's information and location are sent to the department's phone number using this API. A sample SMS sent to one of the departments based on the classification provided by the ensemble model is depicted in Fig. 8. It contains information such as the user's registered name, the mobile number provided during registration, and the location from which the emergency was reported.
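A minimal sketch of this notification step with the Twilio Python helper library is given below; the credentials and phone numbers are placeholders.

    from twilio.rest import Client

    client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholders from the Twilio console
    DEPARTMENT_NUMBERS = {"EMS": "+15550001", "Fire": "+15550002", "Police": "+15550003"}

    def notify_department(label, user_name, user_phone, maps_url):
        # One-way SMS to the department mapped to the predicted label.
        client.messages.create(
            to=DEPARTMENT_NUMBERS[label],
            from_="+15550100",
            body=f"Emergency reported by {user_name} ({user_phone}). Location: {maps_url}",
        )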

7.5 Model Learning

After the input emergency is processed and converted into a vector (represented as a sparse matrix using CountVectorizer), the naïve Bayes algorithm takes it as input and predicts the department label the emergency could fall under. This label is then compared to the one from the ensemble classification model. If they are identical, the emergency and the label it was categorized into are added to the dataset, which is then stored back into the Hadoop file system. PyArrow (Python Arrow) is a library that helps connect the application's backend logic with the Hadoop file system. It comes with bindings to the Hadoop file system by default, making connectivity simple, clean, and seamless; it also provides a Python API for interoperability with pandas, NumPy, and other software in the Python ecosystem. Figure 9 shows the pseudocode for the naïve Bayes algorithm and the model learning phase: the prediction made by the naïve Bayes model is compared with that made by the ensemble classification model, and if they match, the record is added to the current dataset and stored back in the Hadoop file system. This data is later used to retrain the models, making them better at predicting new emergencies.
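A minimal sketch of this agreement check, mirroring the pseudocode in Fig. 9, is shown below. The HDFS host, port, and file path are placeholders, and the append-to-file step assumes the HDFS deployment permits appends; the paper does not specify the storage layout.

    from pyarrow import fs
    from sklearn.naive_bayes import MultinomialNB

    # nb is trained on the same vectorized training data as the ensemble.
    nb = MultinomialNB().fit(X_train, y_train)

    def model_learning(query_vector, query_text, ensemble):
        ensemble_label = ensemble.predict(query_vector)[0]
        if nb.predict(query_vector)[0] == ensemble_label:
            # Both models agree: append the pair to the dataset on HDFS
            # so it is included in the next retraining cycle.
            hdfs = fs.HadoopFileSystem("namenode-host", 8020)  # placeholders
            with hdfs.open_append_stream("/data/emergencies.csv") as out:
                out.write(f"{query_text},{ensemble_label}\n".encode())
            print("File updated")
        return ensemble_label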


Fig. 9 Naïve Bayes algorithm and model learning pseudocode

8 User Interface

The user first needs to complete a one-time registration after installing the application for the first time. This ensures that the system already has access to the user's information, making it simpler to access and forward it to the appropriate departments as needed. If the user already has an account but had to reinstall the application due to a problem, a login form accepting an email address or mobile number and a password is provided. If users forget their password, they can recover it after proving they hold an account with the application. Once login is successful, they are taken to the chat screen. The user interface has a chat window displaying the entire conversation between the application user and the chatbot during the current session; on starting a new session for another emergency, the same user cannot access conversations from any previously reported emergency.

The first screen in Fig. 10 shows the login/signup page the user sees when first opening the application. After logging in, the user can access the chat screen and speak or type out the emergency using the keyboard. The chatbot responds with the department currently being contacted and asks about any further queries, to which it also provides suggestions. The user's current location is automatically tracked and made available at the backend when an emergency query is entered. The second screen in Fig. 10 depicts a test conversation in which a user reports an emergency; after the chatbot notifies the user of the department it contacted, it answers further queries.


Fig. 10 Login/Signup page and chat screen of the application

9 Results and Discussions

The application was subjected to both functional and non-functional testing. Non-functional testing consisted of performance and security testing, while functional testing included unit, integration, and system testing. All tests were carried out manually using a developer-made test set, and the results were recorded accordingly. This ensured that all of the application's functions worked as expected and met certain performance standards. A snapshot of the application's performance-testing environment is shown in Fig. 11. When the test emergency was "I can't breathe," the ensemble framework correctly predicted "EMS" with 94.7% accuracy, while the naïve Bayes model correctly predicted "EMS" with 94.5% accuracy. Because the predictions are identical, the "File updated" message indicates that the dataset was modified to include the new data record, concluding the model learning phase. The average overall emergency response time, i.e., the time it takes to process and categorize an emergency, is around 1.6 s. Model accuracy and emergency response time were the performance criteria evaluated: initial expectations for overall accuracy ranged from 80 to 85%, and the maximum acceptable emergency response time was 30 s.


Fig. 11 Sample test environment of the application

The application's accuracy is currently around 94.7%, and the average time it takes to respond to an emergency is less than 5 s. Table 2 shows the accuracy of the ensemble classification model and the naïve Bayes model: the ensemble model makes predictions with an accuracy of 94.7%, while the naïve Bayes model classifies the same input query with an accuracy of 94.5%. This is visualized graphically in Fig. 12. Additionally, the ensemble model outperformed the three independent base classifiers used to construct its tier 1: support vector classifier, random forest, and decision trees. A developer-created test set was used to test these models after they were trained on a subset of the dataset; the observations are recorded in Table 3. For better comprehension, Fig. 13 provides a graphical representation of the data in Table 3. The ensemble model performs better than the individual base models from which it was constructed.

Table 2 Accuracy of ensemble and naïve Bayes model

Model               Accuracy (%)
Ensemble model      94.7
Naïve Bayes model   94.5


Fig. 12 Comparison of accuracy of ensemble and Naïve Bayes model

Table 3 Model accuracies observed during testing

Model            Accuracy (%)
SVC              93.185
Random forest    93.185
Decision trees   93.185
Ensemble model   94.71

Fig. 13 Comparison of accuracy of ensemble model, SVC, random forest, and decision trees

10 Conclusion

The primary objective of this project was to develop an automated call operator for emergencies. The application's frontend is a chatbot that can analyze user-provided


speech or text inputs as emergencies. The location of the distressed user is tracked using GPS. Emergency categorization is carried out using an ensemble-based framework consisting of Python-based models at the backend; with its help, each emergency is categorized into a particular predefined label following query processing. Department services are contacted via SMS based on the categorized label, and user information along with the location is sent. Additionally, a model learning phase is added to help overcome the drawback of limited training data and improve model accuracy. The application user can also receive assistance from the chatbot through emergency-related suggestions.

Currently, the accuracy of the application is 94.7%, significantly higher than the initial performance estimate of 80–85%. This was possible because the two-tier ensemble classification model designed for the application performed better than conventional standalone classifiers. The classification model improved in its ability to predict with each iteration as more training samples were generated by the model learning phase. In addition, the observed emergency response time was less than 5 s, 1.6 s on average, in contrast to the initial limit of 30 s. In conclusion, the observed outcomes are significantly better than what was initially anticipated.

For this project's implementation, it was assumed that all emergencies reported through the application are real. In the future, a feature for determining the validity of user-reported emergencies could be integrated into the application. Additionally, user input via speech and text is currently only accepted in English; this could be extended to a variety of other languages. Allowing multiline descriptions of an emergency and reducing the dependency on good internet connectivity are other possible improvements. To further improve the performance of the ensemble model, an optimization function could be introduced that allows each model in the tiers to carry a weight; based on the predictions made by the models, these weights would be updated with every iteration, improving application performance.

References

1. Almalki M, Azeez F (2020) Health chatbots for fighting COVID-19: a scoping review. Acta Inf Med 28(4):241–247. https://doi.org/10.5455/aim.2020.28.241-247. PMID: 33627924; PMCID: PMC7879453
2. Tamrakar R, Wani N (2021) Design and development of CHATBOT: a review. In: International conference on "Latest Trends in Civil, Mechanical and Electrical Engineering", Maulana Azad National Institute of Technology, Bhopal, April
3. Kolluri J, Razia S, Nayak SR (2019) Text classification using machine learning and deep learning models (June 4, 2020). In: International conference on artificial intelligence in manufacturing & renewable energy (ICAIMRE) 2019. https://doi.org/10.2139/ssrn.3618895
4. du Preez SJ, Lall M, Sinha S. An intelligent web-based voice chat bot. https://doi.org/10.1109/EURCON.2009.5167660
5. Akshaya Preethi Pricilla P, Thulasi Bharathi S (2019) ChatBot: machine learning approach based college helping system 8(1)


6. Battineni G, Chintalapudi N, Amenta F (2020) AI chatbot design during an epidemic like the novel coronavirus. https://doi.org/10.3390/healthcare8020154
7. Mohammed A, Kora R (2021) An effective ensemble deep learning framework for text classification, 13 p. https://doi.org/10.1016/j.jksuci.2021.11.001
8. Minaee S, Kalchbrenner N, Cambria E, Nikzad N, Chenaghlu M, Gao J (2020) Deep learning based text classification: a comprehensive review 1(1):43. https://doi.org/10.1145/nnnnnnn.nnnnnnn
9. Cai J, Li J, Li W, Wang J (2018) Deep learning model used in text classification. In: 2018 15th international computer conference on wavelet active media technology and information processing (ICCWAMTIP), pp 123–126. https://doi.org/10.1109/ICCWAMTIP.2018.8632592
10. Zulqarnain M, Ghazali R, Mohmad Hassim YM, Rehan M (2020) A comparative review on deep learning models for text classification. Indonesian J Electr Eng Comput Sci 19(1):325–335. ISSN: 2502-4752. https://doi.org/10.11591/ijeecs.v19.i1.pp325-335
11. Ayanouz S, Anouar Abdelhakim B, Benhmed M (2020) A smart chatbot architecture based on NLP and machine learning for health care assistance, April, 6 p
12. Wang G, Song Q, Zhu X. Ensemble learning based classification algorithm recommendation. arXiv:2101.05993
13. Huimin F, Yingze Z, Pengpeng L, Danyang L. An ensemble learning method for text classification based on heterogeneous classifiers
14. Jain A, Patel H, Nagalapatti L, Gupta N, Mehta S, Guttula S, Mujumdar S, Afzal S, Sharma Mittal R, Munigala V (2020) Overview and importance of data quality for machine learning tasks. In: Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (KDD '20). Association for Computing Machinery, New York, NY, USA, pp 3561–3562. https://doi.org/10.1145/3394486.3406477
15. Chirico M (2020) Emergency—911 calls. Kaggle. https://doi.org/10.34740/KAGGLE/DSV/1381403

Big Data Security: Attack’s Detection Methods Using Digital Forensics Ch. Charan, A. Pradeepthi, J. Jyotsna, U. Lalith, Radhika Rani Chintala, and Divya Vadlamudi

Abstract The main objective of this research is to defend big data against various cyberattacks. Today, it is commonplace to use big data security to safeguard data and analytical processes against malicious activity. Cybercriminals increasingly use fraudulent calls, texts, and other techniques to steal users' private information, and such attacks are ever more frequent online. In this paper, drawing on prior research, we address various security technologies and strategies being utilized to make big data more secure. We also present various digital forensics tools that may be used to identify who conducted an attack, where, and how. This study combines the fields of data science and cybersecurity; its key goals are detecting attacks on big data and ensuring data security using cybersecurity and digital forensics techniques.

Keywords Big data · Security · Cyber attack · Digital forensics · Threat detection system

1 Introduction

Big data security is a fascinating area of study. It is a set of tools used to safeguard analytical procedures and data from various hostile actions that could harm big data. Big data is data that is large and fast-moving, and hard to process using traditional methods; it involves storing and acquiring large amounts of data such as transaction and financial records, documents and multimedia files,

Ch. Charan (B) · A. Pradeepthi · J. Jyotsna · U. Lalith · R. R. Chintala · D. Vadlamudi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India e-mail: [email protected] R. R. Chintala e-mail: [email protected] D. Vadlamudi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_7


web servers, and sensors. A few applications that require big data security are securing non-relational data, securing data storage and transaction logs, endpoint filtering and validation, and preserving data privacy. A large collection of data containing various sorts of raw data is referred to as "big data." This unprocessed data is later converted into information; in the field of data science, the method of transforming enormous amounts of raw data into meaningful information is known as "data processing." Volume, variety, and velocity are the three V's that best define big data, as shown in Fig. 1: volume refers to the amount of data, variety refers to the type of data transmitted, and velocity refers to the speed at which data is produced [1].

Big data consists of enormous amounts of data of many types, some of which may be sensitive and is kept on servers owned by third parties, raising the danger of data breaches. As a result, attacks must be identified and countered in order to protect sensitive information, which requires various cybersecurity tools and techniques. We also explore how digital forensics can be used to detect attackers.

Consider a real-world application of big data security. Social media sites are growing more popular worldwide, and most of us use platforms like Facebook, Instagram, and others. We all provide personal information to log in or create accounts when using these apps. This information is raw data, which data science applications then convert into informative content. Given that it pertains to the user specifically, the data must be stored safely and confidentially. Cybersecurity measures, including encryption, are applied to the data to safeguard it and defend it against threats. Therefore, data science and

Fig. 1 V's in big data


cybersecurity are combined to achieve our goals [2]. Big data security may also be used in a variety of contexts, including banking, education, and finance. Data protection has always been the main concern. Even with the required security in place, we also had to consider what may happen if the data were attacked; digital forensics is then used to identify the attacker. Today's growing use of digital platforms, and the accompanying cyberattacks, necessitates following crucial steps in the digital forensics procedure [3]: gathering evidence, protecting the integrity of the acquired data, interpreting the results, and recording them with suitable proof in order to confirm the criminals.

The literature survey in Sect. 2 covers how analytics are implemented with big data, what big data attacks are, and how attacks are evolving. Section 3 covers the threat detection system, finding attackers with digital forensics, and various digital forensics tools. The study's conclusion is presented in Sect. 4.

2 Literature Survey

This literature survey is intended to help the reader understand the process of analyzing big data, attacks on big data, tools for detecting attacks and protecting data, and the use of digital forensics techniques to find attackers or cybercriminals.

2.1 Using Analytics with Big Data

Today, big data is used by various businesses across multiple industries to hold customer information. However, gathering data alone is insufficient: unless the data is placed in a well-structured format, businesses cannot use it even when they need to. The data is organized using the data analytics procedure shown in Fig. 2, a comprehensive understanding of which is provided by [4]. This data analytics method allows raw data to be converted into informed decisions.

Let us examine the steps in the big data analytics procedure. Raw data must first be gathered from an organization of any kind. After collection, the data is processed to organize the information properly. There are two methods of processing: batch processing handles large amounts of data, while stream processing handles data in small batches. After processing, the data is cleaned for filtration; during this phase, duplicate and unnecessary data are eliminated, improving the efficiency of the data. This enhances data quality and makes it easier to obtain results quickly. The big data analytics process uses a variety of tools, including Hadoop, Spark, Tableau, YARN, and others [5]. As a result, we may conclude that the process of data analysis is


Fig. 2 Data analytics process

beneficial for making huge data accessible. This means that businesses deliver and maintain data quality while facilitating the usage of big data. The information should then be kept securely within the organization.

2.2 Attacks Against Big Data

Attacks against big data can be carried out in several ways [6], including:

• Data breaches: unauthorized access to sensitive information stored in big data systems.
• Ransomware: encryption of big data systems, making the data unavailable until a ransom is paid.
• Distributed denial of service (DDoS): overloading the network with traffic to disrupt the availability of big data systems.
• Injection attacks: injecting malicious code into big data systems to steal or alter data.
• Manipulation of data: unauthorized modification of data stored in big data systems.
• Insider threat: malicious actions by individuals with authorized access to big data systems.

Let us now examine a few of these cyberattacks, such as those involving malware, phishing, and passwords. A malware attack occurs when a person is tricked into installing harmful software that


contains viruses such as spyware, trojans, and worms. In phishing, attackers send fake emails with dubious links prompting users to log into accounts; these emails have the same official appearance as emails sent from the company's website. If you click the link in such an email and sign in to your account, the attacker can steal your login information and use it against you. Finally, there are password attacks [7], in which attackers employ John the Ripper and other Kali Linux-based tools to exploit accounts and crack passwords. These attacks expose, disrupt, and destroy data, so users lose the confidentiality, integrity, and availability of their data. Such attacks can be avoided by using antivirus software, updating the operating system, changing passwords frequently, using strong passwords, and staying away from suspicious websites.

2.3 Increase in Attacks on Big Data

Attacks on big data are rising because, as technology advances, attacks are carried out faster and more intelligently. Figure 3 shows the weekly average number of attacks on organizations worldwide from 2021 until the third quarter of 2022. There is a significant increase in attacks in 2022 compared to 2021; based on our analysis, the number of attacks increased by almost 30% between 2021 and 2022. This tells businesses that the risk is high enough to warrant investing in cybersecurity techniques, and organizations must also concentrate on catching cybercriminals. A data breach during the third quarter of 2022 exposed the data of almost 15 million users. We must therefore prevent these attacks by taking the safeguards covered in the prior section. There are several ways to protect data from attacks, including [8]:

• Encryption: encrypting sensitive data both at rest and in transit to protect it from unauthorized access (see the sketch after this list); its limitations are complexity, performance overhead, key management, and compatibility.
• Access control: implementing strict access controls to ensure that only authorized individuals can access sensitive data; its limitation is that it requires ongoing maintenance and upgrades to stay up to date and secure.
• Data backup: regularly backing up data to a secure location so that data can be recovered in case of an attack; its limitation is that backing up a large amount of data requires significant storage space and can be expensive.
• Network security: implementing firewalls, intrusion detection systems, and other security measures to protect the network from attack.
• Software updates: regularly updating software to patch known security vulnerabilities and protect against new threats.


Fig. 3 Weekly average increase of attacks

• Employee training: educating employees about the dangers of phishing, malware, and other security threats, and the importance of following security best practices. • Third-party security: verifying the security of third-party vendors and service providers to ensure that they are not exposing sensitive data to risk.
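As an illustration of the first point, encryption at rest, the snippet below is a minimal sketch using the Fernet recipe from the Python cryptography package; note that the key itself must be stored and managed securely, which is exactly the key-management limitation listed above.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the key must itself be stored securely
    cipher = Fernet(key)
    record = b"name=John Doe;card=4111-xxxx-xxxx-1111"
    token = cipher.encrypt(record)   # ciphertext stored at rest
    assert cipher.decrypt(token) == record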

3 Threat Detection System

A threat detection system was proposed by the researchers in [9]. Using this technique, we can both identify an attack and stop it from happening, so it acts as both a detection and a prevention tool. By keeping track of what the firewall and routers are doing online, the threat detection system (TDS) module shown in Fig. 4 provides users with continuous support: if the TDS detects any malicious activity, it quickly raises an alarm and shuts down the servers to protect the data. This is how the threat detection system functions. Because it offers more security and operates more effectively than manual monitoring by employees, this technology is very beneficial for businesses, and it reduces costs by replacing labor. This approach therefore has many benefits for an organization.
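The cited work does not give the TDS internals, but a toy rate-based detector in Python conveys the idea: events from firewall or router logs are counted per source, and an alarm is raised when a threshold is exceeded. The window and threshold values are assumptions.

    import time
    from collections import defaultdict

    WINDOW_SECONDS = 10
    THRESHOLD = 100          # assumed requests-per-window limit
    hits = defaultdict(list)

    def inspect(source_ip):
        # Called for every connection event reported by firewall/router logs.
        now = time.time()
        hits[source_ip] = [t for t in hits[source_ip] if now - t < WINDOW_SECONDS]
        hits[source_ip].append(now)
        if len(hits[source_ip]) > THRESHOLD:
            raise_alarm(source_ip)

    def raise_alarm(source_ip):
        # In a real TDS this would alert admins and isolate the servers.
        print(f"ALERT: suspicious traffic from {source_ip}")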


Fig. 4 Threat detection system [10]

3.1 Finding Attackers with Digital Forensics

Finding the attacker requires a few procedures in digital forensics, as presented in Fig. 5. The digital forensics process consists of seven steps: identification, preparation, approach strategy, preservation, examination, analysis, and finally data presentation. First, we must interview the suspect and find out where, when, and how the attack took place. We must decide what the investigation's requirements are and

Fig. 5 Digital forensics process


must collect all available evidence. Then, data analytics is used to enhance the data's quality so that it is simpler to detect clues. Once the evidence has been gathered, it must be stored safely so that only those with the proper access can view the documents. Following this, extensive documentation with supporting evidence is needed to establish who perpetrated the attack. Finally, this documentation must be shown to higher authorities to receive approval to apprehend the criminal [11]. There are many kinds of digital forensics, including wireless, malware, network, and mobile phone forensics, among others. The field of digital forensics has certain benefits and drawbacks. The process experiences difficulties because there is no physical evidence and it takes up more storage space, making the investigation more complex; however, the key benefit is that the cybercriminal can be punished right away if digital evidence can prove the identity of the attacker with sufficient specifics. Finally, situations involving intellectual property theft, industrial espionage, employment conflicts, and fraud investigations require digital forensics.

3.2 Digital Forensics Tools

A digital forensics tool for detecting malware runs suspicious objects in a virtual machine and detects them by examining their behavior: if there are any suspicious objects in the virtual machine, the malware sandbox flags them as malware. Developers use sandboxes to test programming code, and cybersecurity professionals use them to test malicious software. A few malware analysis sandboxes are listed below:

• Detox Sandbox
• IRIS-H
• CAPE Sandbox
• Any.run
• Binary Guard True Bare Metal
• Comodo Valkyrie

Sandboxing tools contain security mechanisms that help mark malicious programs hidden inside downloaded files, untrusted user emails, and untrusted websites [12]. Digital forensics tools help identify and preserve evidence that supports public and private law. A few of these tools are addressed in this section.

• The Sleuth Kit: The Sleuth Kit is a command-line tool that extracts and analyzes data using Windows and Unix-based libraries. It is distributed as open source to the general community and helps examine Windows and Mac systems [13].


• Autopsy: Autopsy is an open-source program built on The Sleuth Kit that identifies and flags items in the data. Its graphical user interface exhibits search results and hidden threats in the data, making the investigation process easier [14].
• Hash Keeper: HashKeeper is a hash-database application with which users can perform forensic surveys of systems. It uses hash algorithms to compute hash values for different data files, reducing the time needed to check whether a data file is notable or not (a hashing sketch follows this list) [15].
• Bulk Extractor: Bulk Extractor extracts information from data files and disk images without parsing the file system layout. The extracted information is processed dynamically with simplified tools; the tool is mainly used for carving useful information out of bulk data [16].
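As a hedged illustration of the hash-based filtering that tools like HashKeeper rely on, the snippet below hashes a file and checks it against a set of known-file hashes; the file name and reference set are placeholders.

    import hashlib

    def sha256_of(path, chunk_size=65536):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # KNOWN_HASHES would be loaded from a reference set; files whose hash is
    # already known can be skipped, which is what saves the investigator time.
    KNOWN_HASHES = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
    print("skip" if sha256_of("evidence.bin") in KNOWN_HASHES else "examine")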

4 Conclusion

Big data security and digital forensics play a major role in guarding data against attacks. Real-world applications like social media, banking, education, and finance contain raw data which has to be secured with cybersecurity measures. Implementing data analytics against threats, together with digital forensics tools, helps the investigation process, and digital forensics tools provide a strategic approach to analyzing bulk data. In this paper, we addressed the digital forensics process for identifying attacks and described a few digital forensics tools. We conclude that taking precautions such as using antivirus software, updating operating systems, changing passwords frequently, using strong passwords, and staying away from suspicious websites can act as preventive measures for protecting data.

5 Future Process

By incorporating real-time behavioral analytics and collective intelligence from multiple sources, such as global threat intelligence feeds and user behavior data, the accuracy and efficiency of threat detection models can be significantly enhanced, resulting in a more proactive and effective response to cyber threats.

References

1. Sonawane S, Patel D, Kevadiya M, Modi R, Moradiya J, Thomas A (2018) Big data by 3V's and its importance. Int J Res Eng, Sci Manag 1(12):1–2
2. Sarker IH, Kayes ASM, Badsha S et al (2020) Cybersecurity data science: an overview from machine learning perspective. J Big Data 7(41):1–29. https://doi.org/10.1186/s40537-020-00318-5


3. Song J, Li J (2020) A framework for digital forensic investigation of big data. In: 3rd international conference on artificial intelligence and big data (ICAIBD), pp 96–100. https://doi.org/10.1109/ICAIBD49809.2020.9137498
4. Gao P, Han Z, Wan F (2020) Big data processing and application research. In: 2nd international conference on artificial intelligence and advanced manufacture (AIAM), pp 125–128. https://doi.org/10.1109/AIAM50918.2020.00031
5. Gotmare M, Nikam R (2022) Survey of big data and their analytical frameworks. Int Res J Mod Eng Technol Sci 4(8):814–823
6. Adlakha R, Sharma S, Rawat A, Sharma K (2019) Cyber security goal's, issue's, categorization & data breaches. In: International conference on machine learning, big data, cloud and parallel computing (COMITCon), Faridabad, India, pp 397–402. https://doi.org/10.1109/COMITCon.2019.8862245
7. Li Y, Liu Q (2021) A comprehensive review study of cyber-attacks and cyber security; emerging trends and recent developments. Energy Rep 7:8176–8186
8. Ambalavanan V, Shanthi Bala P (2020) Cyber threats detection and mitigation using machine learning. In: Handbook of research on machine and deep learning applications for cyber security, p 18. https://doi.org/10.4018/978-1-5225-9611-0.ch007
9. More R, Unakal A, Kulkarni V, Goudar RH (2017) Real time threat detection system in cloud using big data analytics. In: 2nd IEEE international conference on recent trends in electronics, information & communication technology (RTEICT), pp 1262–1264. https://doi.org/10.1109/RTEICT.2017.8256801
10. Pedamkar P. Ethical hacking tutorial. EDUCBA. https://www.educba.com/ids-tools
11. Shalaginov A, William Johnsen J, Franke K (2017) Cyber crime investigations in the era of big data. In: IEEE international conference on big data. https://doi.org/10.1109/BigData.2017.8258362
12. Denham B, Thompson DR (2022) Ransomware and malware sandboxing. In: IEEE 13th annual ubiquitous computing, electronics & mobile communication conference (UEMCON), pp 173–179. https://doi.org/10.1109/UEMCON54665.2022.9965664
13. Dizdarevic A, Barakovic S, Husic B (2020) Examination of digital forensics software tools performance: open or not? In: Proceedings of the international symposium on innovative and interdisciplinary applications of advanced technologies, lecture notes in networks and systems, vol 83. Springer. https://doi.org/10.1007/978-3-030-24986-1_35
14. Ghosh A, Majumder K, De D (2021) Android forensics using sleuth kit autopsy. In: Proceedings of the sixth international conference on mathematics and computing, advances in intelligent systems and computing, vol 1262. Springer. https://doi.org/10.1007/978-981-15-8061-1_24
15. Adamu H, Adamu Ahmad A, Hassan A, Gambasha SB (2021) Web browser forensic tools: autopsy, BHE and NetAnalysis. Int J Res Sci Innov 8(5):103–107
16. Almogbil A, Alghofaili A, Deane C, Leschke T, Almogbil A, Alghofaili A (2020) Digital forensic analysis of fitbit wearable technology: an investigator's guide. In: 7th IEEE international conference on cyber security and cloud computing (CSCloud)/6th IEEE international conference on edge computing and scalable cloud (EdgeCom), pp 44–49. https://doi.org/10.1109/CSCloud-EdgeCom49738.2020.00017

Decentralized Expert System for Donation Tracking and Transparency Swati Jadhav, Siddhant Nawale, Vaishnav Loya, Varun Gujarathi, and Siddhesh Wani

Abstract Charities operate in a challenging financial climate, mostly because they are not very transparent. Every day, it becomes harder to tell whether donations are going to the correct places or are being diverted elsewhere, for example to fund shady campaigns or terrorist efforts. Donors thus lose confidence in these groups and stop communicating with philanthropic organizations, making it more challenging and expensive for organizations to raise money. This study recommends blockchain technology, a decentralized database that offers security, transparency, and cheaper financing costs by removing third parties between donors and organizations, as a potential solution to the described problem. A new donation-tracking model is offered that includes a number of additional participants who control the contribution process and allay any scepticism about the charity. Blockchain enables the tracking of all donations, letting donors trace the usage of their funds. Smart contracts are used for all donations, enabling donors to know exactly when and how their contributions will be used.

Keywords Blockchain · Smart contracts · Decentralized ledger · Donation · Transparency

S. Jadhav (B) · S. Nawale · V. Loya · V. Gujarathi · S. Wani Vishwakarma Institute of Technology, Pune, India e-mail: [email protected] S. Nawale e-mail: [email protected] V. Loya e-mail: [email protected] V. Gujarathi e-mail: [email protected] S. Wani e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_8


1 Introduction

Nowadays, many people are willing to contribute to society. There are many ways to contribute after a disaster or to help a needy person, but the most common is charity. Numerous platforms on the internet post appeals from needy persons or specific campaigns, and they use advertisements to promote these posts on social media such as Instagram, Facebook, YouTube, and others. Seeing such advertisements, people tend to click on them and end up donating money to the person who needs it. On such platforms, the person who donated, whom we refer to as the donor, cannot view or track where the donation went, how it reached the person who needed it, or whether that individual used the money for its stated purpose. This lack of transparency creates trust issues: it is unclear whether the platforms are genuine, since they provide no facility to track the donor's donation from start to end. Donors thus lose trust in these platforms and become unwilling to donate through them.

To solve these issues, this paper proposes a decentralized donation platform based on a blockchain network, providing complete transparency inside the platform. It records every transaction in a tamper-resistant manner. This platform thereby tries to regain trust in donation platforms by providing such transparency and security.

2 Related Work

Modern donation platforms need a self-contained proof system that performs verification without relying on any other person or application. Charitable organizations lack transparency and are hard to oversee, which has a detrimental effect on people's willingness to give. The debate over the transparency of contribution systems is long-standing. On the other side, the emphasis on transparency creates privacy issues for donors and receivers, some of whom attempt to hide gifts or cash transactions. A contribution mechanism that guarantees both openness and anonymity is therefore crucial to minimizing unfavorable effects. The main solution is blockchain itself, since it verifies transactions and maintains their integrity without relying on any other platform. Ethereum is a suitable platform, as it is public and can process roughly seven to twenty transactions per second [1]. Specifically in India, the Aadhaar card has been distributed to all citizens, and the Aadhaar number is integrated with their bank accounts, personal details, and location. This Aadhaar number has therefore been combined with blockchain technology



for various other purposes, such as voting and healthcare, along with donation [2]. The authors in [3] examine and introduce fresh trust-management approaches for blockchain-based authentication. Online transactions between two parties normally necessitate reliance on third parties for both authentication and verification: a chain of trust is established between the third party and other network nodes [4], and this trust is then distributed throughout the whole network [5]. To avoid such a chain, the article suggests a decentralized strategy, specifically a secure trust-management authentication procedure. By eliminating the third parties required for authentication, blockchain technology adds an extra, genuine layer of safe trust to a system [6]. Surveys of cryptocurrency mining systems go over the mining techniques and algorithms employed by the major cryptocurrencies. Mining is needed because every transaction involves two parties exchanging some form of currency [7]; blockchain technology uses the mining process to validate each transaction between them, and the miner's responsibility is to determine whether the payer actually owns the currency being spent. These works also cover the advantages of and debates surrounding various forms of cryptocurrency, and explain the different mining algorithms behind currencies such as Ethereum, Bitcoin, Peercoin, and Blackcoin [8]. Another study, which improved security and reliability, uses a graph model and a distributed ledger to create a tamper-proof graph record through blockchain encoding; the blockchain thus stores a graph model encoded as a state machine. This model increases security compared with better-known trust-management solutions such as Web PKI and the PGP Web of Trust, so systemic attacks would be reduced by the blockchain paradigm [9]. A further work provides examples of the security and privacy mechanisms applied to blockchain systems. The author uses digital signatures to secure confidentiality, integrity, and, mainly, authentication of data shared between two parties. Without a digital signature, any node in the network can read an encrypted message that has been broadcast, because every node holds the sender's public key. The author therefore suggests RSA digital signatures to ensure that only the designated receiver can view the message [10]: the message is first encrypted using the sender's private key, and the result is then encrypted using the intended recipient's public key. Transaction signing and verification are based on the elliptic curve digital signature algorithm (ECDSA) [11]. ECDSA uses the secp256k1 standard to define a curve; secp256k1 supplies the constants the blockchain uses when signing a transaction, and finding these constants requires solving a challenging mathematical puzzle, so the cryptography is strong [12]. To connect the client side to an Ethereum node, which exposes only a low-level interface, libraries such as web3.js and ethers.js are required to generate client-side function calls and return responses [13–17]. For creating a decentralized application, one author advocates the online editor Remix for writing smart contracts in the Solidity language and additionally recommends the Truffle.js framework for effective management of decentralized applications [18, 19]. Blockchain is a viable technology that is increasingly popular for addressing a wide range of security-related issues controlled by both the public and commercial



sectors. Blockchain is becoming more and more popular among charitable organizations. People have lost faith in charities because of the lack of transparency in donation transactions, which prevents donors from knowing whether their contributions are being used effectively. The author of [20] suggests a decentralized blockchain-based donation tracking system that gives complete transparency, accountability, and direct access to the intended beneficiaries, implemented on the Ethereum blockchain.

3 Proposed Work

3.1 Problem Statement

It is crucial to create a social network using blockchain technology that can support non-profit organizations. Blockchain will allow all platform users to examine their accounts and a breakdown of each donation made through the organization they support. A further benefit of distributed ledger technology is that it guarantees direct funding transfers from donors to their intended beneficiaries. The platform will also publish information about the charities.

3.2 Proposed System

The diagram below shows the anticipated charity platform model. There are three roles: donors, recipients, and charity organizations. Charitable organizations can use the website to request assistance, while donors use it to learn about charitable efforts and then make donations to the charities. The blockchain maintains a record of the entire currency movement, making it possible to monitor transactions and safeguard money from being mishandled. The main roles on this platform are the donor and the charitable organizations, as depicted in Fig. 1. Donors send donations in the form of tokens on the donation platform, and those tokens are securely delivered to the charity organization in the same format. Charitable organizations upload reports to the website through dedicated pages; donors can then request and download the reports uploaded by the charitable organizations. The functionality of the platform is divided into two parts, one for charitable organizations and one for donors, as depicted in Figs. 2 and 3. The functionality for donors is as follows:

1. Donors can make a donation through this decentralized website.
2. After signing in, donors can view information about their donations on the dashboard page.


Fig. 1 Proposed model interactions

Fig. 2 Workflow of platform




Fig. 3 Functionality of platform

3. Donors can download reports from the website directly on request.

The charity organization's functionality includes:

1. Organizations can receive donations.
2. Organizations must first register as an organization on the website; after that, they can log in as a charity organization.
3. Organizations can upload reports to the website.

3.3 Methodology

3.3.1 Tools

1. Vite-React: Vite is a front-end build tool geared toward performance and speed. It has two main parts: a development server that offers numerous improvements over native ES modules, including pre-bundling and support for TypeScript, JSX, and dynamic imports; and a build tool that bundles the code with Rollup and produces static assets optimized for production. For faster page loads, Vite can combine dependencies consisting of several internal modules into a single module. Such modules would otherwise send out hundreds of requests at once, which may clog up the browser and slow down load times. However, users



only need to submit one request, because these dependencies have already been pre-bundled into a single file, improving overall efficiency. (A minimal Vite configuration sketch appears after this tools list.)

2. Solidity: Solidity is a high-level, object-oriented language for creating smart contracts, the programs that control the behavior of accounts in the Ethereum state. The language is designed for the Ethereum Virtual Machine (EVM) and uses curly-bracket syntax. It is influenced by Python, C++, and JavaScript; it is statically typed and supports inheritance, libraries, and advanced user-defined types. Solidity can be used to write contracts for applications such as voting, crowdfunding, blind bidding, and multi-signature wallets.

3. MetaMask: MetaMask is a cryptocurrency wallet that gives users access to the Web 3 ecosystem of decentralized apps (DApps). It is a browser extension that doubles as an Ethereum wallet and can be installed like any other extension. Once installed, users can store Ether and other ERC-20 tokens and transact from any Ethereum address. By linking MetaMask to Ethereum-based DApps, users can spend their funds in games, stake tokens in gambling apps, and trade them on decentralized exchanges (DEXs). It also gives consumers access to apps such as Compound and PoolTogether, which are part of the emerging decentralized finance (DeFi) industry.

4. Goerli Test Network: Blockchain projects can be tested on the Goerli test network before launching on Mainnet, the primary Ethereum network. As a testnet, it is a decentralized computing network whose ledger is distinct from the regular Ethereum ledger, so transactions on the two do not cross over. It uses proof of authority as its primary consensus algorithm rather than the proof of stake (PoS) used on the Ethereum mainnet.

5. Firebase: Firebase, a Google-backed application development platform, lets developers build iOS, Android, and web apps. Firebase provides tools for monitoring analytics, reporting and resolving app faults, and designing marketing and product trials, and it can store data in a database. Firebase's mission is to give mobile developers a comprehensive set of fully managed mobile-centric services such as analytics, authentication, and a real-time database. Cloud Functions complete the offering by letting developers extend and link the behavior of Firebase features with server-side code. Together this forms a cloud product suite for easily building a serverless mobile or web app, providing most of the standard services used in any app (database, authorization, storage, and hosting).



6. Tailwind: Tailwind CSS is a utility-first CSS framework for quickly building custom user interfaces. It is a low-level, highly adaptable framework that provides all the building blocks required to create custom designs without forcing developers to fight obnoxious opinionated styles. The best part about Tailwind is that it does not enforce design guidelines or dictate how a website should look; instead, small parts are combined to create a one-of-a-kind user interface. Simply put, Tailwind takes a "raw" CSS file, processes it through a configuration file, and outputs the result.
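For reference, a minimal Vite configuration for a React project of this kind might look as follows. This is a generic sketch using the standard @vitejs/plugin-react package, not configuration taken from the paper; the output directory is illustrative:

    // vite.config.ts — a minimal sketch of a Vite + React setup (illustrative only)
    import { defineConfig } from "vite";
    import react from "@vitejs/plugin-react";

    export default defineConfig({
      plugins: [react()],          // enables the JSX/TSX transform and fast refresh
      build: { outDir: "dist" },   // Rollup-based production bundle output
    });

During development, the Vite dev server serves pre-bundled dependencies as described above; the build step produces the optimized static assets for production.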

3.3.2 About Smart Contract

Smart contracts are blockchain-based programs that run when specific conditions are met. They are often employed to automate the execution of an agreement so that all parties can be certain of the result immediately, without a middleman or extra delay. Developers have a wide range of transaction choices and can store practically any form of data on a blockchain. Smart contracts built on blockchains help lower transaction costs by enhancing the security, effectiveness, and cost-effectiveness of transactions and other business processes. Because they execute on the blockchain, the terms of smart contracts are stored in a distributed database and cannot be changed; the blockchain can also automate counterparties, transactions, and payments. The smart contract in this paper stores all the transactions that pass through it and retrieves them when needed. It records fields such as the sender's address, the receiver's address, the amount, an optional message, and the timestamp. These fields are handled by various functions: getalltransaction(), which, as its name suggests, fetches all the transactions made via that particular contract (these can then be sorted according to individual needs), and addtoblockchain(), which adds a new transaction to the chain and keeps a count of the total transactions made to date. There are also events, inheritable contract members that, when emitted, store the supplied arguments in the transaction logs. Using the EVM's logging functionality, events are typically employed to inform the calling application of the current status of the contract; here, a Transfer event is used to store all the variable data mentioned above. To use this contract, it must first be deployed on an active Ethereum network; here it is deployed on the Goerli Test Network. Once the contract is deployed, a block containing its constructor data is created along with an address that refers to the contract, through which anyone can access it for future use. Figure 4 depicts the smart contract deployed on the Goerli test network.



Fig. 4 Deploying smart contract

Pseudocode:

    contract Transactions {
        /* State variables and their types */
        uint256 transactionCounter;
        TransferStruct[] transactions;
        address owner;

        struct TransferStruct {
            address senderadd;
            address receiveradd;
            string message;
            uint256 timestamp;
            uint amount;
        }

        /* Events */
        event Transfer(address senderadd, address receiveradd, string message, uint256 timestamp, uint amount);

        /* Constructor */
        constructor() {
            owner = msg.sender;
        }

        /* Modifiers */
        modifier validateTransferAmount(uint amount) {
            // Check whether the amount is more than zero;
            // otherwise, instruct the caller to enter an amount larger than zero.
            _;
        }

        modifier restrictToOwner() {
            // Check that the sender is the owner of the deployed contract.
            _;
        }



        /* Functions */
        function addtochain(address payable receiveradd, string memory message, uint amount)
            validateTransferAmount(amount) restrictToOwner() {
            // Append TransferStruct(msg.sender, receiveradd, message, block.timestamp, amount)
            // to transactions and increment transactionCounter by one.
            emit Transfer(msg.sender, receiveradd, message, block.timestamp, amount);
        }

        function getalltransaction() {
            return transactions;
        }

        function getTransactioncount() {
            return transactionCounter;
        }
    }
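Since the paper deploys this contract to the Goerli test network via Hardhat (see the Frontend subsection below), a deployment script along the following lines would be typical. This is a hedged sketch assuming a standard Hardhat project with Goerli configured in hardhat.config.ts; the script path and network name are illustrative, not taken from the paper:

    // scripts/deploy.ts — minimal Hardhat deployment sketch (assumed setup)
    import { ethers } from "hardhat";

    async function main() {
      // Look up the compiled artifact by contract name and send the deployment transaction.
      const factory = await ethers.getContractFactory("Transactions");
      const contract = await factory.deploy();
      await contract.deployed(); // wait until the deployment is mined
      console.log("Transactions deployed to:", contract.address);
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });

Running `npx hardhat run scripts/deploy.ts --network goerli` would then print the contract address that the frontend needs in order to interact with the deployed contract.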

3.3.3 Frontend

The approach utilized in this research is based on Vite and React; React is an open-source JavaScript library for front-end development created by Facebook. It allows building web apps with high-quality user interfaces through its component-based library, works with a virtual DOM, and lets developers embed HTML-like markup inside JavaScript. Compilation and deployment of the smart contract are done with Hardhat, and page routing is done with BrowserRouter, Routes, and Route from react-router-dom. For the database, Firebase is used, which stores data in an unstructured form. For contract execution, the ethers.js module is used; it is a Web 3.0 library for interacting with smart contracts on the Ethereum blockchain and other Ethereum Virtual Machine (EVM)-compatible blockchains. The walkthrough of the pages is as follows. When the web app is launched, the login page appears, which links to the register page for new users; an existing user can enter their credentials and press login. Once successfully logged in, the user is directed to the user home page, where the main donation takes place. There, users first connect the web app to their MetaMask account, after which they can see their account address and balance on the website. Users can also check the charity reports on the charity page. Figure 5 shows the first page of the website, where donors must log in before moving on to donate: they enter their username and password, which are then verified, and once verified the donor is taken to the main dashboard page.
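A minimal sketch of this MetaMask/ethers.js wiring is shown below (ethers v5 style). The contract address, the human-readable ABI fragment, and the donate helper are illustrative assumptions based on the pseudocode above, not code from the paper:

    import { ethers } from "ethers";

    // Address printed at deployment time, and a human-readable ABI fragment for
    // the addtochain function from the pseudocode (both placeholders).
    const contractAddress = "0x...";
    const abi = ["function addtochain(address receiveradd, string message, uint256 amount)"];

    async function donate(receiver: string, message: string, amountEth: string): Promise<void> {
      // Ask MetaMask for account access, then build a signer-backed contract object.
      await (window as any).ethereum.request({ method: "eth_requestAccounts" });
      const provider = new ethers.providers.Web3Provider((window as any).ethereum);
      const signer = provider.getSigner();
      const contract = new ethers.Contract(contractAddress, abi, signer);

      // Record the donation on-chain and wait for the transaction to be mined.
      const tx = await contract.addtochain(receiver, message, ethers.utils.parseEther(amountEth));
      await tx.wait();
    }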



Fig. 5 User login page

Figure 6 shows the registration page of the website. A donor who is new to the platform must first register on the registration page, where they are asked for fields such as Name, Type, Username, and Password. The Type field represents the kind of account the user wants to register, that is, whether it is a donor's account or a charity's account.

Fig. 6 User registration page



Fig. 7 Database structure

3.3.4 Database

Figure 7 depicts the user and charity database structure. A user record contains items such as Uname, which is the primary key; Type, which indicates whether the user is a donor or a charity organization; the user's name; and the password. Charity records, on the other hand, contain elements such as the name, the account address (the address of the account on the blockchain network, obtainable from the home page or MetaMask), and then details about the organization, government certificates, and contact information.
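A sketch of how such a registration record could be written with the Firebase v9 modular SDK is shown below. The "users" collection name and the registerUser helper are illustrative assumptions; the field names follow Fig. 7 as described above:

    import { initializeApp } from "firebase/app";
    import { getFirestore, doc, setDoc } from "firebase/firestore";

    // firebaseConfig comes from the Firebase console (placeholder here).
    const app = initializeApp({ /* firebaseConfig */ });
    const db = getFirestore(app);

    // Uname is the primary key, so it doubles as the document ID.
    async function registerUser(uname: string, type: "donor" | "charity", name: string, password: string): Promise<void> {
      await setDoc(doc(db, "users", uname), { Type: type, Name: name, Password: password });
    }

In a production system, credentials would normally be handled by Firebase Authentication rather than stored as a plain password field, but the sketch mirrors the structure the paper describes.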

4 Results

This decentralized donation network is available as a website where users register and receive authorization to operate according to their roles, which can include donors, non-governmental organizations, or governmental bodies. Each user is recognized and granted access to their own dashboard through their unique username and password. The dashboard allows contributors to monitor transactions, make donations, and see their total contribution; tracking lets donors check the most recent transaction status. Figures 8 and 9 show the main home page of the website, where the user lands after successful registration and login. A donor gets an individual dashboard from which they can carry out a transaction and send a donation to the respective charitable organization; a charitable organization instead sees the donations received. The panel that displays a user's individual transactions is shown in Fig. 10. The transactions displayed are the donations made by the donor if the



Fig. 8 Donor dashboard page

Fig. 9 Charitable organization dashboard page

user is a donor; if the user is a charitable organization, the received donations are shown instead. Clicking on a transaction takes the user to the report page in Fig. 14, from which the reports for the donation may be retrieved. Below the dashboard on the main home page, donors can see the transaction history of their donations along with detailed information about each transaction. Before making a transaction, the donor must establish a connection with the MetaMask wallet, since donations are sent as tokens in the form of Ether. As indicated in Fig. 11, the donor is prompted to select which



Fig. 10 List of user transactions

account he/she wants to link to the platform; once the connection is formed, the donor can carry out the transaction. After successfully connecting the MetaMask wallet, entering the transaction information, and pressing the send now button, MetaMask displays a window (Fig. 12) asking the donor to confirm whether they initiated the transaction. The transaction completes after the donor confirms it. The page shown in Fig. 13 is for charitable organizations, which must first register as charitable organizations before logging in to the website. After registering, they can log in as charitable users.

Fig. 11 Connecting with MetaMask wallet



Fig. 12 Confirming transactions through MetaMask

Fig. 13 Register page for charity organizations

Because knowing how their money was used is vital to the donor, this platform allows the charity organization to post the donation usage report on the report page (Fig. 14), which the donor may view on request.

5 Conclusion

Donor transactions are tracked by this decentralized platform, which is based on blockchain technology and smart contracts. Smart contracts on the blockchain enable



Fig. 14 Report uploading page for charity organizations

direct control of the transfer of tokens or virtual currencies between the parties involved in a transaction, without the need for a trusted third party. The donation platform accepts and permits cryptocurrency donations. Because each cryptocurrency transaction is distinct, the blockchain makes donations easy to follow. High levels of openness and social responsibility can allay donors' fears, encourage people to give, and improve the favorable image of charitable giving.

6 Future Enhancement

In future work, a dedicated digital cryptocurrency could be created to manage all transactions, which would help build trust among all participants by reducing corruption through full transparency.

References

1. Aras ST, Kulkarni V (2017) Blockchain and its applications – a detailed survey. Int J Comput Appl 180(3):0975–8887
2. Mudliar K, Parekh H, Bhavathankar P (2018) A comprehensive integration of national identity with blockchain technology. IEEE
3. Saleh H, Avdoshin S, Dzhonov A (2019) Platform for tracking donations of charitable foundations based on blockchain technology. In: Actual problems of systems and software engineering (APSSE)
4. Saleh H, Avdoshin S, Dzhonov A (2019) Platform for tracking donations of charitable foundations based on blockchain technology. In: 2019 actual problems of systems and software engineering (APSSE), pp 182–187



5. Anjum A, Sporny M, Sill A (2017) Blockchain standards for compliance and trust. IEEE
6. Mukhopadhyay U, Skjellum A, Hambolu O, Oakley J, Yu L, Brooks R (2016) A brief survey of cryptocurrency systems. In: 2016 14th annual conference on privacy, security and trust (PST). IEEE, pp 745–752
7. Swati J, Nitin P, Saurabh P, Parikshit D, Gitesh P, Rahul S (2022) Blockchain based trusted secure philanthropy platform: crypto-GoCharity. In: 2022 6th international conference on computing, communication, control and automation (ICCUBEA), Pune, India, pp 1–8. https://doi.org/10.1109/ICCUBEA54992.2022.10011026
8. Jayasinghe D, Cobourne S, Markantonakis K, Akram RN, Mayes K (2012) Philanthropy on the blockchain
9. Suma V (2019) Security and privacy mechanism using blockchain. J Ubiquitous Comput Commun Technol (UCCT) 1(01):45–54
10. Alexopoulos N, Daubert J, Mühlhäuser M, Habib SM (2017) Beyond the hype: on using blockchains in trust management for authentication. In: 2017 IEEE Trustcom/BigDataSE/ICESS. IEEE, pp 546–553
11. Wood G (2014) Ethereum: a secure decentralized generalized transaction ledger. Ethereum Proj Yellow Pap 151:1–32
12. Palladino S (2019) Querying the network. In: Ethereum for web developers. Apress, Berkeley, CA, pp 89–125
13. Taş R, Tanrıöver ÖÖ (2019) Building a decentralized application on the ethereum blockchain. In: 2019 3rd international symposium on multidisciplinary studies and innovative technologies (ISMSIT). IEEE, pp 1–4
14. Reiten A, D'Silva A, Chen F, Birkeland K (2016) Transparent philanthropic microlending. In: Final project – 6.857 network and computer security. Massachusetts Institute of Technology
15. Miraz MH, Ali M (2018) Applications of blockchain technology beyond cryptocurrency. Ann Emerg Technol Comput (AETiC) 2(1)
16. Bunduchi R, Symons K, Elsden C (2018) Adding value with blockchain: an explorative study in the charity retail sector
17. Agarwal P, Jalan S, Mustafi A (2018) Decentralized and financial approach to effective charity. In: 2018 international conference on soft-computing and network security (ICSNS)
18. Abou Jaoude J, Saade RG (2019) Blockchain applications – usage in different domains. IEEE Access 7:45360–45381
19. Yang T, Guo Q, Tai X, Sun H, Zhang B, Zhao W, Lin C (2017) Applying blockchain technology to decentralized operation in future energy internet. In: 2017 IEEE conference on energy internet and energy system integration (EI2), pp 1–5
20. Singh A, Rajak R, Mistry H, Raut P (2020) Aid, charity and donation tracking system using blockchain. In: 2020 4th international conference on trends in electronics and informatics (ICOEI), pp 457–462

Design of Wireless Cloud-Based Transmission Intelligent Robot for Industrial Pipeline Inspection and Maintenance

Joshuva Arockia Dhanraj, Gokula Vishnu Kirti Damodaran, C. R. Balaji, Swati Jha, Ramkumar Venkatasamy, Janaki Raman Srinivasan, and Pavan Kalyan Lingampally

Abstract With the rapid growth of industry and trade, the worldwide use of pipelines has increased exponentially. These pipelines must be maintained properly and inspected for leakage, cracks, and proper welding after installation. Manual inspection over long distances and in hazardous situations is a cumbersome task for humans, and pipelines of varying sizes and diameters make it nearly impossible, so replacement of the pipeline often becomes the only viable option. That option wastes not only manpower, time, and money but also raw materials, which is highly unsustainable. With this problem in mind, a robot named PIPELINE360 has been developed. The robot is based on the Non-Destructive Testing (NDT) method of inspection and a self-adjustable mechanical structure, and it can inspect pipelines with diameters ranging from 440 mm to 1270 mm. It provides a live visual feed of the pipeline and can check for internal and external cracks in the walls and the welded joints of the pipeline through non-contact ultrasonic testing. The robot can be controlled using a wireless

J. Arockia Dhanraj (B) · G. V. K. Damodaran · C. R. Balaji · S. Jha · R. Venkatasamy · J. R. Srinivasan · P. K. Lingampally
Centre for Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu 603103, India
e-mail: [email protected]
G. V. K. Damodaran e-mail: [email protected]
C. R. Balaji e-mail: [email protected]
S. Jha e-mail: [email protected]
J. R. Srinivasan e-mail: [email protected]
P. K. Lingampally e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_9




cloud-based transmission system. Use of this robot can reduce the manpower, maintenance cost, and time required for inspection.

Keywords Mobile robot · Wireless cloud transmission · Inspection assist · Pipeline inspection · Automation · Non-destructive testing (NDT) · Ultrasonic testing

1 Introduction

An automated robotic approach to pipeline inspection helps reduce manpower, maintenance cost, time, and material wastage [1]. This robot applies Non-Destructive Testing (NDT)-based acoustic testing to inspect for damage and cracks in the pipeline and its welded and non-welded joints, using a non-contact, couplant-free electromagnetic acoustic transducer (EMAT) to detect internal and external cracks so that pipeline maintenance can be scheduled [2], and it also provides a live visual feed of the pipeline. This feed supports visual inspection and is an added advantage, since combining both aspects of inspection produces a higher-quality inspection result. The design of the robot plays a vital role in automated inspection, as various factors can affect its output, such as pipeline length, pipe size, quality of the welding, alignment of the pipeline at the joints, and so on. Considering these constraints, a self-adjustable design meets the needs of automated pipeline inspection: a spring-back type of adjustable design with a calculated spring value, an idler support-wheel system, and a block-type arrangement to accommodate various pipe dimensions [3]. Automating pipeline inspection will be a key factor in minimizing both maintenance and labor costs, which is highly economical for small- and medium-scale business establishments.

2 Literature Review

Ab Rashid et al. [1] state that the main problem during an inspection is the restricted maneuverability of the robot due to geometric changes in the pipe, which can be addressed by building a suitable mathematical model for an efficient robotic inspection system. They offer a thorough analysis of several modeling approaches for in-pipe inspection robotic systems, including an evaluation of each system's kinematic and dynamic mathematical models, and they cover robots with a variety of propulsion methods, including fluid-driven, wheeled mobile drive (WMD), screw or helical drive, legged, and biomimetic drives.



Roslin et al. [2] reviewed and categorized several hybrid robots according to their locomotion systems, such as the caterpillar wall-pressed type, the wheeled wall-pressed type, and the wheeled wall-pressing screw type. According to the study, the wall-pressed type is the most widely used primary locomotion system in the development of in-pipe robots. Most prototypes can enter branches of the same diameter as the pipe. A caterpillar wheel offers a greater advantage in preventing motion-singularity problems at branches, whereas the wheeled wall-pressed model offers high-speed mobility and the wheeled wall-pressing screw type provides the best navigation in curved pipes. However, none of these designs demonstrates how well it can navigate from larger pipes into smaller branches. Kakogawa and Ma [3] explored redundancy in pipeline inspection using mobile robots that adapt to the working environment inside the pipeline. In their research, a unique machine-based in-pipe scanner is proposed to inspect large-diameter pipelines; a major advantage of this machine is that it covers a wide range of pipe diameters (500–1000 mm). The robot can address the problems of underground pipeline testing, overcoming the challenges humans face in hard or dangerous work, and it can also operate in inaccessible conditions during the repair and maintenance of underground pipelines in various industries and in daily life. Vigneswaran et al. [4] note that active monitoring and inspection of pipeline systems has become highly expensive with traditional methods, and a better alternative could be robotic inspection. Various kinds of damage can occur in a pipeline system, such as corrosion and the development of cracks, so regular inspection is required; the system proposed in their paper provides information on robot inspection technology using wireless sensors. Jain et al. [5] worked on the design of a multilink articulated robot with omni and hemispherical wheels for pipeline inspection. The design can quickly adapt to the shape of the pipeline and perform rollover movement without using forward and backward movements, since it combines omni wheels with an articulated-link design. The parameters required to adapt to various pipelines, such as the driving wheels and torsion springs, the magnitude of the driving forces, and the stiffness and natural angle of the spring, were investigated for this multilink-articulated robot. The design incorporates few driving actuators and an elastic joint (the torsional spring) that lets the body adapt to and bend over the bends of the pipeline, and the efficiency and effectiveness of the design were experimentally verified. Tavakoli et al. [6] described the design and development of a pole-climbing robot (PCR) for the inspection of industrial-size pipelines. During the design phase, a PCR with a novel four-degrees-of-freedom ascending serial mechanism was introduced, with a virtually ideal workspace and weight, special V-shaped grippers, and a quick rotational mechanism around the pole axis. The design process focused heavily on simplicity, safety, minimum weight, and manipulability, and the prototype demonstrated the viability of using PCRs for NDT inspection on elevated structures. PCRs were designed and built to cross bends and



T-junctions, but they face far greater challenges than those that must ascend from a straight pole. Kawaguchi et al. [7] describe a new mechanism, communication system, and vision system of an internal pipe inspection robot. The crawler-like design, which is built on dual magnetic wheels, not only gets over the restriction but also gives the robot the ability to climb over sharp objects like sleeve and dresser joints. To lessen the friction, a fiber-optic communication system has been implemented. Because of the new vision system’s substantial downsizing, it can clearly see and check the welded portion beneath the robot while orienting itself. Verma et al. [8] explained their system based on programmable logic controller (PLC) for the controlling of the robot which is implemented and the internal control is done using human–machine interface (HMI). They used two parts in programming the robot in this work—the first part of the programming work was carried out by using a PLC software in which the simulation of the sequential operation of the robot functionality was carried out. The second part of the programming was developed with the HMI which is for the internal controlling of the robot which gives an easy access for the operation using the HMI touch screen user which can define the size of the pipeline and the functionality required for the inspection. Kwon et al. [9] elaborate on the design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. To ensure that the robot extends to grip the pipe walls, the robot system uses spring-loaded four-bar mechanisms and a differential drive to steer the robot. The caterpillar wheels, analysis of the fourbar system supporting the treads, closed-form kinematic approach, and user-friendly interface of this robot are its distinctive features. A brand-new motion planning method is also suggested, allowing two robot modules to cooperatively navigate through challenging pipe portions by using springs to connect them. Additionally, a method of analysis for choosing optimal compliance is recommended to guarantee functionality and cooperation. Kakogawa et al. [10] proposed a wheeled Pipeline Inspection robot which is multilink articulated and operates based on anisotropic shadow assistant method which is projected. A crescent-shaped shadow appears in the image captured in the bent pipe when the position of the illuminator is displaced with respect to the camera. The robot’s orientation around the pipe axis is responsible for the size, position, and orientation of the shadow. At a certain point in orientation, the shadow tends to disappear and that is called as anisotropic shadow. In the previously developed robot, when the robot’s orientation and the pathway direction of the bent pipe are aligned, the robot adapts itself to a bent pipe without any control. Operation-assisted system is proposed to pass through the winding pipes after aligning those two specific orientations. Two types of images’ binarization are used in this paper to extract shadow images. By applying the pathway direction of the bent pipe to the rolling movement of the robot, the proposed system was experimentally verified. Sera et al. [11] elaborated about most of the Pipeline Inspection robots being manually operated especially when there is a bend or when the robot must pass through T-branches. Yet there are few research and studies’ station that such robots



could pass through bends and T-branches automatically by recognizing the bending direction and adjusting their posture, and those come with certain constraints. In their paper, an articulated pipeline inspection robot that can move through an 8-inch gas pipeline is proposed, along with the joint-angle trajectories it follows while passing through bends. The joint angles are derived from the geometrical relationship between the robot and the pipe. A control system for passing through bends and branches is implemented by modeling the joints mathematically, and the efficiency of the proposed system has been verified experimentally [12–15].

3 Feasibility Study

From a product point of view, a feasibility study plays a vital role in the modern mechatronic design approach. Based on previous literature, four types of feasibility have been taken into consideration, which are explained in the following sub-topics.

3.1 Technical Feasibility

This robot uses standard geared 300 RPM, 12 V DC motors for its movement, allowing it to move inside the pipeline with its payload and components. It is fitted with an FPV CMOS 1000TVL 90° camera with a resolution of 1280 × 720 pixels and an inbuilt 6 mm CMOS image sensor, which provides a clear image without pixel breakage; the camera operates in a temperature range of −40 °C to 105 °C, which is suitable for industrial applications. For video transfer, a TS5823 transmitter module is used, with an operating range of 600–1500 m; it offers 32 channels and transmits video over a variable bandwidth of 0–8 MHz. The robot is mounted with an electromagnetic acoustic transducer (EMAT) for NDT. It offers contactless testing using sound waves, can handle metallic as well as concrete-coated material, has a frequency range of 2–7 Hz, and offers a measured thickness range of 10–150 mm, giving optimal inspection results. An Arduino Uno R3, an easily available controller with an easy-to-program user interface, controls the movement and handles the incoming and outgoing data; it is connected to an ESP8266 module, which provides 2.4 GHz Wi-Fi connectivity. On the mechanical side, a block-type design is used so that a layperson can assemble and use the robot, eliminating the need for a specialized technician, and a spring-back mechanism provides rigidity with fewer links, joints, and moving parts, which increases the robustness of the



product. An idler-wheel design additionally provides stability and balance to the system when moving inside the pipeline.

3.2 Operational Feasibility

This customized robot is easy to operate: it can be run by low-skilled labor, eliminating the need for a specialized technician, and it can be operated over long distances using a simple two-axis joystick. The visuals can be viewed on the screen supplied with the controller, and the inspection data are displayed as a graph from which a crack can be identified wherever the points deviate from the normal. This streamlines the inspection, making it quick and reducing the time and manpower it requires.

3.3 Power Feasibility

The components in this project are chosen based on their operating ranges, power consumption, and suitability for the application, with operating range and power consumption playing the major roles. The CMOS camera uses 4.5–6 V, the Wi-Fi module uses 5–9 V, the TS5823 transmitter uses a 7 V supply, the Arduino uses a 5 V supply, each motor uses a 12 V supply, and the EMAT transducer uses a 12–24 V supply; in total, about 46 V is required to operate the robot, which can be supplied by two 11.4 V LiPo batteries, giving an adequate operating time for inspection.

3.4 Economic Feasibility

All the components used in this project are highly reliable and can easily be replaced if required. The mechanical, electrical, and electronic parts are selected and designed according to industrial standards and are readily available at local stores at low cost.

4 Proposed Model and Circuit of the Pipeline360 Robot

The PIPELINE360 circuit is split into different modules, namely the power source module, sensory module, data transmission module, driving module, and system control module.



Fig. 1 Block diagram of the robot

The block diagram of the robot (Fig. 1) depicts its major functional units. All the sensors and actuators are connected to the robot's controller, an Arduino UNO R3, which controls the CMOS camera for image capture, a 300 RPM DC motor for the robot's movement, an EMAT sensor for detecting faults in the surface, an LED, and the transmitter and receiver along with the VTX module. A LiPo battery powers the entire robot, including all its sensors and actuators; the battery is connected to the controller, from which power is distributed to all other units. The block diagram of the joystick controller used to manually control the robot is shown in Fig. 2. The controller here is an Arduino Nano, to which all the other units are connected. The joystick controller connects to a display unit that shows the visual feed from the camera via a VRX module. An NRF24L01 transceiver is also connected to the controller and is used to receive the data acquired by the robot. The power supply unit is connected to the data acquisition system, which collects the data obtained by the EMAT sensor, and to the controller, which then distributes power effectively to all other units.

4.1 Sensory Module

4.1.1 EMAT Transducer

Electromagnetic acoustic transducer (EMAT) is used in non-destructive testing for the identification of the internal and external cracks. This transducer uses the principles of electromagnetic induction for the generation of the shear waves [8]. The



Fig. 2 Block diagram of the base system

EMAT is contactless and does not require any couplant, as the sound wave is generated within the material [9]. Working: First, the conductor is placed near the test specimen and energized with an alternating current. This alternating current induces an alternating magnetic field, which in turn develops eddy currents in the test specimen. When a strong electromagnet is introduced, a Lorentz force is created by the interaction between the permanent magnetic field and the eddy currents. The direction of the eddy currents, and hence of the Lorentz force, changes instantaneously, and this changing Lorentz force disturbs the particles of the test specimen, generating ultrasonic shear waves that propagate through the material and are used to determine flaws in the test specimen [10].
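For reference, the body force that launches the wave is the Lorentz force density acting on the induced eddy currents, a standard EMAT relation not written out explicitly in the paper:

f = J × B,

where J is the induced eddy-current density and B is the bias magnetic field of the permanent magnet. Because J alternates at the excitation frequency, f alternates as well, which is what disturbs the specimen's surface layer and radiates the ultrasonic shear wave described above.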

4.1.2 CMOS Camera

The 1000TVL 90° CMOS camera is tailored for First-Person View (FPV) use. It has a resolution of 1280 × 720 pixels and includes an inbuilt 6 mm CMOS image sensor. It has an operating temperature range of −40 °C to 105 °C and a working voltage range of 4.5–6 V. It has automatic white balance and digital noise reduction built in [4], and it can be easily interfaced with the RF transmitter and receiver circuit.



4.2 Data Transmission Module

4.2.1 NRF24L01 RF Transceiver

The NRF24L01 is a single-chip transceiver module for the 2.4–2.5 GHz ISM band. The board contains a fully integrated frequency synthesizer, and the output power, frequency channel, and protocol setup can easily be programmed over the SPI interface [7]. The module can be switched between transmitter and receiver modes. It is an ultra-low-power device, which makes the system more efficient; it operates over a temperature range of −40 °C to +85 °C and supports a maximum data rate of 2 Mbps. In this system, the transceiver is used to control the robot's forward and reverse movement and for wireless data transfer between the EMAT probe and the receiver system.

4.2.2 ROTG01 UVC

The ROTG01 UVC is a First-Person View (FPV) receiver used to receive the video/audio sent by the transmitter over RF. It is a low-power video receiver with an operating voltage of 5 V, a temperature range of −10 °C to 60 °C, a frequency range of 5645–5945 MHz, and 150 channels. It can be used directly with Android as well as Windows devices, connecting through an OTG connector, and it is best suited for 5.8 GHz transmission.

4.2.3 TS5823 Transmitter

The TS5823 is a video transmitter intended for FPV wireless video transmission and is well suited to drone and mobile robot applications, being small and lightweight. It is a moderate-power device with an operating voltage of 7 V and a video bandwidth of 0–8 MHz across 32 channels. It can transmit up to 1500 m theoretically; in open areas, its transmission range is between 600 m and 950 m. It can be interfaced directly with the camera using the connector cable.

4.3 System Control Module

4.3.1 ATMEL 256

The ATMEL 256 is a powerful microcontroller with 256 KB of flash memory (of which 8 KB is used by the bootloader), 8 KB of SRAM, and 4 KB of EEPROM.



It executes instructions at high speed in a single clock cycle. It is a low-power microcontroller well suited to dedicated tasks. It has 54 input/output pins, of which 16 are analog pins, and it also has six communication ports. Pulse width modulation (PWM) is built into this microcontroller, which lets the user control an entire system.

5 Function Tree

Function analysis (Fig. 3) is a method for analyzing and developing a function structure: an abstract model of the new product, without material features such as the shape, dimensions, and materials of the parts. It describes the functions of the product and its parts and indicates their mutual relations. In this paper, the base has been laid using function-means analysis. It starts with the objective, 'To scrutinize the wall of the industrial pipeline in search of cracks or any other deformations.' Power flows from its source, a lithium-polymer 12 V battery, to the heart of the robot, the Arduino Uno R3 microcontroller. All the actuators and sensing units are connected to the controller, and they are split into three units, namely the movement control unit, the scrutinizing unit, and the transmitting unit. The power supply for these three units is passed on from the Arduino UNO R3 through the power distribution board. The specific subordinates of the three major units, the principal means by which the robot achieves its output function, are then specified. The means of movement of the pipeline inspection robot is the 300 RPM DC motor. The EMAT sensor and the CMOS camera serve as the means of the scrutinizing unit; these two sensors are entirely responsible for detecting cracks and other deformations, and they connect to the data acquisition system and the digital image processing unit. The digital image processing unit converts the image captured by the CMOS camera into a digital image, and the EMAT sensor takes feedback from the data acquisition system. The transmission unit's means are the NRF24L01 transceiver and the VTX module, both used to transmit the obtained data effectively; both modules are connected to an antenna for efficient data transmission. All the means of the different units are attached to the robot body. Together, they serve the main function of the robot: to scrutinize the walls of an industrial pipeline for cracks and other deformations.



Fig. 3 Functional tree

6 Working of the Robot

The initialization process of a mobile robot in a cloud-based pipeline inspection system is a critical step in ensuring accurate and efficient operation. The key steps in the initialization process include:

1. Network Connection: The robot needs to establish a secure and reliable connection to the cloud infrastructure to enable data transfer and remote control. This is typically done over a wireless network, such as Wi-Fi or cellular.
2. Calibration: Before the robot can start inspecting the pipeline, it must undergo a calibration process to ensure that its sensors and cameras are accurately aligned



with the pipeline. This may involve using specialized tools or techniques to establish a reference frame and fine-tune the robot's position and orientation.
3. Configuration: The robot needs to be configured with information about the pipeline it is inspecting, such as its location, dimensions, and other relevant details. This information is usually stored in a database in the cloud and can be accessed by the robot during initialization.

Once the robot is initialized and ready to start inspecting the pipeline, it begins collecting data and sending it to the cloud. Observations are then monitored in real time, using advanced algorithms and machine-learning models to detect and analyze any anomalies or issues with the pipeline. These observations can include video streams, sensor readings, and other relevant information, which are compared with historical data to identify trends and potential problems. The cloud-based inspection system can also be set up to send notifications or alerts to the pipeline operators if any issues are detected, allowing them to take prompt action to address the problem and maintain the integrity and safety of the pipeline; a minimal sketch of such an alerting loop is given at the end of this section. This continuous monitoring and analysis of the pipeline observations helps ensure the longevity and safety of the pipeline, reducing the risk of failures and malfunctions.

The robot is manually controlled using an RF transmitter, with the joystick providing forward and backward movement, and the live visuals of the pipeline are sent through the transmitter. The sensing element is located at the tip of the robot's front arm and rotates continuously at a constant low speed to inspect the wall precisely and at a steady rate. The sensed data are recorded and can be viewed on the screen [5]. A stepper motor is used for the robot's precise movement, so that the pipeline is inspected at constant intervals. Depending on the pipeline diameter, blocks can be added to increase the length of the arm. The flowchart (Fig. 4) shows the process: the robot is initialized by turning on the power, and once powered, the EMAT probe, camera, and transceiver are initialized simultaneously. The sensors then perform their functions step by step: the camera acquires visual data, processes it, and transmits it to the receiver, while the EMAT probe scans the surrounding area, processes the acquired data, and transmits it. The transmitter and receiver, in turn, receive the movement commands and decide which way the motors must rotate to move in the requested direction. The orthographic view of the robot model is shown in Fig. 5.
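The cloud-side alerting loop referenced above could be sketched as follows. This is a hedged illustration only: the paper does not specify the cloud software, so the reading format, threshold values, endpoint, and helper names are all assumptions:

    // A minimal cloud-side monitoring sketch (TypeScript/Node): compare incoming
    // EMAT thickness readings against a reference and alert operators on anomalies.
    // All names and values here are illustrative, not from the paper.

    interface Reading {
      robotId: string;
      positionMm: number;   // distance along the pipeline
      thicknessMm: number;  // wall thickness reported by the EMAT probe
    }

    const NOMINAL_THICKNESS_MM = 12;  // assumed reference value for this pipeline
    const TOLERANCE_MM = 1.5;         // assumed acceptable deviation

    function isAnomalous(r: Reading): boolean {
      return Math.abs(r.thicknessMm - NOMINAL_THICKNESS_MM) > TOLERANCE_MM;
    }

    async function handleReading(r: Reading): Promise<void> {
      if (isAnomalous(r)) {
        // Notify the pipeline operators (hypothetical webhook endpoint).
        await fetch("https://example.com/alerts", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ ...r, alert: "possible wall defect" }),
        });
      }
    }

In a real deployment, the comparison would draw on historical data and trained models, as the paper describes, rather than a fixed threshold.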

Fig. 4 Flowchart of the robot

Fig. 5 Orthographic view of the robot model design

7 Construction of the Robot

The robot has a total of six arms: three at the back and three at the front. The arms are spaced 120° apart and are designed to ensure stability during movement. The robot is driven by three stepper motors coupled to the wheels through a worm and worm-wheel mechanism. It also has three supporting wheels for stabilized movement, along with a self-adjustable mechanical structure. The sensing element is fixed at the back of the robot and driven by a low-speed DC motor; four bright LEDs and a CMOS FPV camera are mounted on the front face. The robot can inspect pipelines with diameters varying from 440 to 1270 mm. In this robot (Fig. 6), a spring-loaded piston system provides the self-adjustable design that fits the robot snugly into the pipeline and keeps it stable when the pipeline diameter varies [6]. The cylindrical block in the arm of the robot is dimensioned around the spring used in this system: a helical spring of circular cross-section with plain ground ends.

Fig. 6 Orthographic view of the arm design

7.1 Design Description of Pipeline360

The PIPELINE360 robot is designed to inspect pipelines after machining operations in industry. It has a self-adjustable mechanism for inspecting pipelines from 440 to 1270 mm in diameter. The robot has six legs, of which three are driven legs and the other three



are idler legs, which have no actuators to rotate their wheels but provide support to the robot while it operates or moves inside the pipeline. The main body carries all of the electronic system components internally, along with the legs of the robot. The stepper motor that drives the sensor arm is fixed in the front lid of the main body, with the sensor arm supported by a bearing on the shaft that connects the arm to the motor. The sensor arm is fitted with a spring mechanism that adjusts as the robot moves across welds inside the pipeline. The sensor is fixed at the end of the arm, with a 2 mm gap between its top surface and the inner wall of the pipeline, to obtain highly accurate results. The arm carries two wheels on either side of the sensor to protect it from damage while inspecting or moving across bulged surfaces inside the pipeline. The base housing of every leg is internally threaded, and the spring housing externally threaded, for 30 mm from the edge. The linear adjusting component is designed to interlock at the edges and to move up and down as the spring extends and retracts. Stepper motors are used to obtain precise movements for maximum inspection coverage. The gear housing contains the worm gear with its shaft, the stepper motor, the shaft connecting the motor to the gear, and the worm wheel. Solid rubber wheels are used to obtain a high grip on the inner walls of the pipeline. All driving and idler legs are spaced 120° apart, each designed on a separate plane relative to the face on which it is to be mounted.

7.2 Calculation for the Spring Selection

Notation: total coils $n$, free length $L_f = Pn$, solid length $L_s = dn$, pitch of coil $P$.

1. Material: C-65 carbon steel (0.6–0.7% C, 0.5–0.8% Mn). The tensile strength of C-65 is 750 N/mm²; the yield stress is $\sigma_y = 430$ N/mm².
2. Rigidity modulus $G = 0.89 \times 10^5$ N/mm².
3. Mean diameter $D = D_o - d$.
4. Spring rate $q = \dfrac{Gd}{8C^3 n}$. Assuming a spring index $C = D/d = 12$, so $D = 12d$ and $12d = 58 - d$.
5. Hence $d = 58/13 \approx 4$ mm and $D = 48$ mm (mean diameter). From the spring-rate relation, $(0.89 \times 10^5 \times 6)/(8 \times 12^3 \times 5) = 7.72$, so $n \approx 8$ coils.



Free length: $L_f = Pn = 40$ mm, so the pitch is $P = 40/8 = 5$ mm. Solid height: $L_s = dn = 4 \times 8 = 32$ mm.

Therefore, the free length of the spring is 40 mm, the total number of coils is eight, the solid height is 32 mm, the pitch is 5 mm, the wire diameter is 4 mm, the mean diameter is 48 mm, and the outer diameter is 58 mm.
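The arithmetic above is easy to sanity-check in a few lines. The following is a minimal sketch, assuming the values used in the text (the rounding choices are ours, not a general spring-design routine):

```python
# Sanity check of the spring-sizing numbers from the text.
G = 0.89e5          # rigidity modulus of C-65, N/mm^2
C_index = 12        # assumed spring index C = D/d

# 12d = 58 - d  =>  d = 58/13, rounded to the nearest mm
d = round(58 / 13)                 # wire diameter ~ 4 mm
D = C_index * d                    # mean diameter = 48 mm

n = 8                              # total coils (rounded from the spring-rate relation)
P = 40 / n                         # pitch from free length Lf = P*n = 40 mm -> 5 mm
Ls = d * n                         # solid height = 32 mm

print(f"d = {d} mm, D = {D} mm, n = {n}, pitch = {P} mm, Ls = {Ls} mm")
```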

8 Conclusions

The developed automated robot system replaces the human resources deployed for the integral process of pipeline inspection, and eliminates the increased cost of labour and the material waste involved in installing new pipes. The robot makes pipeline inspection easier and more efficient: with its self-adjusting, stable design, and by combining visual data with internal and external crack-identification techniques, pipelines can be repaired more readily. The robot helps reduce the cost of inspection, and hassle-free maintenance of the pipeline can be carried out successfully. Compared with other inspection robots on the market, the system is flexible in its dimensions and can fit pipelines of various sizes with the help of detachable blocks, extending the inspectable pipeline diameters from 440 to 1270 mm. The system also provides adequate supervision infrastructure, with user-friendly manual control available whenever the user deems it necessary, supported by live video feedback.

References

1. Ab Rashid MZ, Yakub MFM, bin Shaikh Salim SAZ, Mamat N, Putra SMSM, Roslan SA (2020) Modeling of the in-pipe inspection robot: a comprehensive review. Ocean Eng 203:107206
2. Roslin NS, Anuar A, Jalal MFA, Sahari KSM (2012) A review: hybrid locomotion of in-pipe inspection robot. Procedia Eng 41:1456–1462
3. Kakogawa A, Ma S (2018) Design of a multilink-articulated wheeled pipeline inspection robot using only passive elastic joints. Adv Robot 32(1):37–50
4. Vigneshwaran DS, Rahman RRA, Vignesh R. Wireless network sensor monitoring platform using pipeline inspection robot
5. Jain RK, Das A, Mukherjee A, Goudar S, Mistri A (2019) Design analysis of novel scissor mechanism for pipeline inspection robot (PIR). In: Mondal A (ed) Proceedings of the advances in robotics, pp 1–6



6. Tavakoli M, Marques L, de Almeida AT (2010) Development of an industrial pipeline inspection robot. Ind Robot: Int J
7. Kawaguchi Y, Yoshida I, Kurumatani H, Kikuta T, Yamada Y (1995) Internal pipe inspection robot. In: Proceedings of 1995 IEEE international conference on robotics and automation, vol 1. IEEE, pp 857–862
8. Verma V, Kumar R, Kaundal V (2017) Implementation of ladder logic for control of pipeline inspection robot using PLC. In: Proceeding of international conference on intelligent communication, control and devices, pp 965–971
9. Kwon YS, Yi BJ (2012) Design and motion planning of a two-module collaborative indoor pipeline inspection robot. IEEE Trans Rob 28(3):681–696
10. Kakogawa A, Komurasaki Y, Ma S (2017) Anisotropic shadow-based operation assistant for a pipeline-inspection robot using a single illuminator and camera. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 1305–1310
11. Sera F, Kakogawa A, Ma S (2019) Joint angle control of an 8-inch gas pipeline inspection robot to pass through bends. In: International conference on advanced mechatronic systems (ICAMechS), pp 28–33
12. Gulliver TA (2013) A design of active RFID tags based on NRF24L01. In: 2013 10th international computer conference on wavelet active media technology and information processing (ICCWAMTIP), pp 210–213
13. Klann M, Beuker T (2006) Pipeline inspection with the high resolution EMAT ILI-tool: report on full-scale testing and field trials. In: International pipeline conference, pp 235–241
14. Niese F, Yashan A, Willems H (2006) Wall thickness measurement sensor for pipeline inspection using EMAT technology in combination with pulsed eddy current and MFL. In: 9th European conference on NDT, Berlin, vol 18, pp 45–52
15. Simek J, Ludlow J, Flora JH, Ali SM, Gao H (2012) Pipeline inspection tool with double spiral EMAT sensor array. US Patent 8,319,494

Evaluation of Security of Cloud Deployment Models in Electronic Health Records

Nomula Ashok, Kumbha Prasadarao, and T. Judgi

Abstract Most doctors have adopted the EHR (Electronic Health Record) for patient record-keeping because of the convenience of sharing data or records at will; it is also more convenient for patients, nurses, and the other stakeholders of the healthcare ecosystem. The cloud is becoming the infrastructure for the majority of EHRs because of lower costs and application scalability, yet data protection must not be compromised. Using a key-control method, we propose a framework for storing medical records and allowing patients and doctors to access them. The situations considered here cover both rural and urban medical facilities, making the framework well suited to Indian medical services. By separating the encryption used to transmit data from the encryption used to store it, the proposed approach provides double data security. The experimental results demonstrate that it scales in both the number of patients and the number of health-record components. Cloud computing aggregates large numbers of machines and large volumes of data, so data security is an essential component of it. The cloud faces a variety of concerns, including storage, environment, and security problems such as dependability and privacy, and despite all the effort put into resolving these issues, certain security issues remain; ensuring data security in cloud storage is a crucial concern. This paper examines the security issues of healthcare-system deployment models and the different methods that have been developed to address them.

Keywords Cloud computing · Decryption · Electronic health record · Cloud security · Encryption

N. Ashok (B) · K. Prasadarao · T. Judgi
Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
e-mail: [email protected]; [email protected]
K. Prasadarao
e-mail: [email protected]
T. Judgi
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_10




1 Introduction

Today, the electronic health record (EHR) is increasingly appealing to the academic and healthcare sectors. The cloud computing model is one of the most widely used health-IT infrastructures for enabling EHR sharing and integration. Many healthcare organisations and insurance providers employ various electronic medical record systems today, but the majority of them store medical records as electronic records in centralised databases [1–4]. By offering an innovative deployment paradigm as a source of user utility, cloud computing has contributed to the healthcare industry. It lowers the cost and energy use of pooled computing resources (servers, storage, software, and networking), and it improves the scalability, adaptability, and dependability of healthcare systems. Providers use the pay-as-you-go option to let cloud customers purchase computing resources according to their needs and requirements without worrying about the cost. The disadvantage is that many healthcare institutions are wary of using the cloud because of unaddressed security issues; the inability to exert control over clouds is another problem [5–9]. Both EMRs and EHRs are essential to the larger goal of healthcare digitalisation because they can increase patient safety, quality, and efficiency while lowering the cost of healthcare delivery. The main goal of the EHR is to offer a documented record of care that supports the patient's past, current, and future treatment by the same or other doctors or care providers; patients and professionals may also communicate with one another through it. In this study, we present an EHR system architecture and access-control management particularly suited to a heavily populated nation like India. The situations considered here cover rural and urban medical facilities, making the design well suited to Indian medical services. Experiments in a cloud showed that the system meets the scalable and adaptable needs of India's health-management system. By separating the encryption used to transmit data from the encryption used to store it, the proposed approach provides double data security [9–15].

2 Literature Review

The idea of patient-centred health records housed in a web-based system was first proposed by Szolovits et al. in 1994. Hu et al. (2010) described the use of a public key infrastructure (PKI) for sharing and authentication in order to protect the privacy of medical records; Yu and Chekhanovskiy (2007) presented comparable work. Lee and Lee (2008) describe cryptographic key management as a means of ensuring the security and privacy of EHRs. Benaloh et al. (2009) employ a cloud-based EHR system with hierarchical identity-based encryption and privacy protection. One of the consumers' primary concerns is trust.



According to Shen et al. (2010), this requires that the complexity of the security mechanism be kept to a minimum so that users find the system easy to operate. Li et al. (2010) explored the different vulnerabilities. Popovic and Hocenski (2010) examine the difficulties, security concerns, and requirements that cloud service providers (CSPs) deal with during cloud engineering, in order to assess the risk in third-party clouds. In their 2013 article, Xiao and Xiao noted a few major issues with cloud computing, and Chen and Zhao (2012) have emphasised the need to ensure security for data exchange in the cloud [16–20]. Zhou et al. (2010) study how privacy legislation should address cloud computing, including prevention and security issues around cloud-based personal data. Wang et al. (2011) have researched the variables influencing security in cloud computing; their work outlines the dynamics of information security in the cloud, together with the essential security requirements for businesses. Wang (2011) studies, via pilot testing, the privacy and security compliance of software-as-a-service (SaaS) among businesses. Oza et al. (2010) surveyed a variety of users to ascertain their opinions of the user experience of cloud computing and discovered that the primary concerns for all users were trust and the ability to choose the best cloud service provider, in terms of record matching, data transmission, and query limitation [21–23]. They categorise the data in order to share it securely and effectively online, and they provide a structure for doing so. Butler (2007) describes the problems with data sharing on the Internet, as information about users is made available through sharing; this is useful because it informs businesses about the privacy risks associated with the data they choose to make public and the unreliability of user confidentiality. From a banking standpoint, Mitchley (2006) outlines the advantages of data sharing and draws attention to ongoing privacy concerns. Feldman et al. (2012) address the significant advantages of data sharing for public health. Storage and transmission thus give hackers two potential points of entry into the EHR's security and privacy. We explored a double encryption system as a solution, with distinct encryption keys for data storage and data transfer. In this way, even if data are ever stolen from a communication channel, the theft is prevented from spreading to the cloud [24, 25].

3 Security Issues in Cloud Computing

Healthcare is unable to embrace cloud technology because of the general security requirements enumerated in [33–39]:

Duty separation. Although it makes the interface somewhat more complicated, the basic concept is to isolate the EHR system from the cloud provider.

Availability. It guarantees constant, dependable access to cloud data or resources. It ensures that the connection is always available, that the systems are operating as intended, and that only authorised users are allowed access to the systems or the network [6]. It indicates how long the service will be accessible or online and



is commonly referred to as uptime.

Migration. Users who adopt cloud computing are at risk if their data cannot be transferred to other clouds: if they rely on the data, they must remain with that cloud. Migration allows applications and services to be moved across several cloud suppliers; data is frequently transferred by means of specialised programmes or scripts.

Integrity. The primary aspect of information security, it protects healthcare businesses' data, software, and hardware. It ensures the reliability and accuracy of the data [8] and guarantees that only authorised individuals are able to customise or alter data or resources.

Confidentiality. The idea of confidentiality prioritises credentials over permissions for individuals who need access to the system and the protected medical data. It ensures that unauthorised individuals cannot access secure medical data housed by healthcare institutions in cloud infrastructure [9].

These are the most significant security challenges for cloud-based healthcare systems.

4 Methodology

Theft of unencrypted PCs, portable devices, and media used to hold patient information raises security concerns with typical device-based EHRs. Since the data are saved on a remote server rather than on the device itself, a cloud-based EMR is immune to this issue. Physicians and EHR suppliers have switched from desktop to cloud-based EHRs because of the lower cost and the protection against data theft. The cloud-based EHR will be crucial in linking the numerous EHRs of all healthcare facilities and eliminating the shortcomings of the existing system.

4.1 Indian Situation and EHR

For India's vast population, the need for an EHR is unavoidable. An inclusive healthcare system is made possible by bringing people from all over the country into health records. Different languages are spoken in the various regions and states of India, so it is highly desirable to be able to change an EHR's language with a single click on an electronic device. Thanks to a large subscriber base, language-translation APIs are also becoming affordable for use in cloud-based EHR systems. Literacy poses a challenge to promoting the use of the EHR system. However, census data for the decades following India's independence show that the country's literacy rate has been rising steadily, from 20 to 80% between 1951 and 2011. This trend suggests that India's literacy rate will continue to climb towards 100%, creating favourable conditions for all Indians to use the EHR; with high literacy, most individuals will be familiar with computers and mobile devices. In addition, in contrast to many other emerging nations, English is widely used in India, which has the further benefit of making EHRs more suitable for the Indian healthcare system.



4.2 EHR System Architecture

Here, one template for a healthcare system is used as an example of an electronic health record in which access to cloud-based data is secured using encryption. Figure 1 depicts the planned design for the EHR network, including its hierarchy and cloud connections. Most districts can replicate the same EHR, either exactly as is or with minor changes, so the scenario shown here is a reasonable representation of the Indian healthcare system as a whole. Generally speaking, there are two types of health service centre: urban (city) health service centres and rural health service centres (RHSC). Rural health care is far less sophisticated and more basic than that offered in cities, and most rural regions have no access to specialist medical services, in terms of both physicians and equipment. Physicians are distinguished according to whether they work for for-profit medical facilities or public hospitals, and private practitioners from both urban and rural areas are grouped under the umbrella term 'private doctors'. Private medical practitioners and rural healthcare facilities connect directly to the cloud EHR. A municipal health centre often has several health departments as well as a medical and nursing college, which creates the need for a medical server; managing access for the faculty and students of the municipal healthcare facility therefore adds to the workload of cloud management.

Fig. 1 EHR access network on cloud (government organisation, rural health care centres 1–3 with private practitioners P1–P3, and a municipal server connected to the EHR in the cloud)



Multi-level access management tied to hierarchical member entities may be a technique for managing keys for the medical server, so the municipal healthcare facility will have a separate server for medical records. Because of the many kinds of stakeholders involved, using this medical record server is highly challenging. The fundamental primitives for writing and reading data are encryption and decryption. EHR systems allow patients limited access to the health information that healthcare professionals have recorded about them. Many patient-centred electronic health records, including Indivo, Microsoft HealthVault, and Google Health [14], have been proposed along the lines of Szolovits et al.'s system. Cloud computing provides the essential computational infrastructure for EHR storage by offering dependable, on-demand computing resources. The availability, data privacy, access control, data protection, authentication, and scalability [4] of the cloud have all been taken into consideration and examined. Three challenges, which are the main problems we attempt to solve in our proposed models, were noted by Zhao et al. [4] and are listed in Sect. 2. Qinetiq has proposed the use of domain security to derive architectural models for applications based on security needs. Privacy domains are employed in the security architecture of e-health infrastructures in order to provide client-platform security and connect it with network-security principles in business models. These designs emphasise the software-engineering side of systems rather than cryptographic operations and security protocols.

4.3 Deployment Models

A cloud ecosystem may be represented by three categories of participants: services (the EHR system), service users, and the cloud provider [9]. Cloud service providers allow healthcare firms to execute their horizontal medical operations and procedures with zero service interruption. When discussing cloud security, attacks involving cloud providers must be included among the participant threats: the cloud provider need not be hostile itself, but it may play a supporting role in a coordinated attack. Each participant is given an interface appropriate to its participating role. The baseline model below exposes the shortcomings of each deployment model and suggests how to improve it; the subsequent models are built on the required upgrades.

4.4 Model of Separation

The major idea we concentrate on in this architecture for processing and then storing data from healthcare companies is separation of duties [10]. As a result, data processing



and storage are separated in this approach: an EHR storage provider offers data storage A, while another cloud provider offers application X. Two clouds thus each control a portion of the data, increasing data security. For processing and storing electronic health record data, at least two distinct services are essential, and each service should be offered by at least two distinct providers. Each service is in charge of only one of the key steps involved in a transaction [4]. By preventing each service provider from exerting further control over the transactions, this approach is expected to reduce fraud and errors. Although security is somewhat improved in this case, service providers may identify one another, creating a possibility for collusion, and the different service providers filter communications. The model's second flaw is that, because of migration constraints, consumers who are unhappy with a service provider that holds their healthcare organisation's data are unable to switch providers. Furthermore, data cannot be restored from the cloud if any data is destroyed or if the service can no longer be used.

4.5 Model of Availability

The assurance of an always-available connection enables authorised users to access the systems or network whenever required, offering rapid and dependable access to cloud computing resources or data. If a service provider fails, users will have problems using the services they rely on [5]. The cloud provider that supports the processing services therefore keeps another processing system in reserve for its services. For example, two image-processing programmes may provide clinicians with picture analysis: either of the two image applications, B and B', may be employed in the event of an error or corruption. Every service in this cloud-based EHR system is duplicated by adding an additional data processing or storage system, so the model fully supports availability. Data from healthcare organisations is coordinated and replicated via the duplication service. Redundancy in EHR data processing and storage incurs additional expenditure [4]; however, if a service goes down, a backup is available and may be utilised. Many general measurement and network availability studies employ ICMP-based approaches, often focusing on routing issues or failures in edge networks.

4.6 Model of Migration

Data transferred to the cloud may otherwise only remain where it is kept and maintained; if the customer wants the data relocated, all of it may be lost, since the new cloud may not be compatible with the existing one. This situation is unacceptable. Under this approach, businesses submit data to data-processing



services to be processed and to cloud storage services to be kept online, following the general cloud migration paradigm. The migration model for a cloud-based EHR system supports cloud data-migration services: a new specialist can obtain the data from the indicated CDO's EHR using a better and more precise analysis programme from another cloud provider. Thanks to the migration function, the data can be interpreted in several clouds, so users no longer need to worry about a cloud service provider keeping a tight grip on their data. Moving data from cloud storage A to cloud storage A' and vice versa is possible. Separation of duties, availability, fault tolerance, and data transfer are all protected by this paradigm; however, since data protection is not addressed, there is no evidence that it promotes integrity or confidentiality.

4.7 Model for Tunnel and Cryptography

This concept seeks to decrease fraud by cutting off direct communication between service providers. Because the cloud provider is unaware of the data sent through tunnelling, it has less control over the data; in effect, the tunnel hides the data transfer from the network. The data-tunnelling service plays the role of a communication channel: positioned alongside the data-processing and cloud-storage services, it provides an interface through which the services interact with one another and access and modify data. One option is to offer the tunnel as a service. Even if the cloud provider or anybody else gains access to the data, encryption fortunately renders it incomprehensible to them. As a result, the data are kept secure, and with the EHR system physicians can be assured that private information will not be disclosed to unauthorised parties. Adding a cryptography service to the data-tunnelling service enables cryptographic operations on the data. Although there is no direct relationship between service providers in this paradigm, and it is unclear who performs the cryptographic operations, the EHR data-processing service and cloud-storage service are aware of the procedures and have access to the decryption key; without a cryptographic key, nothing can be accessed. On conventional and cloud infrastructure alike, collusion is a danger; this model contends that the lack of clear channels for service providers to identify one another reduces the possibility of coordinated fraud. Data migration, meanwhile, allows data to be transferred across clouds. One of the challenges of encrypted data becomes evident when data are damaged. Overall, this model safeguards duty separation, accessibility, fault tolerance, data movement, anti-collusion, confidentiality, and integrity: encrypted data rejects any illegal access, enhancing data security.



5 Proposed Model

In this paradigm, cloud service providers boost security by distributing a significant volume of data across additional cloud systems. When a cloud-storage service fails, the additional cloud-storage service acts as a backup holding customers' data, so that even if a hacker gains access to one data centre, they cannot access all of the encrypted data, which has been split and stored in different clouds at different data centres. In this approach, service providers work together to deliver more secure services, and presenting transparent communication requires certain protocols and architectures. This article aims to demonstrate a safe and trustworthy EHR system that makes use of a variety of commercial clouds to maximise the benefits of cloud computing. These methods replicate, encrypt, and encode data across many clouds in an effort to increase the availability, integrity, and secrecy of data stored in the cloud. The patient chooses to let a given doctor access their cloud-based medical records at any time. This is accomplished by pairing both parties' secret keys in the key-access-management system, which is expected to be done at the time of the patient's registration. Pairing is kept in key-access management and is static. When patients wish to actively grant a doctor access to their health-record history, they do so after logging into their account. An assertion is dynamic: once made, it is removed after a certain period of time or upon the insertion of a record. The data-access operations are as follows (a schematic sketch is given after this list):

(1) This operation uploads the most recent patient record to the cloud. Only a licensed doctor may submit information about patients, and in this procedure the doctor must have the authority to update the patient's medical record. The text of the patient record is encrypted with the public key corresponding to the patient's private key and saved in the cloud element. The historical health record is always accessible to the patient, but the doctor may view this information only when the patient makes an assertion or the doctor is paired with that patient.

(2) This procedure is comparable to the first data-access procedure, except that a doctor Cn must enter the information into the municipal health centre's medical record server. Any physician Cn may perform the write and read operations on this medical record server. As permitted by key access control, specifically by pairing or assertion, the record created by a doctor Cn may also be viewed by other doctors.

(3) This data-access operation allows a rural medical practitioner to write patient records to the cloud using the patient key. A rural practitioner Rn can access the patient's record history only if already paired with the patient; otherwise, the patient must make the assertion.
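The pairing/assertion rules above can be expressed compactly. The following is a minimal, hypothetical Python sketch, not the paper's implementation: the table layout, expiry window, and function names are our assumptions. Static pairings grant standing access, while assertions grant temporary access that expires on a timer or when a record is inserted.

```python
import time

ASSERTION_WINDOW_S = 3600  # assumed validity period for an assertion

pairings: set[tuple[str, str]] = set()         # (patient_id, doctor_id), static
assertions: dict[tuple[str, str], float] = {}  # -> expiry timestamp, dynamic

def register_pairing(patient: str, doctor: str) -> None:
    # done once, at patient registration
    pairings.add((patient, doctor))

def assert_access(patient: str, doctor: str) -> None:
    """Patient actively grants a doctor temporary access after logging in."""
    assertions[(patient, doctor)] = time.time() + ASSERTION_WINDOW_S

def may_access(patient: str, doctor: str) -> bool:
    # standing access via pairing, or unexpired assertion
    if (patient, doctor) in pairings:
        return True
    expiry = assertions.get((patient, doctor))
    return expiry is not None and time.time() < expiry

def on_record_inserted(patient: str, doctor: str) -> None:
    # an assertion is consumed once a record is inserted
    assertions.pop((patient, doctor), None)
```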



5.1 Cross-Authority Access and Privacy Method

When the patient and doctor are not paired and the doctor needs to access the patient's medical information, the patient must make an assertion. If patients are unable to assert for themselves, family members or the patient's partner may do so instead. In other circumstances, the government often wishes to conduct health surveys, compile data, and analyse the population. In this case, Gn, a government-approved member, is able to submit high-level queries, and the matching requests are sent to the server. This gives government agencies a way to survey or examine healthcare-system statistics without jeopardising the privacy of patient data.

5.2 Write-Record Operation

Before the health record is created on the EHR in the cloud, an initial authorization step is carried out between the credentials of the patient and the physician. The operation consists of the handshake signals and the data-access time diagram for writing a record. After the data are entered, the device encrypts the health record before sending it to the cloud, where the data are stored together with header information and record keys.

5.3 Record-Reading Procedure

The read-record process (Fig. 2) is depicted using handshake signals and the data time flow. Following the successful pairing or assertion of the patient's and doctor's authorisation, the doctor submits a request to view the patient's health record, along with the patient's identification and the range of records. The cloud process accesses the encrypted records using the patient's ID and the data for the requested documents. After these documents are first decrypted, a public/private key pair is formed, with the private key the same as the physician key, and the document is then encrypted with the freshly created public key. This entails using different sets of private and public keys to encrypt stored and transmitted communications. The scheme provides improved data security, since the cloud-stored message remains secure even if the transmitted message is intercepted by an outsider. Because messages in the cloud are re-encrypted for transmission, a key set need not be shared across many cloud-stored communications; this enlarges the pool of available encryption keys, increasing the number of users and communications that the secure EHR system can process.



Fig. 2 Record-reading procedure
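The double-encryption flow of Sects. 5.2–5.3 can be illustrated with a toy example. The sketch below uses textbook ElGamal with deliberately tiny, insecure parameters; the prime, generator, and helper names are our assumptions, purely for illustration. A record element is stored under the patient's key pair and, on a read, decrypted and re-encrypted under a freshly generated transmission key pair.

```python
import random

P = 104729  # small prime (illustrative only; far too small for real use)
G = 2       # group element used as the base

def keygen():
    x = random.randrange(2, P - 1)       # private key
    return x, pow(G, x, P)               # (private, public)

def encrypt(pub: int, m: int):
    k = random.randrange(2, P - 1)       # ephemeral key
    return pow(G, k, P), (m * pow(pub, k, P)) % P

def decrypt(priv: int, ct):
    c1, c2 = ct
    s = pow(c1, priv, P)
    return (c2 * pow(s, P - 2, P)) % P   # s^{-1} via Fermat's little theorem

# Store: encrypt the record element under the patient's storage key
pat_priv, pat_pub = keygen()
record = 1234                            # a numeric record element
stored = encrypt(pat_pub, record)

# Read: decrypt from storage, then re-encrypt under a fresh
# transmission key pair before sending it to the doctor
tx_priv, tx_pub = keygen()
in_transit = encrypt(tx_pub, decrypt(pat_priv, stored))
assert decrypt(tx_priv, in_transit) == record
```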

6 Experimental Findings and Analysis

Authorization is carried out via an access-control schema and key management, as is common practice. Here we examine the computational complexity, in terms of time performance, and the security analysis of the encryption techniques.

6.1 Security Evaluation

We examine the security of the proposed access-control strategies in this section. The proposed EHR system employs numeric, operation-based privacy-preserving techniques and text-based encryption algorithms; text-based encryption is employed for network- and storage-based health records. Although we have used the ElGamal method here for text encryption, other encryption algorithms may also be employed, and any algorithm must withstand security analysis. The security of the ElGamal algorithm depends on its settings, key-generation system, and implementation style.

Table 1 Time performance for records with various numbers of elements during read–write cycles

No. of elements    3    6    9    12   15   30
Time in ms         30   33   38   40   42   58



Table 2 Number of records read in a given amount of time by the client node

No. of records     1     10    20    50    100   200   1000
Time in ms         0.8   2     3.5   5     9     11    64

6.2 Performance Analysis

Depending on the needs of a given EHR system, a particular encryption technique may be appropriate. However, the primary objective of an EHR system is a straightforward password structure, making attribute-based or predicate-based encryption algorithms more appropriate; public/private key generators are essential in these kinds of cryptographic methods. The table elements were chosen from actual clinic paperwork, including patient details, the principal complaint, symptoms, and prescriptions. The trials were repeated with various numbers of items in the record, and the times required for the record's write and read cycles were recorded: the data are first written to the cloud, from where they can be accessed and viewed on the client device. Table 1 displays the timing performance for the numbers of elements used in this experiment. We also experimented with various numbers of health-history records and examined the corresponding access times; Table 2 displays the findings for the different record counts. Table 1 and Fig. 3a show the access-time performance for records with various numbers of components in the cloud-based EHR. It is clear that the proposed system can accommodate any number of components in a health record; the encryption and decryption time for accessing a record is approximately linear, with the slope varying because the elements gathered into a record have variable sizes. The access-time performance for various record counts is shown in Table 2 and plotted in Fig. 3b. As the number of records grows, the access time rises approximately linearly, showing that the recommended security plan scales to a large number of health records.
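The linearity claim is easy to check against the Table 2 numbers. A quick least-squares fit, a sketch using only the published data points, gives a slope of roughly 0.06 ms per record:

```python
# Ordinary least-squares line fitted to read time versus record count (Table 2)
records = [1, 10, 20, 50, 100, 200, 1000]
times_ms = [0.8, 2, 3.5, 5, 9, 11, 64]

n = len(records)
mx = sum(records) / n
my = sum(times_ms) / n
slope = sum((x - mx) * (y - my) for x, y in zip(records, times_ms)) \
        / sum((x - mx) ** 2 for x in records)
intercept = my - slope * mx
print(f"~{slope * 1000:.1f} us per record, intercept {intercept:.2f} ms")
```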

7 Conclusion

This article focuses on significant user problems that must be addressed if consumers are to embrace cloud computing. With an exact design for deploying information-technology systems on clouds, user worries may be largely eliminated. Second, rather than relying on a single cloud, the suggested models involve inter-cloud interaction. It is essential to inform users about the models so that they may have faith in cloud computing. The infrastructure for the majority of EHRs is moving to the cloud because of lower costs and application scalability. To ensure that patient privacy is protected, it is crucial to keep data in the cloud with a high level of security. Using a key-control method, we have proposed a framework



Fig. 3 a Time versus number of elements; b time versus number of records

for storing health records and allowing patients and doctors to access them. The situations considered here cover rural and urban medical facilities, making the framework well suited to Indian medical services. By separating the encryption used to transmit data from the encryption used to store it, the proposed approach provides double data security. The experimental results demonstrate that it is appropriate for large populations and scales in both the number of patients and the number of items in a health record.

References

1. Benaloh J, Chase M, Horvitz E, Lauter K (2009) Patient controlled encryption: ensuring privacy of electronic medical records. In: Proceedings of the 2009 ACM workshop on cloud computing security. ACM, pp 103–114
2. Butler D (2007) Data sharing threatens privacy. Nat News 449(7163):644–645
3. Chen D, Zhao H (2012) Data security and privacy protection issues in cloud computing. In: 2012 international conference on computer science and electronics engineering (ICCSEE), vol 1. IEEE, pp 647–651
4. Domingo-Ferrer J (2002) A provably secure additive and multiplicative privacy homomorphism. In: Proceedings of 5th international conference on information security
5. Feldman L, Patel D, Ortmann L, Robinson K, Popovic T (2012) Educating for the future: another important benefit of data sharing. Lancet 379(9829):1877–1878
6. Geoghegan S (2012) The latest on data sharing and secure cloud computing. Law Order 24–26
7. Hu J, Chen H-H, Hou T-W (2010) A hybrid public key infrastructure solution (HPKI) for HIPAA privacy/security regulations. Comput Stand Interfaces 32(5):274–280
8. Hu H, Xu J, Ren C, Choi B (2011) Processing private queries over untrusted data cloud through privacy homomorphism. In: 2011 IEEE 27th international conference on data engineering (ICDE), pp 601–612
9. Lee W-B, Lee C-D (2008) A cryptographic key management solution for HIPAA privacy/security regulations. IEEE Trans Inf Technol Biomed 12(1):34–41
10. Li H-C, Liang P-H, Yang J-M, Chen S-J (2010) Analysis on cloud-based security vulnerability assessment. In: 2010 IEEE 7th international conference on e-business engineering (ICEBE). IEEE, pp 490–494; Mitchley M (2006) Data sharing: progress or not. Credit Manage 10–11
11. Oza N, Karppinen K, Savola R (2010) User experience and security in the cloud – an empirical study in the Finnish cloud consortium. In: 2010 IEEE second international conference on cloud computing technology and science (CloudCom). IEEE, pp 621–628
12. Popovic K, Hocenski Z (2010) Cloud computing security issues and challenges. In: MIPRO 2010, proceedings of the 33rd international convention. IEEE, pp 344–349; Sahafizadeh E, Parsa S (2010) Survey on access control models. In: 2010 2nd international conference on future computer and communication (ICFCC), vol 1. IEEE, pp V1-1
13. Sarathy R, Muralidhar K (2006) Secure and useful data sharing. Decis Support Syst 42(1):204–220
14. Shen Z, Li L, Yan F, Wu X (2010) Cloud computing system based on trusted computing platform. In: 2010 international conference on intelligent computation technology and automation (ICICTA), vol 1. IEEE, pp 942–945
15. Szolovits P, Doyle J, Long WJ, Kohane I, Pauker SG (1994) Guardian angel: patient-centered health information systems. Massachusetts Institute of Technology, Laboratory for Computer Science
16. Takabi H, Joshi JBD, Ahn G-J (2010) Security and privacy challenges in cloud computing environments. IEEE Secur Priv 8(6):24–31
17. Wang Y-H (2011) The role of SaaS privacy and security compliance for continued SaaS use. In: 2011 7th international conference on networked computing and advanced information management (NCM), pp 303–306
18. Wang J-S, Liu C-H, Lin GTR (2011) How to manage information security in cloud computing. In: 2011 IEEE international conference on systems, man, and cybernetics (SMC). IEEE, pp 1405–1410
19. Xiao Z, Xiao Y (2013) Security and privacy in cloud computing. IEEE Commun Surv Tutor 15(2):843–859
20. Yu WD, Chekhanovskiy MA (2007) An electronic health record content protection system using smartcard and PMR. In: 2007 9th international conference on e-health networking, application and services. IEEE, pp 11–18
21. Zhou M, Zhang R, Xie W, Qian W, Zhou A (2010) Security and privacy in cloud computing: a survey. In: 2010 sixth international conference on semantics knowledge and grid (SKG). IEEE, pp 105–112
22. Li M, Yu S, Zheng Y, Ren K, Lou W (2013) Scalable and secure sharing of personal health records in cloud computing using attribute based encryption. IEEE Trans Parallel Distrib Syst 24(1):131–143
23. Barga R, Bernabeu-Auban J, Gannon D, Poulain C (2009) Cloud computing architecture and application programming. SIGACT News 40(2):94–95



24. Chow R, Golle P, Jacobsson M, Shi E, Staddon J, Masuoka R, Molina J (2009) Controlling data in the cloud: outsourcing computation without outsourcing control. In: 2009 ACM workshop on cloud computing security (CCSW 2009), Chicago, IL. ACM, pp 85–90
25. Zhao G, Jaatun MG, Rong C, Sandnes FE (2010) Deployment models: towards eliminating security concerns from cloud computing. IEEE, pp 189–195

Leasing in IaaS Cloud Using Queuing Model

Bibhuti Bhusan Dash, Utpal Chandra De, Manoj Ranjan Mishra, Rabinarayan Satapathy, Sibananda Behera, Namita Panda, and Sudhansu Shekhar Patra

Abstract In the present scenario, most IaaS clouds use simple resource-leasing policies, e.g., advance reservation (AR), immediate lease (IL), and best-effort (BE). Resources must be provided for an immediate lease either immediately or not at all: the scheduler executes these leases if resources are available and rejects them otherwise. Under the BE policy, resources are assigned to a lease as soon as they become available; the ability to allocate resources depends on the service provider, and the scheduler processes the request when the requested resources become free. There are no time restrictions on this kind of lease, and if resources are not available the request is placed in a FIFO queue. The AR lease is another sort of lease, one that requires resources within a constrained time frame. With limited resources, a cloud provider cannot accommodate all requests at once. Best-effort leases pose no difficulty, but many critical applications in the cloud need immediate attention from the cloud provider, and delay in processing such leases has a great impact on them; immediate leases therefore need special attention. An analytical Markov model for the proposed scheme is developed and analysed.

Keywords Cloud computing · Immediate lease · Best-effort lease · Queueing model

B. B. Dash · U. C. De · M. R. Mishra · S. S. Patra (B)
School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India
e-mail: [email protected]
M. R. Mishra
e-mail: [email protected]
R. Satapathy
Faculty of Emerging Technologies, Sri Sri University, Cuttack, India
S. Behera · N. Panda
School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_11




1 Introduction

Nowadays, the majority of IaaS clouds employ simple resource-scheduling techniques involving AR, IL, and BE leases. Under immediate leases, the available resources must be scheduled promptly or not at all: the scheduler accepts these leases if resources are available, and rejects them otherwise [1–3]. Under the BE approach, resources are assigned to leases whenever they become available; when a customer makes a BE request, the resources are provided once the service provider has them free, and the client is willing to wait if resources are not immediately accessible [4, 5]. BE leases come in two types: preemptible and non-preemptible. When an AR lease with a higher priority than a BE lease arrives, Haizea first analyses whether it can be served in the future [6]; if Haizea determines that this is not possible, it flatly refuses the lease. If Haizea decides to accept the lease, it must first decide whether any other leases must be preempted in order to schedule the AR lease. Suppose that, to set up an AR, Haizea needs to preempt certain leases. In that case, Haizea halts the lower-priority BE lease, moves it to the head of the queue, and restarts it once the AR lease is complete. BE leases may thus be preempted by higher-priority leases, such as IL and AR leases. Tasks requiring advance reservations or immediate leases have an infinite buffer and independent, identically distributed exponential service times. When there are no AR leasing tasks, the cloud node performs BE leases. Two equivalent models describe this behaviour. In the first, the node runs the BE leases continuously; whenever an AR lease task arrives, the node completes its housekeeping chores, which take an exponentially distributed time, before moving on to the AR lease tasks [6]. While any advance-lease jobs remain in the buffer, the node sees the AR lease tasks through to completion and does not take on BE lease activities. In the second, the node conducts the BE lease jobs sequentially until an AR lease task appears, modelling the BE lease chores as taking i.i.d. exponential time; once such an AR lease arrives, the remaining time to execute the current BE leasing work is exponential. The two options are equivalent with respect to the advance-reservation leasing tasks. Figure 1 shows the leasing practices in the cloud model.

The significant contributions of this work are summarised as follows:

• We propose a reservation policy, modelling the system as a Markovian model in which a few VMs are reserved for IL leases. The blocking probability of the BE leases and the dropping probability of the IL leases have been derived.
• For the case in which all the VMs are occupied with leases, the IL is queued. The blocking probability of the BE leases and the dropping probability of the IL leases have again been derived.
• Numerical findings are presented to analyse the behaviour of the suggested analytical model as well as various performance parameters of the cloud system.

The remainder of the paper is organised as follows. The related works are presented in Sect. 2. The analytical modelling of the system is shown in Sect. 3.



Fig. 1 Leasing practices in cloud model [7]

Section 4 gives the numerical results, and Sect. 5 presents the concluding remarks and directions for future work.

2 Related Work

The IaaS cloud splits resources among competing requests based on accepted resource-allocation procedures, and the majority of cloud service providers use simple allocation policies such as IL and BE. The general public can purchase computing resources from Amazon EC2, a public cloud [8], on a pay-per-use basis. To create a cloud on local infrastructure, the cloud toolkits Nimbus [9], Eucalyptus [10], and OpenNebula [11] are utilised. Haizea is the only virtual infrastructure (VI) management solution that provides flexible VM deployment and enhanced capacity reservation; it is an open-source scheduler for OpenNebula, which is in



charge of managing the leases. Cloud service providers may have resource limitations that prevent them from responding quickly to all requests. Numerous resource-allocation and job-scheduling algorithms have recently been developed for cloud computing in order to accommodate various lease kinds. For deadline-sensitive (DS) leases, the authors of [12] presented a dynamic planning-based resource-allocation technique; firm deadlines, however, risk lowering the lease-acceptance rate. To support AR and BE leases, Li et al. [13, 14] introduced two online task-scheduling methods for applications, namely cloud list scheduling and cloud min-min scheduling. However, because AR has a higher priority than BE, an arriving AR application job can override a BE task that is already running. Two methods were developed by Shrivastava et al. [15] to prevent starvation among BE leases and the conversion of AR to BE leases; these methods may not manage the situation when the system has many leases of the same type. To handle three lease types, namely AR, BE, and DS leases, the authors of [16] devised a resource-allocation method and three online job-scheduling algorithms. Like Haizea, these algorithms assume that the AR lease is non-preemptive while the other two lease types are preemptive. Additionally, these leases have rigid deadlines and fixed start and end times, which makes them inflexible and inefficient for job scheduling and resource allocation.

3 Analytical Modelling

For simplicity we consider only IL and BE leases in our model.

A. Reserved-VMs Scheme

We propose a reserved-VMs scheme with $C$ VMs, of which $C_i$ VMs are set aside for immediate leases (IL). The remaining $C - C_i$ VMs are shared between the IL and BE leases. All IL and BE leases are accepted while the number of idle VMs exceeds $C_i$; once the number of idle VMs is less than or equal to $C_i$, only ILs are accepted and all BE leases are blocked. For simplicity, the arrival processes of the BE and IL leases are both assumed Poissonian, with arrival rates $\lambda$ and $\lambda_i$ respectively, and the service times of both IL and BE leases are exponentially distributed with mean $1/\mu$. We may therefore represent the cloud system as a continuous-time Markov chain (CTMC) with $C$ VMs, the number of occupied VMs serving as the system state. Figure 2 displays the state-transition diagram for leasing with the reserved-VMs scheme for IL.

Fig. 2 State transition diagram for leasing with the reserved-VMs scheme of IL

From Fig. 2, the state equations can be written as

$$P_k = \begin{cases} \dfrac{\lambda + \lambda_i}{k\mu} P_{k-1}, & k = 1, 2, \ldots, C - C_i \\ \dfrac{\lambda_i}{k\mu} P_{k-1}, & k = C - C_i + 1, \ldots, C \end{cases} \tag{1}$$

The normalisation equation is

$$\sum_{k=0}^{C} P_k = 1 \tag{2}$$

These equations may be solved to provide the following probability distribution:

$$P_0 = \left[ \sum_{j=0}^{C-C_i} \frac{(\lambda + \lambda_i)^j}{j!\,\mu^j} + \sum_{j=C-C_i+1}^{C} \frac{(\lambda + \lambda_i)^{C-C_i}\, \lambda_i^{\,j-(C-C_i)}}{j!\,\mu^j} \right]^{-1} \tag{3}$$

$$P_k = \begin{cases} \dfrac{(\lambda + \lambda_i)^k}{k!\,\mu^k} P_0, & k = 1, 2, \ldots, C - C_i \\ \dfrac{(\lambda + \lambda_i)^{C-C_i}\, \lambda_i^{\,k-(C-C_i)}}{k!\,\mu^k} P_0, & k = C - C_i + 1, \ldots, C \end{cases} \tag{4}$$

As noted above, if the number of busy VMs is at least $C - C_i$, a BE leasing request is denied. The BE lease blocking probability, denoted $P_{nb}$, is therefore the sum of the probabilities of all states greater than or equal to $C - C_i$:

$$P_{nb} = \sum_{k=C-C_i}^{C} P_k \tag{5}$$

Since an IL lease is accepted into the system if and only if not all of the VMs are occupied, the IL dropping probability is

$$P_{ILd} = P_C \tag{6}$$
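Equations (3)–(6) are straightforward to evaluate numerically. Below is a minimal Python sketch (the parameter values are illustrative choices, not results from the paper) that builds the unnormalized state probabilities via the recursion in Eq. (1) and then reads off the two metrics:

```python
def reserved_vm_probs(C, Ci, lam, lam_i, mu):
    """BE blocking and IL dropping probabilities for the reserved-VMs scheme."""
    # unnormalized state probabilities pi_k, k = 0..C (Eq. (4) with P0 = 1)
    pi = [1.0]
    for k in range(1, C + 1):
        rate_in = lam + lam_i if k <= C - Ci else lam_i
        pi.append(pi[-1] * rate_in / (k * mu))
    P0 = 1.0 / sum(pi)                  # Eq. (3)
    P = [p * P0 for p in pi]
    P_nb = sum(P[C - Ci:])              # Eq. (5): BE blocking
    P_ILd = P[C]                        # Eq. (6): IL dropping
    return P_nb, P_ILd

# Illustrative parameters only
P_nb, P_ILd = reserved_vm_probs(C=6, Ci=2, lam=5.5, lam_i=5.5, mu=1.0)
print(f"BE blocking = {P_nb:.4f}, IL dropping = {P_ILd:.4f}")
```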

B. Queueing IL Leases

The dropping probability of IL leases under the original reserved-VMs scheme can be improved, albeit at the cost of a higher risk of BE blocking. The plan is the same as the reserved-VMs scheme, except that if no idle VM is found, an IL request is queued; no BE leases are queued. The leases waiting in line are handled FCFS. The service times for the IL leases in the queue follow an exponential distribution with mean $1/\mu_q$. Figure 3 displays the state-transition diagram when the IL leases are queued.



Fig. 3 State transition diagram for the queued IL leases

In a similar manner, from the state equations and the normalisation condition, the probability distribution $P_k$ is given by

$$P_0 = \left[ \sum_{j=0}^{C-C_i} \frac{(\lambda + \lambda_i)^j}{j!\,\mu^j} + \sum_{j=C-C_i+1}^{C} \frac{(\lambda + \lambda_i)^{C-C_i}\, \lambda_i^{\,j-(C-C_i)}}{j!\,\mu^j} + \sum_{j=C+1}^{\infty} \frac{(\lambda + \lambda_i)^{C-C_i}\, \lambda_i^{\,j-(C-C_i)}}{C!\,\mu^C \prod_{h=1}^{j-C} (C\mu + h\mu_q)} \right]^{-1} \tag{7}$$

$$P_k = \begin{cases} \dfrac{(\lambda + \lambda_i)^k}{k!\,\mu^k} P_0, & k = 1, 2, \ldots, C - C_i \\ \dfrac{(\lambda + \lambda_i)^{C-C_i}\, \lambda_i^{\,k-(C-C_i)}}{k!\,\mu^k} P_0, & k = C - C_i + 1, \ldots, C \\ \dfrac{(\lambda + \lambda_i)^{C-C_i}\, \lambda_i^{\,k-(C-C_i)}}{C!\,\mu^C \prod_{h=1}^{k-C} (C\mu + h\mu_q)} P_0, & k = C + 1, C + 2, \ldots \end{cases} \tag{8}$$

Thus, the BE lease blocking probability is the sum of the probabilities of the states that are $\geq C - C_i$:

$$P_{nb} = \sum_{k=C-C_i}^{\infty} P_k \tag{9}$$

The IL lease dropping probability can be written as

$$P_{ILd} = \sum_{k=0}^{\infty} P_{C+k}\, P_{ILd|k} \tag{10}$$

where $P_{ILd|k}$ is the conditional probability that an IL lease is dropped given that it enters the queue in position $k+1$. Mathematical analysis gives

$$P_{ILd|k} = \frac{(k+1)\mu_q}{C\mu + (k+1)\mu_q} \tag{11}$$
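These expressions involve an infinite sum, which can be evaluated numerically by truncating the queue at a large finite length. A minimal sketch under that assumption (the parameter names mirror the ones above; the truncation length is our choice):

```python
def queued_il_probs(C, Ci, lam, lam_i, mu, mu_q, qmax=500):
    """BE blocking and IL dropping probabilities with queued IL leases."""
    pi = [1.0]                           # unnormalized, P0 = 1
    for k in range(1, C + qmax + 1):
        if k <= C - Ci:
            ratio = (lam + lam_i) / (k * mu)
        elif k <= C:
            ratio = lam_i / (k * mu)
        else:                            # queue states: Eq. (8), third case
            ratio = lam_i / (C * mu + (k - C) * mu_q)
        pi.append(pi[-1] * ratio)
    P0 = 1.0 / sum(pi)                   # Eq. (7)
    P = [p * P0 for p in pi]
    P_nb = sum(P[C - Ci:])               # Eq. (9)
    # Eqs. (10)-(11): queued IL dropped with prob (k+1)mu_q / (C mu + (k+1)mu_q)
    P_ILd = sum(P[C + k] * (k + 1) * mu_q / (C * mu + (k + 1) * mu_q)
                for k in range(qmax))
    return P_nb, P_ILd

# Illustrative parameters only
print(queued_il_probs(C=6, Ci=2, lam=5.5, lam_i=5.5, mu=1.0, mu_q=1.0))
```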



Fig. 4 Mean IL leases in the cloud system versus the arrival rate

4 Numerical Results

Using MAPLE 18, we illustrate the numerical outcomes produced by the proposed approach. Figure 4 shows the average number of IL leases in the system versus the arrival rate of the BE leases, keeping the arrival rate of IL leases fixed at $\lambda_i = 5.5$; service times are exponentially distributed with mean $1/\mu$. As the graph shows, the average number of IL leases in the system increases as the number of VMs decreases. Figure 5 shows the behaviour of the IL lease dropping probability as a function of the offered BE load, plotted for various numbers of VMs, C = 5, 6, 10, and 15: as the number of VMs increases, the dropping probability of the IL leases decreases. Table 1 reports the IL dropping probability; it increases with the arrival rate $\lambda$ and decreases as the number of VMs grows.

5 Conclusion In Haizea the BE leases are preempted when an IL request reaches the cloud system. This paper models the VMs of the cloud computing system with two different models, first as a reserved VM scheme for IL leases and second by queueing the IL leases. Utilising MAPLE 18, the analytical model validation was done to check the accuracy of the results produced. The numerous numerical illustrations in the form of tables



Fig. 5 Behavior of IL leases dropping probability versus load

Table 1 Dropping probability PILd for varying λ and c

c    λ      PILd (λi=4)   PILd (λi=4.5)   PILd (λi=5)   PILd (λi=5.5)   PILd (λi=6)
6    5      0.913423      0.93543         0.94543       0.94783         0.94928
6    5.5    0.922828      0.94456         0.94738       0.94827         0.95181
6    6      0.932828      0.95262         0.96383       0.96491         0.96827
6    6.5    0.942562      0.96256         0.97838       0.97902         0.98101
6    7      0.955637      0.97182         0.98109       0.98457         0.99019
6    7.5    0.967637      0.98172         0.98837       0.99012         0.99362
7    5      0.902838      0.91020         0.91676       0.91910         0.92171
7    5.5    0.903823      0.91918         0.91898       0.92818         0.93101
7    6      0.910292      0.92717         0.92101       0.92627         0.93526
7    6.5    0.919206      0.92542         0.92637       0.92928         0.93728
7    7      0.929303      0.93182         0.93234       0.93727         0.94151
7    7.5    0.931020      0.93727         0.94013       0.94426         0.94928
8    5      0.880912      0.89101         0.89202       0.90282         0.91123
8    5.5    0.891012      0.90188         0.90892       0.91829         0.92262
8    6      0.901922      0.90672         0.91028       0.91718         0.92453
8    6.5    0.902833      0.90892         0.91453       0.91901         0.92827
8    7      0.910283      0.92728         0.93122       0.93462         0.93829
8    7.5    0.916272      0.92027         0.93452       0.93728         0.94178


The numerous numerical illustrations in the form of tables and figures, together with the different performance metrics of the cloud system, can help the provider to model the system. A limitation of the model is that only IL and BE leases are considered; AR leases are not. In the future, the system can be modelled by queueing the BE leases, or by queueing both the IL and the BE leases.


Resource Allocation Using MISO-NOMA Scheme with Clustering Technique Kasula Raghu and Puttha Chandrasekhar Reddy

Abstract These days, the introduction of new multimedia-based services and the expansion of wireless networks create demand for high-speed communications. Because it can achieve great spectrum efficiency (SE) and energy efficiency (EE), non-orthogonal multiple access (NOMA) is envisioned as a desirable multiple access approach for 5G. This study addresses a number of resource allocation challenges in NOMA-based communication systems and enhances network performance in terms of EE and fairness. To assess EE performance, we used multiple-input single-output (MISO-NOMA) systems with incomplete channel state information (CSI). At the base station (BS), the zero-forcing method is used to reduce interference between the various clusters. All of these schemes' theoretical analyses have been confirmed using simulation data.

Keywords Non-orthogonal multiple access · Energy efficiency · Clustering · Zero forcing · 5th generation

1 Introduction

Mobile gadgets and wireless technology have had a tremendous impact on people's lives during the last several decades. As a result, wireless devices play a vital role in humans' daily lives, services, and communication methods. Future wireless technologies are predicted to deliver a 1000-fold capacity increase over present wireless networks in order to accommodate this tremendous expansion of data traffic [1]. These needs introduce a slew of additional obstacles, including massive connectivity of network devices as well as consistent user experience quality anywhere and at any time [2, 3]. Furthermore, the radio spectrum has become congested due to heavy use. As a result, there is a chance of a radio spectrum shortage in wireless networks to enable this vast connection with high data rate applications [4]. Hence, the developing wireless systems of the future must be designed to achieve this massive connectivity and these high data rates in order to overcome these concerns [5]. The present wireless communication technologies are incapable of meeting future demands; hence, innovative technologies for 5th Generation (5G) and beyond wireless networks must be developed [6]. Various prospective technologies are addressed in the literature to suit the demanding needs of data rates and huge connection, such as massive multiple-input multiple-output (MIMO) and non-orthogonal multiple access (NOMA) [7–10].

Several NOMA schemes are being evaluated for 5G mobile communication system requirements in order to make efficient use of the available spectrum. Nowadays, NOMA provides high data rates and connectivity [11]. Unlike OFDMA, NOMA may provide a shared radio resource to several users at the same time, as seen in Fig. 1, considerably increasing system throughput through frequency reuse within a cell [10]. Multiple users share the same time and frequency resources in parallel by using power-domain NOMA. As a result, NOMA can provide a variety of benefits, including enhanced SE, increased throughput, and low delay [11, 14]. NOMA is a future technology; with the usage of multiple antennas [12–14] at the BS, and merged with other technologies such as LTE [15, 19], it will enhance the frequency reuse gain, and it is capable of recycling limited spectrum [20]. To achieve high data rates and to satisfy other parameters in 5G, a mix of diverse communication technologies must be deployed [16, 17]. In [23], EE performance was evaluated using a NOMA system for particular statistical CSI at the transmitter. Under the premise of incomplete CSI, combined user scheduling was investigated in [18, 25]. Various methodologies like non-clustering and clustering tactics are parts of NOMA systems. The first method distributes radio resources across all users via NOMA, with each user having their own beamforming vector. On the other hand, the clustering system splits the users in a cell into different clusters, with the NOMA scheme supporting the users in the same cluster [19–24, 27, 28, 30]. To avoid interference between separate clusters at the BS, zero-forcing (ZF) is utilised [25]. Motivated by the above discussion, we focus on resource allocation techniques to effectively manage the influence of imperfect CSI on EE performance using a MISO-NOMA system.

Fig. 1 Comparison of OFDMA and NOMA schemes

K. Raghu (B) · P. C. Reddy
Research Scholar, Department of ECE, JNTUH, Hyderabad, India
e-mail: [email protected]
P. C. Reddy
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_12


A. Organization of the Paper

Section 2 describes the MISO-NOMA system model and the imperfect CSI design. In Sect. 3, the full-ZF and hybrid-ZF beamforming approaches are explored, as well as the EE calculations under channel uncertainties. Simulation analysis is supplied in Sect. 4, and lastly, conclusions are offered in Sect. 5.

2 MISO-NOMA Model

A MISO-NOMA downlink (DL) broadcast system with K users, U_k, k ∈ {1, …, K}, is assumed, as given in Fig. 2 [23]. The NOMA method is used at the BS to broadcast the signals of the various users. Each user has a single antenna, and the BS is equipped with a total of N transmit antennas [24]. The received signal at the k-th user can be written as

$$y_k = h_k^H w_k s_k + \sum_{m \neq k} h_k^H w_m s_m + n_k \tag{1}$$

where s_k and w_k represent the transmitted symbol and the beamforming weighting vector of user U_k, h_k is the channel coefficient vector between the BS and user U_k, and n_k denotes additive noise distributed as CN(0, σ_k²). In MISO-NOMA DL transmission, the BS with N transmit antennas wants to establish a connection with K´ clusters; each cluster is equipped with two users (K´ = 2 K), and each user has a single antenna, as shown in Fig. 3 [26, 27].

Fig. 2 MISO-NOMA downlink system


Fig. 3 MISO-NOMA DL system with imperfect CSI and K´ clusters

Let h_{l,k} be the vector of channel coefficients from the BS to U_{l,k}; it can be modelled as χ d_{l,k}^{−β}, where χ represents the route loss exponent, d_{l,k} is the distance from the BS to U_{l,k}, and β denotes the path loss coefficient.

3 Energy Efficient MISO-NOMA Scheme with Clustering Technique

The difficulty imposed by a growing number of users is one of the implementation problems of using SIC at the receiver. Grouping users into various clusters, each with a modest number of users, is a realistic way to decrease this complexity. As part of the NOMA protocol, the BS transmits the superposition coding of the users' signals:

$$x = \sum_{k=1}^{K} w_k \left( \sqrt{p_{1,k}}\, s_{1,k} + \sqrt{p_{2,k}}\, s_{2,k} \right) \tag{2}$$

where s_{1,k} and s_{2,k} are the symbols of U_{1,k} and U_{2,k}. The received signals at U_{1,k} and U_{2,k} are

$$y_{1,k} = h_{1,k}^H x + n_{1,k} \tag{3}$$

$$y_{2,k} = h_{2,k}^H x + n_{2,k} \tag{4}$$


where n_{l,k} ∼ CN(0, σ_{l,k}²) for l = 1, 2. U_{2,k} applies SIC at the receiver to decode and remove the U_{1,k} data from the received signal y_{2,k} before decoding its own data.

A. Hybrid-Zero Forcing (ZF) Scheme

The user's channel is used to build the beamforming vector, which should match the conditions given in [28]:

$$h_{l,m}^H w_k = 0, \quad \forall m \neq k \tag{5}$$

Because of inter-cluster interference, h_{l,m}^H w_k ≠ 0 in general for m ≠ k. The strong users' channels are stacked as

$$H = [h_{2,1}, \ldots, h_{2,K}] \tag{6}$$

The following expression may be used to create the beamforming vectors:

$$W = [w_1, w_2, \ldots, w_K] = H^{**} = H \left( H^H H \right)^{-1} \tag{7}$$

In Eq. (7), H^{**} is the pseudo-inverse of the matrix H, and w_k is the k-th cluster's beamforming vector. The received signal at U_{2,k} is

$$y_{2,k} = h_{2,k}^H w_k \left( \sqrt{p_{1,k}}\, s_{1,k} + \sqrt{p_{2,k}}\, s_{2,k} \right) + \sum_{j \neq k} \Delta h_{2,k}^H w_j \left( \sqrt{p_{1,j}}\, s_{1,j} + \sqrt{p_{2,j}}\, s_{2,j} \right) + n_{2,k} \tag{8}$$

The signal of the weak user is decoded at the strong user with SINR

$$SINR_{2,k}^{1} = \frac{p_{1,k} \left| h_{2,k}^H w_k \right|^2}{p_{2,k} \left| h_{2,k}^H w_k \right|^2 + \sum_{j \neq k} \left| \Delta h_{2,k}^H w_j \right|^2 \left( p_{1,j} + p_{2,j} \right) + \sigma_{2,k}^2} \tag{9}$$

The SINR with which the strong user decodes its own signal after eliminating the weak user's signal is

$$SINR_{2,k}^{2} = \frac{p_{2,k} \left| h_{2,k}^H w_k \right|^2}{p_{1,k} \left| h_{2,k}^H w_k \right|^2 + \sum_{j \neq k} \left| \Delta h_{2,k}^H w_j \right|^2 \left( p_{1,j} + p_{2,j} \right) + \sigma_{2,k}^2} \tag{10}$$

On the other hand, the SINR with which U_{1,k} decodes its own signal is formulated as

$$SINR_{1,k}^{1} = \frac{p_{1,k} \left| h_{1,k}^H w_k \right|^2}{p_{2,k} \left| h_{1,k}^H w_k \right|^2 + \sum_{j \neq k} \left| \Delta h_{1,k}^H w_j \right|^2 \left( p_{1,j} + p_{2,j} \right) + \sigma_{1,k}^2} \tag{11}$$
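As an illustration of Eqs. (2) and (5)–(7), the NumPy sketch below draws random channels for the strong users, builds the pseudo-inverse-based ZF beamformers of Eq. (7), forms the superposed NOMA signal of Eq. (2), and checks the nulling behaviour on the strong users' channels. The dimensions, power split, symbol alphabet and Gaussian channel draw are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 2  # BS transmit antennas and clusters (assumed values)

# Strong users' channels stacked column-wise: H = [h_{2,1}, ..., h_{2,K}] (Eq. 6)
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

# Eq. (7): W = H (H^H H)^{-1}, the pseudo-inverse based beamformers
W = H @ np.linalg.inv(H.conj().T @ H)

# Eq. (2): superposed transmit signal; the weak user gets the larger power
p1, p2 = 0.8, 0.2                      # NOMA power split (assumed)
s1 = np.sign(rng.standard_normal(K))   # BPSK symbols of weak users (illustrative)
s2 = np.sign(rng.standard_normal(K))   # BPSK symbols of strong users
x = sum(W[:, k] * (np.sqrt(p1) * s1[k] + np.sqrt(p2) * s2[k]) for k in range(K))

# Nulling check: each strong user sees only its own cluster's beam
print(np.round(H.conj().T @ W, 10))    # ~ identity matrix
```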


B. Full-Zero Forcing (ZF) Scheme

This scheme is used to totally eliminate inter-cluster interference, and must match the following conditions in order to accomplish this [28]:

$$h_{i,m}^H w_k = 0, \quad \forall m \neq k, \; i = 1, 2 \tag{12}$$

The channel matrices of the remaining clusters are combined in order to create the beamforming vector that satisfies the requirements in the preceding statement:

$$\bar{H}_k = [\hat{H}_1, \ldots, \hat{H}_{k-1}, \hat{H}_{k+1}, \ldots, \hat{H}_K] \tag{13}$$

where $\hat{H}_k = [h_{1,k}, h_{2,k}]$. By exploiting this condition, the full-ZF beamformer scheme is applied, and finally, the received signal at U_{l,k} is

$$y_{l,k} = h_{l,k}^H w_k \left( \sqrt{p_{1,k}}\, s_{1,k} + \sqrt{p_{2,k}}\, s_{2,k} \right) + \sum_{j \neq k} \Delta h_{l,k}^H w_j \left( \sqrt{p_{1,j}}\, s_{1,j} + \sqrt{p_{2,j}}\, s_{2,j} \right) + n_{l,k}, \quad l = 1, 2 \tag{14}$$

Due to imperfect CSI, the second term in Eq. (14) has to be taken into account. The SINR at the weak user is

$$SINR_{1,k}^{1} = \frac{p_{1,k} \left| h_{1,k}^H w_k \right|^2}{p_{2,k} \left| h_{1,k}^H w_k \right|^2 + \sum_{j \neq k} \left| \Delta h_{l,k}^H w_j \right|^2 \left( p_{1,j} + p_{2,j} \right) + \sigma_{l,k}^2} \tag{15}$$

Similarly, the SINR required at U_{2,k} to decode the weaker user's signal is calculated by

$$SINR_{2,k}^{1} = \frac{p_{1,k} \left| h_{2,k}^H w_k \right|^2}{p_{2,k} \left| h_{2,k}^H w_k \right|^2 + \sum_{j \neq k} \left| \Delta h_{2,k}^H w_j \right|^2 \left( p_{1,j} + p_{2,j} \right) + \sigma_{2,k}^2} \tag{16}$$

and the SINR with which U_{2,k} decodes its own signal after performing SIC is

$$SINR_{2,k}^{2} = \frac{p_{2,k} \left| h_{2,k}^H w_k \right|^2}{p_{1,k} \left| h_{2,k}^H w_k \right|^2 + \sum_{j \neq k} \left| \Delta h_{2,k}^H w_j \right|^2 \left( p_{1,j} + p_{2,j} \right) + \sigma_{2,k}^2} \tag{17}$$
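A possible construction for the full-ZF condition of Eqs. (12)–(13) is to point each w_k into the null space of the stacked channels of all other clusters. The SVD-based sketch below is one such hedged rendering under assumed dimensions; note that a null space exists here only when N exceeds 2(K´ − 1), consistent with full-ZF needing more transmit antennas.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 2  # N must exceed 2*(K - 1) so a null space exists (assumed values)

# Both users' channels per cluster k: H_k = [h_{1,k}, h_{2,k}], an N x 2 matrix
H_list = [(rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2)))
          / np.sqrt(2) for _ in range(K)]

W = np.zeros((N, K), dtype=complex)
for k in range(K):
    # Eq. (13): stack the channel matrices of every cluster except k
    H_bar = np.hstack([H_list[m] for m in range(K) if m != k])  # N x 2(K-1)
    # Any unit vector in the null space of H_bar^H satisfies Eq. (12)
    _, s, Vh = np.linalg.svd(H_bar.conj().T)
    rank = int(np.sum(s > 1e-10))
    W[:, k] = Vh[rank:].conj().T[:, 0]  # pick one null-space direction

# Verify Eq. (12): both users of every other cluster are nulled
for k in range(K):
    for m in range(K):
        if m != k:
            assert np.allclose(H_list[m].conj().T @ W[:, k], 0, atol=1e-9)
print("full-ZF inter-cluster interference nulled")
```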

C. Energy Efficiency Calculation

The achievable rates for U_{1,k} and U_{2,k} are defined as follows for both ZF schemes [29]:

$$R_{1,k} = \log_2\!\left(1 + \min\!\left\{ SINR_{1,k}^{1},\; SINR_{2,k}^{1} \right\}\right), \quad \forall k \tag{18}$$

$$R_{2,k} = \log_2\!\left(1 + SINR_{2,k}^{2}\right), \quad \forall k \tag{19}$$

Finally, the mathematical expression to calculate the energy efficiency value is given as

$$EE = \frac{\sum_{k=1}^{\acute{K}} \left( R_{1,k} + R_{2,k} \right)}{\sum_{k=1}^{\acute{K}} \left( p_{1,k} + p_{2,k} \right) + P_c} \tag{20}$$
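To make Eqs. (18)–(20) concrete, the sketch below computes the achievable rates and the resulting EE from a set of given per-cluster SINRs. All numeric values (SINRs, powers, and the static circuit power Pc) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative per-cluster SINRs and powers (assumed values, linear scale)
sinr_11 = np.array([3.0, 2.1])   # SINR^1_{1,k}: weak user decoding its own signal
sinr_21 = np.array([5.5, 4.0])   # SINR^1_{2,k}: strong user decoding weak signal
sinr_22 = np.array([8.0, 6.3])   # SINR^2_{2,k}: strong user after SIC
p1, p2 = np.array([0.6, 0.5]), np.array([0.2, 0.15])  # transmit powers (W)
Pc = 0.5                          # static circuit power (W), assumed

# Eqs. (18)-(19): the weak user's rate is limited by the worse of the two
# places its signal must be decoded (its own receiver and the strong user's)
R1 = np.log2(1 + np.minimum(sinr_11, sinr_21))
R2 = np.log2(1 + sinr_22)

# Eq. (20): system energy efficiency (bits/s/Hz per watt)
EE = (R1 + R2).sum() / (p1.sum() + p2.sum() + Pc)
print(f"EE = {EE:.3f} bit/s/Hz/W")
```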

4 Simulation Results

For downlink transmission, we consider a BS equipped with N antennas. According to the clustering technique, two users are grouped into one cluster; there are K´ such clusters (K´ = 2 K), and each user has a single antenna. When there are a few clusters, the hybrid-ZF scheme surpasses the EE performance of the full-ZF scheme, as shown in Fig. 4. The proposed systems' SE and EE trade-offs are assessed in Fig. 5, which compares the EE–SE trade-off performance of the full-ZF and hybrid-ZF systems.

Fig. 4 EE performance for various transmit powers for different ZF schemes with an error rate ∈ = 0.01 (curves for full-ZF and hybrid-ZF with K´ = 3, N = 6 and K´ = 2, N = 4)

Fig. 5 Trade-off between EE and SE for various ZF schemes with error rate ∈ = 0.01 and K´ = 2 clusters (energy efficiency versus spectral efficiency; curves for full-ZF and hybrid-ZF with N = 4)

It is observed from the simulation that the SE and EE values rise up to a certain peak value, known as the optimal trade-off point, as illustrated in Fig. 5. Beyond this point, the EE performance reduces as the SE value rises: the EE value must be sacrificed in order to increase the SE value, at the cost of increased BS transmit power. Figure 6 depicts that, as the variance of the channel uncertainty in the CSI rises, the EE value becomes lower for both the OMA and NOMA schemes. The NOMA scheme, as shown in the simulation, outperforms the OMA scheme for ∈ = 0.01 and N = 2. As the CSI's channel uncertainty grows, the EE for both systems falls, necessitating a rise in transmit power for all users. The achievable rate performance is demonstrated in Fig. 7 for various transmit powers and numbers of antennas at the BS. As predicted, increasing the transmit power threshold or limiting the number of users increases the feasible fairness rate, as proved by the simulation given in Fig. 7. The rate improvement is compromised as the power threshold rises. As shown in the simulation, the achievable rate is maximum with the NOMA scheme compared to the other multiple access schemes. Figure 8 shows the EE performance for various transmit powers and numbers of users (K) with a fixed error value ∈ = 0.01. It has been demonstrated that as the number of users grows, the EE grows as well, although at a slower rate, as predicted by the rate formulation used to calculate the EE.

Fig. 6 Energy efficiency performance for the NOMA and OMA schemes with error rate ∈ = 0.01 and N = 2 antennas (energy efficiency versus variance of channel uncertainties)

Fig. 7 Achievable rate with various transmit powers for the NOMA, ZF and OMA schemes (curves for NOMA with N = 5 and N = 3, ZF with N = 5, and OMA with N = 5)

5 Conclusion

In order to meet 5G objectives, this work investigates an EE architecture for MISO-NOMA systems with imprecise CSI at the BS. To reduce the complexity of SIC at the receivers, we employed a clustering NOMA strategy in which several users form a group. ZF methods are used to reduce inter-cluster interference.

Fig. 8 EE performance for various transmit powers with N = 3 and ∈ = 0.01 (curves for K = 6, K = 4 and K = 2)

Based on the number of users and the available transmit antennas at the BS, two distinct ZF techniques, hybrid-ZF and full-ZF, are offered. Even though the full-ZF system can eliminate interference between clusters, the modelling findings show that it needs more transmit antennas than the hybrid-ZF method to serve the same number of clients. The full-ZF technique performs better as the number of clusters increases, because inter-cluster interference grows with the number of clusters. This work uses the MISO-NOMA scheme with a clustering technique; it can be extended further using the MIMO-NOMA technique and non-clustering techniques.

References

1. Kostakos V, Ojala T, Juntunen T (2013) Traffic in the smart city: exploring citywide sensing for traffic control center augmentation. IEEE Internet Comput 17(6):22–29
2. Stankovic JA (2014) Research directions for the internet of things. IEEE Internet Things J 1(1):3–9
3. Ahlgren B, Hidell M, Ngai ECH (2016) Internet of things for smart cities: interoperability and open data. IEEE Internet Comput 20(6):52–56
4. Dahlman E, Parkvall S, Skold J (2011) 4G: LTE/LTE-advanced for mobile broadband. Academic Press
5. Wunder G, Jung P, Kasparick M, Wild T, Schaich F, Chen Y et al (2014) 5GNOW: non-orthogonal, asynchronous waveforms for future mobile applications. IEEE Commun Mag 52(2):97–105
6. Osseiran A, Boccardi F, Braun V, Kusume K, Marsch P et al (2014) Scenarios for 5G mobile and wireless communications: the vision of the METIS project. IEEE Commun Mag 52(5):26–35
7. Larsson EG, Edfors O, Tufvesson F, Marzetta TL (2014) Massive MIMO for next generation wireless systems. IEEE Commun Mag 52(2):186–195
8. Roh W, Seol JY, Park J, Lee B, Lee J, Kim Y, Cho J, Cheun K et al (2014) Millimeter-wave beamforming as an enabling technology for 5G cellular communications: theoretical feasibility and prototype results. IEEE Commun Mag 52(2):106–113
9. Alavi F, Yamchi NM, Javan MR, Cumanan K (2017) Limited feedback scheme for device-to-device communications in 5G cellular networks with reliability and cellular secrecy outage constraints. IEEE Trans Veh Technol 66(9):8072–8085
10. Dai L, Wang B, Yuan Y, Han S, Chih-Lin I, Wang Z (2015) Non-orthogonal multiple access for 5G: solutions, challenges, opportunities, and future research trends. IEEE Commun Mag 53(9):74–81
11. Endo Y, Kishiyama Y, Higuchi K (2012) Uplink non-orthogonal access with MMSE-SIC in the presence of inter-cell interference. In: International symposium on wireless communication systems (ISWCS), pp 261–265
12. Zhang R, Hanzo L (2011) A unified treatment of superposition coding aided communications: theory and practice. IEEE Commun Surv Tutor 13(3):503–520
13. Wunder G, Kasparick M, Brink ST, Schaich F, Wild T, Chen Y et al (2013) System-level interfaces and performance evaluation methodology for 5G physical layer based on non-orthogonal waveforms. In: Asilomar conference on signals, systems and computers, November 2013, pp 1659–1663
14. Islam SMR, Avazov N, Dobre OA, Kwak KS (2017) Power-domain non-orthogonal multiple access (NOMA) in 5G systems: potentials and challenges. IEEE Commun Surv Tutor 19(2):721–742
15. Lim C, Yoo T, Clerckx B, Lee B, Shim B (2013) Recent trend of multiuser MIMO in LTE-advanced. IEEE Commun Mag 51(3):127–135
16. Zhu H, Karachontzitis S, Toumpakaris D (2010) Low-complexity resource allocation and its application to distributed antenna systems [coordinated and distributed MIMO]. IEEE Wirel Commun 17(3):44–50
17. Zhu H (2012) On frequency reuse in cooperative distributed antenna systems. IEEE Commun Mag 50(4):85–89
18. Wang J, Zhu H, Gomes NJ (2012) Distributed antenna systems for mobile communications in high speed trains. IEEE J Sel Areas Commun 30(4):675–683
19. Elsawy H, Hossain E, Kim DI (2013) HetNets with cognitive small cells: user offloading and distributed channel access techniques. IEEE Commun Mag 51(6):28–36
20. Wang H, Zhou X, Reed MC (2014) Coverage and throughput analysis with a non-uniform small cell deployment. IEEE Trans Wirel Commun 13(4):2047–2059
21. Chin WH, Fan Z, Haines R (2014) Emerging technologies and research challenges for 5G wireless networks. IEEE Wirel Commun 21(2):106–112
22. Sun Q, Han S, Chin-Lin I, Pan Z (2015) Energy efficiency optimization for fading MIMO non-orthogonal multiple access systems. In: Proceedings of IEEE international conference on communications (ICC), pp 2668–2673
23. Zhang Y, Wang HM, Zheng TX, Yang Q (2017) Energy-efficient transmission design in non-orthogonal multiple access. IEEE Trans Veh Technol 66(3):2852–2857
24. Fang F, Zhang H, Cheng J, Leung VCM (2016) Energy-efficient resource allocation for downlink non-orthogonal multiple access network. IEEE Trans Commun 64(9):3722–3732
25. Fang F, Zhang H, Cheng J, Leung VCM (2017) Energy-efficient resource scheduling for NOMA systems with imperfect channel state information. In: Proceedings of IEEE international conference on communications (ICC), May 2017, pp 1–5
26. Al-Obiedollah H, Cumanan K, Thiyagalingam J, Burr AG, Ding Z, Dobre OA (2019) Energy efficiency fairness beamforming design for MISO NOMA systems. In: Proceedings of IEEE wireless communications and networking conference (WCNC)
27. Al-Obiedollah H, Cumanan K, Thiyagalingam J, Burr AG, Ding Z, Dobre OA (2019) Energy efficient beamforming design for MISO non-orthogonal multiple access systems. IEEE Trans Commun 1–1
28. He X, Wu YC (2015) Tight probabilistic SINR constrained beamforming under channel uncertainties. IEEE Trans Signal Process 63(13):3490–3505
29. Zhang Q, Li Q, Qin J (2016) Robust beamforming for non-orthogonal multiple-access systems in MISO channels. IEEE Trans Veh Technol 65(12):10231–10236
30. Alavi F, Cumanan K, Ding Z, Burr AG (2017) Robust beamforming techniques for non-orthogonal multiple access systems with bounded channel uncertainties. IEEE Commun Lett 21(9):2033–2036

Resource Request Handling Mechanisms for Effective VM Placements in Cloud Environment T. Thiruvenkadam, A. Muthusamy, and M. Vijayakumar

Abstract In a cloud computing environment, effective resource request handling and dynamic resource management can optimize the VM placement process. Efficient resource request handling can resolve the problems of increased waiting time and of the load imbalance that arises while running the VM resources. Load balancing techniques only help to reschedule the VMs on the appropriate PMs based on dynamic changes in the load. To make use of the resources in an effective way, an optimized resource scheduling and load balancing model is necessary, along with a proper model to manage the resource requests made by the cloud users. A rule-based method utilizing the M/M/C queue model for effective management of resource request handling is presented in this work. The RB-M/M/C queue model can ensure the availability of cloud datacenter services by managing user requests. This further enriches the cloud service by increasing the physical server utilization rate and minimizing the waiting time and the number of servers used to fulfill the requirements of the users. The resource request queue management algorithm is designed by utilizing suitable results from queuing theory. Simulation results for the performance analysis show that the proposed model decreases the waiting time and at the same time increases the utilization rate.

Keywords VM scheduling · Queuing model · Resource request · Resource allocation · Load balancing

T. Thiruvenkadam (B)
Associate Professor, School of Computer Science and IT, Jain (Deemed-to-be University), Bengaluru, Karnataka, India
e-mail: [email protected]; [email protected]
A. Muthusamy
Assistant Professor and Head, Department of Computer Science, Faculty of Science & Humanities, SRM Institute of Science and Technology, Tiruchirappalli, Tamil Nadu, India
M. Vijayakumar
Professor, School of Computer Science, VET Institute of Arts and Science College, Tindal, Erode, Tamil Nadu, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_13


1 Introduction

The most crucial model in a cloud datacenter environment is infrastructure-as-a-service (IaaS). An IaaS platform is built on virtualization technology. The IaaS model offers all computing resources, including those needed to load and run applications, pack data, or enable a business to have its entire computing environment [1]. With the aid of virtualization technologies, cloud datacenters can be used to host applications on shared infrastructure. Utilizing virtual machines (VMs) can reduce datacenter expenditures. Cloud service providers can create VMs in huge quantities to meet a variety of workload and resource configuration-related needs [2]. Every virtual machine is configured with a specific amount of computing resources, depending on the workload needs, in order to match user expectations. In order to reduce the number of physical servers, which further reduces power consumption and investment costs for purchasing servers, the cloud service provider packs the greatest number of virtual machines into the fewest number of actual hosts. The main strategy for achieving the economic size of a cloud datacenter is successful virtual machine consolidation [3]. Utilizing virtualization technology enables the consolidation of physical servers in cloud datacenters, which reduces the amount of physical resources utilized as well as the energy consumption. Researchers have offered a variety of approaches for the consolidation of servers in data centers, but none have taken into account all the factors that ensure Quality of Service and lower costs for cloud datacenter providers. In order to improve service to cloud users and lower operational costs for the cloud service provider, a novel resource request management method and VM placement are needed. The right host must be able to accommodate the VM in order to guarantee active resource use, reduce datacenter costs, and reduce energy consumption.

2 Related Work

Numerous users request shared resources from the cloud service provider in the current cloud environment, where the issues of VM placement and resource sharing hold a crucial position. Solutions to these issues typically point to techniques for allocating VMs to nodes in a way that efficiently balances the load on all PMs while also achieving a particular level of service [4]. Research projects aimed at addressing the issue of VM placement and load balancing have been developed with distinct objectives that took into account different processes [5]. For instance, the objectives could be to cut down on the number of PMs required, decrease resource waste, and increase power efficiency. To obtain a better cloud service environment, it is required to exploit all the resources in an effective mode using the VM placement and resource sharing policies [6]. A queuing algorithm is a systematic model used in resource allocation. The users of the demanded services must wait in the line to get their


requests fulfilled. The foundation of the queuing process encompasses the users arriving at a queuing structure to acquire certain resources [7]. The physical servers available in the data center start providing the requested resources waiting in the queue as soon as they complete the currently running task. The resources or services are provided to the users whose requests are waiting in the queue in an assured manner, to make sure the services are offered in a balanced way [8]. The cloud environment is expected to provide services to its users in a dynamic way; given the different qualities of user resource requirements and their time dependency, the cloud environment emphasizes the establishment of a dynamic nature of service. In [9], an attempt is made to regulate optimized mechanisms to allocate the required requests of the users in a speedy and fruitful manner. To obtain the most optimal system, two different frameworks, the single-server framework M/M/1 and the multi-server framework M/M/C, are assessed by considering the waiting time of the user [10]. A model is developed for resource allocation that assesses the performance of the cloud in a heterogeneous environment [11]. Another work of the same kind was done by [12], who discussed the existing hindrances in dynamic resource allocation. A model developed by [13] for resource allocation and load rebalancing focuses more on load imbalance that occurs over a period of time. On the other hand, [14] provided a detailed working description and compared the performance of several existing prominent methods for scheduling and load balancing. The methods compared include the round robin algorithm, greedy algorithm, Backward Speculative Placement, Dynamic Priority Based Scheduling Algorithm, genetic algorithm and power save algorithm.

3 Resource Request Handling and Scheduling

In general, scheduling can be defined as a method where a VM request is assigned to PMs that can complete the requirement. The requests are initially placed in a queue and are then mapped to appropriate PMs that can accommodate and complete the request. A request from the queue is serviced either when an application completes (or a PM becomes idle) or when a PM has extra resources with room for satisfying the request. This scenario is presented in Fig. 1, where circles indicate the requests to be placed and triangles indicate the completed requests.

Many scheduling algorithms are job-oriented, where the mapping is carried out in accordance with the needs of the user. In this configuration, the resources are assigned in bundles, and the scheduling algorithms are less concerned with the resource availability in the cloud system and more with the task characteristics of the user applications. In this case, users are able to use a specific set of resources, such as CPU cycles, main memory, storage, and network bandwidth rate, to their fullest advantage. Due to the significant amount of resource sharing, which causes resource waste and extended waiting times, traditional models of resource allocation become challenging.


Fig. 1 Process of a scheduling algorithm

This research paper suggests VM scheduling and load balancing algorithms that are both user- and resource-oriented in order to address these problems. The suggested method is user-oriented in that it accepts user-provided resource requirements, and resource-oriented in that it takes into account the status of the cloud resource repositories to enable dynamic resource scheduling that considers network traffic and available resources. This strategy can benefit cloud systems in a number of ways, such as by reducing fluctuations in resource needs, increasing overall efficiency, and strengthening the link between scheduling and load balancing with respect to each unit of time. In order to optimize its operation, the VM placement algorithm performs scheduling in two steps, as listed below.

1. VM Queuing
2. Scheduling and Load Balancing

The process of the scheduling algorithm in Fig. 1, after the incorporation of the above two steps, is shown in Fig. 2.

3.1 Request Handling Components

The process involved in resource request handling starts with requests that are collected at every pre-determined time interval. The time interval is set to 1 h in this research, but can be set to a value as low as 10 min or as high as 1 month, depending upon the cloud provider. Queues are generated per data center, thus generating multiple queues, to which requests are mapped using the shortest-queue concept. Each queue has a separate algorithm designated to perform VM placement and load balancing. This designation was fixed based on the experimental results obtained while analyzing them to find the best fitting algorithm for each queue.


Fig. 2 Process of scheduling and load balancing algorithms

The components involved are: a queue modeling algorithm, a join-shortest-queue algorithm, a traffic analysis procedure, three VM placement and load balancing algorithms for handling rush-hour requests (one for each queue), and one algorithm for handling requests during non-rush hours.

4 Queuing Model

In a cloud environment, the requests made during a particular time interval are collected by the VMM and stored in a VM request queue. The placement algorithm should consider each request from this queue and perform mapping in a manner that minimizes the use of resources, so that more requests can be served with the available resource capacity while minimizing SLA violations. The increasing usage of cloud architecture is also increasing the number of requests to be handled. In this situation, algorithms designed to support queues are more advanced in allocating VMs to PMs. These models provide dual advantages to a cloud system, namely, minimizing the waiting period of the incoming requests and minimizing the queue length that has to be serviced. This improves the overall cloud performance by maximizing server utilization and response time.


Several researchers use grouping of requests to reduce the rejection rate, maximize profit and improve resource utilization. Usage of execution time (or completion time) as a QoS parameter during request queue generation has been employed by [15]. Priority-based queues are also used to improve placement and resource utilization [16]. In [17], an SLA-based scheduling algorithm for placing applications on available resources was presented, with queues generated based on a QoS parameter, namely, maximum profitable requests. This algorithm focuses on improving cloud provider and broker profitability, with less attention to user satisfaction. On the other hand, [18] uses the shortest average execution time to create a queue that can improve the scheduling process. However, this algorithm faces increased starvation of requests with a longer execution time. All of these works consider only a single resource during queue generation. The cloud system works in a heterogeneous resource requirement environment, where queuing models based on a single resource might not be sufficient. As an alternative, multi-queue queuing models can be used. In this model, the scheduler algorithm considers the resource requests for multiple services (like CPU and bandwidth), and the queuing model sorts these requests into different queues based on request characteristics. The requests inside each queue are given equal importance. The usage of a multi-queuing model increases both customer satisfaction and provider profitability. When combined with dynamic scheduling, it can also reduce starvation [19].

In general, a cloud system uses the queue manager to generate and manage the queues. Three types of queues, as listed below, are used during scheduling and load balancing.

1. Small queue—comprises the first 40% of the requests
2. Medium queue—comprises the following 40% of the requests
3. Long queue—comprises the remaining 20% of the requests

The requests from these queues are selected dynamically in a fashion that eliminates the issue of starvation. Many of the published works using multi-queuing-based scheduling use characteristics related to the application, like a job's burst time, and ignore user preferences and cloud resource availability. Examples include the works of [20]. Ref. [21] has proposed an algorithm that considers the available resources and software-defined networks for generating queues, which are then used for resource reservation. These systems consider only the reservation requests, and they are not dynamic. Inspired by these works, the proposed queuing model is also designed as a multi-queuing model, which is designed to

(i) use the multi-dimension resource representation,
(ii) consider user preferences and the current resource availability status while generating the queues, and
(iii) be traffic aware and current-load aware.

The proposed queuing model is designed as a multi-queue model where there are multiple datacenters, each having multiple PMs which can serve a set of VMs. The following assumptions are made during the design of the proposed queuing model.


1. All datacenters are assumed to have identical service capabilities.
2. In all the datacenters, all the arriving requests wait in a single queue; they are then classified and moved to the first available PM to be serviced.
3. The incoming requests are assumed to follow a normal distribution, whereas the service time of every server is expected to follow an exponential probability distribution.
4. All the servers' service rates are considered the same.
5. The initial response to requests is made only after collecting requests for a stipulated amount of time, not by allocating the resources as and when they arrive.

The main goal of the proposed queuing model, as mentioned earlier, is to group the resource requests into High, Medium and Low Request Queues using an algorithm based on the existing load and resource availability at a specific point of time, T. The procedure of the resource handling component is shown in Fig. 3. This procedure uses two types of queues, namely, primary and secondary queues. The primary queue collects the requests in the form of a resource cubic vector. The main job of the primary queue is to hold the requests while the requests in the secondary queues are processed. The size of the primary queue is static, while that of a secondary queue is dynamic. Care has been taken to make sure that the size of the dynamic queues does not exceed the total capacity of the PM; this criterion has necessitated the primary queue to hold the incoming requests. Requests from the primary queue are moved to secondary queues in two situations.

Situation 1: Over a time period 't', requests in the primary queue are analyzed and classified into one of the secondary queues.
Situation 2: When the secondary queues are empty.

This helps to deal with the enormous amount of incoming requests that are dynamic in nature. When either of the above two situations arises, the analysis algorithm

Fig. 3 Steps performed by resource handling component


begins by first estimating the current load and current resource utilization status of the system. These estimations are then used to decide whether a request can be accommodated by the cloud system. In case a request cannot be accommodated, three actions can be performed.

Action 1: Reject the request.
Action 2: Create new VMs.
Action 3: Perform load rebalancing to free up bottle-necked resources.

This situation is identified as follows. Let RL_t denote the capacity (in terms of the resource cubic vector). The current load and resource availability of the system are then estimated using Eqs. (1) and (2) at the time interval t:

$$CL_t(Re) = \sum RL_t(Re) \tag{1}$$

$$RA(Re) = TC_t(Re) - CL_t(Re) \tag{2}$$

Based on these two factors, the resource request admission criterion is evaluated using Eq. (3):

$$\begin{cases} \text{Reject, Create or Rebalance}, & RD(Re)_{t+1} > RA(Re)_t \\ \text{Move to a secondary queue}, & \text{otherwise} \end{cases} \tag{3}$$

While performing admission control, all three selected resources have to be taken into consideration, that is, RD(C) > RA(C), RD(R) > RA(R) and RD(B) > RA(B). The second important part of the queue generation algorithm is the dynamic estimation of thresholds. Two thresholds, namely T1 and T2, are used as lower and upper bounds and are created based on the current load of the system. The upper bound threshold (T2) is estimated as 70% of the current load, while the lower bound threshold (T1) is set as 20% of the current load of the system. The estimates of CL, RA, the admission control decision, T1 and T2 are updated after the satisfaction of each request (that is, after each VM placement).
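To make the admission criterion concrete, the following Python sketch is a minimal, hedged rendering of Eqs. (1)–(3) together with the 20%/70% thresholds described above. The three-dimensional resource cubic vector (CPU, RAM, bandwidth) matches the three resources named in the text, but all numeric values are illustrative assumptions, and names such as `Cubic` and `admit` are hypothetical, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Cubic:
    """Resource cubic vector over the three selected resources."""
    cpu: float
    ram: float
    bw: float

    def __add__(self, o):
        return Cubic(self.cpu + o.cpu, self.ram + o.ram, self.bw + o.bw)

    def __sub__(self, o):
        return Cubic(self.cpu - o.cpu, self.ram - o.ram, self.bw - o.bw)

    def exceeds(self, o):
        """True if any dimension of the demand exceeds the availability."""
        return self.cpu > o.cpu or self.ram > o.ram or self.bw > o.bw

def admit(demand, total_capacity, placed):
    """Admission criterion of Eqs. (1)-(3)."""
    cl = Cubic(0, 0, 0)
    for r in placed:            # Eq. (1): CL = sum of currently placed requests
        cl = cl + r
    ra = total_capacity - cl    # Eq. (2): RA = TC - CL
    if demand.exceeds(ra):      # Eq. (3): RD > RA in some dimension
        return "reject, create new VMs, or rebalance"
    return "move to a secondary queue"

def thresholds(current_load):
    """Dynamic lower/upper bounds: T1 = 20%, T2 = 70% of the current load."""
    return 0.2 * current_load, 0.7 * current_load

placed = [Cubic(2, 4, 1), Cubic(1, 2, 1)]
print(admit(Cubic(4, 8, 2), Cubic(8, 16, 4), placed))
```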

4.1 Join-Shortest Queue (JSQ) Algorithm

The proposed algorithm works with more than one datacenter, each having its own set of HRQ, MRQ and LRQ. When requests arrive, the queuing model must attach them to one of these queues. The time taken to join a new request to a queue should be independent of the arrival process, without user intervention. For this purpose, the JSQ algorithm is used. Let 'm' denote the type of queue holding the requests from the user, that is, m denotes HRQ, MRQ or LRQ. Let Q denote a vector holding the lengths of these queues at the various datacenters i; then Q_m^i denotes the length of the type-'m' queue at data center 'i'. Using these notations, the JSQ algorithm can be described as below.


All the requests of type 'm' that arrive during time 't' are routed to the shortest queue of the same type, that is, to the datacenter

$$i_m^*(t) = \underset{i=1,2,\ldots,L}{\arg\min}\; Q_m^i(t)$$

where L is the number of datacenters in the cloud system. The JSQ algorithm aims to assign new requests to queues in order to reduce the average queue length of requests at each data center. This algorithm dispatches a request towards a queue whose length is small: under JSQ, an incoming request is directed to the queue with the smallest number of incomplete requests. Hence, JSQ attempts to balance the load throughout the queues, shrinking the probability of one queue having multiple jobs while another queue stands idle. A greedy policy is followed for the new arrival, because the new arrival may prefer an allocation with as few pending requests as possible. The JSQ algorithm minimizes each request's individual expected delay and has a non-decreasing failure rate. The algorithm has the advantage of low computational overhead and therefore does not add extra cost during scheduling and load balancing.
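The routing rule above admits a direct rendering; the sketch below is a minimal, illustrative implementation (the queue lengths are assumed values, and ties are broken by the lowest datacenter index).

```python
def jsq_route(queue_lengths):
    """Join-Shortest-Queue: route a type-m request to the datacenter i whose
    type-m queue is currently shortest, i.e. i*_m(t) = argmin_i Q_m^i(t)."""
    return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])

# Type-m queue lengths at L = 5 datacenters (illustrative values)
Q_m = [4, 2, 7, 2, 5]
print(jsq_route(Q_m))  # -> 1
```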

5 Experimental Environment

We evaluated our proposed rule-based algorithm against other schemes and present its outcomes. We assessed the efficiency of our algorithm using parameters like response time, average waiting time, throughput, number of physical servers and mean queue length.

Simulation setup. The proposed algorithm is implemented in the Java NetBeans IDE. We also used the Java Modeling Tool (JMT), a free open-source tool for the performance evaluation of queuing models, to verify our results in terms of response time, average waiting time, throughput and mean queue length. The parameters used during the simulation are listed in Table 1.

Table 1 Parameters used in the simulation

Parameter   Details                                        Value
λ           Task arrival rate                              1–24 tasks
µ           Service rate per server                        1 task
A           Ratio of mission critical tasks to all tasks   0.1
P           Initial number of Physical Servers             20 per DC
Pmax        Maximum number of Physical Servers             25 per DC
Pmin        Minimum number of Physical Servers             15 per DC
B           No. of Datacenters (DC)                        5
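For context on the M/M/C baseline used in the comparison below, an M/M/C queue has a closed-form mean waiting time via the Erlang C formula. The sketch below evaluates it with the Table 1 parameters; treating the 20 physical servers of one datacenter as C parallel servers is our simplifying assumption, not a step prescribed by the paper.

```python
import math

def mmc_mean_wait(lam, mu, c):
    """Mean queueing delay Wq of an M/M/c system via the Erlang C formula."""
    rho = lam / (c * mu)
    if rho >= 1:
        return math.inf  # unstable: arrivals outpace total service capacity
    a = lam / mu  # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    prob_wait = a**c / (math.factorial(c) * (1 - rho)) * p0  # Erlang C: P(wait > 0)
    return prob_wait / (c * mu - lam)

# Table 1: mu = 1 task per server, 20 servers per datacenter
for lam in (5, 10, 15, 19):
    print(lam, round(mmc_mean_wait(lam, mu=1.0, c=20), 6))
```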


5.1 Results

In this subsection, we present the numerical results of our algorithm. The proposed rule-based algorithm is compared with two existing queuing models, namely M/M/C and M/M/C/N; the assessment is performed using the four performance metrics listed below.

Performance Metrics

Response Time—the amount of time taken to respond by a particular scheduling and load balancing algorithm in a cloud system. This parameter should be minimized. Figure 4 shows the comparison of the response time of the proposed rule-based queuing algorithm against M/M/C and M/M/C/N. There is no big difference in response time between the proposed and existing algorithms when the VM request rate is low; at the same time, with an increased VM request rate, our proposed algorithm outperforms the existing models.

Number of PMs used—this parameter is used to analyze the packing efficiency of the placement algorithm in mapping VMs to PMs. It analyzes the effect of the proposed algorithms by estimating the number of PMs required for placing a certain number of VMs. The aim of the placement algorithm is to pack VMs into a smaller number of PMs; in other words, the number of PMs used during VM placement should be minimal. It is observed in Fig. 5 that the total number of PMs required for placing a certain number of VM requests is smaller with the proposed rule-based algorithm when compared to the existing ones.

Resource Utilization—this metric is measured as a percentage, usually of the amount of physical servers being utilized by the users. It is determined as the ratio of

Fig. 4 Performance comparison of response time


Fig. 5 Comparison of No. of physical servers used

the amount of resources utilized by a VM currently to the amount of resources provided to it by the PM. An algorithm is considered efficient when it increases the resource utilization rate. Figure 6 shows that the resource utilization percentage of the rule-based queuing algorithm is always better than those of the M/M/C and M/M/C/N queuing models, irrespective of the VM request rate.

Average waiting time—this metric refers to the total time taken by a request between joining the queue and being allotted the resources. It should be reduced;

Fig. 6 Comparison of resource utilization rate


Fig. 7 Comparison of average waiting time

an algorithm can be considered to have outperformed if its waiting time is smaller when compared to the other algorithms. The average waiting time comparison of the proposed algorithm and the existing ones is shown in Fig. 7. Based on the simulation results, it is found that the M/M/C/N model provides better results when compared with the M/M/C model, and RB-M/M/C outperforms both models in terms of the average waiting time for the VM resource requests.

6 Conclusion and Future Work

Corporations continue to trust cloud computing technology for several use cases, including growing efficiency, reducing costs, ensuring data security, and storing limitless data. As the cloud resource requests increase continuously, the cloud providers also need a reliable model for allocating the cloud resources to the users in an efficient way. To address this issue, we proposed a rule-based queuing model to enhance resource allocation and optimal scheduling in the cloud computing environment. The rule-based resource allocation algorithm divides the resource requests into low, medium and high priority queues based on their priority level; it then applies waiting-time reduction techniques before applying the JSQ algorithm to attach each resource request to the appropriate queue, in order to lessen the response time and the average waiting time. The performance of the proposed RB-M/M/C queuing model is compared with the existing models. From the analysis, it is understood that the response time, average waiting time and total number of physical servers used are reduced, and the utilization of the proposed system is very high when compared to the existing models. However, as future work, a model needs to be endorsed allowing


all the elements affecting the service system that are dynamic in nature. This will let the service systems be modelled more exactly and efficiently; additional enhancements can be made in any imperfect environment. The new model thus articulated may be installed in real-time systems to verify its effectiveness in offering consistent services in the dynamic cloud environment.

References

1. Zhang GW, He R, Liu Y (2008) The evolution based on cloud model. J Comput Mach 7:1233–1239
2. Breitgand D, Epstein A (2011) SLA-aware placement of multi-virtual machine elastic services in compute clouds. In: Integrated network management (IM), 2011 IFIP/IEEE international symposium on, pp 161–168
3. Meng X, Isci C, Kephart J, Zhang L, Bouillet E, Pendarakis D (2010) Efficient resource provisioning in compute clouds via VM multiplexing. In: Proceedings of the 7th international conference on autonomic computing, New York, NY, USA, pp 11–20
4. Walia NK, Kaur N (2021) Performance analysis of the task scheduling algorithms in the cloud computing environments. In: 2021 2nd international conference on intelligent engineering and management (ICIEM)
5. Shi Y, Suo K, Kemp S, Hodge J (2020) A task scheduling approach for cloud resource management. In: 2020 fourth world conference on smart trends in systems, security and sustainability (WorldS4)
6. Zhou Z, Li F, Zhu H, Xie H, Abawajy JH, Chowdhury MU (2020) An improved genetic algorithm using greedy strategy toward task scheduling optimization in cloud environments. Neural Comput Appl 32(6):1531–1541
7. Safvati MA (2017) Analytical review on queuing theory in clouds environments. In: 2017 third national conference on new approaches in computer and electrical engineering
8. Arunarani A, Manjula D, Sugumaran V (2019) Task scheduling techniques in cloud computing: a literature survey. Futur Gener Comput Syst 91:407–415
9. Ghomi EJ, Rahmani AM, Qader NN (2017) Load-balancing algorithms in cloud computing: a survey. J Netw Comput Appl 88:50–71
10. Ani Brown Mary N, Saravanan K (2013) Performance factors of cloud computing data centers using [(M/G/1): (∞/GDMODEL)] queuing systems. Int J Grid Comput Appl (IJGCA) 4(1)
11. Satyanarayana A, Varma PS, Sundari MR, Varma PS (2013) Performance analysis of cloud computing under non-homogeneous conditions. Int J Adv Res Comput Sci Softw Eng
12. Mishra M, Sahoo A (2011) On theory of VM placement: anomalies in existing methodologies and their mitigation using a novel vector based approach. In: IEEE international conference on cloud computing, pp 275–282
13. Thiruvenkadam T, Teklu T (2019) Enhanced algorithm for load rebalancing in cloud computing environment. Lect Notes Data Eng Commun Technol 26:1391–1399
14. Mills K, Filliben J, Dabrowski C (2011) Comparing VM-placement algorithms for on-demand clouds. In: IEEE third international conference on cloud computing technology and science (CloudCom), pp 91–98
15. Subramanian S, Krishna NG, Kumar KM, Sreesh P, Karpagam GR (2012) An adaptive algorithm for dynamic priority based virtual machine scheduling in cloud. Int J Comput Sci 9(6), No 2:397–402
16. Ghanbari S, Othman M (2012) A priority based job scheduling algorithm in cloud computing. Procedia Eng 50:778–785
17. Selvarani S, Sadhasivam GS (2010) Improved cost-based algorithm for task scheduling in cloud computing. In: Proceedings of the IEEE international conference on computational intelligence and computing research, pp 1–5
18. Li J, Qiu M, Niu J, Gao W, Zong Z, Qin X (2010) Feedback dynamic algorithms for preemptable job scheduling in cloud systems. In: Proceedings of the IEEE/WIC/ACM international conference on web intelligence and intelligent agent technology, pp 561–564
19. Rajeshram V, Shabariram CP (2015) Heuristics based multi queue job scheduling for cloud computing environment. Int J Res Eng Technol 4(5):163–166
20. Kaushik NR, Figueira SM, Chiappari SA (2006) Flexible time-windows for advance reservation scheduling. In: Proceedings of the 14th IEEE international symposium on modeling, analysis, and simulation of computer and telecommunication systems (MASCOTS '06), pp 218–222
21. Sharkh AM, Ouda A, Shami A (2013) A resource scheduling model for cloud computing data centers. In: IEEE proceedings of the 9th international wireless communications and mobile computing conference (IWCMC '13), pp 213–218

Self-adaptive Hadoop Cluster for Data Analysis Applications Luchmee Devi Reesaul, Aatish Chiniah, and Humaïra Baichoo

Abstract With continuous improvements in information technology, the world is witnessing an exponential growth in the generation and processing of data. The handling of big data has become a challenge. Hadoop clusters have emerged as a solution to this problem by sharing the processing power between the nodes of the cluster, hence boosting the processing speed of data analysis applications. To improve the throughput of the Hadoop cluster and balance the CPU utilization across all the slave nodes, the cluster can be programmed to be self-adaptive: before executing tasks on Hadoop, the cluster dynamically shrinks or expands depending on the dataset file size. This improves the overall performance of the system by allocating an appropriate number of slave nodes based on the size of the input file before proceeding with MapReduce processing.

Keywords Big data · Hadoop · Clusters · Data analysis applications · MapReduce

1 Introduction

In this era of an automation-oriented environment and information, the availability of large amounts of data to decision makers has increased exponentially. Data can also be easily accessed within a few clicks, where it can be visualized and analyzed as a dashboard. However, organizations are growing fast, and so are datasets. This has led to the concept of the 4 V's of big data, that is, volume, variety, velocity, and veracity. As days pass, the rate of data production and processing keeps increasing. Cloud storage has therefore become the commonly opted solution for big data challenges due to its numerous advantages. According to statistics, since the year 2009, investment in cloud computing has increased at 4.5 times the rate of IT investment and was predicted to rise at 6 times the rate of IT investment from 2015 through 2020 [1].

In this cloud computing environment, due to the advancement in technologies, Hadoop is commonly used as a big data processing engine. However, looking at the other side of the coin, although the Hadoop cluster is the solution for big data processing, its configuration is very tedious and consequently requires specially trained users. Due to its complexity, many precise configuration parameters need to be set to run a single function. If the parameters are not well set, the Hadoop cluster may not work, thus requiring changes each time, which is very annoying and time-consuming. Whenever a file is uploaded to the master node, it is divided into small blocks which get distributed to the slave nodes for storage and are similarly processed. This is ideal for very large datasets but not for small datasets, since small datasets do not require a large cluster of slave nodes for processing; a cluster of two to three will be enough. So, to improve the Hadoop cluster, to help improve the throughput and balance the CPU utilization across all the slave nodes, the Hadoop cluster should be programmed to be self-adaptive, such that, before executing tasks on Hadoop, the cluster will dynamically shrink or expand depending on the dataset file size. This will, therefore, improve the overall performance of the system by allocating an appropriate number of slave nodes based on the size of the uploaded file. Hence, the aim of the developed system is to construct a Hadoop cluster with 11 nodes, including 1 master and 10 Datanodes, and also to program the cluster to be self-adaptive, in other words to automatically configure the extra nodes needed for data storage and processing.

L. D. Reesaul · A. Chiniah (B)
Department of Digital Technologies, University of Mauritius, Reduit, Mauritius
e-mail: [email protected]
H. Baichoo
CITS, University of Mauritius, Reduit, Mauritius
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_14
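As a sketch of the self-adaptive idea described above, the following Python snippet chooses how many Datanodes to activate from the size of the uploaded file. The block size, replication factor and the blocks-per-node scaling rule are illustrative assumptions; the actual policy and thresholds of the developed system are not specified at this point in the paper.

```python
def datanodes_for(file_size_bytes, block_size=128 * 1024**2,
                  replication=3, min_nodes=2, max_nodes=10):
    """Pick how many Datanodes (out of the 10 available) to activate before
    running MapReduce, scaling the cluster with the input size."""
    blocks = max(1, -(-file_size_bytes // block_size))  # ceil division
    nodes = max(min_nodes, min(max_nodes, min_nodes + blocks // 4))
    return nodes, blocks * replication  # nodes to activate, block replicas stored

print(datanodes_for(200 * 1024**2))  # small file -> (2, 6)
print(datanodes_for(5 * 1024**3))    # 5 GB file  -> (10, 120)
```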

2 Related Works

Years ago, Greg Pfister, a computer scientist who studied and worked on cluster computing, stated that the idea of machine clustering came from clients rather than from a specific vendor [2]. At that time, much research was conducted to develop clustering products that support parallel data processing while maintaining reliability and uniqueness of data [3]. In 1977, Datapoint (an IT company in San Antonio, Texas, United States, founded in July 1968) built the first clustering solution, ARCnet (Attached Resource Computer Network) [2]. Later, in 1984, the next clustering product, VAXcluster, was built by DEC. With time, the clustering technique evolved, and companies such as Sun Microsystems, Microsoft, and other leading IT firms began to sell built-in cluster packages [2]. Linux is the most preferred operating system for node clustering because:
• Linux is an open UNIX-like operating system that is commonly used and consists of many functions and system software tools which are freely available [4].
• Linux is a reliable and flexible Portable Operating System Interface (POSIX)-compliant system, which makes it a robust platform.


• Linux is integrated with features usually found in standard UNIX, including multi-user access, pre-emptive multi-tasking, demand-paged virtual memory, and SMP support [4].

Basically, a cluster is a batch of interlinked distributed computers that work in parallel over fast networks. Choosing an appropriate network connection depends on factors such as cost, performance, and the hardware and operating system compatibility of the cluster. The performance of a network is measured via its bandwidth and latency. Bandwidth is defined as the rate of data transmission over the interconnecting hardware, while latency is the time taken for data transmission from a source node to a destination node [5]. Some common examples of interconnection technologies are Gigabit Ethernet, Scalable Coherent Interface (SCI), and InfiniBand. In a cluster, incoming requests are received and distributed among all the nodes for processing [6, 7]. All these computers work simultaneously to process and execute massive volumes of data quickly and without hanging, which is unfeasible using a single computer. The intent of using a cluster is to:
• Increase the storage capacity just by connecting and configuring an extra node.
• Ensure high availability by eliminating single points of failure: if one of the computers fails, the system stays operational since there is always a backup node.
• Provide stability by preventing the system from crashing during huge data processing, since any issue that crops up can be handled via dynamic reconfiguration.
• Share the processing workload, since in a cluster multiple nodes working together function as one virtual machine.
• Facilitate adding cluster-related extensions, including user-loadable drivers.

3 System

3.1 System Description

Initially, a Hadoop cluster is constructed using a combination of 11 nodes (1 master and 10 datanodes). The cluster runs on the same network using a 16-port Gigabit Ethernet switch. The system uses Gigabit Ethernet cables to connect the switch to the internet and Fast Ethernet cables to connect the switch to the nodes for better throughput. All the configurations are performed manually at first. The system has a web interface coded using HTML, HTML5, CSS, CSS3, JavaScript, and PHP. A user can register or log in to the system via the web interface, coupled with a series of validations, and is then directed to the dashboard after a successful login. The page consists of a navigation bar comprising:


• Profile user icon: On clicking the user icon, the information of the user accessing the web page is displayed, with options either to edit their profile or to edit their account (including username and password).
• My_Files: By default, the 'Files' option is selected. All the dataset files uploaded to the Hadoop Distributed File System (HDFS) and MongoDB are displayed on this page. On clicking a specific dataset file, the user is prompted with options either to delete or view details of the selected file. The user is also shown the size of the selected dataset file, together with an 'adjust' button which, on clicking, dynamically shrinks or expands the cluster. Via this page the user can also upload dataset files via the 'plus' icon button and delete multiple dataset files via the 'minus' icon button.
• Logs: This option displays all the information concerning the operations (file uploads and deletions) performed by the user. The information includes the name of the file being uploaded or deleted, the file size, the path where the file is stored in HDFS, the date on which the operation was conducted, and the username under which it was performed.
• File_Info: This option displays all the details of a selected dataset file.
• A search bar: This allows the user to search for a specific uploaded file. If the file is found, the user is directed to the File_Info page, which displays the file details and provides the options either to delete the file or adjust the cluster for later processing.
• A div element displaying the state of HDFS (active or inactive) and the total number of active and inactive slave nodes in the cluster.

The system also provides an admin interface which, upon successful login with correct admin credentials, directs the admin to the same dashboard and features as a user, with additional features:
• Users: This option allows the admin to view all the users of the system and manage them, either by removing an existing user and deleting their account or by adding a new user.
• All Logs: This option allows the admin to monitor all user activity by viewing the operation details of all users.

The system is a self-adaptive Hadoop cluster; that is, the system is able to analyze the size of the dataset file uploaded via the web interface. It then determines the number of nodes required to compute this dataset file and, depending on that number, the cluster adapts itself by either shrinking or expanding. The automation is written in PHP and the processing parts in Java.

3.2 Hadoop

The Apache Hadoop framework was built to support distributed processing of massive data sets over thousands of clustered nodes [8]. All these clustered nodes


Fig. 1 HDFS/MapReduce layer composition [9]

operate in parallel using simple programming models. Hadoop was initially inspired by Google's MapReduce and GFS because of their mode of operation of breaking an application into various manageable parts for processing and storage. It was designed in 2006 by Doug Cutting and Mike Cafarella to support Nutch, the open-source web crawler; then, in 2008, Apache took over Hadoop, which is now known as Apache Hadoop [8]. Due to its good performance, IDC predicted Hadoop to be worth $813 million in 2016. Also, according to the 2015 Forrester Predictions, "Hadoop is a must-have platform for every company's business technology, forming the critical foundation of any flexible forthcoming data platform needed in the era of the customer." The foundation of Hadoop is written in Java, and it is open source. The core of Hadoop is composed of MapReduce and HDFS. MapReduce is a Java-based distributed data processing framework used on clusters for the parallel processing of big datasets. The word "MapReduce" is a concatenation of two words, Map and Reduce: map tasks transform a set of data into another set of data, whereby data are divided into key-value pairs (i.e., data tuples). The reduce task produces the final output by loading the output of the map task and combining these data tuples into a smaller group of tuples; this involves shuffling and reducing [8]. HDFS is responsible for data storage. These processes are illustrated in Fig. 1.

3.3 Hadoop Distributed File System (HDFS)

HDFS is the component of Hadoop responsible for storing metadata and application data using a cluster of hundreds or thousands of stand-alone


Fig. 2 HDFS architecture [10]

machines. All these machines are fully interconnected using TCP-based protocols. HDFS is a block-structured file system in which large files are broken down into small blocks and stored across the cluster. HDFS adopts a master/slave architecture: a single NameNode (called the master) stores metadata, and many Datanodes (the slaves) store application data (Fig. 2).
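As an illustration of how an application stores a dataset in HDFS through the master, the following is a minimal sketch using the standard Hadoop Java FileSystem API; the NameNode address, local path, and HDFS path are illustrative assumptions, not values from the paper.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://master:9000"); // hypothetical NameNode address

    try (FileSystem fs = FileSystem.get(conf)) {
      // Copy a local dataset into HDFS; the NameNode records block locations
      // while the Datanodes store the (replicated) blocks.
      fs.copyFromLocalFile(new Path("/tmp/dataset.csv"),
                           new Path("/datasets/dataset.csv"));

      // The file size is what the self-adaptive logic later inspects.
      long size = fs.getFileStatus(new Path("/datasets/dataset.csv")).getLen();
      System.out.println("Stored " + size + " bytes in HDFS");
    }
  }
}
```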

3.4 Hadoop Distributions

After the release of Hadoop version 1.0, limitations arose, such as security and scalability issues and limited utilization of resources. Due to further research and technological advancement, Hadoop version 2.0 was released to solve these issues [11]. Soon after, vendors who supported and used the core Hadoop files began manipulating and altering the Hadoop source code, since it is open source, in order to produce better versions of Hadoop with enhanced features. The common Hadoop distributions are Cloudera and Hortonworks. These distributions support features such as advanced intelligence analysis tools, coordination services, distributed storage systems, and resource management [12].

3.5 MapReduce

MapReduce is a component of the Hadoop platform. It is a programming paradigm best suited for big data processing. Hadoop can execute MapReduce code written in different languages, such as Python, Ruby, C++, and Java. MapReduce is used to conduct huge data analyses in parallel across multiple nodes in a cluster [12]. Tasks in MapReduce (Fig. 3) are divided into two parts, namely the map task


Fig. 3 Hadoop MapReduce architecture [15]

and the reduce task. When writing code, programmers need to define and specify two functions: the map and reduce functions. Normally, the MapReduce system consists of four stages: splitting, mapping, shuffling, and reducing [13, 14].
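To make the map/reduce split concrete, the following is a minimal sketch of the classic WordCount job (the workload behind the performance graph in Fig. 8) written against the standard Hadoop MapReduce Java API; the input and output paths are supplied as command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map stage: emit a (word, 1) tuple for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE); // key-value pair (data tuple)
      }
    }
  }

  // Reduce stage: sum the counts shuffled to each distinct word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each datanode
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input dataset in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```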

3.6 Architectural Design The Fig. 4 shows the overall architecture of the whole system to be implemented. The cluster will consist of 1 master node, 10 slave nodes, the web server and the database server, which will be accessed by the master. The cluster will be linked to the internet whereby authorized users will access the internet and perform all operations, such as insertion, deletion, download and processing of the dataset file using a browser. Then, accordingly, the cluster will auto-adapt itself. If the user uploads a new file and chooses to process it, the system analyzes the size of the dataset file, then determines the number of nodes needed to process the following file, and finally, depending on the number of nodes required, the cluster will shrink or expand. If user deletes a file, the system will remove the dataset file from the slave nodes and thus freeing disk space.


Fig. 4 Architectural design

4 Implementation

A cluster of 11 nodes including 1 master and 10 slave nodes was set up. The configuration of the cluster is as follows:
• The master node is a powerful PC (4 × 3.2 GHz Intel processors with 8 GB of RAM and a 1 TB HDD) hosting the NameNode/RaidNode.
• 11 HP PCs act as clients hosting DataNodes (each with a 3.2 GHz Intel processor, 4 GB RAM, and a 500 GB HDD).
• The average bandwidth of this cluster is 12 MB/s.
• Linux Ubuntu 16.04, Hadoop 3.0.0, PHP 5.6 + cURL, MongoDB v3.3.3, and Apache HTTP Server 2.4.18, among other software, were used for this work.

Then the cluster was programmed to be self-adaptive. For the cluster to be self-adaptive, the replication factor was altered to 3. A range of dataset files of different sizes was processed via MapReduce while altering the Hadoop cluster size, and from this an algorithm was derived, i.e., the number of datanodes to allocate to each dataset file size. PHP code was then written to capture the dataset size and adjust the cluster dynamically before dataset processing. A website was built to allow communication through HTTP requests between the master PC and HDFS; a sketch of this interaction is shown below. To meet the user's requirements, the website provides functionalities such as uploading files, viewing files, searching, adjusting the cluster, and processing. It also keeps a record of each operation (upload/delete).
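The paper implements this HTTP bridge in PHP with cURL; as an illustration in Java, the sketch below issues the same kind of calls through the WebHDFS REST API, assuming WebHDFS is enabled on the NameNode. The host name, port (9870 is the default NameNode web port in Hadoop 3.x), and paths are placeholders.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal WebHDFS client sketch: lists a directory and fetches one file's
// status (including its size in bytes), which the web front end can use to
// decide how many datanodes to activate before processing.
public class WebHdfsClient {

  private static String get(String url) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestMethod("GET");
    StringBuilder body = new StringBuilder();
    try (BufferedReader in =
             new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        body.append(line);
      }
    }
    return body.toString();
  }

  public static void main(String[] args) throws Exception {
    String nameNode = "http://master:9870"; // hypothetical master host

    // List all dataset files stored under /datasets (JSON response).
    System.out.println(get(nameNode + "/webhdfs/v1/datasets?op=LISTSTATUS"));

    // Fetch the status of one dataset file, including its length.
    System.out.println(
        get(nameNode + "/webhdfs/v1/datasets/sample.csv?op=GETFILESTATUS"));
  }
}
```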


5 Results

The cluster starts well, launching the NameNode and Secondary NameNode on the master. The DataNode service is also started on the master and on all slave nodes, as shown in Fig. 5. The dataset is processed within seconds and the output is stored in HDFS, as shown in Fig. 6. Figure 7 shows the adaptiveness of the cluster from 10 to 6 datanodes; while doing so, no data blocks are lost during the adjustment of the cluster size (number of

Fig. 5 HDFS starting after cluster setup


Fig. 6 MapReduce processing

datanodes). Only the configuration of the slave "workers" is modified, which affects MapReduce tasks and not the storage of data blocks.

6 Evaluation

After setting up the Hadoop cluster successfully, when a dataset is uploaded to HDFS it is broken down into smaller data blocks that are stored across different datanodes. When the Hadoop cluster starts, all the datanodes connected to the master start, and all of them were involved in the MapReduce processing of a dataset irrespective of its size. The issue was that if, for example, a 1 MB dataset needed to be processed, all 10 datanodes were engaged in the MapReduce processing, which was not sensible and wasted CPU resources: to process a 1 MB dataset, a cluster of 2 datanodes may be enough, and the time taken to produce the output would be the same as with 10 datanodes. In order to improve the throughput of the Hadoop cluster and balance CPU utilization across all the slave nodes, the Hadoop cluster was


Fig. 7 Cluster adaptiveness: shrink

programmed to be self-adaptive, such that before executing MapReduce tasks the cluster dynamically shrinks or expands depending on the dataset file size, without the need for manual reconfiguration, which is a tedious job. Previously, the replication factor was set to 1, and this was a problem when resizing the cluster. So the replication factor was updated to 3; that is, the data blocks are replicated 3 times, making the system fault-tolerant and providing high availability of data. Self-adaptiveness depends on the dataset size; to achieve it, a range of datasets of different sizes was processed while altering the size of the Hadoop cluster. The time taken was noted, and a performance graph was plotted for analysis to


have a better estimation of how many datanodes should be allocated to a specific file size. Table 1 shows the results of each dataset, which was processed with different cluster sizes to note the processing time taken. Since the replication factor is set to 3, a minimum of 3 datanodes is needed. The unit of measurement is seconds.

Table 1 MapReduce processing time (entries are average processing times in seconds; columns give the number of datanodes in the cluster)

| Dataset size | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|
| 10 MB | 13 | 12 | 10 | 10 | 10 | 10 | 10 | 10 |
| 50 MB | 17 | 16 | 14 | 14 | 14 | 14 | 14 | 14 |
| 100 MB | 26 | 21 | 20 | 16 | 16 | 16 | 16 | 16 |
| 150 MB | 30 | 27 | 25 | 24 | 23 | 23 | 23 | 23 |
| 200 MB | 37 | 35 | 28 | 24 | 23 | 23 | 23 | 23 |
| 250 MB | 37 | 35 | 31 | 25 | 24 | 24 | 24 | 24 |
| 300 MB | 43 | 40 | 37 | 37 | 31 | 30 | 30 | 30 |
| 350 MB | 47 | 44 | 42 | 40 | 37 | 30 | 30 | 30 |
| 400 MB | 50 | 45 | 43 | 41 | 37 | 35 | 35 | 35 |
| 450 MB | 55 | 50 | 47 | 46 | 38 | 38 | 35 | 32 |
| 500 MB | 75 | 70 | 67 | 60 | 41 | 39 | 38 | 37 |

Based on the readings obtained (Table 1), a graph was plotted to analyze the performance of the cluster. From the graph (Fig. 8), it can be concluded that:
• When increasing the number of datanodes, the processing time decreases.
• But beyond a certain point, increasing the number of datanodes no longer decreases the processing time.
• For smaller datasets (for example, less than or equal to 50 MB), using more datanodes does not improve performance; the master node and 2 slave nodes are sufficient for processing.


Fig. 8 Performance Graph for WordCount program

• For larger files, greater than 600 MB, increasing the number of datanodes does decrease the processing time and hence increases performance.
• Table 2 below shows the total number of datanodes allocated to each dataset size range based on the performance graph; a sketch of this allocation logic follows the table.

Table 2 Number of datanodes allocated to each dataset size

| Dataset size (MB) | Number of slave nodes in a cluster |
|---|---|
| <= 50 | 3 |
| > 50 && <= 100 | 4 |
| > 100 && <= 150 | 5 |
| > 150 && <= 250 | 6 |
| > 250 && <= 300 | 7 |
| > 300 && <= 400 | 8 |
| > 400 && < 550 | 9 |
| >= 550 | 10 |
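As an illustration of how the web layer can turn this table into a cluster size, the following is a minimal Java sketch of the allocation rule (the paper's actual implementation is in PHP; the thresholds mirror Table 2 as reconstructed above).

```java
// Maps an uploaded dataset's size (in MB) to the number of slave nodes
// (datanodes) to activate before running MapReduce, per Table 2.
// The replication factor of 3 imposes the minimum of 3 datanodes.
public final class ClusterSizer {

  public static int slaveNodesFor(double datasetSizeMb) {
    if (datasetSizeMb <= 50) return 3;
    if (datasetSizeMb <= 100) return 4;
    if (datasetSizeMb <= 150) return 5;
    if (datasetSizeMb <= 250) return 6;
    if (datasetSizeMb <= 300) return 7;
    if (datasetSizeMb <= 400) return 8;
    if (datasetSizeMb < 550) return 9;
    return 10; // very large datasets use the full cluster
  }

  public static void main(String[] args) {
    System.out.println(slaveNodesFor(10));  // 3
    System.out.println(slaveNodesFor(320)); // 8
    System.out.println(slaveNodesFor(700)); // 10
  }
}
```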


7 Conclusion

The system was successfully built, and it can be concluded that the project helps speed up different analytic workloads on the same datasets while increasing productivity and saving node configuration time. Moreover, the system can be used by anyone via its web interface to meet their objectives within a few clicks, operated and managed by system administrators. Finally, the evaluation conducted proves that the system is reliable and that the algorithm for choosing the number of datanodes to participate in the processing of any MapReduce task is optimal, demonstrating the effectiveness of the self-adaptive cluster. As future work, it is planned to try the system with a cloud service provider and to add additional features such as direct dataset upload.

References

1. Columbus L (2015) Roundup of cloud computing forecasts and market estimates, 2016. Forbes Magazine
2. Baker M, Fox GC, Yau HW (1995) Cluster computing review
3. Jethwani K, Gaur S (2016) A study on cluster computing. Int J Adv Res Comput Sci Softw Eng
4. Jin H, Buyya R, Baker M (2015) Cluster computing: tools, applications, and Australian initiatives for low cost supercomputing
5. Yeo CS, Buyya R, Pourreza H, Eskicioglu R, Graham P, Sommers F (2006) Cluster computing: high-performance, high-availability, and high-throughput processing on a network of computers. In: Handbook of nature-inspired and innovative computing: integrating classical models with emerging technologies, pp 521–551
6. Jeyaraj R, Ganeshkumar P, Anand P (2020) Big data with Hadoop MapReduce: a classroom approach. Apple Academic Press
7. Beakta R (2015) Big data and Hadoop: a review paper. Int J Comput Sci Inf Technol 2(2):13–15
8. Zhang Q, Cheng L, Boutaba R (2010) Cloud computing: state-of-the-art and research challenges. J Internet Serv Appl 1:7–18
9. Markey SC (2012) Deploy an OpenStack private cloud to a Hadoop MapReduce environment
10. Bakshi A (2019) Hadoop Distributed File System | Apache Hadoop HDFS architecture | Edureka
11. Chiniah A, Mungur A (2020) Data management in erasure-coded distributed storage systems. In: 2020 20th IEEE/ACM international symposium on cluster, cloud and internet computing (CCGRID). IEEE, pp 902–907
12. Bhathal G, Dhiman AS (2018) Big data solution: improvised distributions framework of Hadoop. In: 2018 second international conference on intelligent computing and control systems (ICICCS). IEEE, pp 35–38
13. Ghazi MR, Gangodkar D (2015) Hadoop, MapReduce and HDFS: a developers perspective. Procedia Comput Sci 48:45–50
14. Chiniah A, Chummun A, Burkutally Z (2019) Categorising AWS Common Crawl dataset using MapReduce. In: 2019 conference on next generation computing applications (NextComp). IEEE, pp 1–6


15. Ahmed N, Barczak AL, Susnjak T, Rashid MA (2020) A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench. J Big Data 7(1):1–18
16. Chiniah A, Mungur A (2022) On the adoption of erasure code for cloud storage by major distributed storage systems. EAI Endorsed Trans Cloud Syst 7(21):e1

Sustainable Cloud Computing in the Supply Chains

Manish Shashi and Puja Shashi

Abstract Contrary to the popular belief that cloud only helps save costs, it can help drive end-to-end real-time visibility, intelligent automation, scalability, agility, and speed. The supply chain industry is prime for cloud computing because a large number of internal and external stakeholders, including suppliers and partners, collaborate to facilitate the seamless delivery of products and services to consumers. The profitability and survival of an organization in the long term are closely associated with sustainability measures, but unfortunately, digitalization models in the supply chain often ignore these aspects. Cloud-based self-thinking supply chains and workloads are more sustainable, elastic, and scalable than resources used in traditional data centers. This paper aims to bridge the gap between supply chain practitioners and academic researchers by seeking to understand how sustainable cloud computing will help to address key supply chain opportunities and challenges.

Keywords Cloud computing · Supply chain · Sustainability · Digitalization

1 Introduction

Efficient supply chain management (SCM) deals with the optimal flow of goods and services and helps in collaborating with channel partners, such as vendors, logistics providers, and customers, to ensure the right products at the right time. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal effort from management or service providers [11]; it is an innovative model that can be provisioned rapidly with less effort and minimal cost from providers [5]. Cloud computing helps users access


data anytime and anywhere, compared to traditional computing, where business users can access data only on the system where it is stored. Any web-browsing device, such as a smartphone, laptop, desktop, or tablet, can be the primary tool to manage all facets of integrated supply chain functionality using cloud computing. With an annual growth rate of 41%, the cloud-based supply chain market is projected to reach $222.23 billion by 2028. Cloud computing is an economical, architectural, and infrastructural approach to information technology. Gartner predicts that spending on the public cloud model will also grow 22% yearly within all enterprise IT spending. Cloud computing helps in resource acquisition, usage, and infrastructure maintenance. Heterogeneous administration is provided by cloud providers using various deployment and service models that help customers access and utilize Infrastructure as a Service (IaaS), Software as a Service (SaaS), or Platform as a Service (PaaS) offerings. Any of these models helps them lower their total cost of ownership compared to traditional on-premises data centers [7]. The most popular deployment model is the hybrid cloud, which is a blend of the other deployment models, such as community, private, or public. In a hybrid cloud deployment model, the business formulates strategies depending upon the business requirement, where non-critical activities are performed using the public cloud and critical, more security-sensitive activities are performed using the private cloud [15]. In an organization, numerous public and private clouds can be combined to achieve business goals. Existing supply chain processes can be optimized by leveraging the cloud computing model's software, platform, and infrastructure solutions [15]. The cloud-based model also helps in achieving the desired operational and financial benefits. The balance between economic, social, and environmental factors is understood as sustainability in the cloud-based supply chain system [13]. As per one estimate, the internet accounts for four percent of carbon dioxide emissions every year, and physical warehouses are globally responsible for approximately 14% of overall undesirable supply chain gas emissions. Another survey says that compared to traditional data center-based supply chains, carbon footprints can be reduced by 89% and power consumption minimized by 85% using a cloud computing-based supply chain model. Instead of operating a decentralized supply chain, an optimally designed cloud-based supply chain model is one of the solutions to achieve sustainability goals. Sustainability helps achieve efficient supply chains [13]. This paper proceeds as follows: the problem statement is explained in the next section, followed by the literature review in the third section. The fourth section presents various cloud-based supply chain models with the key benefits of adopting them. The fifth section presents the findings, and the final section presents the conclusion.

2 Problem Statement

The main objective of any successful organization is to have an effective supply chain that ensures the delivery of services and products in a timely fashion with the most


cost-effective processes. Unfortunately, many cannot achieve this with traditional, non-responsive, linear supply chain processes [5]. In 2020, global supply chains were severely disrupted by COVID-19, resulting in a shortage of raw materials and labor, reduced operational capabilities, and a lack of inputs from other businesses in complex supply chains. Various mitigation strategies were increasingly applied, among them cloud deployment, a digital enabler that facilitates linking multiple partners through information technology resources with shared information and funds [5]. As per an Accenture survey of 1050 supply chain executives, more than half of the participants said that the disruptions created by the corona pandemic and other crises around the world prompted them to explore sustainable cloud possibilities in supply chain processes [1].

3 Literature Review

The supply chain includes internal and external stakeholders and their network of connections through which the production, supply, delivery, and sales of essential products and services are distributed to end-users efficiently [10]. The production and delivery of products and services in a timely fashion, with short lead times and low cost, are critical supply chain objectives of many organizations, but unfortunately many are not able to achieve this with the traditional non-responsive supply chain technology they have [7]. Any organization can use on-premises, owned-server, or cloud-based supply chain solutions. Traditional supply chains running in in-house data centers could not cope with some of the critical success metrics, such as integration with other business systems, scalability, connectivity, improved security, and utilizing cutting-edge supply chain analytical tools. The cloud supply chain comprises two or more parties connected by the provision of cloud services and represents a network of interconnected businesses involved in the end-to-end processing of products and services for customers [7]. Cloud computing is an innovative digital technology that empowers agile, financially savvy, and versatile solutions for the supply chain business (Balan). The cloud-based supply chain can help increase resilience and sustainability by 53% and 49%, respectively [1]. It can also help control supply chain operating costs by 16% and increase profitability and revenue growth by 5% [1]. Cloud and virtualization are often adopted in the business process layer and are the main drivers for speeding up the supply chain [3]. Cloud computing is not new but is used in every emerging technology because of its vital and powerful force in changing how data and services are managed [16]. Emerging technology like the cloud digitally transforms how the current enterprise's information technology (IT) infrastructure is constituted and collected through resources such as infrastructure, platforms, and applications [8]. Five critical features of cloud computing are shown in Fig. 1. On-demand self-service is the most attractive, cost-effective, and automated service offered by cloud computing vendors, enabling cloud resources based on specific client requirements on demand. By paying extra, users can also scale up the required infrastructure based


on increased future needs without disrupting host operations. There can be potential compliance and regulatory issues while implementing self-service, and relevant controls must be in place to prevent a single user from accessing all the services [11]; this is the main reason some key regulatory processes cannot be automated. Broad network access is one of the requirements to prevent latency issues on low-bandwidth connections. Cloud solutions must be architected in a way that does not require a client application: various devices, such as tablets, smartphones, laptops, desktops, and other IoT devices, must be able to access them without any platform-specific applications and without using a significant amount of bandwidth. Resource pooling is based on the principle that instead of keeping a resource idle, it can be used by another customer; it is often achieved through virtualization and helps cloud computing vendors save cost and provide flexibility. The key factor in rapid elasticity is that even though the resources are available, they are only used once needed, which allows the provider to save on consumption costs [11]. When a threshold is reached, rapid elasticity is often achieved using automation, orchestration, and automatic capacity-extension (burst capacity) triggers. In a supply chain environment, this burst capacity may only be needed some of the time, for example at the end of the quarter or year to meet additional order-processing capacity; at other times, money is saved by utilizing existing capacity. Measured services are closely linked with the pay-as-you-go metrics of cloud computing: the service provided by the vendor must be able to measure quantitative aspects, such as data used, bandwidth used, and time used. Supply chain customers also need to understand the logic and must have a good understanding of the measured-services metrics and the charges associated with them.

Fig. 1 Characteristics of cloud computing (Source: Distefano, 2015) [6]

Sustainability in the integrated supply chain ecosystem is understood as a balance between environmental, social, and economic pillars [13]. The cloud helps revolutionize the supply chain in many ways from a sustainability perspective. As per estimates from the EPA, physical data centers may account for close to two percent of all


electricity consumption in the USA and are responsible for huge carbon emissions. Organizations can contribute significantly by shifting to a cloud model, primarily using public cloud providers. Cutting down on data centers can significantly lower carbon emissions and electricity consumption and help achieve environmental sustainability goals; poor efficiency has a negative impact on the environment. Supply chain solutions based on cloud computing may also result in a higher utilization rate of servers and increased efficiency compared to a traditional data center owned by an organization.

4 Analysis of Various Cloud-Based Supply Chain Models

Figure 2 shows the basic cloud model for the supply chain. Service providers and service aggregators are two of the main components of the model. Service providers might act as infrastructure, platform, or service providers and can be directly in contact with end customers [9]. The service aggregator collaborates with providers and may market their services under its own brand name. Products and services must have basic functional features, can have innovative flavors, and move through a cloud supply chain network. The information model contains monitoring, accounting, and billing flows, both inward and outward. Figure 3 shows three layers: a digitally enabled physical supply chain, a cloud-based information architecture, and virtual supply chain control tools. The set of virtual tools in the first layer helps supply chain managers oversee and manage operations across the supply chain by providing a collaborative, mobile, and dynamic interface model [2]. Data from all supply chain nodes, such as logistics partners, customers, contract manufacturers (CMOs), manufacturing facilities, and raw material sourcing partners, can be processed and analyzed by the analytical tool. It can be

Fig. 2 Basic cloud supply chain model (Source Linde et al., 2010) [9]


accessed by managers, which provides them visibility into many metrics, such as inventory levels, service levels, machine utilization, and demand sensing. It helps them make real-time decisions and any proactive decisions if needed. The third layer comprises all physical units, such as warehouses, equipment, devices, and manufacturing facilities. The digitalization of the supply chain must include virtual sensors that can relay information to physical manufacturing facilities; digitalization turns them into smart objects using Internet of Things (IoT) technologies. Manufacturing units, also called smart factories, may use sensors to monitor equipment performance and machine learning algorithms for highly integrated manufacturing processes. Smart storage in the warehouse can use a smart cabinet to track and trace automatically and subsequently signal replenishment based on a preset consumption level. Automated guided vehicles, packaging and scanning robots, and drones may perform highly automated work in physical warehousing. Layers one and three communicate through cloud computing, which facilitates decision and information flow. The cloud-based information architecture in the second layer makes it possible to fetch, store, and translate data from all supply chain nodes

Fig. 3 Cloud-enabled supply chain ecosystem (Source Behner, P., & Ehrhardt, M., 2016). [2]


Fig. 4 Cloud-based self-thinking supply chain (Source: Calatayud et al. [4])

across the physical network. It provides flexibility in managing all supply chain partners and also helps in scalability and easy integration. The cloud-based architectural structure is prevalent for the inbound and outbound flow of information and decisions. A cyber-physical system is formed when all three layers come together and work in a synchronized manner [2]; it truly transforms the silo structure of the supply chain into a digitalized, integrated supply chain ecosystem. Figure 4 shows the most popular cloud-based self-thinking supply chain model. The self-thinking cloud-based model helps in collaboration through self-learning. Box B in Fig. 4 is used to regulate (slow down or speed up depending upon the situation) the material flow [4]. Controlling the flow of materials is often done in the supply chain by switching to air transport instead of the sea container route in case of congestion at sea [5]. Internet of Things (IoT) technology allows real-time connectivity, visibility, and integration until the product reaches the customers.

5 Findings

The cloud-based self-thinking supply chain model is the most popular model, helping organizations collaborate with all partners efficiently through self-learning and improvement. A hybrid cloud deployment model also suits most supply chain organizations. The integration module of hybrid cloud deployment gives organizations the added flexibility to switch from an existing private cloud to a public cloud, depending on business-specific requirements.


Traditional linear supply chains do not communicate with other system components in circularity. This results in siloed and fragmented information and makes it difficult for partners and suppliers to collaborate and share information. Organizations can achieve many benefits by implementing a cloud-based supply chain, such as real-time end-to-end supply chain visibility, collaboration between organizations, capturing disruption risks, and creating a knowledgeable learning community for optimizing decision-making [9]. Cloud-based solutions can be applied to functional areas, such as logistics, transportation, and planning, instead of bringing entire supply chain operations into their periphery. Some cloud-based SaaS applications, such as drayage dispatch and transportation management, exist in the market and help link 3PLs (third-party logistics providers) and various ocean carriers. The cloud-based solution helps identify efficient vendors, save spending on transportation, and automate manual tasks, reducing costly billing errors [12]. It further results in process streamlining, which aids collaboration and involves all parties in making informed and intelligent decisions [12]. Building a cloud-based control tower is the most popular choice for most supply chain organizations. Figure 5 shows the proposed model, where a cloud-based control tower coordinates with all supply chain functions, such as sourcing, manufacturing, distribution, and customers. The control tower is a connected dashboard with crucial business metrics and all planned and unplanned events across the supply chain cycle. It is a cloud-based solution built with the help of advanced digital enablers, such as AI and IoT. IoT has an important role in cloud-based supply chains and interconnects physical devices using the internet to exchange data for monitoring, reporting, and controlling purposes [12]. The control tower helps management understand, prioritize, and resolve any critical issue until the product or service reaches the customers and is consumed without any adverse events. It also helps management remove silos, improve resiliency, and respond to critical disruptive events. Supply chain disruption signals and potential market opportunities should instantly alert planners in real time in the control tower, with lead time and cost implications. While building the control tower, key architectural pillars, such as security, cost optimization, sustainability, performance efficiency, and reliability, must be considered. Collaborations: Cloud-based supply chain models improve the connectivity of the supply chain network, which helps organizations achieve real-time visibility and seamless collaboration with their partners. System integration with partners should not be confused with collaboration, which happens when multiple discrete organizations work closely with the objectives of maintaining service levels, dynamically reducing inherent costs and inventory levels, enhancing specific aspects of supply chain performance, and achieving agility by satisfying ever-changing customer demands and needs. The following section discusses some of the critical collaboration needs of the supply chain ecosystem using cloud computing and the Internet of Things (IoT) in conjunction with cloud computing. The IoT generates a huge amount of data, and cloud computing provides the path for IoT-generated data to reach its destination.


Fig. 5 Cloud-based control tower model (Source Prepared by Shashi, M.)

1. Demand forecasting: Supply chain partners, such as retailers, wholesalers, suppliers, and distributors, collaborate efficiently on a cloud-based platform to increase the accuracy of forecasting and service levels. These cloud-based platforms are efficient enough to pull data from the internet and perform modeling and accurate demand forecasts for all partners. This helps chain partners recognize whether real demand is volatile.
2. Manufacturing: The communication system of a smart manufacturing factory comprises a wireless sensor network connecting the sensor modules and gateway. The sensors and sensor modules are then distributed to the necessary positions in the factory [14]. IoT in the smart factory also optimizes production yield and reduces variability.
3. Sourcing: The cloud platform helps organizations collaborate with multiple sourcing partners by providing databases from different suppliers. It also helps in deciding on preferred suppliers who can provide specified materials with no defects in a timely manner, and in collaborating toward vendor-managed inventory.
4. Logistics: This includes warehouse management systems and transportation processes. Using cloud computing, providers pool their resources by dynamically reassigning physical or virtual resources, which helps cater to resource demand from the various business users collaborating on the cloud platform. Business users can collaborate using their own platforms, such as smartphones and laptops. The cloud platform also provides elasticity and scalability in case of fluctuations in user requirements and maintains performance goals (SLAs) even when the workload increases. The Internet of Things (IoT) signifies smart devices connected and exchanging real-time data. IoT applications, such as cold chain monitoring, resource (man and machine) tracking, packaging, and warehouse management, are very well suited to supply chain management [14]. Cloud-based IoT solutions


help organizations track goods and services across supply chain management lifecycles.
5. Inventory management: Optimizing stock levels is critical for any supply chain organization. A cloud-based inventory management system updates all stakeholders, including manufacturers, vendors, and shippers, with current information about available inventory. It helps mitigate stock-out situations and avoid losing business.
6. Reliability: Metrics such as the frequency of component failure are used in determining the reliability of the cloud system, and overall service downtime is used in determining its availability. It is natural to have component failures and overall service downtime (outages) in a particular geography or data center, but these have little impact in terms of long-lasting business disruptions, as the same services can run in other geographies. Sometimes metrics such as security, performance, and cost efficiency are also used for reliability. Cost efficiency mostly depends upon organizational strategies to use only the needed resources. In terms of security, cloud services are more secure than traditional on-premises data centers. Security can also be enhanced at the user level by implementing measures to control data access. Automation strategies can be implemented that include multi-factor authentication, deleting resources if unused for a long time, revoking user privileges if found suspicious, and regular resource configuration validation.

6 Conclusion

Modern organizations need sustainable, self-thinking, adaptive, and agile supply chains. Trading partners, customers, and suppliers demand services, products, and information. Instead of a linear and manually driven supply chain, a cloud-based solution can transform the supply chain into an automated and dynamic supply network. A cloud-based supply chain control tower can leverage advanced technologies and help proactively manage supply chains. Cloud-based control towers are in high demand, and the global market is projected to grow to 17 billion dollars worldwide by 2027. Cloud-based emerging technology solutions help drive sustainability across the value chain by reducing the supply network's environmental footprint. Organizations may also improve risk mitigation and regulatory compliance by achieving sustainability goals. The cloud-based solution also helps enable innovation, as providers invest heavily in bringing new features that help supply chain organizations use cutting-edge technologies and capabilities at affordable costs. The performance of cloud computing can be made reliable through the providers' management of component failures and overall downtime; however, reliability in terms of cost efficiency also depends upon organizational strategies for using only the capacity they need. Organizations can develop automated solutions based on predefined guardrails, which help formulate reliability and performance monitoring policies. Transforming a traditional supply chain into a cloud-based model needs due diligence and a more


structured and disciplined approach. The maturity and deployment model of existing software must be checked before deciding on cloud-based software. Cloud-based supply chain solutions will be a liability if existing software, such as enterprise resource planning (ERP), customer relationship management (CRM), or business analytics systems, is deployed on-premises and cannot integrate seamlessly with cloud-based software.

References

1. Accenture (2022) How the cloud boosts supply chain innovation. https://www.accenture.com/us-en/insights/supply-chain-operations/supply-chain-transformation-cloud
2. Behner P, Ehrhardt M (2016) Digitization in pharma: gaining an edge in operations. Strategy&: 4–18. https://www.strategyand.pwc.com/gx/en/insights/2016/digitization-in-pharma/digitization-in-pharma.pdf
3. Borangiu T, Trentesaux D, Thomas A, Leitão P, Barata J (2019) Digital transformation of manufacturing through cloud services and resource virtualization. Comput Ind 108:150–162. https://doi.org/10.1016/j.compind.2019.01.006
4. Calatayud A, Mangan J, Christopher M (2019) The self-thinking supply chain. Supply Chain Manag: Int J 24(1):22–38. https://doi.org/10.1108/SCM-03-2018-0136
5. Chen LM, Chang LW (2021) Supply- and cyber-related disruptions in cloud supply chain firms: determining the best recovery speeds. Transp Res Part E. https://doi.org/10.1016/j.tre.2021.102347
6. Distefano M (2015) Cloud computing and the internet of things: service architectures for data analysis and management. PhD thesis proposal, University of Pisa, Department of Computer Science
7. Giannakis M, Spanaki K, Dubey R (2019) A cloud-based supply chain management system: effects on supply chain responsiveness. J Enterp Inf Manag 32(4):585–607. https://doi.org/10.1108/JEIM-05-2018-0106
8. Kari Korpela JH (2017) Digital supply chain transformation toward blockchain integration. In: Proceedings of the 50th Hawaii international conference on system sciences
9. Linder M, Galan F, Chapman C, Clayman S, Henricsson D, Elmroth E (2010) The cloud supply chain: a framework for information, monitoring, accounting, and billing. https://www.ee.ucl.ac.uk/~sclayman/docs/CloudComp2010.pdf
10. Pierce F (2018) Cloud computing in the supply chain. Supply Chain Magazine (supplychaindigital.com)
11. Rountree D, Castrillo I (2014) The basics of cloud computing. https://doi.org/10.1016/B978-0-12-405932-0.00001-3
12. Sabouhi F, Pishvaee MS, Jabalameli MS (2018) Resilient supply chain design under operational and disruption risks considering quantity discount: a case study of pharmaceutical supply chain. Comput Ind Eng 126:657–672. https://doi.org/10.1016/j.cie.2018.10.001
13. Shashi M (2022) The sustainability strategies in the pharmaceutical supply chain: a qualitative research. Int J Eng Adv Technol 11(6)
14. Shashi M (2022) Digitalization of pharmaceutical cold chain systems using IoT digital enabler. Int J Eng Adv Technol 11(5)
15. Sundarakarni B, Kamran R, Maheshwari P, Jain V (2021) Designing a hybrid cloud for a supply chain network of Industry 4.0: a theoretical framework. Benchmarking: Int J 28(5). https://doi.org/10.1108/BIJ-04-2018-0109
16. Tinankoria D, B B (2017) Cloud computing: a review of the concepts and deployment models. Int J Inf Technol Comput Sci 6:50–58

Task Scheduling in Fog Assisted Cloud Environment Using Hybrid Metaheuristic Algorithm

Kaustuva Chandra Dev, Bibhuti Bhusan Dash, Utpal Chandra De, Parthasarathi Pattnayak, Rabinarayan Satapthy, and Sudhansu Shekhar Patra

Abstract An enormous volume of data is being produced for real-time processing as a result of the increase in IoT devices and data sensors. The development of fog computing has made it possible to process data quickly: fog nodes close to the user process the data to the extent necessary to satisfy the user's requirements. Task scheduling is used to finish tasks in a set amount of time using a finite number of resources. Completing tasks within the allotted time in fog computing is a key difficulty due to the increased amount of data that needs to be processed; therefore, the scheduling of tasks and resources is a crucial concern. Scheduling tasks to the VMs of fog nodes is an NP-hard problem, and a lot of study has been conducted in recent years. This paper proposes a hybrid meta-heuristic algorithm, the grey wolf optimization (GWO) algorithm combined with particle swarm optimization (PSO), termed hybrid PSO and GWO (HPSO_GWO), to schedule tasks to the VMs so as to optimize QoS.

Keywords Task scheduling · Meta-heuristic · PSO · GWO · QoS


1 Introduction

Cloud and fog computing have drawn attention as novel concepts due to the rapid development of Internet technology [1, 2]. Cloud computing, a distributed computing solution that assigns programmable computing resources as needed and uses a pay-per-use business strategy, lets users readily access the network [3]. By providing the required services on time and profitably, service providers earn profits from their hardware resources, whereas users and organizations lease virtual resources from service providers, saving costs without constructing their own internal data centres [4]. The purpose of cloud computing is to provide users with cloud services using virtualized resources [5]. In general, cloud service providers rent their services to users, and the tasks submitted by users are allocated and executed by the services offered by the cloud providers [6]. Service providers are gradually expanding the scope and types of services they provide as a result of the greatly increasing demand for cloud services. However, because of the delay issue, fog computing emerged, and its main challenge is task scheduling, since it is an NP-hard problem [7]. A scheduling approach based on assessment criteria that took into account resource and time utilization was developed by Liu et al. [8]. Wang et al. [9] proposed an efficient and energy-optimized job scheduling method for a variety of contexts. A few drawbacks of the existing grey wolf optimization approach include limited precision and slow convergence speed. As a result, this study offers the HPSO_GWO algorithm, which combines particle swarm optimization and grey wolf optimization. This article is arranged as follows: Sect. 2 gives the previous works done in this area, the system model is shown in Sect. 3, Sect. 4 presents the proposed HPSO_GWO model, Sect. 5 shows the result analysis, and finally Sect. 6 concludes the paper with the scope for future work.

2 Related Work

Researchers have proposed a variety of approaches to improve load-balanced scheduling, and they can be categorized as exact and nature-inspired meta-heuristic algorithms. To improve performance and reduce costs while load balancing, VMs should be distributed among physical servers. On massive scales, meta-heuristic techniques have historically outperformed exact algorithms. We outline the algorithms put forward in this area in the paragraphs that follow. The round robin (RR) method is the first algorithm examined in this work. This method processes each task in turn after placing it in a cyclic queue [10]. When the controller of the data centre receives a customer request, it informs the load balancer to give the request to a different machine. From the list of virtual machines (VMs), the load balancer selects one at random and provides the data centre controller with the machine's ID. With this approach, subsequent requests are handled in a


cyclical order. This algorithm's difficulty with large-scale task processing is one of its challenges. A central load balancing method like the one mentioned in [11] may be another exact algorithm. The min-min algorithm is presented as an exact algorithm in [12]. This algorithm's goal is to shorten the makespan so that early, modest tasks can be distributed to low-power resources to balance the load. Additionally, the max–min method introduced in [13] is quite similar to the min-min algorithm; however, the max–min algorithm selects the maximum rather than the minimum. Using exact algorithms in situations involving large-scale and multiple objectives cannot be considered a good strategy. Because of this, meta-heuristic techniques derived from nature have been suggested for task scheduling in distributed environments. The genetic evolutionary algorithm is one of the most often utilized ones. A genetic algorithm (GA) is employed in Wei and Tian [14] for load balancing based on two routes: first, it boosts revenue for cloud environment providers; second, it lowers user-side costs.
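As an illustration of the min-min heuristic described above, the following is a minimal Java sketch; the representation of tasks by their lengths in MI and of VMs by their MIPS ratings follows the system model of the next section, and the helper names are illustrative.

```java
// Min-min heuristic sketch: repeatedly pick the task whose best (minimum)
// completion time across all VMs is the smallest overall, and assign it to
// that VM; ready[] tracks each VM's accumulated load.
public final class MinMin {

  public static int[] schedule(double[] len, double[] mips) {
    int m = len.length, n = mips.length;
    int[] assign = new int[m];      // assign[i] = VM index chosen for task i
    boolean[] done = new boolean[m];
    double[] ready = new double[n]; // time at which each VM becomes free

    for (int s = 0; s < m; s++) {
      int bestTask = -1, bestVm = -1;
      double best = Double.MAX_VALUE;
      for (int i = 0; i < m; i++) {
        if (done[i]) continue;
        for (int j = 0; j < n; j++) {
          double completion = ready[j] + len[i] / mips[j];
          if (completion < best) {
            best = completion;
            bestTask = i;
            bestVm = j;
          }
        }
      }
      done[bestTask] = true;
      assign[bestTask] = bestVm;
      ready[bestVm] += len[bestTask] / mips[bestVm];
    }
    return assign;
  }
}
```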

3 System Model

When a task is submitted to the fog layer, the task manager adds it to the task queue, from which it is assigned to a virtual machine by the task scheduling technique. Every task is distinct and non-preemptive. The task scheduling paradigm for the fog-assisted cloud computing system is depicted in Fig. 1. The task queue on a VM is used to process tasks serially. The workload has two characteristics, m and len, where m stands for the number of tasks and len is the task's length expressed in million instructions (MI). A VM is characterized by n, the number of VMs; its execution speed in million instructions per second (MIPS); its memory (RAM); and its bandwidth.

Fig. 1 The fog assisted cloud computing task scheduling model


The basic metrics for assessing the efficiency of task scheduling in fog-assisted cloud computing are makespan, profit, completion time, cost, and waiting time. Let Task = {T_1, T_2, ..., T_m} denote the task queue that users have submitted to the fog layer, with the number of tasks denoted by m. Let T_length = {len_1, len_2, ..., len_m}, where len_i stands for the ith task's length. VM = {VM_1, VM_2, ..., VM_n}, where VM_j represents the jth VM and n is the number of VMs. ESC = {ESC_ij}_{m×n}, where ESC_ij = 1 if the ith task is carried out on VM_j, else it is 0. ETC = {ETC_ij}_{m×n} represents the expected completion time of the ith task on VM_j, which is calculated using the following formula:

\[ ETC_{ij} = \frac{len_i}{MIPS_j} \tag{1} \]

MIPS_j represents the execution speed of VM_j.

• Makespan
Makespan is a crucial statistic for determining how well task scheduling in the fog works. The tasks' completion time, or makespan, represents the entire running time across all VMs and is calculated as:

\[ Makespan = \max_j \left( \sum_{i=1}^{m} ETC_{ij} \times ESC_{ij} \right) \tag{2} \]

• Cost
The cost of the VMs can be computed through the following formula:

\[ Cost = \sum_{j=1}^{n} \left( cost_j \times \sum_{i=1}^{m} ETC_{ij} \times ESC_{ij} \right) \tag{3} \]

Costj represents the cost of the jth VM per unit time. The resource cost is related to execution speed in MIPS, memory RAM, and the bandwidth in heterogeneous environment. • Load / Load =

φ×

En j=1

Load j × VL j

n × Makespan

(4)

Here n represents the number of VMs, loadj denotes the degree of impact of execution speed in MIPS, memory RAM, and the bandwidth on the VM j. φ is the degree of imbalance of the fog system and is represented as:

Task Scheduling in Fog Assisted Cloud Environment Using Hybrid …

/

En j=1

φ=

(VL j − VL j )

2

n

load j = ξ × MIPS + δ × RAM + η × bandwidth VL j =

m E

(ETCi j × ESCi j )

i=1

En

VL j =

j=1

227

VL j

n

(5) (6)

(7)

(8)

Here, VLj is VM j’s running time, VL j the average running time of the VM, loadj is the jth VM’s load in terms of execution speed in MIPS, memory RAM, and the bandwidth, ζ, δ, η are three weight values, respectively. • Energy Consumption The fog system’s VMs are either active or idle. A VM consumes 0.6 times the amount of energy it consumes when it is idle. The energy consumption in Joules by the VMj is denoted as { } joules joules δj if VM j is in active state| α j if VM j is in idle state MI MI and δ j = 0.6× δ j where δ j = 10−8 × (P j )2 joules MI of a VM VMj in terms of joules is given by

joules . The total energy consumption MI

) ) ( ( E VM j = [ET j × δi + MS − ET j × P j joules The cloud system’s total energy consumption is given by Total Energy =

n E ) ( E VM j j=1

where n is the total no. of VMs. The objective function formula is derived as follows using the performance indicators mentioned above: Fitness = Total Energy × Cost × Load

(9)
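To make the objective function concrete, the following sketch evaluates Eqs. (1)–(9) for a given task-to-VM assignment. It is a minimal illustration, assuming NumPy arrays for the task and VM attributes; the weight values ξ, δ, η and the per-unit cost inputs are placeholders, since the paper does not fix them here.

```python
import numpy as np

def fitness(lengths, mips, ram, bw, cost_per_time, assign,
            xi=0.3, delta=0.3, eta=0.4):
    """Evaluate Eqs. (1)-(9) for one schedule (a task-to-VM assignment).

    lengths: (m,) task lengths in MI; mips, ram, bw, cost_per_time: (n,) VM
    attributes; assign: (m,) index of the VM each task runs on. The weights
    xi, delta, eta are illustrative assumptions, not values from the paper.
    """
    m, n = len(lengths), len(mips)
    etc = lengths[:, None] / mips[None, :]          # Eq. (1): ETC_ij = len_i / MIPS_j
    esc = np.zeros((m, n))
    esc[np.arange(m), assign] = 1                   # ESC_ij = 1 where task i runs on VM j
    vl = (etc * esc).sum(axis=0)                    # Eq. (7): running time VL_j of each VM
    makespan = vl.max()                             # Eq. (2)
    cost = (cost_per_time * vl).sum()               # Eq. (3)
    phi = np.sqrt(((vl - vl.mean()) ** 2).mean())   # Eq. (5): degree of imbalance
    load_j = xi * mips + delta * ram + eta * bw     # Eq. (6)
    load = np.sqrt(phi * (load_j * vl).sum() / (n * makespan))  # Eq. (4)
    delta_j = 1e-8 * mips ** 2                      # active energy per MI
    alpha_j = 0.6 * delta_j                         # idle VMs draw 60% of active energy
    energy = (vl * delta_j + (makespan - vl) * alpha_j).sum()   # total energy
    return energy * cost * load                     # Eq. (9): fitness to be minimized
```

Because Eq. (9) multiplies energy, cost, and load, minimizing this fitness pushes the scheduler to reduce all three criteria at once.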


4 The Proposed HPSO_GWO Approach
The authors in [15] first published the PSO algorithm, whose core reasoning was inspired by simulations of animal social behaviour, such as bird flocking and fish schooling. Before settling in a location where they may obtain food, the birds are either dispersed or move in groups while searching. Usually one bird can smell food while the others hop from place to place in search of it; that bird knows where the food can be found and carries the corresponding message. Because the birds communicate continuously while moving from one location to another, the swarm eventually converges on the location. This method, which uses animal behaviour to solve global optimization problems, refers to each member of the swarm as a particle. In the PSO technique, two equations are used to update each particle's position inside the overall search space:

$$v_i^{k+1} = v_i^k + c_1 r_1 \left( p_i^k - x_i^k \right) + c_2 r_2 \left( g_{best} - x_i^k \right)$$
$$x_i^{k+1} = x_i^k + v_i^{k+1}$$

The goal of the GWO algorithm is to imitate the grey wolf leadership hierarchy and predatory behaviour, utilizing the wolves' abilities of search, encirclement, hunting, and other predation-related tasks. Assuming there are N wolves and the search region has dimension d, the location of the ith wolf is denoted X_i = (X_{i1}, X_{i2}, X_{i3}, ..., X_{id}). The fittest solution is taken as the alpha (α) wolf; the 2nd and 3rd best solutions are the beta (β) and delta (δ) wolves, respectively. The remaining candidate solutions are presumed to be omega (ω) wolves. In the algorithm, the location of the prey corresponds to the position of the alpha (α) wolf. Grey wolves' encircling behaviour can be modelled as

$$D = \left| C \times X_p(t) - X(t) \right| \tag{10}$$

$$X(t+1) = X_p(t) - A \times D \tag{11}$$

where t denotes the current iteration, X_p(t) is the prey's position vector, X(t) is the grey wolf's position vector, and C is a control coefficient given by

$$C = 2 r_1 \tag{12}$$


where the random variable r_1 falls in [0, 1]. The convergence factor A is computed as

$$A = 2 a r_2 - a \tag{13}$$

$$a = 2 \left( 1 - \frac{t}{T_{max}} \right) \tag{14}$$

where r_2 is a random number in [0, 1]. The control parameter a decreases progressively over the iterations from 2 to 0, with a_max = 2 and a_min = 0. The leader wolf guides the other wolves to encircle the prey once the grey wolves locate it, and then to kill it. The position of the prey can be estimated from the positions of the three wolves closest to it. The precise mathematical model is:

$$D_\alpha = \left| C_1 \times X_\alpha(t) - X(t) \right| \tag{15}$$

$$D_\beta = \left| C_2 \times X_\beta(t) - X(t) \right| \tag{16}$$

$$D_\delta = \left| C_3 \times X_\delta(t) - X(t) \right| \tag{17}$$

$$X_1 = X_\alpha - A_1 \times D_\alpha \tag{18}$$

$$X_2 = X_\beta - A_2 \times D_\beta \tag{19}$$

$$X_3 = X_\delta - A_3 \times D_\delta \tag{20}$$

$$X(t+1) = \frac{X_1 + X_2 + X_3}{3} \tag{21}$$
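The GWO update of Eqs. (12)–(21) can be condensed as below. This is a sketch under the assumption of a real-valued (N, d) population, with `rng` a NumPy random generator; it omits the fitness evaluation that selects the alpha, beta, and delta wolves.

```python
import numpy as np

def gwo_step(wolves, alpha, beta, delta, t, t_max, rng):
    """One GWO position update over an (N, d) population, per Eqs. (12)-(21)."""
    a = 2.0 * (1.0 - t / t_max)                    # Eq. (14): a decays from 2 to 0
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        pulls = []
        for leader in (alpha, beta, delta):        # the three fittest wolves
            A = 2.0 * a * rng.random(x.shape) - a  # Eq. (13): convergence factor
            C = 2.0 * rng.random(x.shape)          # Eq. (12): control coefficient
            D = np.abs(C * leader - x)             # Eqs. (15)-(17): distance to leader
            pulls.append(leader - A * D)           # Eqs. (18)-(20): candidate positions
        new[i] = sum(pulls) / 3.0                  # Eq. (21): average of the three pulls
    return new

# usage sketch: wolves = rng.random((30, 7)); alpha, beta, delta are the three
# fittest rows under the scheduling fitness of Eq. (9).
```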

Singh and Singh's study [16] led to the recommendation of the hybrid algorithm. The fundamental idea behind the hybrid grey wolf particle swarm optimization algorithm is to combine the exploitation ability of PSO with the exploration ability of GWO in order to capitalize on the strengths of both optimizers. In the hybrid algorithm, the positions of the first three agents in the search space are updated rather than using the conventional equations, while the grey wolf's exploitation and exploration are controlled by the inertia constant. GWO is a population-based estimation method: its population starts out randomly and evolves over the iterations, and it maintains a balance between exploitation and exploration. Exploration is a search tactic used to investigate promising regions of the search space, while exploitation uses the most promising points found so far to locate the best solution within a search area. Every individual is viewed as a potential solution to the problem at hand.


Fig. 2 Initial population (vector representation)

The fitness function is then evaluated for all candidate solutions, from which the alpha, beta, and delta wolves can be differentiated. The state of every wolf is updated using the following conditions, represented mathematically as:

$$D_\alpha = \left| C_1 \times X_\alpha(t) - W \times X(t) \right| \tag{22}$$

$$D_\beta = \left| C_2 \times X_\beta(t) - W \times X(t) \right| \tag{23}$$

$$D_\delta = \left| C_3 \times X_\delta(t) - W \times X(t) \right| \tag{24}$$

Combining the PSO and GWO variants yields the velocity and position updates:

$$v_i^{k+1} = W \times \left( v_i^k + c_1 r_1 \left( x_1 - x_i^k \right) + c_2 r_2 \left( x_2 - x_i^k \right) + c_3 r_3 \left( x_3 - x_i^k \right) \right) \tag{25}$$

$$x_i^{k+1} = x_i^k + v_i^{k+1} \tag{26}$$
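A minimal sketch of the hybrid update of Eqs. (25)–(26) follows. W and the acceleration coefficients c1–c3 are illustrative values, as the paper does not state its settings in this excerpt, and x1–x3 are the leader-derived points from Eqs. (18)–(20).

```python
import numpy as np

def hpso_gwo_step(pos, vel, x1, x2, x3, rng, w=0.5, c1=0.5, c2=0.5, c3=0.5):
    """Hybrid HPSO_GWO update (Eqs. (25)-(26)) for an (N, d) swarm.

    x1, x2, x3 are the positions derived from the alpha, beta, and delta
    wolves; w, c1, c2, c3 are illustrative parameters, not the paper's.
    """
    r1, r2, r3 = (rng.random(pos.shape) for _ in range(3))
    vel = w * (vel + c1 * r1 * (x1 - pos)        # pull toward the alpha-based point
                   + c2 * r2 * (x2 - pos)        # pull toward the beta-based point
                   + c3 * r3 * (x3 - pos))       # pull toward the delta-based point
    return pos + vel, vel                        # Eq. (26): move the particles
```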

This section outlined the proposed algorithm, which consists of key components: population, evaluation, and normalization with scaling.
Population: Consider a situation with m = 7 tasks and n = 3 VMs, where the tasks need to be scheduled to the VMs; a possible solution can be encoded as shown in Fig. 2, in which T_2 and T_4 are both assigned to VM1.
Evaluation: The fitness value of the initial population is assessed using the fitness function (9). Every virtual machine in the cloud centre is either active or idle; the overall energy consumption of a virtual machine is the sum of its active and idle energy consumption, and when VMs are idle they expend around 60% as much energy as when they are fully loaded.
Normalization with Scaling Phase: The newly produced vector element f(t + 1) contains continuous values, which must be transformed to discrete values (VM numbers):

$$\text{normalized}_i(t+1) = \frac{f_i(t+1) - \min}{\max - \min} \left( \text{new\_max} - \text{new\_min} \right) + \text{new\_min} \tag{27}$$

Here new_max and new_min are the maximum and minimum values of the target range, respectively. Figure 3 shows data normalized and scaled from continuous to discrete. A short sketch of this mapping follows.
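The sketch below applies Eq. (27) and rounds the result to obtain discrete VM numbers; the target range [1, n] is an assumption that matches the example of Fig. 3.

```python
import numpy as np

def to_vm_indices(f, n_vms):
    """Map a continuous position vector to discrete VM numbers via Eq. (27)."""
    lo, hi = f.min(), f.max()
    new_min, new_max = 1, n_vms                   # assumed target range: VM numbers 1..n
    scaled = (f - lo) / (hi - lo) * (new_max - new_min) + new_min
    return np.rint(scaled).astype(int)            # round off to the nearest VM index

# Reproduces Fig. 3: to_vm_indices(np.array([0.4, 1.07, 0, 0.9, 0.89, 0.1, 1.23, 0.02]), 3)
# gives [2, 3, 1, 2, 2, 1, 3, 1].
```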


Vector element | 0.4 | 1.07 | 0 | 0.9 | 0.89 | 0.1 | 1.23 | 0.02
Normalized value after round off | 2 | 3 | 1 | 2 | 2 | 1 | 3 | 1

Fig. 3 Normalization along with scaling from continuous to discrete

5 Results Analysis
Our proposed HPSO_GWO is implemented in Matlab R2014a. We consider two test cases (I and II) to study the behaviour of the suggested algorithm.
Test Case I: Fixed number of VMs while the number of tasks varies.
Test Case II: Fixed number of tasks while the number of VMs varies.

Two different ETC matrices have been used for the performance evaluation of the system, one for each test case. Figure 4 depicts the impact of the number of tasks arriving at the fog system on the makespan for a fixed number of VMs = 60. It is evident from the figure that the makespan increases as the number of tasks increases. Figure 5 shows the impact of the number of tasks on response time; with an increase in the number of tasks, the response time also increases for a fixed 60 VMs. Figures 6, 7, and 8 show the resource utilization, energy consumption, and fitness value versus the number of tasks for a fixed 60 VMs. In test case II, Fig. 9 illustrates the impact of the number of VMs on the makespan with a fixed 750 tasks. We observe that the makespan decreases as the number

Fig. 4 Makespan with varying no. of tasks and fixed VMs = 60


Fig. 5 Response time with varying no. of tasks and fixed VMs = 60

Fig. 6 Resource utilization (%) with varying no. of tasks and fixed VMs = 60

Fig. 7 Energy consumption (KJoules) with varying no. of tasks and fixed VMs = 60


Fig. 8 Fitness value with varying no. of tasks and fixed VMs = 60

of VMs increases. Figure 10 shows the effect of the number of VMs on response time; one may see that response time also decreases as the number of VMs increases for a fixed 750 tasks. Figure 11 depicts that resource utilization increases and then becomes constant with the increase in the number of VMs. Figures 12 and 13 illustrate that the energy consumption and the fitness value increase as the number of VMs increases for a fixed number of tasks. From Figs. 4, 5, 6, 7, 8, 9, 10, 11, 12, and 13, we observe that the HPSO_GWO algorithm outperforms, or is at par with, all the other algorithms considered in the study, namely PSO, BAT, and GWO.
Fig. 9 Makespan with varying no. of VMs and fixed tasks = 750

Fig. 10 Response time with varying number of VMs and fixed tasks = 750

Fig. 11 Resource utilization with varying number of VMs and fixed tasks = 750

Fig. 12 Energy consumption (KJoules) with varying number of VMs and fixed number of tasks = 750



Fig. 13 Fitness value with varying number of VM and fixed 750 tasks

6 Conclusions and Future Work
The paper discussed the task scheduling problem and the optimization of energy consumption. A hybrid GWO with PSO meta-heuristic task scheduling algorithm has been implemented to optimize energy consumption and improve QoS in conjunction with the SLA. In terms of makespan, energy consumption, cost, and other metrics, the suggested method outperforms competing algorithms. Synthetic datasets were used for the simulation process, which is a limitation of this paper; benchmark datasets may be employed in future studies to improve the simulation outcomes.

References 1. Attiya I, Abd Elaziz M, Abualigah L, Nguyen TN, Abd El-Latif AA (2022) An improved hybrid swarm intelligence for scheduling IoT application tasks in the cloud. IEEE Trans Ind Inform 2. Rizvi N, Ramesh D, Rao PS, Mondal K (2022) Intelligent salp swarm scheduler with fitness based quasi-reflection method for scientific workflows in hybrid cloud-fog environment. IEEE Trans Autom Sci Eng 3. Ghanavati S, Abawajy J, Izadi D (2020) Automata-based dynamic fault tolerant task scheduling approach in fog computing. IEEE Trans Emerg Top Comput 4. Adhikari M, Srirama SN, Amgoth T (2019) Application offloading strategy for hierarchical fog environment through swarm optimization. IEEE Internet Things J 7(5):4317–4328 5. Kashani MH, Ahmadzadeh A, Mahdipour E (2020) Load balancing mechanisms in fog computing: a systematic review. arXiv preprint arXiv:2011.14706 6. Khurma RA, Aljarah I, Castillo PA (2021) Harris Hawks optimization: a formal analysis of its variants and applications. In: IJCCI, pp 88–95


7. Swain CK (2021) Efficient task scheduling in cloud environment. Doctoral dissertation 8. Liu L, Qi D, Zhou N, Wu Y (2018) A task scheduling algorithm based on classification mining in fog computing environment. Wirel Commun Mob Comput Spec Issue Recent Adv CloudAware Mob Fog Comput 9. Wang S, Zhao T, Pang S (2020) Task scheduling algorithm based on improved firework algorithm in fog computing. IEEE Access 8:32385–32394 10. Ahmed T, Singh Y (2012) Analytic study of load balancing techniques using tool cloud analyst. Int J Eng Res Appl 2:1027–1030 11. Soni G, Kalra M (2014) A novel approach for load balancing in cloud data center. In: Advance computing conference (IACC) 12. Patel G, Mehta R, Bhoi U (2015) Enhanced load balanced min-min algorithm for static meta task scheduling in cloud computing. Proc Comput Sci 57:545–553 13. Bhoi U, Ramanuj PN (2013) Enhanced max–min task scheduling algorithm in cloud computing. Int J Appl Innov Eng Manag 2:259–264 14. Wei Y, Tian L (2012) Research on cloud design resources scheduling based on genetic algorithm. In: 2012 international conference on systems and informatics (ICSAI2012), pp 1–15 15. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN’95, vol 4. IEEE, pp 1942–1948 16. Singh N, Singh SB (2017) Hybrid algorithm of particle swarm optimization and grey wolf optimizer for improving convergence performance. J Appl Math 2017:1–15

Trust Model for Cloud Using Weighted KNN Classification for Better User Access Control

Manikandan Rajagopal, S. Ramkumar, R. Gobinath, and J. Thimmiraja

Abstract Cloud computing is, for the most part, a service-based technology that provides Internet-based technological services. Cloud computing has seen explosive growth since its debut and is now integrated into a wide variety of online services. Its primary benefit is allowing thin clients to access resources and services. Even while this may appear favorable, there are many potential weak points for various types of assaults and cyber threats. Access control is one of the several protection layers available as part of cloud security solutions. In order to improve cloud security, this research introduces a unique access control mechanism. For granting users access to various resources, the suggested approach applies the trust concept. The KNN model was recently proposed for the purpose of predicting trust; however, the current approach to classification is sensitive and unstable, particularly when an unbalanced data scenario occurs. Furthermore, it has been discovered that using the exponent distance as a weighting scheme improves classification performance and lowers variance. This research presents the prediction of users' trust levels using weighted k-nearest neighbors. According to the findings, the suggested approach is more effective in terms of throughput, cost, and delay.

Keywords Access control · Nearest neighbor · Cloud environment and trust value

M. Rajagopal (B)
Lean Operations and Systems, School of Business and Management, CHRIST (Deemed to be University), Bangalore, India
e-mail: [email protected]

S. Ramkumar · R. Gobinath
Department of Computer Science, School of Sciences, CHRIST (Deemed to be University), Bangalore, India
e-mail: [email protected]
R. Gobinath
e-mail: [email protected]

J. Thimmiraja
Department of Information Technology, Dr. Mahalingam College of Engineering and Technology, Pollachi, Tamil Nadu, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_17



1 Introduction
Cloud computing is an evolution of distributed computing that provides services to stakeholders in several areas. The goal of these systems is to provide consumers with low-cost, scalable, online-based services [1]. Users bear less of the cost associated with setting up sophisticated computing equipment, and consumers are not required to purchase costly infrastructure for advanced processing [2]. The three main categories of services provided via the cloud are software as a service, platform as a service, and infrastructure as a service, each implemented using various models [3]. The four deployment modes for these services are private, public, community, and hybrid. The ownership of data and processes is one of the primary concerns for enterprises switching to the cloud for their business operations [4]: if data and procedures are employed in a cloud environment, control over those elements may be lost, and they become weak targets for a variety of threats and assaults. It is a well-known truth that, before beginning the services, confidence in the users is a need for both the business and the cloud service provider (CSP) [5]; they need to develop a sense of mutual confidence before the service starts. Users build their faith in a CSP on its security and quality-of-service levels, while the CSP in turn trusts users based on the user profile as well as a number of other aspects [6, 7]. User access control provides a protective layer against data leaks and breaches. Recently, the KNN model was proposed for trust prediction; however, the current approach to classification is sensitive and unstable, particularly when an unbalanced data scenario occurs. Furthermore, it has been discovered that using the exponent distance as a weighting scheme improves classification performance and lowers variance. Based on these observations, this study suggests a technique for predicting a user's trust rating using weighted k-nearest neighbors.

2 Short Literature Study
Dhote and Mohan provided an approach for assessing trust that used QoS criteria; the trustworthiness of service providers in cloud systems is calculated using a fuzzy notion, and the system's performance review showed improved results [8]. The experiment takes into account variables including return time, dependability, and availability. Deng et al. presented TRUSS, a dependable framework for service provider selection. The technique used an integrated strategy combining objective and subjective trust assessment in order to provide a more accurate evaluation of trust-based service quality [9]. The first method relies on the observation of QoS, whereas the second uses ratings obtained from customer feedback. The trials were carried out using previously synthesized data, and the findings indicate that the suggested approach is superior to the other state-of-the-art methods considered.


Yang et al. [10] concentrate on the safety of the data from the point of view of the various stakeholders, that is, on the basis of who owns the data and who provides it. According to the survey's research, user control rules still need to be examined more thoroughly, and there is a significant desire for a system that is even more effective at securing data [10]. The need for better data access regulations should be addressed first, followed by concerns about QoS and maintaining user and provider confidence.

3 Proposed Methodology
The suggested technique is fully explained in the section that follows. For the purpose of predicting trust for a certain user, the technique employs the weighted k-nearest neighbor approach. The suggested model framework may be seen in Fig. 1. From the survey, we found that no other studies used a trust-dependent model to drive user access control. The framework is made up of a model for trust-dependent access control that comprises several modules with machine learning (ML) capabilities. Before receiving permission, a request from any user to access a cloud-based service must pass through a number of modules at different levels; this protects the safety of all the resources and services. The resource catalog contains different items with various reputations and trust ratings. The framework determines whether a user is trustworthy before allowing access when they ask for a certain resource. It also guarantees that the CSP and the user continue to have mutual confidence, since access will only be allowed to resources that are deemed trustworthy. Below, we offer a thorough explanation of each component.

3.1 Identity Management
This module is responsible for storing the user's identification and authorization data, such as credentials, and acts as the interface between the user and the cloud system [11]. It handles activities such as the registration of new users, the modification of existing user data, and the deletion of current users. Every time a login request is received, the user's identity is verified; if a match is found, access to the resource is granted, otherwise authorization is refused. When the service request has been authorized, this module sends it to the URM so that it may start the level 2 authentication.


Fig. 1 Overall architecture of the proposed work

3.2 Resource Management
It is the responsibility of the RM to ensure that all records relating to the cloud service provider's (CSP) offerings are kept current. After a user logs into the system, the requested service is found in the database using the catalogue, and access is allowed according to the user's profile.

3.3 Request Management
The POP module grants this module the user's trust value and passes it along as a request space vector to the policy enforcement and decision-making module.


3.4 Policy Information Point
When requests are received from the URM, it is the PIP module's duty to provide the users' trust values.

3.5 Decision and Policy Enforcement
The DPE module plays a crucial role in determining whether a requested resource will be allocated or not. As soon as the request is received by the DPE, the URM retrieves the matching policy from the database. Following a comparison of all user information with the applicable rules and the threshold value, the system decides whether to provide or deny access to the requested information. All user-related data, whether legitimate or incorrect, is gathered by the log file module. As a result, the company and the CSP continue to have a mutually beneficial relationship.

3.6 Log Access Management (LAM)
This module is responsible for the users' safe access to cloud services. Logs are used to record user actions and the accompanying behavior. After a request's execution is complete, a response is delivered to the trust module. Because the module keeps a record of user trusts, it will be possible to update it in the future with fresh trust values.

3.7 Resource Ranking
The RRM module covers all of the resources that are accessible in the cloud. When the RRM receives the cloud's trust reputation, all of these resources are graded according to their performance and capacity. Numerous trust-based access control strategies use ML approaches. The RRM makes the judgment as to which superior resource should be assigned to a user.

3.8 Trust Management Module (TMM)
The TMM is in charge of valuing the user's trust in real-time, based on evidence. These facts, gleaned from user behavior, are utilized to gauge how much confidence to place in a party. This module recalculates the value of trust by


using the log file that records all of the interactions that occur between a user and a cloud-based service. Additionally, it determines a resource's reputation and reports it to the RRM for ranking on the cloud, providing the trust value.

4 Process for Authorization
It is necessary to send a request to the CSP whenever there is a requirement to access any of the cloud services. The cloud in turn provides the services without the CSP engaging in any negotiations; the negotiations are covered in the service level agreement (SLA). The procedure of authentication is detailed step by step below, and a compact sketch of the flow follows the list.
• Users may only request a specific cloud resource if their IDAM authentication is successful.
• The CSMR maintains an accessible catalog of services from which the users are given the choice to select the ones they need. The module then sends the request to the IDAM.
• After gathering the user's requirements from the CSMR, the IDAM presents a request to the URM through the requested data vectors.
• The URM communicates the user's identity to the POP module in order to determine the trust value, which is then sent on to the TMM through the PIP.
• The URM adds the particular user's trust value and sends it to the DPE.
• The DPE then queries the URM for information. In order to access the database of policies, the DPE compares the vector that was received with the required ones.
• Finally, the TMM provides protected access so that the user can reach the service in a secure manner. It also contains the later-forwarded log files. A user's log files and behavior are continually processed by the TMM.
• The suggested architecture proved successful in enabling secured accesses that are predicated on a degree of trust between users.
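To make the request path concrete, the sketch below strings the modules together as plain Python calls. Every interface name and the 0.5 threshold are hypothetical stand-ins (the results in Fig. 7 flag users below a 0.5 trust rating), not the authors' implementation.

```python
TRUST_THRESHOLD = 0.5   # assumed cut-off; users below 0.5 are treated as malicious

def authorize(user, resource, idam, csmr, urm, pip, tmm, dpe):
    """Illustrative request path through the modules described above."""
    if not idam.authenticate(user):                  # level 1: identity check (IDAM)
        return False
    request = csmr.build_request(user, resource)     # catalog lookup (CSMR)
    vector = urm.to_request_vector(request)          # level 2: request management (URM)
    trust = tmm.trust_value(pip.user_facts(user))    # trust from behavior logs (PIP/TMM)
    if trust < TRUST_THRESHOLD:
        return False
    return dpe.enforce(vector, trust)                # compare against stored policies (DPE)
```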

4.1 Trust Management Module (TMM)
The TMM is in charge of valuing the user's trust in real-time, based on evidence. These facts, gleaned from user behavior, are utilized to gauge how much confidence to place in a party. This module recalculates the value of trust by using the log file that records all of the interactions between a user and a cloud-based service. In addition, it determines the standing that a certain resource has inside the cloud and reports the trust value to the RRM so that the trust may be ranked in order of importance. Figure 2 shows a picture of the model. It should be noted that the TMM offers many ways


Fig. 2 Flow chart for KNN

to compute trust, and the part that follows explains how this may be done depending on the different functions.
• User: Before gaining access to the materials, each user must register. All of the user's information is included in the database kept for identification. When a specific user requests an item, the services are only made available if the procedure is successful, and many components are used to evaluate the user's authenticity and authorization.
• Behavior logs: All of a user's actions in a particular cloud are recorded in the logs that the LAM module creates. Through the application of the anticipated trust based on ML techniques, these logs faithfully capture the behavior of all users in reality.
• Monitoring of SLA: This module records, on a real-time basis, how cloud services are allocated. Additionally, this module keeps track of resource performance.
• Feature extraction module: All of the logs are fed into this module. The module extracts from the log files all the relevant parameters and their corresponding values. These variables are then passed to the predictive module and on to the machine learning (ML)-based training algorithms. The Apache server log files are taken into consideration in the suggested solution. Weighted factors are then combined with each parameter, and the contribution factor determines the corresponding strength. Here, some of the characteristics are found to be more important than others. Indicators P1, P2, ..., Pn are designed to detect attacks and other unauthorized access requests. In order


to calculate the trust levels, the appropriate output is to be utilized once the user’s behavior has been examined.

4.2 Weighted K-Nearest Neighbor Trust Value Prediction
Data and parameters from the FE module are fed into this module. The KNN model is one of the most versatile methods in machine learning; it is one of the easiest methods to implement and gives high accuracy values for different prediction problems. Following data feeding, training is conducted on the associated model. Specifically, in the case of unbalanced data, the traditional KNN model exhibits classification sensitivity to the choice of neighbors, so weighting techniques were devised to improve categorization, performance accuracy, and variance. The suggested technique (as in Fig. 3) selects candidates for the weighting scheme and exploits the distance computation for classification purposes; in the suggested method, the influence of each of the K neighbors on the decision is governed by an enhanced KNN. A training set $T = \{x_n \in \mathbb{R}^d\}_{n=1}^{N}$ is provided with M classes, where N is the number of samples in the set T and d is the feature dimension. The procedures shown below are used to obtain the class label for a query point x [9–15]. The K nearest neighbors of x in the supplied set T are first found; let the set of K neighbors closest to x, ordered by each object's Euclidean distance from x, be denoted $T' = \{(x_i^{NN}, c_i^{NN})\}_{i=1}^{K}$, where $x_1^{NN}, x_2^{NN}, \ldots, x_K^{NN}$ are the neighbors and $c_i^{NN}$ their class labels, as used in Eq. (1).

Fig. 3 Working procedure of k-nearest neighbor


Distinct weights are estimated for and distributed among the K nearest neighbors, with nearer neighbors weighted more heavily than distant ones:

$$\omega_j = \begin{cases} \exp\!\left( \dfrac{d_k - d_j}{d_k - d_1} \cdot \dfrac{d_k + d_1}{d_k + d_j} \right), & d_k \neq d_1 \\[2mm] 1, & d_k = d_1 \end{cases} \tag{1}$$

The point x is assigned to the class c that wins the weighted majority vote of the neighbors:

$$c = \arg\max_{c} \sum_{(x_j^{NN},\, c_j^{NN}) \in T'} \omega_j \cdot \delta\!\left( c = c_j^{NN} \right) \tag{2}$$

4.3 K-Nearest Neighbor Weighted
The steps, implemented in the sketch that follows, are:
Step 1. Measure the distances between the query and its neighbors.
Step 2. Sort the distances into an increasing sequence.
Step 3. Search for the query X's k-nearest neighbors.
Step 4. Compute the weight of the K-nearest neighbors.
Step 5. For i = 1 to K, do Steps 1–5.
Step 6. End for.
Step 7. Assign the class label by the weighted majority vote.
Step 8. End.
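A self-contained sketch of Steps 1–8 follows, using the weight of Eq. (1) as reconstructed above and the weighted vote of Eq. (2); the weight formula is an interpretation of a garbled original, so treat it as illustrative.

```python
import numpy as np

def wknn_predict(X_train, y_train, x, k=5):
    """Weighted k-NN label for query x, following Eqs. (1)-(2) and Steps 1-8.

    X_train: (N, d) array; y_train: (N,) array of class labels; x: (d,) query.
    """
    d = np.linalg.norm(X_train - x, axis=1)          # Step 1: Euclidean distances
    idx = np.argsort(d)[:k]                          # Steps 2-3: k nearest neighbors
    dk, d1 = d[idx[-1]], d[idx[0]]                   # farthest and nearest of the k
    if dk == d1:
        w = np.ones(k)                               # Eq. (1), degenerate case
    else:
        w = np.exp((dk - d[idx]) / (dk - d1) * (dk + d1) / (dk + d[idx]))
    votes = {}                                       # Eq. (2): weighted majority vote
    for weight, label in zip(w, y_train[idx]):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)                 # Step 7: winning class
```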

5 Results and Discussion
Because of the complexity of performing real-time tests and the broad scale of the cloud, the experiments were carried out with a Java-based simulator and CloudSim. To evaluate the performance of the suggested WKNN model, its performance parameters of throughput, delay, and accuracy are compared with those of benchmark techniques such as FL and KNN (shown in Table 1). The traditional performance results of FL and KNN and those of the suggested approach, with throughput taken into consideration, are shown in Fig. 4.

Table 1 Comparison of performance results
Metrics | FL | KNN | WKNN
Throughput (kbps) | 450 | 572 | 590
Delay (s) | 700 | 650 | 530
Accuracy (%) | 74 | 82 | 90


Fig. 4 Throughput results versus classification methods


According to the figure, the suggested WKNN model has a throughput of 590 kbps, which is higher than that of the FL and KNN techniques. The graph in Fig. 5 compares the FL and KNN algorithms with the suggested technique using latency as the measure; in comparison to FL (700 s) and KNN (650 s), the suggested model has a reduced latency of 530 s. The graph in Fig. 6 analyses the FL and KNN algorithms against the suggested technique using accuracy as the measure; in comparison to FL (68%) and KNN (81%), the suggested model has a high accuracy of 87%. Figure 7 illustrates the suggested WKNN technique's trust values for communication; the communication trust value and the iteration are plotted on the Y and X axes, respectively.

Fig. 5 Comparing delay results with classification techniques

Fig. 6 Accuracy results versus classification methods

Fig. 7 User trust value on communication results versus iterations

The graph shows that the suggested technique detects fraudulent users with a trust rating of less than 0.5.

6 Conclusion and Future Work
This study presents an efficient trust assessment mechanism that uses the weighted k-nearest neighbor technique to make more precise predictions of user trust than ordinary KNN. The user may be trusted prior to using the resource, since the weighted method demonstrates increased classification performance, greater accuracy, and reduced volatility. The contributions of the suggested system are: (1) it enables a strategy for evaluating the effectiveness of the cloud with trust; (2) through the use of performance considerations in the selection phase, it simplifies the IaaS selection process and provides better reliability of service while utilizing clouds. Building on the suggested technique, future study will attempt to provide various strategies for broker selection and cloud utilization to further automate the deployment.

References 1. Chiregi M, Navimipour NJ (2017) A comprehensive study of the trust evaluation mechanisms in the cloud computing. J Serv Sci Res 9(1):1–30 2. Arora S, Dalal S (2019) Trust evaluation factors in cloud computing with open stack. J Comput Theor Nanosci 16(12):5073–5077


3. Rathi P, Ahuja H, Pandey K (2017) Rule based trust evaluation using fuzzy logic in cloud computing. In: 2017 6th international conference on reliability, infocom technologies and optimization (trends and future directions) (ICRITO), September. IEEE, pp 510–514 4. Sule MJ, Li M, Taylor G (2016) Trust modeling in cloud computing. In: 2016 IEEE symposium on service-oriented system engineering (SOSE), March. IEEE, pp 60–65 5. Abdullayeva F (2016) CPTrustworthiness: new robust model for trust evaluation in cloud computing. In: 2016 IEEE 10th international conference on application of information and communication technologies (AICT), October. IEEE, pp 1–6 6. Jain S (2016) A trust model in cloud computing based on fuzzy logic. In: 2016 IEEE international conference on recent trends in electronics, information & communication technology (RTEICT), May. IEEE, pp 47–52 7. Tang M, Dai X, Liu J, Chen J (2017) Towards a trust evaluation middleware for cloud service selection. Futur Gener Comput Syst 74:302–312 8. Dhote BL, Mohan GK (2021) Trust and security to shared data in cloud computing: open issues. In: International conference on advanced computing networking and informatics. Springer, Singapore, pp 117–126 9. Deng Z, Zhu X, Cheng D, Zong M, Zhang S (2022) Efficient kNN classification algorithm for big data. Neurocomputing 195:143–148 10. Yang H, Shah N, Baisheng N, Sulaiman K, Jianhui Z (2020) Developing an efficient deep learning-based trusted model for pervasive computing using an LSTM-based classification model. Complexity 2020:1–6 11. Maheswari K, Packia Amutha Priya P, Ramkumar S, Arun M (2020) Missing data handling by mean ımputation method and statistical analysis of classification algorithm. In: Haldorai A, Ramu A, Mohanram S, Onn C (eds) EAI international conference on big data innovation for sustainable cognitive computing. Springer, EAISICC, pp 137–149 12. Ghulam A, Amjad M, Maple C, Gregory E, Jaime L (2022) Safety, security and privacy in machine learning based Internet of Things. J Sens Actuator Netw 11(38):1–15 13. Popescu M, Keller JM (2016) Random projections fuzzy k-nearest neighbor (RPFKNN) for big data classification. In: 2022 IEEE international conference on fuzzy systems (FUZZ-IEEE), July. IEEE, pp 1813–1817 14. Maheswari K, Ramkumar S (2022) Analysis of error rate for various attributes to obtain the optimal decision tree. Int J Intell Enterp 9(4):458–472 15. Guo Y, Cao H, Han S, Sun Y, Bai Y (2018) Spectral–spatial hyperspectralimage classification with k-nearest neighbor and guided filter. IEEE Access 6:18582–18591

A Case Study of IoT-Based Biometric Cyber Security Systems Focused on the Banking Sector

Sanjoy Krishna Mondol, Weining Tang, and Sakib Al Hasan

Abstract Biometric characteristics are explored in the context of a case study on IoT-based biometric cyber security systems focused on the banking sector, covering verification and safety issues and accomplishments. With the daily increase in security violations and fraudulent transactions, the necessity for highly secure identification and personal verification systems, particularly for the banking and finance sectors, becomes vitally important. Many banking institutions see biometric technology as a close answer to such security issues. Though IoT-based biometric technology has gained acceptance in fields such as healthcare and various security activities, its use in banking security is still in its early stages. Because of the tight relationship between biometrics and human physical and behavioral factors, such modern technologies present a plethora of social, ethical, and systems difficulties. The main success elements given by the case study served as a guideline for an IoT-based biometrically enabled banking safety system planned by a significant banking company. This prototype study reveals that it is of primary importance to draw up a viable security plan that addresses user privacy concerns with smart sensors, the level of human endurance, institutional changes, and legal issues, rather than dealing only with the technological issues of gelling biometrics into existing information systems. However, if smart sensor-based biometric technology in banking is successfully used, privacy issues do not persist. Such systems have been studied in numerous areas connected to immigration control and crime, but not in the financial environment. Not all banks strengthen their safety measures, owing to socio-technical challenges with their IoT-based biometric solutions. This article fulfills the demand for a

S. K. Mondol (B) · W. Tang
School of Information Engineering, Huzhou University, Huzhou, China
e-mail: [email protected]
W. Tang
e-mail: [email protected]

S. A. Hasan
School of Information Engineering, Huzhou Normal University, Huzhou, China

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_18



standard to recognize possible difficulties and success factors for the application of sustainable IoT-based biometrics in access control systems. Our research work is only a point of departure for researchers to further investigate IoT smart sensor-based biometric applications in multiple banking components.

Keywords Internet of Things · Biometric · Smart wireless sensor · Cyber-security · Transaction fraud · Smart security system · Privacy concern · Criminology

1 Introduction
Digital technology has improved the operating performance of the majority of banking systems. This tendency has made it clear that IT is now the underpinning of every establishment. Digital banking is the electronic delivery and offering of banking products and services. Automated teller machines, internet banking, and online banking, among others, are the operations supplied by digital transactions. E-banking has supported several sorts of online transactions and procedures in most countries of the globe. E-banking, the fastest expanding aspect of banking, has been the most conspicuous electronic offering in the financial industry. Traditional financial institutions are presently under intense pressure from their stakeholders to adapt to new innovations. Nevertheless, due to the inherent nature of this industry, data protection cannot be compromised. Users must demonstrate a high level of trust in their branch network, given the importance of banking organizations' relationships with their customers [17]. E-banking has been widespread for more than 20 years, resulting in several benefits for the banks and their individual consumers. Most banking organizations are conscious of the value of essential mechanisms and top IT solutions that can help them achieve a competitive advantage, with biometric technology playing a big role. Biometrics is an emerging technology and a good way to actively recognize a person by a range of features such as hand geometry, signature, retina, voice, and fingerprint. It can play a major role as a sophisticated technology that protects banking assets and thereby creates a largely secure banking environment. The concept has been studied as playing a very important role in modern IT security applications, supplying key elements of security models and technologies. Much research has been done on the engineering aspects of developing solutions to meet demand through individual business and security processes. Few researchers have studied the issues of biometric problem solving, the reasons for its success, or strategies for applying an effective biometric to achieve a more secure banking environment. Above all, many unresolved issues are involved with the implementation of such technologies, so biometrics is still in its cradle in the banking system. The next section gives the background of the methods adopted in the study and plays an important role in the development of the case study, which is very much


needed. Then, in the research, we present modern multifactor authentication technologies that are now recognized IS defenses against techniques generally utilized by banking scammers. We investigate the relevance of biometric security terms and their regional frameworks for access control in these concerns, to verify that our biometric technologies are free of flaws, and we likewise investigate the relevance of biometric-enabled security systems and their national patterns. The banking sector is one of the main targets of malicious activities by intruders. Surveys of the research and an examination of its authentication schemes, information security, and the biometric technologies used in the banking industry provide the conceptual and applied foundations for this study. The effects of implementing biometric technologies on banking were examined from a variety of perspectives, including technological, management, economic, and moral considerations. These investigations yielded overall accomplishment elements that can guide the proper adoption of biometric-based security systems in banking institutions. In 2018, the world had 3.9 billion internet users, up from the previous year's 3.65 billion. With the rise of broad internet access, the banking business has been expanding significantly throughout the years. This study also establishes a link between biometric online security, privacy, and human psychological traits such as emotions, cognitive science, and aesthetic appeal. This, in turn, provides fresh insights into guaranteeing a secure and reliable digital landscape using artificial intelligence techniques such as the latest deep learning techniques. In order to provide e-commerce, quick access, and communication for their customers, the majority of companies have moved their business to online services for improved effectiveness and accessibility. In this survey, conducted in the banking sector's cyber security department, responses from survey participants were investigated using survey questionnaires. We note that the formulation of a viable security plan, which addresses concerns for the private information of users, human concentration levels, policy procedures, and legal issues, is of greater significance than addressing the technical questions of gelling biometrics into knowledge management tools. We highlight key performance factors to ensure a viable adoption of biometric-based security solutions by financial enterprises. Furthermore, we identify and discuss key concerns facing banks, which need to be dealt with as they have an effect on success.

2 Literature Review
The use of biometric verification employing fingerprint identification for payment transactions is considered by many as a solution to the majority of theft and fraud. The intention of the study in [5] was to examine the impact of the COVID-19 [12] epidemic on the banking sector; it demonstrates that a financial meltdown occurred during the epidemic era, and customer payments declined significantly during the epidemic compared to the non-pandemic period, according to the findings. Cybercrime prevention techniques are being created in tandem with the advancement


of new technology, but cybercriminals are also developing at a comparable rate [6]. Individuals who use financial technology must be conscious of the risks associated with its use; only in this way can they be capable of recognizing and avoiding perceived risks. Offenders [11] are continuously devising new ways to abuse economic markets. Instances of smart city improvement include the computerized authentication of financial sectors, automobile registration, licensing registration, environmental testing, and insurance validation [20]. Biometric authentication offers various advantages over conventional authentication methods, and the use of biometric identity systems for user authentication has increased significantly in recent years. In their study, Onyesolu and Ezeani observed that most of their respondents chose a biometric authentication system as their preferred identity safeguard against online payment theft and fraud [19]. The authors Oko and Oruh devised and developed a biometric ATM authentication system to evaluate biometric authentication for payment transactions and to integrate biometric authentication into ATMs [13]. Daula and Murthy created a fingerprint-identification method that is used in biometric banking systems; their system uses a processing system for identity verification [15]. This system involved banks capturing the biometric fingerprints and cell numbers of clients when accounts are established. The bank customer presses his fingertips against the machine's fingerprint sensor on the system interface. The biometrics are then verified against those originally obtained by the banking services. If the fingerprints are identical, a four-digit code is generated and sent to the customer's cell phone; the customer then goes to the ATM and enters these four digits. This approach does not require the use of an ATM card. The technology is safe since it ensures that the identity of an individual cardholder who tries to make a net banking transaction is verified and authenticated. Biswa et al. [2] also studied a crypto-bio system for authentication in net banking systems; their method was based only on the usage of retinal images. A net banking biometric authentication scheme was proposed by Hossain et al. [14]; their technology used an Advanced Encryption Standard (AES) processor instead of Triple Data Encryption. The usage of an AES processor and biometric fingerprint authentication, according to the study, helped secure net payment transactions. Earlier work on the authentication of card payments using biometric fingerprints followed a client-server paradigm: the ATM captures the scanned fingerprints of a cardholder who desires to execute a transaction and uploads them to a remote server for biometric verification. The program links to the financial institution's central database, maintained by the tracking system, and compares the fingerprint-identification patterns provided by the customer with those previously enrolled in the biometric recognition warehouse. Daula and Murthy's method [4] follows the same paradigm and improves the process by generating a four-digit personal ID number communicated through a Short Messaging Service to the cardholder, who then enters the four-digit code at the machine for checking. Banking institutions have concentrated on virtual banking services based on web technology, new technologies, and wireless and mobile banking systems. However, these are often the outcome of significantly greater cyber-related exposure that comes with the expanding benefits of anywhere, anytime usage environments. Denial of service attacks, viruses, worms, malware, and website hacking that erode data
However, they are often the outcome of significantly more cyberrelated exposure to the expanding benefits of using surroundings anywhere and at any time. Service Denial Viruses, Worms Malware Website hacking of eroding data

A Case Study of IoT-Based Biometric Cyber Security Systems …

253

is a unique type of risk attack such as cyber sabotage and terrorism that must be managed as a preventive measure appropriately [21]. The de facto method can make biometric technology safe detection and personal testing. The deployment of such superior technology, with security hazards, is becoming increasingly involved. Data protection and compliance policies, the so-called data protection policies have significantly changed both in local and international affairs in recent years, [3] contributing to the success or failure of any new security strategy that may be established between conventional and conventional businesses, based on a variety of technical-economic reasons. Banking efforts to safeguard conventional energy against state-of-the-art technology and uncomfortable opponents remain a difficult issue [18]. The firms are suspicious about biotechnology. In emerging corporate networks, opponents can be identified and protected from each technology by accessing their networks and structures. Even if a company’s patching and software program are ideal, an opponent can target social engineering for nil days to enter the network [10]. As security violations and transaction frauds grow day by day, the necessity for extremely secure identity and personal information systems, particularly in the Banking and Finance industries, is becoming more crucial. While biometrics have gained momentum in sectors like healthcare and criminology, it has still being used, to protect banks [7]. The intimate relationship between biometrics and humans, physical characteristics and behavior, creates tremendous social, moral and operational issues in these systems. A survey has demonstrated that the use of several biometric technologies is apparently an acceptable level of awareness. Only the venous recognition of the hand was provided below, albeit this is likely to be understandable as this technique is no longer extremely widespread [9]. Even though procedure for the collection of the underlying technologies of hand vein and palm recognition appears to be a normal person, there are major variances. In this study it is remarkable that although these technologies are generally accepted, they are very dependent. The study showed, however, that those methods appear to be more widespread and known to users of fingerprint and face recognition [16]. These are often not taken or comprehended while contemplating some indomitable approaches. It typically brings advantages and cons because of the fact that each technology has been introduced. No exceptions are information technology and the Internet. Internet is an earthquake that has made it a crucial instrument for the management of commercial transactions, such as online banking, which has rendered distance, time, and location irrelevant for communication [1]. Online banking has attracted damaging attention to online banking, thanks to a multitude of complex attack types. Online banking is an alternative which, unlike banking system clients in online banking clients, provides banking services at anytime, anywhere, and is responsible to manage their accounts using laptops, Workstations, and cellphones, at all times.

254

S. K. Mondol et al.

3 Methodology The methodology of this research can be broken down into multiple stages. The banking situation of biometric-based cyber security in the banking sector is discovered during the preliminary investigation stage. The data collection method is the next step. This survey’s dataset was compiled from both primary and secondary sources. The acquired data was compiled in the third stage to resolve any deficiencies or inconsistencies. Several analysis measures were used in the fourth stage to determine the survey’s purpose (data analysis). According to the analysis, valuable data were collected in the fifth step. In the discussion chapter, numerous study findings were reviewed based on this data. Figure 1 represents a methodology for this study. The roles, responsibilities, access provisions, and authentication methods have been explicitly outlined for this research. A preliminary analysis of biometric technologies and access safety models prepared the way for the determination of the security framework in this study, in addition to the findings of the case study in this research effort. The working flowchart, presented for the fundamental kinds of access control tools utilizing fingerprint technology, shows an overview of this research that identifies the process map that supports the security management policies as well as user recommendations. A Biometric-Security integrated approach was developed with the help of a collective member to provide feedback, test, and propose changes

Fig. 1 System flow diagram of this research work

A Case Study of IoT-Based Biometric Cyber Security Systems …

255

Fig. 2 Work flow diagram of this research work

for their own financial transactions. In cooperation with the digital banking system user, modifications to the information security for the Institution’s appropriate security system were prepared. Employees were formally trained on the introduction and integration of biometrics into their work environment as a result of an action plan. The working procedure is depicted in the Flowchart 2.

3.1 Data Collection The purpose of the study was to figure out what aspects are at play when it comes to fingerprint authentication data security in banking systems. From various secondary information from scholarly publication, we obtained primary information about individuals for journal papers, assessment papers, book chapters, MSc theses, and Ph.D. theses paper to utilizing a field survey and an online survey. To provide the best

256

S. K. Mondol et al.

outcomes, specific and correct theoretical and practical information is collected in a systematic way. Users from multiple government bank and private banks, various financial corporation, colleges and universities students, and general individuals who are now involved in banking facilities were given the questionnaire. We asked them a set of questions connected to the topic of our study. Simultaneously, we evaluate the respondent’s ethical issues to ensure that their privacy security was properly and loyalty protected. For both online and offline-based field surveys, we did not attach any questions in the questionnaire form that could put respondents at risk. A total of 6000 questionnaires were sent out, with 3800 responses received. Both men’s and women’s responses have been received. There were 74.07% males and 26.93% females among them. People of various ages replied.

3.2 Data Processing Methods

We circulated 6000 questionnaires and gathered 3800 responses through a field survey and an online survey. Google was employed to conduct the online survey, and identical questionnaires were used in both the online and offline surveys. We utilized the Python programming language (version 3.9.7), together with several Python modules and spreadsheet formulas, to analyze the data.
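The paper names Python 3.9.7 and assorted modules but does not list the exact analysis code. The following is a minimal sketch, assuming pandas as the analysis module (an assumption; the authors do not specify their modules) and a hypothetical responses.csv export of the questionnaire, of how frequency tables like Tables 1 and 2 could be derived:

import pandas as pd

# Load the survey export (hypothetical file and column names).
df = pd.read_csv("responses.csv")  # assumed columns: gender, age, uses_banking

# Frequency and percentage distribution by gender (cf. Table 1).
gender_freq = df["gender"].value_counts()
gender_pct = (gender_freq / len(df) * 100).round(2)
print(pd.DataFrame({"Frequency": gender_freq,
                    "Frequency distribution (%)": gender_pct}))

# Bucket respondents into the three age groups used in Table 2.
bins = [0, 30, 40, 200]
labels = ["Under 30", "30-40", "Above 40"]
df["age_group"] = pd.cut(df["age"], bins=bins, labels=labels, right=False)
print(df["age_group"].value_counts().sort_index())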

3.3 Data Analysis and Result

Gender-based respondents. We received 3800 replies, with 2691 men and 1109 women participating; male respondents made up 70.82% of the total, while female respondents made up 29.18%. The gender frequency and percentage distribution of the survey dataset are shown in Table 1. The ages of the respondents are divided into three main categories: under 30, 30–40, and above 40. Table 2 shows that the majority of respondents (65%) were under the age of 30; 22% were between the ages of 30 and 40, and 13% were above 40. The architecture of the overall biometric banking system is represented in Figs. 3 and 4. The account holders' biometric databases, server management, electronically stored biometric databases, network management, and the biometric deduplication server are all described in this system design, together with probabilistic frameworks and a biometric identification dataset.

Table 1 Respondents' gender frequency distribution

Gender    Frequency    Frequency distribution (%)
Male      2691         70.82
Female    1109         29.18

Table 2 Frequency distribution according to respondents' age

Age        Frequency    Frequency distribution (%)
Under 30   2478         65
30–40      830          22
Above 40   492          13

Fig. 3 System architecture

Communication between these many pieces is correctly configured. Banking services used by respondents. We inquired as to whether or not our respondents currently use banking services. Out of the 3800 responses, 74% said they use financial services, while 26% said they do not use banking services or payment systems. Table 3 depicts the frequency distribution of respondents' use of banking facilities.


Fig. 4 IoT-based banking security system design

Table 3 Frequency distribution of respondents' use of banking facilities

Do you use banking facilities?    Frequency    Frequency distribution (%)
Yes                               2740         74
No                                980          26

4 Results and Discussion

We can only find a solution if we first comprehend the issue. To address it, we use machine learning, biometric recognition, data learning, and hybrid approaches [8]. These serve as the system's controls and aid in protecting data from intruders by employing the best optimization techniques to obtain precise data.

4.1 Victimized by Banking Fraud

We received 3800 answers; 2691 men and 1109 women among them use financial services. Some of these respondents were never victimized by banking fraud, while about 2000 of them were victims of fraudulent banking activities. As shown in the graph of Fig. 5, 43% of people have been victims of fraudulent transactions and 57% have not.

4.2 Awareness of Banking Fraud

We asked the respondents whether they had heard of banking fraudulent activity; the results are shown in the graph of Fig. 6. The statistics show that 76% of them


Fig. 5 Victimized by banking fraud

Fig. 6 Various banking fraud activity rate

indicated they were unaware of any form of fraud in this industry. Among the rest, 14% thought of messaging- and phone-based scams related to phishing, 12% of phishing itself, 6% of viruses and trojans, 8% of spyware and adware, 9% of card skimming, 9% of identity theft, and 3% of other banking frauds.

4.3 Feasibility of Being Affected by Banking Fraudulent Activities

For our case study, we collected the views of 2740 respondents who use banking facilities. Among them, 1175 people were victimized by banking fraud and 1565 did not face any kind of banking fraudulent activity. We wanted


to find out how familiar people are with banking fraudulent activities that are not tied to a biometric security system. Among these people, we found that 86.3% had no knowledge of such fraud, while the remaining 13.7% did. The figure clearly shows that 36% of victimized people had previous experience with banking fraud, whereas 64% of non-victimized people had no prior knowledge of banking fraud.

5 Conclusion

The purpose of this study was to identify problems and challenges before deciding on the use of biometric security system technology. This case study of the banking system showed that achieving a strong level of business integrity and overcoming user concerns are critical. The study has also clearly shown that a risk management plan must address questions connected to ethical and social issues rather than merely coping with technological changes. The success elements presented in this article will enable banking organizations to create their strategies and operations with the necessary flexibility and modifications to ensure a successful adoption of biometric security.

Acknowledgements This work was supported in part by the School of Information Engineering, Huzhou University, Huzhou, China.

References

1. Alese BK, Thompson AFB, Alowolodu OD, Blessing EO (2018) Multilevel authentication system for stemming crime in online banking. Interdiscip J Inf Knowl Manag 13:79
2. Biswas S, Roy AB, Ghosh K, Dey N (2012) A biometric authentication based secured ATM banking system. Int J Adv Res Comput Sci Softw Eng. ISSN 2277
3. Buckley O, Nurse JR (2019) The language of biometrics: analysing public perceptions. J Inf Secur Appl 47:112–119
4. Chanajitt R, Viriyasitavat W, Choo KKR (2018) Forensic analysis and security assessment of android m-banking apps. Aust J Forensic Sci 50(1):3–19
5. Darjana D, Wiryono S, Koesrindartoto D (2022) The covid-19 pandemic impact on banking sector. Asian Econ Lett 3(3)
6. Despotović A, Parmakovic A, Miljkovic M (2023) Cybercrime and cyber security in fintech. In: Digital transformation of the financial industry: approaches and applications. Springer International Publishing, Cham, pp 255–272. https://doi.org/10.1007/978-3-031-23269-5_15
7. Gelb A, Decker C (2012) Cash at your fingertips: biometric technology for transfers in developing countries. Rev Policy Res 29(1):91–117
8. Ghelani D, Hua TK, Koduru SKR (2022) Cyber security threats, vulnerabilities, and security solutions models in banking. Authorea Preprints
9. von Graevenitz GA (2007) Biometric authentication in relation to payment systems and ATMs. Datenschutz und Datensicherheit-DuD 31(9):681–683
10. Hossian FS, Nawaz A, Grihan K (2013) Biometric authentication scheme for ATM banking system using energy efficient AES processor. Int J Inf Comput Sci 2(4):57–63
11. Hussain MG, Al Mahmud T (2017) A technique for perceiving abusive bangla comments. GUB J Sci Eng (GUBJSE) 4(1):11–18
12. Hussain MG, Shiren Y (2021) Recognition of covid-19 disease utilizing x-ray imaging of the chest using CNN. In: 2021 international conference on computing, electronics & communications engineering (iCCECE). IEEE, pp 71–76
13. Hutchinson D, Warren M (2003) Security for internet banking: a framework. Logist Inf Manag
14. Jaiswal AM, Bartere M (2014) Enhancing ATM security using fingerprint and GSM technology. Int J Comput Sci Mob Comput (IJCSMC) 3(4):28–32
15. Magutu PO, Mwangi M, Nyaoga RB, Ondimu GM, Kagu M, Mutai K, Kilonzo H, Nthenya P (2011) E-commerce products and services in the banking industry: the adoption and usage in commercial banks in Kenya. J Electron Bank Syst
16. Padmapriya V, Prakasam S (2013) Enhancing ATM security using fingerprint and GSM technology. Int J Comput Appl 80(16)
17. Rodrigues ARD, Ferreira FA, Teixeira FJ, Zopounidis C (2022) Artificial intelligence, digital transformation and cybersecurity in the banking sector: a multi-stakeholder cognition-driven framework. Res Int Bus Financ 60:101616
18. Shakil KA, Zareen FJ, Alam M, Jabin S (2020) Bamhealthcloud: a biometric authentication and data management system for healthcare data in cloud. J King Saud Univ-Comput Inf Sci 32(1):57–64
19. Silberglitt R, Antón PS, Howell DR, Wong A, Gassman N (2002) The global technology revolution 2020, in-depth analyses: bio/nano/materials/information trends, drivers, barriers, and social implications, vol 303. Rand Corporation
20. Snigdah KF, Hussain MG (2019) Poster: smart traffic vehicle monitoring & authenticating system using GPS. In: International conference on sustainable technologies for industry 4.0 (STI). https://doi.org/10.13140/RG.2.2.10237.92642
21. Wazid M, Zeadally S, Das AK (2019) Mobile banking: evolution and threats: malware threats and security solutions. IEEE Consum Electron Mag 8(2):56–60

Blockchain Security Through SHA-512 Algorithm Implementation Using Python with NB-IoT Deployment in Food Supply Chain

Chand Pasha Mohammed and Shakti Raj Chopra

Abstract In this paper, we propose a revised use of the SHA-512 algorithm that truncates its output to 256 bits. In comparison to the traditional SHA-256 algorithm, this results in a better and more efficient 256-bit hashing algorithm for 64-bit architectures. Furthermore, we address the security concerns associated with the Narrowband Internet of Things (NB-IoT) in food supply chain management by combining blockchain technology with NB-IoT. To authenticate data and generate the blockchain, our system employs the SHA-512 algorithm and a Python program, which helps to ensure the accuracy and transparency of the data exchange. The system can also detect damaged agricultural products during transportation, which is critical for preserving food quality. The proposed method was evaluated and compared to the traditional SHA-256 algorithm in terms of efficiency and security. According to the results, our method outperforms the traditional algorithm in terms of efficiency while maintaining the same level of security. The study's findings have important implications for developing reliable and efficient data exchange systems in the Narrowband Internet of Things. Our method is thought to have appealing confidentiality, authentication, accountability, and quality characteristics, making it an ideal method for decentralized and distributed secure transportation with traceability.

Keywords Narrowband Internet of Things · SHA-512 algorithm · Blockchain technology · Food supply chain management · Python

C. P. Mohammed (B) · S. R. Chopra Lovely Professional University, Phagwara, Punjab, India e-mail: [email protected] S. R. Chopra e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_19


1 Introduction

Millions of devices are connected through the IoT, which places a heavy burden on existing communication infrastructures. A radio cell may currently accommodate thousands of smartphones [1], but in the future, typical mobile networks may have to accommodate millions of devices or sensors. Unsurprisingly, energy usage [2] and network strain [3] would both increase dramatically. To solve this problem, the NB-IoT standard was introduced. Narrowband machine sensor networks are powered by radio waves strong enough to penetrate solid concrete and reach even the farthest, most subterranean parts of a building. With 5G networks potentially requiring minimal energy and devices running for many years without battery replacement [2], this sits at the center of the future of communication. One layer of the network will be dedicated to the machine sensor network itself. The intelligent network is segmented into numerous layers, or "slices," each of which can be used for a specific purpose [3]. Because of the different network tiers, all applications will be able to communicate reliably with one another in accordance with their requirements. To give only a few examples, vending machines will be able to report their status over the machine sensor network [2], permanently installed streetlights will only be switched on or dimmed when needed, and parking spots will display whether they are available. Narrowband IoT offers opportunities for many different types of manufacturing [3]. A smart city, in which a considerable number of narrowband IoT devices interact with each other to make the city safer, is one example of the Internet of Things; many other kinds of communicating devices can be imagined.

1.1 Blockchain Technology with SHA-512

A distributed and decentralized digital ledger that maintains a growing list of records [4] is an innovative technology that has gained traction over the past few years, with businesses in a wide range of sectors adopting it. Data transfers can be made safely and efficiently thanks to blockchain, which is essentially a digital, decentralized database [5]. Blockchain is typically associated with the financial sector due to its inherent safety [6]; however, the technology's potential in other sectors has yet to be fully realized [7]. Logistics in the food supply chain is one area where blockchain can have a significant impact. A blockchain is a distributed ledger in which each block of transactions is linked to the one before it. The blockchain can be stored as a simple database or even as a flat file. The SHA-512 cryptographic hashing algorithm generates a unique hash for the header of each block in the blockchain, making it possible to identify individual blocks [8].


Fig. 1 Generic chain of blockchain technology

A block's parent is referenced by the "previous block hash" field in the block header; in other words, the hash of the preceding block is stored in the header of each child block [9]. Each block's hash is thus linked to that of its parent, creating, as in Fig. 1, a chain that can be traced back to the very first block, the genesis block [4]. The four basic forms of blockchain networks are public blockchains, private blockchains, consortium blockchains, and hybrid blockchains [5]. Encryption is used to conceal sensitive information while it is being transferred over the internet or other computer networks, or while it is stored on a computer system [10]; decryption reveals the meaning of the hidden information, so that only authorized parties can access it. Cryptography is the study of methods to ensure that only the intended recipient and sender may read a message [7]. A public key infrastructure (PKI) comprises the hardware, software, rules, processes, and methods for managing, distributing, using, storing, and revoking digital certificates and public keys [8]; it typically involves three parties: a certificate authority (CA), a registration authority, and a registry of issued certificates. In symmetric encryption, one key (a secret key) is used for both the encryption and decryption processes; with asymmetric encryption, communication is encrypted and decrypted with the help of two keys, a public one and a private one [9].
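To make the chaining concrete, here is a minimal Python sketch of the truncated-SHA-512 block hash and the parent-hash link described above. The field names and the JSON serialization are illustrative assumptions, not the authors' exact implementation; the truncation to 256 bits follows the paper's description of shortening SHA-512 output.

import hashlib
import json

def block_hash(header: dict) -> str:
    # Hash a block header with SHA-512 and keep the first 256 bits (64 hex chars).
    payload = json.dumps(header, sort_keys=True).encode("utf-8")
    return hashlib.sha512(payload).hexdigest()[:64]

# The genesis block has no parent, so its previous hash is all zeros.
genesis = {"index": 0, "previous_hash": "0" * 64, "transactions": []}
# Each child block stores the hash of its parent, forming the chain of Fig. 1.
child = {"index": 1, "previous_hash": block_hash(genesis), "transactions": ["tx1"]}
print(block_hash(child))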

2 Narrowband IoT and Blockchain Technology Are Preparing for the Digitalization of Agriculture

As the Internet of Things (IoT) becomes more popular in agriculture and farming, it is critical to understand how blockchain technology can be applied in this area [11].


Blockchain technology is a type of distributed ledger that uses cryptographic keys to store and exchange data over a network of computers [9, 12]. This means that data entered into the system is unchangeable and serves as a permanent record of events [13]. Blockchain technology has the ability to assist farmers in keeping track of their crops from seed to sale. Farmers will be able to keep an accurate record of where crops were planted [14], what fertilizers were used, how many seeds were sown, which seeds germinated, when harvests occurred [15], etc., by installing sensors on farm machinery and equipment. In this manner, farmers will be able to determine whether they are utilizing best practices to increase crop yields and reduce the expenses associated with crop failures caused by disease or pests [16]. Blockchain technology enables the exchange of value between two parties without the need for a middleman [17], such as a bank or credit card firm, and is used to track asset ownership, transfer funds, and buy and sell products directly. It also provides robust protection against data modification and theft. Narrowband IoT [11] and blockchain technology are well suited to agricultural digitization. Together they assist in tracing each stage of the supply chain from farm to fork, allowing farmers to trace their produce back to the source for quality assurance purposes and allowing consumers to verify their food suppliers and avoid purchasing fake items.

2.1 Using Blockchain for Supply Chain Management

A blockchain is a distributed ledger that chronologically links blocks of data using cryptographic techniques. The Byzantine generals' problem and the double-spending problem are two major issues with digital currency that blockchain was designed to address. The value of blockchain implementations in the supply chain must be recognized now. Blockchain technology facilitates rapid growth, reinforcement, and expansion throughout the supply chain; imagine it as a huge ledger [18]. The blockchain is not owned by any central authority the way traditional databases are; however, if more than half of the blockchain's members agree, i.e., if a majority agrees, the blockchain can accept new additions or changes [6]. A blockchain's other strengths lie in its ability to manage vast volumes of documents and to speed up processes while still guaranteeing that everyone has the same amount of control. Numerous supply chain factors can be altered via blockchain. Each blockchain transaction is stored in a block. Due to the large number of copies of the ledger, these entries are distributed over a great number of machines, enhancing accessibility. Each block is connected only to the block that precedes it, and so on [19]; therefore, blockchain security is hard to compromise. The supply chain moves and stores goods, and blockchain technology is a transparent, safe [20], and anonymous method of data storage. Blockchain adoption has the potential to transform the supply chain by allowing businesses to track their items and ensure that they arrive safely at their destinations more efficiently [21].


Due to severe perishability and demands for greater transparency, businesses in the grocery industry must track their products from origin to final consumer. For instance, a blockchain-aided supply chain management system can simply track the origin, processing, and delivery of any item [22]. In addition, when the item is sold, the operation as a whole becomes more efficient; consequently, the end customer receives a greater level of service [23]. In an article titled "Blockchain: The Future of Supply Chain Management," author Kate Mitchelmore describes how blockchain can be used to track the lifecycle of items [24]: "Using blockchain technology, manufacturers may promptly identify product faults and communicate them directly to suppliers, who can then recall products without waiting for government regulators to intervene or for consumers to report issues."

3 Narrowband IoT (NB-IoT) in Existing 5G Technology

The LPWAN-based wireless IoT protocol forms the basis of Narrowband IoT (NB-IoT). 3GPP developed this specification in order to facilitate NB-IoT devices and services via cellular wireless networks; NB-IoT is the 3GPP-defined LPWAN standard. By utilizing a "guard band" between LTE channels or operating standalone, NB-IoT enables IoT devices to operate over carrier networks. NB-IoT is all about increasing cellular service availability, which its uplink transmission repetitions and bandwidth allocation parameters facilitate. New Internet of Things (IoT) gadgets and software are made possible with the help of NB-IoT. Especially in outlying areas, NB-IoT improves the efficiency of the system's capacity and bandwidth while reducing the power consumption of connected devices; multiple NB-IoT gadgets can function for a decade without recharging their batteries. There are currently 107 NB-IoT networks that have been introduced or deployed by 159 different operators. The 3GPP's primary answer for expanding NB-IoT coverage is to send the same data and control signals over and over again, and other NB-IoT initiatives take this repetition capability into account. Similar to standard LTE systems [3], the strategy considers a two-dimensional space, specifically the selection of the modulation and coding scheme (MCS) level and the selection of the repetition number, in order to perform link adaptation for resource management and improve energy [25], data rate, and coverage efficiency. A signal designed in this way can be sent across greater distances and with greater resilience to noise and interference [7]. To sum up, most plans aim for a coupling loss on the order of 150–160 dB, which equates to a few miles in urban areas and tens of kilometers in rural ones [15]. Transmitting each symbol with greater energy increases the dependability of decoding at the receiving end; thus, typical receiver sensitivity levels may be as low as −130 dBm [26]. Most low-power wide-area network (LPWAN) options employ either narrowband or spread-spectrum modulation. Spread spectrum distributes a narrowband signal over a wider frequency range while maintaining the same power level [9]. The resulting transmission is a signal that is


Fig. 2 Comparison of various IoT and progress towards 5G technology (https://www.semanticscholar.org/paper/The-Narrowband-Internet-of-Things-(NB-IoT)-State-of-Migabo-Djouani/9e4c15383c7eb470d99f47c4331fda2c56ad4f1f)

secure, resilient against jamming, and less vulnerable to interference. Comparison of various IoT and progress towards 5G technology is shown in Fig. 2.
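As a rough illustration of why repetition extends coverage, the sketch below computes the ideal processing gain from N repetitions (10·log10 N dB) and the resulting coupling-loss budget; the base budget value is an illustrative assumption, not a measurement from the paper.

import math

def repetition_gain_db(n_repetitions: int) -> float:
    # Ideal SNR gain from coherently combining n repeated transmissions.
    return 10 * math.log10(n_repetitions)

base_budget_db = 144.0  # assumed single-transmission maximum coupling loss
for n in (1, 2, 4, 16, 128):  # NB-IoT allows up to 128 uplink repetitions
    gain = repetition_gain_db(n)
    print(f"{n:4d} repetitions -> +{gain:5.2f} dB "
          f"(budget ~ {base_budget_db + gain:.1f} dB)")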

4 Model of Supply Chain with Blockchain

Manufacturer X produces a Mango Box, an Apple Box, and another identical Apple Box. We consider how these items move through the supply chain, and how we can use tools like blockchain to track both the movement and the condition of these items, so that by the time they reach the retailer, or finally the hands of a customer, any issue with any of them can be traced back to where it started. The entities and their data while moving products are shown in Fig. 3.

4.1 Python Proposed Flow Chart

The process flow chart is shown in Fig. 4.


Fig. 3 Entities with data while moving products (https://www.educba.com/ids-tools/)

4.2 SHA-512 Algorithm

Scenario: a producer company sells agricultural products to a retailer. The original listing is pseudocode; rendered as runnable Python, the initialization block reads:

import hashlib  # provides the SHA-512 hashing algorithm
import json
from datetime import datetime, timezone

class Block:
    # One blockchain entry recording a product hand-off between entities.
    def __init__(self, product_serial, sender, receiver):
        self.product_serial = product_serial  # unique id assigned to the package of the item
        self.timestamp = datetime.now(timezone.utc).isoformat()  # records all changes in entities
        self.sender = sender      # where the change was initiated, e.g. the manufacturer
        self.receiver = receiver  # e.g. the transport company

    def header_hash(self):
        # Truncated SHA-512 over the block fields (see Sect. 4.3).
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha512(payload).hexdigest()[:64]

    def review(self):
        # Review to get all the details: each entity's record prints with its timestamp.
        print(json.dumps(self.__dict__, indent=2))


Fig. 4 Flowchart

4.3 HASH Rule to Meet: Hash Must Start with X Zeros

Difficulty = X → the hash must begin with "00…".

Iteration 1: Previous Hash + Transaction + Index + Nonce = 0 → Result invalid: 38U3IO5H3N98IGUEVJ903IOH4WT
Iteration 2: Previous Hash + Transaction + Index + Nonce = 1 → Result invalid: NJK4HFGH56RTHFGHFGHFGHFGHFGH
Iteration 3: Previous Hash + Transaction + Index + Nonce = 2 → Result valid: 00U2JWHEEFSJKDHFNKSLDFSDFSDSDFSD
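A minimal proof-of-work loop matching the rule above might look as follows; the difficulty of two leading zeros and the fields are taken from the worked example, while the exact concatenation format is an assumption.

import hashlib

def mine(previous_hash, transaction, index, difficulty=2):
    # Increment the nonce until the truncated SHA-512 hash starts with `difficulty` zeros.
    nonce = 0
    while True:
        payload = f"{previous_hash}{transaction}{index}{nonce}".encode()
        digest = hashlib.sha512(payload).hexdigest()[:64]  # 256-bit truncation
        if digest.startswith("0" * difficulty):
            return nonce, digest  # cf. nonce = 2 in the worked example above
        nonce += 1

nonce, digest = mine("0" * 64, "Mango box: Manufacturer X -> Transport Y", 1)
print(nonce, digest)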


4.4 Status of Flags in Each Entity

• For the leg from the manufacturer to the transportation company, everything was approved from a digital-signature standpoint and looked good; there were no flags in this transaction.
• When the retailer receives the products from the transportation company, one of those products is flagged 'Yes': something was wrong with the product.
• The retailer should review the digital signature and turn the liability over to the transportation company, because the transportation company accepted the block in the blockchain and thereby attested that everything was good between the manufacturer and itself.
• The retailer can also trace that specific item back because it carries its unique product serial number. The other two products are good, so there is nothing wrong there.

5 Implementation of Algorithm SHA-512

Check the product serial number, go through the actual chain, and see where the chain was broken or with whom the mistake lies. One of the apple boxes was damaged; without due diligence done in conjunction with the blockchain, it would be very difficult to trace back to see where that happened. This is also a great way to add accountability. Finally, when the shipment reaches the retailer, the retailer reports that the box is damaged and sends it back to the vendor; the blockchain record then determines how the money is exchanged and who is accountable. When the goods were transported from the transportation company to the retailer, the record said that the mango box was good and the first apple box was also good, but the second apple box was flagged because it was damaged. The attributes of the damaged product, its unique serial number, name, from, to, product status, digital signature, and flag status, are verified using the Python program below.
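A sketch of that verification step, under the assumption that each hand-off is stored as a dictionary with the fields listed above (the structure is ours, not the authors' published code; the sample values follow Table 1):

# Hypothetical hand-off records for product serial 50002001.
chain = [
    {"product_serial": "50002001", "from": "Manufacturer X", "to": "Transport Y",
     "signature": "Approved", "flagged": "N"},
    {"product_serial": "50002001", "from": "Transport Y", "to": "Retailer Z",
     "signature": "Rejected", "flagged": "Y"},
]

def first_flagged(chain, serial):
    # Walk the chain and return the first hand-off where the item was flagged.
    for block in chain:
        if block["product_serial"] == serial and block["flagged"] == "Y":
            return block
    return None

print(first_flagged(chain, "50002001"))  # liability lies with this hand-off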

5.1 Flag Status Between the Manufacturer and the Transport Company

The order status of the product between the manufacturer and the transport company is shown in Fig. 5.


Fig. 5 Order status of the product between the manufacturer and the transport company

5.2 Flag Status Between the Transport Company and the Retailer Company

The order status of the product between the transport company and the retailer is shown in Fig. 6.

Fig. 6 Order status of the product between the transport company and the retailer


Fig. 7 Order status of the product at the retailer company

Table 1 Order status of the product while moving through each entity

Parameter          Manufacturer X                  Transport Y                Retailer Z
Product-id         2                               2                          2
Product serial     50002001                        50002001                   50002001
Name               Apple box                       Apple box                  Apple box
From               Manufacturer—X                  Transportation—Y           Retailer—Z
To                 Transportation—X                Retailer—X                 Manufacturer—X
Message            This product is in good order   This product is damaged    This product is damaged
Digital signature  Approved                        Rejected                   Retailer review
Flagged            N                               Y                          Y

5.3 Flag Status of the Review at the Retailer Company

The order status of the product at the retailer company is shown in Fig. 7. The order status of the product while moving through each entity is listed in Table 1.

6 Conclusion

Several existing algorithms are available for implementing security in blockchain technology; SHA-512 is an advance over SHA-256 and is popular and trusted by many reputed organizations. A further direction of this research, within its scope, is the implementation of these algorithms in secure smart contracts for the automatic reversal of funds without delay among the entities of the supply chain.

Acknowledgements We would like to express our heartfelt gratitude to the esteemed Lovely Professional University for their contribution to the achievement of this initiative. It is a knowledge center where anyone may learn and upgrade their expertise in any field. Second, as a team, we worked together to accomplish this project successfully. As a result, we are pleased to say that our collaboration is good and that we will continue to have tremendous success in the future.

References

1. Ratasuk R, Vejlgaard B, Mangalvedhe N et al (2016) NB-IoT system for M2M communication. In: Proceedings of the IEEE wireless communications and networking conference workshops (WCNCW), Doha, Qatar, 3–6 Apr 2016. IEEE, New York, pp 428–432
2. Kim H, Cho S, Oh J et al (2017) Analysis of LTE and NB-IoT coexistence. In: Proceedings of the international conference on information and communication technology convergence (ICTC), Jeju, South Korea, 18–20 Oct 2017. IEEE, New York, pp 870–872
3. Mangalvedhe N, Ratasuk R, Ghosh A (2016) NB-IoT deployment study for low power wide area cellular IoT. In: Proceedings of the 27th annual international symposium on personal, indoor, and mobile radio communications (PIMRC), Valencia, 4–8 Sept 2016. IEEE, New York, pp 1–6
4. Henry R, Herzberg A, Kate A (2018) Blockchain access privacy: challenges and directions. IEEE Secur Priv 16(4):38–45
5. Ehmke C, Wessling F, Friedrich CM (2018) Proof-of-property—a lightweight and scalable blockchain protocol. In: Proceedings of the IEEE/ACM 1st international workshop on emerging trends in software engineering for blockchain (WETSEB), Gothenburg, 27 May–3 June 2018. IEEE, New York, pp 48–51
6. Awasthi S, Johri P, Khatri SK (2018) IoT based security model to enhance blockchain technology. In: Proceedings of the international conference on advances in computing and communication engineering, Paris, 22–23 June 2018. IEEE, New York, pp 133–137
7. Kshetri N (2017) Can blockchain strengthen the Internet of Things? IT Prof 19(4):68–72
8. Hong H, Sun Z (2017) Towards secure data sharing in cloud computing using attribute-based proxy re-encryption with keyword search. In: Proceedings of the IEEE 2nd international conference on cloud computing and big data analysis (ICCCBDA), Chengdu, China, 28–30 Apr 2017. IEEE, New York, pp 218–223
9. Hong H, Sun Z (2018) Sharing your privileges securely: a key-insulated attribute-based proxy re-encryption scheme for IoT. World Wide Web 21(3):595–607
10. Caro MP, Ali MS, Vecchio M et al (2018) Blockchain-based traceability in agri-food supply chain management: a practical implementation. In: Proceedings of the IoT vertical and topical summit on agriculture—Tuscany (IOT Tuscany), Tuscany, 8–9 May 2018. IEEE, New York, pp 1–4
11. Ellul J, Pace GJ (2018) AlkylVM: a virtual machine for smart contract blockchain connected Internet of Things. In: Proceedings of the 9th IFIP international conference on new technologies, mobility and security (NTMS), Paris, 26–28 Feb 2018. IEEE, New York, pp 1–4
12. Huh S, Cho S, Kim S (2017) Managing IoT devices using blockchain platform. In: Proceedings of the 19th international conference on advanced communication technology (ICACT), Bongpyeong, South Korea, 19–22 Feb 2017. IEEE, New York, p 464
13. Kravitz DW, Cooper J (2017) Securing user identity and transactions symbiotically: IoT meets blockchain. In: Proceedings of the global Internet of Things summit (GIoTS), Geneva, 6–9 June 2017. IEEE, New York, pp 1–6
14. Li S (2018) Application of blockchain technology in smart city infrastructure. In: Proceedings of the international conference on smart Internet of Things (SmartIoT), Xi'an, China, 17–19 Aug 2018. IEEE, New York, pp 276–382
15. Kataoka K, Gangwar S, Podili P (2018) Trust list: Internetwide and distributed IoT traffic management using blockchain and SDN. In: Proceedings of the 4th world forum on Internet of Things (WF-IoT), Singapore, 5–8 Feb 2018. IEEE, New York, pp 296–301
16. Bocek T, Rodrigues BB, Strasser T et al (2017) Blockchains everywhere—a use-case of blockchains in the pharma supply-chain. In: Proceedings of the 2017 IFIP/IEEE symposium on integrated network and service management (IM), Lisbon, 8–12 May 2017. IEEE, New York, pp 772–777
17. Bocek T, Rodrigues BB, Strasser T et al (2017) Blockchains everywhere—a use-case of blockchains in the pharma supply-chain. In: Proceedings of the 2017 IFIP/IEEE symposium on integrated network and service management (IM), Lisbon, 8–12 May 2017. IEEE, New York, pp 772–777
18. Tapas N, Merlino G, Longo F (2018) Blockchain-based IoT-cloud authorization and delegation. In: Proceedings of the 9th international conference on smart computing (SMARTCOMP), Taormina, 18–20 June 2018. IEEE, New York, pp 411–416
19. Ouaddah A, Elkalam AA, Ouahman AA Towards a novel privacy-preserving access control model based on blockchain technology in IoT. In: Rocha Á, Serrhini M, Felgueiras C (eds) Europe and MENA cooperation advances in information and communication technologies. Advances in intelligent systems and computing, vol 520. Springer, Cham, pp 523–533
20. Biham E, Chen R, Joux A, Carribault P, Lemuet C, Jalby W (2005) Collisions of SHA-0 and reduced SHA-1. In: Advances in cryptology—EUROCRYPT 2005. LNCS, vol 3494. Springer-Verlag, pp 36–57
21. den Boer B, Bosselaers A (1992) An attack on the last two rounds of MD4. In: Advances in cryptology—CRYPTO'91. LNCS, vol 576. Springer-Verlag, pp 194–203
22. den Boer B, Bosselaers A (1994) Collisions for the compression function of MD5. In: Advances in cryptology—CRYPTO'93. LNCS, vol 765. Springer-Verlag, pp 293–304
23. Chabaud F, Joux A (1998) Differential collisions in SHA-0. In: Advances in cryptology—CRYPTO'98. LNCS, vol 1462. Springer-Verlag, pp 56–71
24. Damgard I (1989) A design principle for hash functions. In: Advances in cryptology—CRYPTO'89. LNCS, vol 435. Springer-Verlag, pp 416–427
25. Chang YCP, Chen S, Wang T-J (2016) Fog computing node system software architecture and potential applications for NB-IoT industry. In: Proceedings of the international computer symposium (ICS), Chiayi, Taiwan, 15–17 Dec 2016. IEEE, New York, pp 727–730
26. Shin E, Jo G (2017) Structure of NB-IoT NodeB system. In: Proceedings of the international conference on information and communication technology convergence (ICTC), Jeju, South Korea, 18–20 Oct 2017. IEEE, New York, pp 1269–1271

IOT-Based Whip-Smart Trash Bin Using LoRa WAN

D. Dhinakaran, S. M. Udhaya Sankar, J. Ananya, and S. A. Roshnee

Abstract The ecosystem is contaminated, and human health is impacted, by improper solid waste disposal. Because of antiquated procedures, the majority of trash cans placed in municipalities can be seen overflowing. A real-time wireless monitoring device is therefore required to report the quantity of waste in containers to the proper authorities so that it can be removed promptly. Hence this new idea, providing each and every citizen with an RFID card and rewarding them with points via SMS for proper disposal, is put into action. This study provides the development and validation of a self-powered, easily connectable IoT alternative that tracks the fill level of whip-smart trash bins through a surveillance center. Each trash can whose fill level needs to be monitored can be fitted with a bin monitoring unit (BMU). Each BMU measures the garbage can's level of emptiness and transmits that data to a Wi-Fi access point (WAP). A primary IoT-equipped device is installed in each region to help connect many devices into a bin network. The LoRa devices communicate from each local machine to the controller IoT device, allowing extended-range communication. As a result, this approach improves the usefulness of an IoT-based waste disposal management and collection system for smart city communities and simplifies the process of monitoring trash bins in real time.

Keywords Waste disposal · RFID card · Internet of Things · Bin monitoring units · Wi-Fi access point

D. Dhinakaran (B) · S. M. Udhaya Sankar · J. Ananya · S. A. Roshnee
Department of Information Technology, Velammal Institute of Technology, Chennai, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_20

1 Introduction

Municipal solid waste (MSW) is a significant source of dormant contamination, though it is an unavoidable by-product of human activity. Urban regions continue to struggle with waste management, which seriously affects the environment and public health. A study conducted by urban development and local bodies revealed that metropolises around the globe produce 1.5 billion tons of waste each year; by 2024, this amount is anticipated to rise to 2.1 billion tons. Trash cans are placed in numerous locations by building managers, municipalities, or waste-control firms in order to manage garbage effectively [1]. Municipalities use trucks to collect rubbish daily, even if the garbage bins are not fully filled. When garbage is picked up only once a week, it may spread across the neighborhood, polluting the surroundings and putting people at risk of getting sick [2–4]. Therefore, a technology is needed that can immediately notify the municipality of how much garbage is in the waste bins. Monitoring garbage cans in cities in real time can be accomplished with the help of technological advancements in sensor design, communication protocols, and remote monitoring techniques. An RFID-based dustbin with garbage-bin level control, offered with an incentive system, encourages people to dispose of trash properly. As each year goes by, more and more new technologies that benefit people are developed, and the IoT is no exception [5]. The "Internet" used to mean person-to-person interaction; IoT is now bringing about a significant transformation by facilitating communication from everything to everything. Here, the IoT is a network of Internet-connected bins that can smartly collect and share data. Some claim that the Internet of Things will transform society in a manner similar to the industrial revolution. The application of pervasive computing technology, including radio frequency identification (RFID) and wireless sensor networks, opens up a new avenue for improving waste management systems. Today, RFID technology is growing more and more beneficial in several application fields, such as logistics, inventory, public transportation, and security [6]. The Arduino UNO microcontroller is used to interface with the sensors and communication tools. The ultrasonic sensor determines the amount of trash in the trash can. The IoT module is linked to the controller device, which uses LoRa to talk to it and continuously updates the sensor value in the database. The relevant individuals are instructed to collect the rubbish when the trash container is full, and the most recent information is displayed on the LCD. Automatic methods are preferred today due to our hectic schedules; they make life simpler and more accessible in every aspect [7]. Solid waste management is one of India's most significant issues, whether in developed or developing regions, because of the exponential growth in Internet users and how essential the Internet has become to our everyday lives. Because waste is not collected effectively, most trash cans next to the road commonly overflow [8]. This makes the environment unsanitary for the inhabitants, disperses an unpleasant odor, and contributes to the spread of numerous dangerous diseases and ailments. This is one of the numerous IoT applications that is essential to the infrastructure of solid waste management. Everybody produces garbage, but city governments still have many challenges to overcome in collecting, transporting, and operating the system. As part of solid waste management, the waste created is collected, stored, separated, transported, and recycled.


2 Literature Survey

Waste management is a costly procedure, as it takes a lot of time and money. The government's efforts to enhance collection and disposal include the development of the biodegradable bin. Theivanathan et al. [9] presented a project using long-range communication methods like LoRa. The suggested method calculates more efficient routes for garbage trucks in order to perform an appropriate waste disposal process. It enables city service providers to interact with citizens using communication technology, improving livelihoods and systems that can raise the quality of life. As an outcome, an authorized person receives a text message alarm when a trash can is full, and a series of experimental simulations focused on the mentioned area was created. Waste is also separated using image processing. The article explores the concept of intelligent waste management and offers a model for developing an IoT strategy for smart city implementation. The primary goals of Muthukumar et al. [10] were to decrease the expense and time associated with trash management while increasing the quality of waste disposal, and to identify the most effective means of disposing of waste filled above a predetermined threshold. They describe real-time garbage monitoring of the container using an embedded system, followed by data collection and transmission to a central gateway and publication of the result, along with the location of the containers, on the Internet. In their study, Khoa et al. [11] developed a novel strategy that effectively manages waste by forecasting the likely garbage level in cans. Using graph theory and machine learning, the scheme can optimize waste pickup using a shortest-route algorithm. They evaluated the viability and applicability of the system's implementation and verified data transfer on the LoRaWAN module, which is accomplished using a primary circuit that is inexpensive, easy to use, and reusable. By figuring out the fastest way to handle waste pickup, this technology shortens the duration of the process. A self-powered, LoRaWAN-integrated smart garbage bin level monitoring system was developed and validated by Jino Ramson and Jackuline Moni [12]. Trash bin level measurement units (TBMUs), implanted within every trash bin whose status has to be tracked, serve as the IoT end devices. A TBMU calculates the volume and location of rubbish in a bin and transmits the data over radio to a LoRaWAN network. In addition to acting as a concentrator for the TBMUs, a LoRaWAN router transfers information between a TBMU and a trash bin level monitoring server. Users may view each bin's condition and location using a sophisticated graphical interface. The designed system's efficacy, the wireless link between a TBMU and a LoRaWAN gateway, the TBMU's measured current usage and expected longevity, battery life, and price were all investigated. A LoRa-based LPWAN vehicle-to-vehicle communication system for intelligent trash cans was developed by Vigneshwaran et al. [13]. A LoRa gateway, a remote diagnostic system, sensors to track garbage production, and a cloud platform make up the vehicle-to-vehicle communication system. Many parts are interfaced to finish


the process, including GPS, cameras, motors, and sensors. When the bin is full of trash, the sensors that track waste levels will notice it overflowing. The sensor alerts when it determines that the trash can needs to be cleared because it is full. The LoRa gateway transmits the data about the trash can to another adjacent vehicle, an intelligent dustbin, by saving this data in the cloud platform. The information-receiving smart trash bin will move to the location of the other trash can to replace it. The trash bin can communicate with other trash cans. With the use of LoRa technology, the trash can be managed, and a variety of operations, including braking, may be carried out. The smart dustbin is a thoughtfully created answer to the social problems associated with waste disposal and is an automated trash management technique. Dara et al. [14] created and put into use a prototype of a rubbish monitoring system that could be utilized to keep smart cities garbage-free. An efficient solid waste management system could be created using accurate real-time data from the established procedure. The devised method produces a more precise database for the frequency and volume of waste pickup at each place. Traditionally, residential areas were loaded physically using truck loaders. They created a reliable system for monitoring rubbish that can be used to track it. In real time, the system is able to gather exact data that can be used as a tool in the future; the data are sent to an information management system. Regular garbage cans can also be fitted with level sensors, so the prototype can be used in conventional garbage management infrastructure. Information about the amount of waste in the landfill can also be used to organize waste pickup routes more efficiently, leading to fewer overflowing bins and enhanced public health sanitation. Kumar et al. [15] developed a waste management system based on the IoT that measures the amount of waste in trash cans using sensor technology. Once the fill level was detected, the system immediately switched to GSM/GPRS-approved mode. A microcontroller in this system connects the sensor system and the GSM/GPRS system. The necessary data regarding the various garbage levels in multiple locations are tracked and integrated into an Android application. This leads to a greener environment and supports Swachh Bharat in its cleanliness mission.

3 System Model

Waste management is one of the main application domains that benefits from implementing IoT; IoT is consequently being used to create waste-bin level surveillance systems. In-depth research has been done on the development of smart cities utilizing long-range telecommunications. IoT systems built using the LoRaWAN communication interface are used in this study. Applications requiring distributed sensor nodes with low energy cost and long-distance networking are a particularly good fit for LoRaWAN. The advantages of the LoRaWAN network interface can be leveraged to overcome the limitations of the IoT-based trash surveillance systems that are already in use [16–18]. An intelligent waste collection system based on LoRaWAN already exists. This invention


uses several sensors, including temperature sensors, an ultrasound sensor, and load cells, to determine the height of unfilled space in a garbage can and the temperature inside the container. The use of several sensors, however, requires more power, necessitating frequent battery replacement. The sensor node's transmission range did not yield any helpful information either. To fully explain our project: the initial inspiration came from retail markets where discounts are offered for purchases of at least 500 or 1000 rupees. Because people are naturally drawn to things they can get for free, the idea of encouraging them to use dustbins for garbage disposal was born out of this brainstorming. This idea was then further examined and discussed in order to account for the number of people present in each district, for example in Tamil Nadu; this is a fundamental need of the populace. When individuals with the card arrive at the project site to dispose of their trash, they place the card on the RFID reader, which opens the trash can and displays the individual's name and identification number on the LCD display. When waste is placed in the bin, it is collected, and utilizing the technology set up for the project, an SMS is sent with a reward point and a thank-you note for using the smart garbage can. The intention of our project is to develop an intelligent garbage can that can alert the municipal corporation to empty the can by monitoring the level of rubbish disposed of within, as seen in Figs. 1 and 2. The transmitter and receiver are two distinct sets of equipment used in this situation. The municipal corporation will be the receiver, and the transmitter will be coupled to the smart garbage bin [19–22]. A LoRa transmitter, an LCD screen, an

Fig. 1 Communication of transmitter


Fig. 2 Communication of receiver

RFID tag reader, and other gadgets can all be found in the transmitter. On the other hand, the receiver side of the device has GSM, IoT, and LoRa receiver modules attached to it. Ultrasonic sensors on the transmitter side analyze the echo the sensor receives back; using this, we can determine the amount of trash in the bin and see the trash level on the LCD. Each member of society will receive an RFID (radio frequency identification) tag, a technology that automatically recognizes and locates tags attached to objects. Electronically saved data is kept on the tags. By integrating the RFID reader with the controller, we can give consumers the option to use credit points. The next device is an Arduino UNO, a microcontroller used to store user data. To print readable text on the display attached via LoRa, we have a 16 × 2 LCD module. Signals are sent to municipalities using the LoRa low-power, wide-area networking protocol: LoRa reports to the office whether the garbage level is low, medium, or full. When waste is disposed of in a smart garbage bin using an RFID tag, a message is transmitted to the municipal corporation via the GSM module used for communication on the receiver side. The credit points are also sent through the GSM module to the user's mobile device [23]. The IoT is a set of connections of actual physical things, like cars, buildings, and other objects. The online portal allows us to keep an eye on the multiple bins; each container has its place. IoT will make monitoring extremely effective. The innovation in our project is the introduction of the reward point system to entice people to use intelligent bins.
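As a toy illustration of the RFID lookup described above (the tag IDs and the user table are invented for the example; the paper does not publish its firmware):

users = {
    "04A1B2C3": {"name": "A. Citizen", "id_no": "TN-0042", "points": 0},
}

def on_card_scan(tag_id):
    # Open the bin and show the holder's name and id on the 16x2 LCD.
    user = users.get(tag_id)
    if user is None:
        return "Unknown card"
    # A 16x2 LCD fits two 16-character rows.
    return f"{user['name'][:16]}\n{user['id_no'][:16]}"

print(on_card_scan("04A1B2C3"))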


4 Performance Evaluation

When we face a substantial power problem, we turn to Arduino. Factors like reliability, safety, and accessibility can be quite important; the Arduino, as a microcontroller, offers speed and low-power advantages.

4.1 Transmitter

We have a transformer and a splitter on the transmitter side that separates the current into several components. First, an ultrasonic sensor is included with the garbage bin. The HC-SR04 analyzes the echo it gets from the smart bin; this HC-SR04 can be used to determine the quantity of trash in the trash bin, and the digital LCD will display that information [24]. The next step is to offer each user a card and interface an RFID reader, which automatically recognizes and tracks tags affixed to things using an electromagnetic field. The tags hold electronically stored data. By integrating the RFID reader with the controller, we can allow users to receive credit points, as seen in Figs. 3 and 4. The third part of the transmitter is the Arduino UNO interface, a microcontroller that stores user data, together with the LCD that displays viewable content.
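The HC-SR04 reports distance via an echo round-trip delay; the fill-level arithmetic behind the displayed status can be sketched in Python as follows (the bin depth, thresholds, and timing values are illustrative assumptions; the authors' Arduino firmware is not reproduced here).

SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly 20 degrees C

def fill_level_percent(echo_time_us, bin_depth_cm=80.0):
    # Convert an HC-SR04 echo round-trip time into a bin fill percentage.
    distance_cm = echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2  # round trip -> one way
    distance_cm = min(distance_cm, bin_depth_cm)
    return (bin_depth_cm - distance_cm) / bin_depth_cm * 100

def level_label(percent):
    # Map the percentage to the Low/Medium/Full states sent over LoRa.
    return "Full" if percent >= 80 else "Medium" if percent >= 40 else "Low"

echo = 1200.0  # microseconds, sample reading
pct = fill_level_percent(echo)
print(f"{pct:.0f}% -> {level_label(pct)}")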

Fig. 3 Initial setup of an implemented smart bin


Fig. 4 Components of transmitter

4.2 Receiver

On the receiver side, the interface is a LoRa receiver. A signal is transmitted to the municipality using the LoRa wide-area networking protocol, as seen in Fig. 5. The LoRa link reports to the office whether the garbage level is low, medium, or full. The next step is to alert the user via the GSM module interface; we employ GSM for telecommunication purposes [25], and the user receives the message regarding the credit points through this GSM module. We also use the IoT module to serve web pages. A network of physical objects, such as buildings, vehicles, and other items, makes up the Internet of Things (IoT). The online portal allows us to keep an eye on the multiple bins, and each container has its own place, as seen in Fig. 6. IoT will make monitoring extremely effective. Consider a user who wants to empty their waste into the smart garbage bin. They will use the RFID card to open the smart bin, which holds all of the user's information [26]. The HC-SR04 will assess the fill height of the container, which is shown on the LCD screen as well as sent through LoRaWAN to the receiver side (local company). Depending on how much trash the user deposits, credit points will be awarded to them. The GSM module will deliver these reward points to the user's mobile device [27]. In addition, these facts will be saved on a webpage, including the user's

Fig. 5 Components of receiver


Fig. 6 Fill level status

name, the day and time the waste was disposed of, and the number of credit points the user earned, as seen in Figs. 7 and 8. The web server has been set up so that local government agencies can access information on their region's trash cans and schedule timely removal. Time, human resources, and transportation costs will be reduced because the garbage collectors will know exactly where the trash cans are located and can route their trucks accordingly [28–30]. The above solution primarily establishes a direct relationship in which each citizen contributes to keeping his surroundings clean. There are numerous opportunities for development and enhancement with this concept.
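A sketch of the reward bookkeeping described above, with a hypothetical point rule and SMS payload (the actual point scale and message format are not specified in the paper):

from datetime import datetime, timezone

def award_points(fill_added_percent):
    # Hypothetical rule: one credit point per 5% of bin capacity deposited.
    return int(fill_added_percent // 5)

def sms_text(name, points):
    # Compose the confirmation SMS sent via the GSM module.
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    return (f"Dear {name}, thank you for using the smart garbage bin on {stamp}. "
            f"You earned {points} credit points.")

print(sms_text("A. Citizen", award_points(12.0)))  # -> 2 points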

Fig. 7 Credit level status


Fig. 8 Credit updates to user

Accepting the project is a significant benefit for the people and the government. When we implement our project for a smart city in the future, and if the Tamil Nadu Corporation agrees with the idea, these reward points could be used for electricity bills, water bill payments, or other government services. Cities present arguably the most significant opportunity for innovation and digital transformation for our entire civilization and the globe; through this change we can create an urban future that is more resilient, sustainable, and adaptable. Thus, our effort provides a solid justification for preserving the ecosystem.

5 Conclusion

Using the smart waste bin has the advantage of protecting the garbage collector from bacterial and viral diseases. With the help of this invention, the response time is quicker and less time is consumed, which could lower the government's overall spending. Smart bins contribute to improved operational efficiency and a safer, cleaner, and more hygienic environment while lowering management expenses, resource consumption, and roadside emissions. Busy places like campuses, amusement parks, airports, train


stations, and retail centers are perfect places to deploy the smart bin. Fewer waste collections would be necessary, requiring less labor and fuel, producing lower emissions, and reducing the number of waste bins needed. Using data analytics, collection routes and bin placement may be managed more effectively.

References

1. Hannan M, Arebey M, Begum RA, Basri H (2011) Radio frequency identification (RFID) and communication technologies for solid waste bin and truck monitoring system. Waste Manage 31(12):2406–2413
2. Wegedie KT (2018) Households solid waste generation and management behavior in case of Bahirdar City, Amhara National Regional State, Ethiopia. Cogent Environ Sci 4(1):1471025
3. Arebey M, Hannan MA, Basri H, Abdullah H (2009) Solid waste monitoring and management using RFID, GIS and GSM. In: Proc. IEEE student conf. res. develop. (SCOReD), Nov 2009, pp 37–40
4. Hoornweg D, Perinaz B (2012) What a waste: a global review of solid waste management. Urban Dev Ser Knowl Pap 15:87–88
5. Chuang SY, Sahoo N, Lin HW, Chang YH (2019) Predictive maintenance with sensor data analytics on a Raspberry Pi-based experimental platform. Sensors 19(18):3884
6. Udhaya Sankar SM, Dhinakaran D, Cathrin Deboral C, Ramakrishnan M (2022) Safe routing approach by identifying and subsequently eliminating the attacks in MANET. Int J Eng Trends Technol 70(11):219–231. https://doi.org/10.14445/22315381/IJETT-V70I11P224
7. Yazici M, Basurra S, Gaber M (2018) Edge machine learning: enabling smart Internet of Things applications. Big Data Cogn Comput 2(3):26
8. Vigneshwaran S, Karthikeyan N, Mahalakshmi M, Manikandan V (2019) A smart dustbin using LoRa technology. Int J Sci Res Review 07(03):704–708
9. Theivanathan G, Bala Murugan T, Dhinesh M, Kalaiarasan S, HaaslinBilto LA (2021) Smart waste management using Lora. Ann RSCB 25(6):2011–2017. ISSN 1583-6258
10. Muthukumar S, Selvamurugan K, Santhosh Kumar K, Shalini V (2020) Smart bin waste management network using LoRa and Internet of Things. Int J Eng Res Appl 10(6):61–65
11. Khoa TA et al (2020) Waste management system using IoT-based machine learning in university. Wirel Commun Mob Comput 2020:6138637
12. Jino Ramson SR, Jackuline Moni D (2017) Wireless sensor networks based smart bin. Comput Electr Eng 64:337–353
13. Vigneswaran S, Kandasamy J, Johir MAH (2016) Sustainable operation of composting in solid waste management. Procedia Environ Sci 35:408–415
14. Dara PK, Byragi Reddy T, Gelaye KT (2017) A study on municipal solid waste management in Visakhapatnam City. Int J Adv Res 5(6):1448–1453
15. Kumar SV, Kumaran TS, Kumar AK, Mathapati M (2017) Smart garbage monitoring and clearance system using Internet of Things. In: 2017 IEEE international conference on smart technologies and management for computing, communication, controls, energy and materials (ICSTM), Chennai, India, pp 184–189. https://doi.org/10.1109/ICSTM.2017.8089148
16. Udhaya Sankar SM, Christo MS, Uma Priyadarsini PS (2023) Secure and energy concise route revamp technique in wireless sensor networks. Intell Autom Soft Comput 35(2):2337–2351
17. Dhinakaran D, Joe Prathap PM (2022) Protection of data privacy from vulnerability using twofish technique with Apriori algorithm in data mining. J Supercomput 78(16):17559–17593. https://doi.org/10.1007/s11227-022-04517-0
18. Vishnu S, Ramson SRJ, Senith S, Anagnostopoulos T, Abu-Mahfouz AM, Fan Z, Srinivasan S, Kirubaraj AA (2021) IoT-enabled solid waste management in smart cities. Smart Cities 4:1004–1017. https://doi.org/10.3390/smartcities4030053

288

D. Dhinakaran et al.

19. Gomathy G, Kalaiselvi P, Selvaraj D, Dhinakaran D, Anish TP, Arul Kumar D (2022) Automatic waste management based on IoT using a wireless sensor network. In: 2022 international conference on edge computing and applications (ICECAA), pp 629–634. https://doi.org/10. 1109/ICECAA55415.2022.9936351 20. Cerchecci M, Luti F, Mecocci A, Parrino S, Peruzzi G, Pozzebon A (2018) A low power IoT sensor node architecture for waste management within smart cities context. Sensors 18(4):1282 21. Mdukaza S, Isong B, Dladlu N, Abu-Mahfouz AM (2018) Analysis of IoT-enabled solutions in smart waste management. In: Proceedings of IECON 2018—44th annual conference of the IEEE industrial electronic society, October, pp 4639–4644 22. Dhinakaran D, Khanna MR, Panimalar SP, Anish TP, Kumar SP, Sudharson K (2022) Secure Android location tracking application with privacy enhanced technique. In: 2022 fifth international conference on computational intelligence and communication technologies (CCICT), pp 223–229. https://doi.org/10.1109/CCiCT56684.2022.00050 23. Jena Catherine Bel D, Esther C, Zionna Sen GB,Tamizhmalar D, Dhinakaran D, Anish TP (2022) Trustworthy cloud storage data protection based on blockchain technology. In: 2022 international conference on edge computing and applications (ICECAA), pp 538–543. https:// doi.org/10.1109/ICECAA55415.2022.9936299 24. Ziouzios D, Dasygenis M (2019) A smart recycling bin for waste classification. In: Proceedings of 2019 Panhellenic conference on electronics and telecommunications (PACET), November, pp 1–4 25. Ananth TS, Baskar M, Udhaya Sankar SM, Thiagarajan R, Arul Dalton G, Rajeshwari PR, Kumar AS, Suresh A (2021) Evaluation of low power consumption network on chip routing architecture. Microprocess Microsyst 82. https://doi.org/10.1016/j.micpro.2020.103809 26. Kirubanantham P, Udhaya Sankar SM, Amuthadevi C, Baskar M, Senthil Raja M, Karthik PC (2022) An intelligent web service group-based recommendation system for long-term composition. J Supercomput 78:1944–1960 27. Srinivasan L, Selvaraj D, Dhinakaran D, Anish TP (2023) IoT-based solution for paraplegic sufferer to send signals to physician via Internet. SSRG Int J Electr Electron Eng 10(1):41–52. https://doi.org/10.14445/23488379/IJEEE-V10I1P104 28. Selvaraj D, Udhaya Sankar SM, Dhinakaran D, Anish TP (2023) Outsourced analysis of encrypted graphs in the cloud with privacy protection. SSRG Int J Electr Electron Eng 10(1):53–62. https://doi.org/10.14445/23488379/IJEEE-V10I1P105 29. Faccio M, Persona A, Zanin G (2011) Waste collection multi objective model with real time traceability data. Waste Manage 31(12):2391–2405 30. Aruna Jasmine J, Nisha Jenipher V, Richard Jimreeves JS, Ravindran K (2021) A traceability set up using digitalization of data and accessibility. In: International conference on intelligent sustainable systems (ICISS), Tirunelveli, India. IEEE Xplore, pp 907–910. https://doi.org/10. 1109/ICISS49785.2020.9315938

Monitoring of Wireless Network System-Based Autonomous Farming Using IoT Protocols D. Faridha Banu , N. Kumaresan, K. Geetha devi, S. Priyanka, G. Swarna Shree, A. Roshan, and S. Meivel

Abstract Manual irrigation is still widely used in the agricultural industry, in the form of conventional drip and simple sprinkler watering. However, conventional irrigation systems are unproductive and imprecise, leading to either insufficient or excessive watering, and it is hard for farmers to predict the right amounts at the right time. Monitoring the crop field manually is also prone to human error and can be hazardous in rural areas. Here, an Internet of Things (IoT) board is integrated with a relay and an RTC module to irrigate plants at specific times and is also equipped with a passive infrared sensor to detect intruders around the crop field. Farmers can monitor and manually control the irrigation process using an Android device that reports temperature, moisture, and humidity, and they can manually trigger a buzzer to warn off any potential malicious actor. Various IoT-based autonomous farming systems using the Message Queuing Telemetry Transport (MQTT), Constrained Application Protocol (CoAP), and Hypertext Transfer Protocol (HTTP) protocols are tested and surveyed. The proposed system provides secure, collected data from the agricultural system to analyze plant vegetation and diseases, and the analysis supports improvements in crop health. Keywords IoT · MQTT · CoAP and HTTP · Cloud server · DHT11 sensor

D. Faridha Banu (B) · K. Geetha devi · S. Priyanka · G. Swarna Shree · A. Roshan Department of ECE, Sri Eshwar College of Engineering, Coimbatore, India e-mail: [email protected] N. Kumaresan Department of ECE, Anna University Regional Campus, Coimbatore, India S. Meivel Department of ECE, M. Kumarasamy College of Engineering, Karur, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_21


1 Introduction to Agriculture and Farming Systems Agriculture derives from the Latin word "Ager", meaning land or field, and "Culture", meaning cultivation. It denotes the science and art of producing crops and livestock for economic purposes. Agriculture is the art of raising plants from the soil for the use of humankind. It is a milestone in the history of human civilization, since it led people to settle in particular areas, and it remains one of humanity's oldest and most fundamental activities and an essential use of land. Despite the world's increasing industrialization and urbanization, agriculture still employs nearly half of the working population. In developing countries, the agricultural sector is a major source of employment and a significant contributor to the economy. The essential aim of agriculture is to raise stronger and more productive crops and plants and to aid their growth by improving the soil and supplying water. Agriculture is a cornerstone of the Indian economy: roughly sixty-four percent of India's population depends on agriculture for their daily food. Agricultural activity is closely governed by physical factors all over the world, and Indian agriculture is no exception. Today, India faces two principal problems in agriculture. The first is meeting the growing demand for food and supplying agricultural products to an ever-increasing population; the second is the unequal development of agriculture and the changing pattern of agricultural land use. India attempted to become self-sufficient in agriculture through the Five-Year Plans; because of its unique importance, agriculture has become increasingly prominent in every plan, and top priority is given to its advancement in the country. The study of land and agriculture from a geographical perspective gained importance after 1950. From the early 1970s onward, the Green Revolution brought significant change to the agricultural sector, making India not only self-sufficient in food grains but also a modest surplus producer. Even so, agricultural development is not adequately channelized, owing to uneven rainfall, a lack of basic infrastructure, and unbalanced resource allocation. The Green Revolution succeeded mainly in irrigated regions; however hard the government tries, small farmers are unlikely to profit from it. This creates a large gap between small and large farmers, as well as an unbalanced situation, and reducing this gap requires systematic planning, for which detailed knowledge of the locality is essential. In many countries such as India, the bulk of the population relies on farming, and a large share of national income originates from farming; despite this, and despite the fact that modern technology is available everywhere, agriculture in many regions remains largely traditional.


2 Review of Literature This work developed a system that automatically monitors agricultural fields and performs live video streaming from the server itself via a Raspberry Pi camera. The fields are monitored for ambient temperature and humidity and with a soil moisture sensor. IoT and wireless sensor nodes help reduce the effort of inspecting agricultural areas. IoT also prevents loss of the agricultural parameter database by saving it on storage media or the cloud for longevity, and it provides continuous monitoring of all areas, including critical ones. Agricultural products depend on the environment of the production site, such as relative humidity, soil pH, temperature, and so on. The proposed system design was created to obtain higher yields by identifying these factors [1].

Weather patterns and rainfall have been unpredictable over many years. As a result, climate-smart practices referred to as "smart farming" have been adopted by many farmers. In the existing setup, community farmers may have grown the same crop for centuries, but over time survival strategies, soil conditions, pest outbreaks, and diseases have all changed. The proposed approach senses local agricultural parameters, determines the location of the sensor, transmits the recorded crop-field data, and performs crop monitoring. The improved information allows farmers to manage or take advantage of these changes. Acquiring real-time and historical environmental data is expected to aid dependable data management, monitoring, and use [2].

In earlier systems, agriculturists estimated the fertility of the soil and made assumptions in order to produce particular kinds of products. They did not consider water volume, humidity, or weather conditions, and productivity depended entirely on the final stage of the harvest. The proposed system increases product effectiveness by appraising the quality of the harvest. To help address the difficulties in the field, IoT is used to bring precision to conventional cultivation. Wireless sensor networks were also used for precision farming, dividing the field into plots of tens or hundreds of square feet and examining individual plants. Various kinds of sensors were used, including temperature sensors, humidity sensors, soil moisture sensors, water level sensors, and an ARM CPU [3].

Farmers still use conventional farming methods, which results in low yields of crops and fruits. Crop yield can be improved by using automated machinery, and by using IoT we can expect increased production and reduced cost through monitoring of soil fertility, temperature, and humidity. The existing system used conventional production methods, but in the proposed system the combination of conventional methods with IoT and wireless sensor networks may be the key to farming modernization. The integrated system is more dependable and favorable for farmers, and its use in the field can certainly help improve crop harvesting and global production [4].

The implemented framework consists of various sensors and devices interconnected through wireless communication modules. Sensor data is sent to and received from the client side via an Internet connection enabled on the NodeMCU, an open-source IoT platform. This system keeps the irrigation system operating efficiently at its optimum. The records can be viewed on the ThingSpeak application or any website. The farmer can review all information about the levels, when operations are performed, whether any changes appear, and whether functions are executed on time. The primary task is to monitor the plant's growth by electronic means, which delivers precise values of the many parameters on which growth depends. This version also helps the farmer monitor more than one property simultaneously. Monitoring fields through this system requires much less manpower, and people with physical disabilities can be employed for field monitoring [5].

The aim of [6] is to implement a smart GPS-based remote-controlled vehicle that performs numerous activities such as monitoring fields to prevent thefts, scaring away birds and animals, sensing soil moisture content, spraying fertilizers and pesticides, and weeding. Smart irrigation is implemented by applying optimal amounts of water based on the requirements of each crop type and the soil. Eventually, the authors plan to implement smart warehouse management, with temperature and humidity sensing for the benefit of the stored goods, as well as detection of the presence of any intruder who aims to steal goods from the warehouse. All of these functions are managed and monitored using a remote smart device and an Internet connection, and they are carried out using interfacing sensors, ZigBee modules, and a microcontroller [6].

Much research has been done in the field of farming, and much of it uses wireless sensor networks that collect data from sensors deployed at various nodes and transmit it wirelessly. The gathered records provide information about various environmental variables, but monitoring environmental variables alone is not the most effective way to increase crop yield, since many other factors reduce productivity. Automation therefore needs to be introduced into farming to eliminate these issues. To solve such problems, an integrated system is required that improves productivity at every stage. Yet complete automation in farming has not been achieved owing to numerous problems; although it has been demonstrated at the research level, it has not been provided to farmers as a product from which they can benefit. This paper therefore deals with developing smart farming using IoT and offering it to farmers. Applying such a system in the field can certainly help boost crop yields and manage the sprinkler resources efficiently, thereby reducing wastage [7].

A robot was designed for dispensing fertilizer for the soil based on the quantity of nutrients present, assessed by analyzing the soil with different color sensors. The soil is mixed with an appropriate chemical solution, RGB light is passed through the soil solution, and the reflected light is absorbed. Based on the amount of light reflected from the solution [4], the nutrient content is evaluated; however, this approach requires a different chemical solution for each nutrient [8].

The system in [9] used an IoT platform, including a system under review based on an agro-meteorology unit for viticulture disease warning. It monitors the vineyard using wireless and actuator sensors, while the server subsystem broadcasts records to the server; an improved approach was proposed by the authors in [9]. The paper's results describe a system built to handle water distribution and power management for an irrigation system; in this design, the optimal water supply was determined using the approximate reasoning of a fuzzy expert approach. A similar method was proposed by the authors in [10]. This later system examined an event-based irrigation scheme using tomato plants; the approach is used to reduce water usage and boost the effectiveness of the system. In addition, the authors in [11] created a system that used renewable resources, namely solar power, to run an automated irrigation system, the main goal being an inexpensive, time-based irrigation system. An analogous irrigation approach was posed by [12], which primarily focused on providing an active modern irrigation system (MIS) that relies on humidity control implemented with an Arduino Nano, along with details of sugarcane farm adjustments; this scheme can reduce water usage while also protecting crops from damage. The article in [13] built a system to monitor plants and control the water supply through a mobile phone by incorporating several sensors to detect soil humidity and plant temperature. A similar procedure was followed in [14].

3 Proposed System 3.1 Overview This section describes the details of the proposed design and how it functions. The main points of the work are specified and depicted with flowcharts and block diagrams. Figure 1 presents the general outline of the proposed system, which uses an ESP32 NodeMCU as the primary hardware. The microcontroller functions as the brain of the system: the temperature and humidity, infrared, and soil moisture sensors supply digital data to it, and it processes that data. The input from these sensors is used to either activate or deactivate the motor pump according to the programmed rules. The soil moisture sensor is placed in the soil close to the plant to measure the soil's humidity. The temperature and humidity sensor is located at the plant and is responsible for measuring the temperature and humidity of the surrounding air. All the information gathered from the sensors is shown on an LCD located close to the plant area for monitoring purposes. The gathered data is published to the Firebase cloud by the microcontroller, from where it is forwarded to the server to be saved in the database. Furthermore, the companion mobile application displays the information retrieved from the database. Users can access and control their automated irrigation system and monitor the plants through their smartphones. In this way, users can monitor the plant effectively and track real-time information on the condition of the soil and the temperature and humidity of the air surrounding the plant. In addition, the system functions as an intrusion detection system, deterring visits from unwanted animals that could harm the crops and detecting the presence of any unauthorized person. This is accomplished with the help of a passive infrared (PIR) sensor fixed in front of the plant area to detect any kind of intrusion; if the PIR sensor detects an intrusion, the alarm is turned on immediately. The system is designed to deliver the selected indicators to the database whenever it is in automatic or manual mode, and these indicators can be tracked from a desktop computer, which hosts the database. Figure 1 shows the proposed system setup and Fig. 2 shows the overall circuit diagram.

Fig. 1 Proposed block diagram (pH/soil sensor, temperature and humidity sensor, relay, IoT board, pump motor, LCD)
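To make the data acquisition loop concrete, the following is a minimal MicroPython-style sketch for the ESP32 NodeMCU. The pin numbers, dry threshold, and polling interval are illustrative assumptions, not values taken from the paper.

```python
# Minimal MicroPython sketch for the ESP32 NodeMCU (pin numbers are
# illustrative assumptions, not taken from the paper).
import time
import dht
from machine import Pin, ADC

sensor = dht.DHT11(Pin(4))            # temperature/humidity sensor
soil = ADC(Pin(34))                   # soil moisture probe (analog)
soil.atten(ADC.ATTN_11DB)             # full 0-3.3 V input range
pir = Pin(27, Pin.IN)                 # passive infrared intrusion sensor
relay = Pin(5, Pin.OUT)               # drives the water pump
buzzer = Pin(18, Pin.OUT)             # intrusion alarm

DRY_THRESHOLD = 1200                  # assumed ADC level for "dry" soil
                                      # (ESP32 ADC reads 0-4095; the paper
                                      # quotes a 10-bit 0-1023 scale)

while True:
    sensor.measure()
    reading = {
        "temp_c": sensor.temperature(),
        "humidity": sensor.humidity(),
        "soil": soil.read(),
        "intruder": bool(pir.value()),
    }
    print(reading)                    # stand-in for the LCD/cloud upload
    buzzer.value(1 if reading["intruder"] else 0)
    relay.value(1 if reading["soil"] < DRY_THRESHOLD else 0)
    time.sleep(5)
```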

3.2 Analysis of the Proposed System This section elaborates the proposed flow of the work from start to finish. The work can be divided into four phases, as displayed in Fig. 3, while the block layout of the smart plant monitoring and automated irrigation system is shown in Fig. 1. The microcontroller is connected to a power supply and then attached to the various input sensors, including the passive infrared sensor. The output components, which include the relay, LCD, and buzzer, are attached to the NodeMCU to receive its outputs, respectively switching on the water irrigation, displaying the current sensor data, and activating the alarm. The microcontroller also has the capability to push data to the Internet via Wi-Fi technology. The following subsections describe the building blocks of the system: data acquisition, output, intrusion detection, cloud storage, and monitoring and evaluation.

Fig. 2 Overall circuit diagram

The data acquisition process is depicted in Fig. 5. This phase begins by initializing the LCD and proceeds after initializing all of the sensors and setting the RTC. To determine the current data on plant condition, a pair of sensors is used. The first is a soil moisture sensor, which measures the condition of the soil based on the resistance produced by the soil's moisture. This sensor has two output forms, analog and digital. If set to analog, the user can observe the present voltage from the sensor; if the value is collected in digital form, the user reads a value from 0 to 1023, where a low value indicates that the soil is completely dry, and vice versa. The second sensor used in this work is a temperature and humidity sensor (DHT11), which reports the present temperature and humidity surrounding the plant. Depending on the amount of natural water nearby, such as rainfall, the humidity and temperature readings will be higher on wet days; if conditions are completely dry, the readings will be lower. All of the information gathered by these sensors is collected and delivered to the cloud via a Wi-Fi link. This phase then continues to the next stage, marked 'B', which represents the intruder detection phase.


Fig. 3 Proposed method (flow: start → HTTP request → DHT11 connect → enable CoAP protocol → enable MQTT protocol → display the real-time data (temperature, humidity, and pH) → secure the data)

Fig. 4 Results of Thingspeak.com—cloud server

If the system finds that it is daytime, it determines the condition of the soil using the soil moisture sensor, updating the soil's condition over the course of the day. If the sensor finds that the soil is in a completely dry condition, the serial monitor displays the notification "dry" on the OLED, as can be seen in Fig. 8. In this case, the water pump is switched on for 6 s through the relay module to irrigate the plants.


Fig. 5 Programming of MQTT protocol

3.3 Proposed Method The proposed method is shown in Fig. 3. If the sensor finds that the condition of the soil is already damp, the serial monitor displays a "damp" notification on the LCD and the water pump is switched on for only 3 s. Switching on the pump even in the damp condition guarantees the quality and sustainability of the plants until the next watering. It is important to note that the volume of water in this condition is low and may be only a spray; this avoids over-watering the already damp soil, which could harm the plants. If the sensor detects that the soil is very wet, or if the sensor is placed at groundwater level, the pump is not activated. Furthermore, if the RTC module detects that the time has changed to evening, or that the temperature has risen above 38 °C, the pump is not activated. The smart irrigation and monitoring output is then fed as an input to the RTC module, as shown by port 'D' in the diagram, and the result from the RTC feeds the succeeding stage at 'E'. The information is stored in the cloud in an IoT application, namely the Firebase database system. After the information is gathered for a given area, it is sent back to the mobile phone and other devices to be displayed to the farmer, who receives an alert if an intruder is detected or the temperature of the plant area is hot. The data transmitted via Wi-Fi by the NodeMCU is routed to a nearby cloud web server, which runs on the PC. The data collected by the cloud web server is then forwarded to the Firebase database system, which saves the data collected by the sensors. In addition, this system can display the current information broadcast from the microcontroller, and the cloud database can be used to send an alert to the farmer when a specified threshold is reached. The watering rules above can be captured in a short decision routine, as sketched below.
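The following minimal sketch encodes the watering rules described in Sects. 3.2 and 3.3 (6 s for dry soil, 3 s for damp soil, no watering when the soil is very wet, at night, or above 38 °C). The function and parameter names are hypothetical.

```python
# Sketch of the watering decision described above (MicroPython-style;
# helper names are illustrative assumptions).
import time

PUMP_SECONDS = {"dry": 6, "damp": 3}   # durations quoted in the text

def watering_time(soil_state, is_daytime, temp_c):
    """Return how long to run the pump, or 0 to leave it off."""
    if not is_daytime or temp_c > 38:   # evening, or plant area too hot
        return 0
    return PUMP_SECONDS.get(soil_state, 0)  # "wet" or unknown -> pump off

def irrigate(relay, soil_state, is_daytime, temp_c):
    seconds = watering_time(soil_state, is_daytime, temp_c)
    if seconds:
        relay.value(1)                  # energize relay -> pump on
        time.sleep(seconds)
        relay.value(0)
```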

3.4 Selection of IoT The IoT board was selected based on its protocol support, the speed with which it displays information, and its battery backup. The board runs on low voltage and supports reprogramming to activate the system or reset the same process. Figure 4 shows the results of the Thingspeak.com cloud server, displaying the readings of the DHT11 sensor and the pH level sensor and the output of the DC motor pumps driven through relays.

3.5 Result of Thingspeak Cloud Server Figure 4a and b shows the results of the Thingspeak cloud server, conveying the readings of the DHT11 sensor and the pH level sensor and the output of the DC motor pumps driven through relays. A reading can be pushed to such a channel with a single HTTP request, as sketched below.
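A minimal MicroPython sketch for pushing one reading to ThingSpeak over HTTP follows; it uses ThingSpeak's documented update endpoint, but the write API key and the field-to-sensor mapping are placeholders, not values from the paper.

```python
# Minimal MicroPython sketch for one ThingSpeak update over HTTP.
import urequests

THINGSPEAK_KEY = "YOUR_WRITE_API_KEY"   # placeholder per-channel write key

def push_reading(temp_c, humidity, ph):
    # field1..field3 are assumed to map to temperature, humidity, and pH.
    url = ("https://api.thingspeak.com/update"
           "?api_key=%s&field1=%s&field2=%s&field3=%s"
           % (THINGSPEAK_KEY, temp_c, humidity, ph))
    resp = urequests.get(url)           # ThingSpeak's documented update call
    entry_id = resp.text                # "0" means the update was rejected
    resp.close()
    return entry_id
```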

3.6 Programming of CoAP and MQTT Protocols Google Firebase controls the monitoring devices to continuously display the temperature and humidity data. The HTTP and MQTT protocols are used to secure the data with encryption and decryption, and all the gathered information is published to Firebase. Figure 5 shows how unauthorized storage of the DHT data in the database is prevented: intrusions detected from the real-time data protect the system, and the IoT protocols build a firewall-like layer around it. Figure 5 shows the programming of the MQTT protocol, and Figs. 6, 7, and 8 show the results of Silo_ABC on the CoAP device, i.e., the cloud-server results from the real-time database for Silo A, Silo B, and Silo C.
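As a concrete illustration of the MQTT publishing step (the paper's Fig. 5 is a screenshot), here is a minimal MicroPython-style publisher. The broker host, client id, and topic are placeholders, not values from the paper.

```python
# Minimal MicroPython MQTT publisher for the DHT11 readings.
from umqtt.simple import MQTTClient
import ujson  # MicroPython's JSON module

def publish_dht(temp_c, humidity, ph):
    # Broker host, client id, and topic below are placeholders.
    client = MQTTClient("farm-node-1", "broker.example.com")
    client.connect()
    payload = ujson.dumps({"temp": temp_c, "hum": humidity, "ph": ph})
    client.publish(b"farm/field1/telemetry", payload.encode())
    client.disconnect()
```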

4 Conclusion Various IoT-based autonomous farming systems using the MQTT, CoAP, and HTTP protocols were designed and surveyed. The proposed system provides secure collection of data from the agricultural system to analyze plant vegetation and diseases, and the analysis supports improvements in crop health. The proposed system


Fig. 6 Results of Silo A in CoAP devices

Fig. 7 Results of Silo B in CoAP devices

can be used as an autonomous irrigation system when high-priority agricultural parameters are reached, using IoT cloud data in real time. The plant industry requires low-cost products with intelligent irrigation and unmanned systems [1]. The proposed system provides a real-time report to farmers and users to determine the quality of the plant's leaves, seeds, and fruits. Farmers can monitor and manually control the irrigation process using an Android device. The MQTT, CoAP, and HTTP protocols were surveyed to secure the farming data, and only authorized users have access to this information. Every day, vital plant farming data is stored, secured, and collected. All data is linked to cloud servers and can be hidden behind the user's account [5]. When the data is searched, the proposed system raises an alert and informs the owners via the


Fig. 8 Results of Silo C in CoAP devices

Table 1 Comparison of protocols

Protocols data | MQTT—success rate | CoAP—success rate | HTTP—success rate
1              | 6560 data         | 7100 data         | 938 data
2              | 6530 data         | 7050 data         | 930 data
3              | 6501 data         | 7001 data         | 935 data
4              | 6505 data         | 6990 data         | 931 data
5              | 6505 data         | 6850 data         | 938 data
6              | 6520 data         | 6700 data         | 934 data
7              | 6510 data         | 6650 data         | 920 data
8              | 6500 data         | 6600 data         | 915 data
9              | 6450 data         | 6500 data         | 914 data
10             | 6300 data         | 6400 data         | 910 data

MQTT, CoAP, and HTTP protocols before the data is accessed. The proposed system successfully addressed the stated problems and achieved the objective of efficient, low-power water consumption based on specific conditions. The final results show the success rate of IoT agricultural data transfer: in comparison to the HTTP and MQTT success rates, CoAP achieved the highest success rate (a quick numerical check follows). As Table 1 shows, the measured success rates support both the security of the data and the availability of real-time data on the cloud server for monitoring.
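As a quick arithmetic check of that claim, averaging the Table 1 counts in plain Python confirms the ordering CoAP > MQTT > HTTP (values transcribed from the table).

```python
# Quick check of the Table 1 claim that CoAP attains the highest mean
# success count (values transcribed from Table 1).
mqtt = [6560, 6530, 6501, 6505, 6505, 6520, 6510, 6500, 6450, 6300]
coap = [7100, 7050, 7001, 6990, 6850, 6700, 6650, 6600, 6500, 6400]
http = [938, 930, 935, 931, 938, 934, 920, 915, 914, 910]

for name, counts in [("MQTT", mqtt), ("CoAP", coap), ("HTTP", http)]:
    print(name, sum(counts) / len(counts))
# CoAP's mean (~6784) exceeds MQTT's (~6488) and HTTP's (~927).
```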

References
1. Patil SN, Jadhav MB (2019) Smart agriculture monitoring system using IoT. Int J Adv Res Comput Commun Eng
2. Lashitha Vishnu Priya P, Sai Harshith N, Ramesh NVK (2018) Smart agriculture monitoring system using IoT. Int J Eng Technol
3. Bouarourou S, Boulaalam A (2019) Services search techniques architecture for the Internet of Things. In: International conference on artificial intelligence and symbolic computation, pp 226–236
4. Kurniawan F, Nurhayati H, Arif YM, Harini S, Nugroho SMS, Hariadi M (2018) Smart monitoring agriculture based on Internet of Things. In: Proceedings of the 2nd East Indonesia conference on computer and information technology: Internet of Things for industry, EIConCIT 2018, p 363
5. Ashifuddinmondal M, Rehena Z (2018) IoT based intelligent agriculture field monitoring system. In: Proceedings of the 8th international conference Confluence 2018 on cloud computing, data science and engineering, Confluence 2018, pp 625–629
6. Meivel S, Maheswari S (2022) Monitoring of potato crops based on multispectral image feature extraction with vegetation indices. J Multidimens Syst Signal Process. https://doi.org/10.1007/s11045-021-00809-5
7. Meivel S, Maheswari S (2022) Quality management of healthcare food production in agricultural forest fields using vegetation indices with multispectral drone mapping images. J Environ Prot Ecol 23(1):266–279
8. Meivel S, Sindhwani N, Anand R, Pandey D, Alnuaim AA, Altheneyan AS, Jabarulla MY, Lelisho ME (2022) Mask detection and social distance identification using Internet of Things and faster R-CNN algorithm. Comput Intell Neurosci 2022:2103975. https://doi.org/10.1155/2022/2103975
9. Sindhwani N, Anand R, Meivel S, Shukla R, Yadav MP, Yadav V (2021) Performance analysis of deep neural networks using computer vision. EAI Endorsed Trans Ind Netw Intell Syst (INIS). https://doi.org/10.4108/eai.13-10-2021.171318
10. Meivel S, Maheswari S (2020) Remote sensing analysis of agricultural drone. In: UASG 2019—1st international conference on unmanned aerial system in geomatics, IIT Roorkee, Greater Noida, Delhi, 6–7 Apr 2019; published in J Indian Soc Remote Sens 49(11):689–701. https://doi.org/10.1007/s12524-020-01244-y
11. Meivel S, Maheswari S (2020) Optimization of agricultural smart system using remote sensible NDVI and NIR thermal image analysis techniques. In: 2020 international conference for emerging technology (INCET), IEEE. https://doi.org/10.1109/incet49848.2020.9154185
12. Meivel S, Elakkiya S, Kartheeswari V, Preethika KV (2023) Wireless underground soil networks-based multiparameter monitoring system for mining areas. In: Ranganathan G, Fernando X, Rocha Á (eds) Inventive communication and computational technologies. Lecture notes in networks and systems, vol 383. Springer, Singapore. https://doi.org/10.1007/978-981-19-4960-9_26
13. Köksal Ö, Tekinerdogan B (2019) Architecture design approach for IoT-based farm management information systems. Precis Agric 20:926–958. https://doi.org/10.1007/s11119-018-09624-8
14. Pandey MK, Garg D, Agrahari NK, Singh S (2021) IoT-based smart irrigation system. In: Saini HS, Sayal R, Govardhan A, Buyya R (eds) Innovations in computer science and engineering. Lecture notes in networks and systems, vol 171. Springer, Singapore. https://doi.org/10.1007/978-981-33-4543-0_23

A Brief Survey on Enhanced Quality of Service Mechanisms in Wireless Sensor Network for Secure Data Transmission Pavan Vamsi Mohan Movva and Radhika Rani Chintala

Abstract The increasing demand for wireless sensors in a wide variety of applications has pushed Quality of Service (QoS) concerns to the forefront of the wireless sensor industry, and QoS needs vary from application to application. To monitor and gather data on the physical state of a given area, a typical Wireless Sensor Network (WSN) utilizes a combination of sensing technologies; communication between nodes may be either physical or logical. The military, business, and health sectors, as well as the environmental and urban sectors, can all benefit from WSNs because of the low-cost and rapidly deployable solutions they provide, and WSNs are expected to play a significant role in sensing for a wide variety of applications in the near future. Safe data transmission across a WSN is a major challenge, because WSNs are frequently used in unsupervised or even hostile settings. In recent years, routing approaches have mostly focused on metrics like trust, resilience, and energy preservation; however, other security solutions have lately emerged that also take the security challenges in WSNs into account. Since wireless sensor applications are becoming increasingly popular, QoS has emerged as a critical concern. The limited resources of sensors and the numerous applications running across these networks have varied constraints and requirements, making it challenging to guarantee quality of service in WSNs. Quality of service has historically been concerned with network-level measures like delay and throughput. This research provides a brief survey of numerous issues in WSNs, including routing, secure data transmission, detection of malicious actions, and encoding of sensory data for secure data transmission. Keywords Wireless sensor networks · Quality of service · Routing · Encoding · Malicious actions · Secure data transmission

P. V. M. Movva (B) · R. R. Chintala Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_22


1 Introduction WSNs are among the fastest-growing areas in data processing and communication networks [1]. The foundation of WSNs is communication between physically dispersed sensor nodes, which share data, primarily about their surrounding environment, with one another. Wildlife, real-time target tracking, transit, entertainment, combat, building safety monitoring, and agriculture are just a few of the many sectors where WSNs have proven useful [2]. In a WSN, many sensor nodes, all linked together wirelessly, collect data about their surroundings. Sensor nodes in the network utilize a single integrated circuit that contains all the necessary electronic components. Since the entire sensor runs on a single tiny battery [3], the network's longevity is directly tied to the sensor's power usage [4]. The sensors in the network are supported by a Base Station (BS), which acts as the network's portal to the outside world. But energy consumption is still a major barrier to widespread use of this technology [5], especially in use cases where a long network lifetime and excellent quality of service are essential. The architecture of a WSN is shown in Fig. 1.

Fig. 1 Wireless sensor networks

For many years, monitoring critical infrastructure, factories, battlefields, and other public and private spaces has been a matter of enormous importance in both civilian and military settings [6]. Recent years have seen a surge in the development of sensor nodes, resulting in ever-smaller nodes at ever-lower prices thanks to breakthroughs in microelectronics, highly integrated electronics, and improved energy accumulators [7]. The concept is that sensor nodes can work together wirelessly to monitor whatever is going on, regardless of where they are located, simply by configuring and organizing themselves in an ad hoc fashion [8]. The computing power, memory, and transmission range of the sensor nodes are severely limited by the sensor's energy source, typically a battery that should endure for the sensor's lifespan [9]. Because of this, the nodes cannot independently carry out computationally demanding activities or produce useful outcomes [10]. Accordingly, the sensor nodes need to collaborate to monitor larger regions, accumulate measured values, and transmit them to a location in the network where the data can be read out and analyzed.

Data packet routing from a source to a destination is a significant topic of study in WSNs. Energy efficiency is a crucial consideration in the design of routing protocols for WSNs due to the scarcity of available power [11]. Each sensor's transmission range is kept extremely low to conserve power, which means that data packets must be passed across numerous hops to traverse the network [12]. The routing must be fault-tolerant and must continuously adapt while consuming as little energy as feasible in order to survive topology shifts, interference from environmental factors or adversaries, node failures, and dwindling power supplies [13]. With current routing data, packets can be redirected away from vulnerable nodes, preventing a total network collapse [14]. Moreover, the routing algorithm needs to consider load balancing to prevent overloaded nodes from causing the network to fragment and subsequently lose connections. In addition, WSN routing methods should consider fusing sensed data to cut down on duplicate data transmissions [15]. While the routing of data packets in WSNs is a crucial service that enables communication, security concerns in this area have been mostly disregarded; instead, modern routing protocols focus on characteristics like availability, resilience, responsiveness, and power efficiency [16]. However, in almost all application areas where WSNs are used, sensors are placed in hostile or unattended situations [17], giving opponents the potential to launch threats against sensor nodes, and the failure to consider possible security issues in routing can be fatal [18]. Because adversaries can easily gain physical access to the sensors, node capture and compromise has emerged as a pressing issue. The communication process in a WSN is shown in Fig. 2.

Fig. 2 Communication process in WSN
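The energy-aware, multi-hop, load-balancing routing idea discussed above can be illustrated with a short sketch. This is a generic illustration, not any specific protocol from the survey; the scoring rule and weight are assumptions.

```python
# Generic sketch: choose a next hop that balances progress toward the sink
# against the candidate's remaining battery (the load-balancing idea above).
def pick_next_hop(dist_to_sink, neighbors, alpha=0.5):
    """neighbors: iterable of (node_id, neighbor_dist_to_sink, residual_energy),
    with residual_energy normalized to [0, 1]. alpha weights progress vs energy."""
    best, best_score = None, float("-inf")
    for node_id, dist, energy in neighbors:
        progress = dist_to_sink - dist            # > 0 means closer to the sink
        if progress <= 0:
            continue                              # never route away from the sink
        score = alpha * progress + (1 - alpha) * energy
        if score > best_score:
            best, best_score = node_id, score
    return best                                   # None -> no viable next hop

# Example: the nearer but nearly depleted node "b" loses to "a".
print(pick_next_hop(10.0, [("a", 7.0, 0.9), ("b", 6.5, 0.1)]))  # -> "a"
```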

Quality of service is a fuzzy concept that can signify different things to different people in the academic and technical worlds. QoS in WSNs can be analyzed from two angles: the application level and the network level [19]. The former refers to QoS factors unique to the application, such as sensor-node measurements, deployment, coverage, and the number of active sensor nodes. In the latter context, efficiency is how well the underlying communication network serves the application in terms of factors like bandwidth and power usage [20, 21]. The increased use of WSNs can be attributed to the maturation of wireless networks and the proliferation of multifunctional sensors equipped with processing and communication capabilities. Depending on the type of application, WSNs can offer a more precise or dependable monitoring service, and quality of service can serve as a crucial mechanism for satisfying the varying needs of different types of applications. Due to limitations such as resource constraints and changeable topology, wired-network-style QoS techniques are not suitable for WSNs. Providing QoS parameter guarantees for real-time applications is one of the many difficulties associated with WSNs. As a result, middleware should offer novel techniques to sustain QoS over time and even to adapt itself when the required QoS and the status of the application change. To provide QoS in a WSN, the underlying middleware should be constructed with trade-offs between performance indicators in mind, such as available bandwidth or throughput, data delivery latency, and energy usage. Malicious node attacks are a major security issue in WSNs: when malicious nodes attack a network, they target the security and availability of neighboring nodes. Attacks on WSNs come in many forms, including the Sybil attack, selective forwarding attack, hello flood attack, sinkhole attack, false routing information attack, wormhole attack, and many others, and multiple security threats can compromise the data as it travels from its origin to its destination. To counteract the problems caused by network-layer security attacks, secure routing has to be set up throughout the network. WSN-to-WSN connections also require key management for authentication and encryption. Due to the distributed nature of WSNs and the absence of a centralized management structure, potential security threats go unchecked and unmanaged, and the capability and collaboration of WSNs are also impacted by malicious acts from unreliable sensors. Lightweight symmetric key cryptography, localized authentication and encryption protocols, elliptic curve cryptography, predetermined key management schemes, and malicious aggregator identification are all examples of existing methods used for secure interaction and malicious node detection in WSNs.

2 Literature Survey In order to address the problem of fault tolerance in hierarchical topology-based wireless sensor networks (WSNs), Quoc et al. [1] suggested a hybrid fault-tolerant routing scheme. Clustering and assigning sensor nodes produces the hierarchical topology: the network space is partitioned into small square grids, and the cluster head of each grid is symbolized by a Gaussian integer. This interconnection of cluster heads forms the basis of the Gaussian network.


The research offers a hybrid fault-tolerant clustering routing mechanism based on the Gaussian network for wireless sensor networks (FCGW), taking advantage of the symmetry of nodes, the shortest route in the Gaussian network, and the benefits of multipath routing. Wang et al. [2] introduced the destination-oriented routing algorithm (DORA), a novel multichain routing approach. For WSNs that aim to conserve energy, the suggested architecture creates a novel multichain routing scheme for sending packets. The simulation results show that the proposed DORA extends the lifetime of the network by 60% compared to the Remote Procedure Call (RPC) protocol and by 100% compared to standard Power-Efficient GAthering in Sensor Information Systems (PEGASIS). To extend the lifetime of software-defined multihop wireless sensor networks (SDWSNs), Jurado-Lasso et al. [3] offered an energy-aware routing method and a control-overhead reduction technique. The goal is to reduce the energy used by WSNs serving the Industrial IoT (IIoT) while maintaining service quality. The authors demonstrated that, in comparison to the conventional shortest-path technique, the suggested method increases the WSN's lifetime by 6.5% on average while simultaneously decreasing the control overhead by around 12%. Lifespan is a key indicator of WSN performance; however, existing hierarchical routing protocols suffer from overloaded nodes that may fail prematurely. In response to this issue, Wang et al. [4] suggested the LEMH routing method, which stands for low-energy-first electoral multipath alternating multihop. To prevent nodes from becoming overloaded, an adaptively changing contention radius is assigned, allowing nodes to pool their remaining power and vote for candidates in the cluster-head election. The active nodes then use two-dimensional competition parameters to determine the final cluster leader. Given the constrained nature of sensor nodes, Adil et al. [5] proposed a dynamic cluster-based static routing protocol (DCBSRP) that makes use of both the ad hoc on-demand distance vector (AODV) routing protocol and the low-energy adaptive clustering hierarchy (LEACH) protocol to provide a highly effective hybrid routing scheme. The proposed approach employs static routing within the selected clusters using the AODV routing protocol, with the cluster head (CH) nodes being generated dynamically for a predetermined time period. These five schemes are compared below.

The surveyed routing schemes are summarized below (author name, year of publication, manuscript title, proposed model, and limitations).

Author name: Quoc et al. [1]
Year of publication: 2022
Manuscript title: A hybrid fault-tolerant routing based on Gaussian network for wireless sensor network
Proposed model: The authors suggested a hybrid fault-tolerant routing scheme. Clustering and assigning sensor nodes produces the hierarchical topology: the network space is partitioned into small square grids, and the cluster head of each grid is symbolized by a Gaussian integer.
Limitations: The proposed model is prone to attacks, which can be mitigated using trust-factor models. The packet loss ratio can be reduced for enhanced quality of service.

Author name: Wang et al. [2]
Year of publication: 2021
Manuscript title: DORA: a destination-oriented routing algorithm for energy-balanced wireless sensor networks
Proposed model: The authors introduced the destination-oriented routing algorithm (DORA), a novel multichain routing approach. For WSNs that aim to conserve energy, the suggested architecture creates a novel multichain routing scheme for sending packets.
Limitations: The nodes in the routing process could be validated to avoid malicious action in the network. Energy consumption can also be reduced by using an on/off strategy.

Author name: Jurado-Lasso et al. [3]
Year of publication: 2021
Manuscript title: Energy-aware routing for software-defined multihop wireless sensor networks
Proposed model: The authors offered an energy-aware routing method and a control-overhead reduction technique, aiming to reduce the energy used by WSNs serving the Industrial IoT (IIoT) while maintaining service quality. While a centralized controller's bird's-eye view of the sensor network is invaluable, the additional control overhead it introduces into the network comes at the expense of energy efficiency.
Limitations: The proposed energy-aware routing model can be further enhanced; considering only a limited set of nodes increases delay levels in the communication.

Author name: Wang et al. [4]
Year of publication: 2022
Manuscript title: LEMH: low-energy-first electoral multipath alternating multihop routing algorithm for wireless sensor networks
Proposed model: The authors suggested the LEMH routing method (low-energy-first electoral multipath alternating multihop). To prevent nodes from becoming overloaded, an adaptively changing contention radius is assigned, allowing nodes to pool their remaining power and vote for candidates in the cluster-head election.
Limitations: The proposed model relies on multipath detection, which increases communication delay and reduces system performance; performance can be enhanced by rerouting models that avoid delays.

Author name: Adil et al. [5]
Year of publication: 2020
Manuscript title: An energy proficient load balancing routing scheme for wireless sensor networks to maximize their lifespan in an operational environment
Proposed model: The authors proposed a dynamic cluster-based static routing protocol (DCBSRP) that makes use of both the ad hoc on-demand distance vector (AODV) routing protocol and the low-energy adaptive clustering hierarchy (LEACH) protocol to provide a highly effective hybrid routing scheme. Static routing is employed within the selected clusters using AODV, with the cluster head (CH) nodes generated dynamically for a predetermined time period; for a set period T, all cluster nodes route their data through a single CH node.
Limitations: The proposed model is based on AODV and must contend with malicious nodes that degrade the network transmission rate; routing could also be performed dynamically to increase the lifespan.


Wireless sensor networks (WSNs) have two key issues, security and energy consumption, because of their limited resources and dynamic topology. Trust-based solutions are now practical for dealing with a wide range of undesirable node behaviors, but several threats, such as high energy usage, communication bottlenecks, and malicious nodes, persist. Hu et al. [6] presented a novel trust-based secure and energy-efficient routing protocol (TBSEER) to address these issues. To counteract black hole, selective forwarding, sinkhole, and hello flood attacks, TBSEER computes a global trust value from an adaptive direct trust value, an indirect trust value, and an energy trust value. Attacks involving the loss of a smart card and attacks that capture a network's nodes are two of the most common causes of security breaches. While the former has been studied extensively, node capture attacks have received very little academic attention. Wang et al. [7] make a significant contribution by systematically exploring node capture threats against multi-factor user authentication techniques for WSNs, which goes a long way toward mitigating this unfavorable situation. The authors began by examining the background of node capture attacks and then categorized them into ten distinct subtypes based on the nodes targeted, the capabilities of the adversary, and the vulnerabilities exploited. An anonymous three-factor authentication and access control approach for real-time applications in WSNs was proposed by Shin and Kwon [8]. The authors presented a system architecture for the Internet of Things (IoT) that takes into account the integration of WSNs and 5G. Based on a cryptanalysis of an earlier scheme and this system architecture, a privacy-preserving authentication, authorization, and key agreement mechanism for WSNs in 5G-integrated IoT is proposed; the earlier technique was found to be vulnerable to user collusion and desynchronization attacks and did not guarantee sensor-node anonymity. Using a non-invertible transformation approach called the manipulatable Haar Transform (MHT), Yang and Wang [9] offered a privacy-preserving ECG-based authentication system. The proposed authentication process safeguards the privacy of individuals' critical health and identity information stored in ECG data by providing secure intra-node identification for WBSNs. Experimental results on two public databases and a real Internet of Things device demonstrate the strong performance and efficiency of the proposed system, and a security analysis provides additional proof of the MHT's reliability. Moghadam et al. [10] studied Majid Alotaibi's scheme and found some potential vulnerabilities, detailing a security attack that can be launched against the suggested protocol. In addition, the authors devised a mutual authentication and key agreement mechanism based on Elliptic-Curve Diffie–Hellman (ECDH) to fix the security flaws in M. Alotaibi's system. They verified their technique using the Scyther tool, assessed its security characteristics by hand, and compared it to other methods.
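The weighted-combination idea behind a global trust value, as in TBSEER's combination of direct, indirect, and energy trust, can be illustrated with a short sketch. The weights, normalization, and threshold below are assumptions for illustration, not the formulas from [6].

```python
# Simplified illustration of a global trust value combining direct,
# indirect, and energy trust, in the spirit of TBSEER (weights, scaling,
# and threshold are assumptions, not the paper's actual formulas).
def global_trust(direct, indirect, energy, w=(0.5, 0.3, 0.2)):
    """All inputs normalized to [0, 1]; returns a weighted trust score."""
    assert abs(sum(w) - 1.0) < 1e-9
    return w[0] * direct + w[1] * indirect + w[2] * energy

def is_trusted(node_scores, threshold=0.6):
    """Map node id -> True/False depending on its combined trust score."""
    return {nid: global_trust(*s) >= threshold for nid, s in node_scores.items()}

# Example: n7 behaves well but is low on energy; n9 misbehaves.
print(is_trusted({"n7": (0.9, 0.8, 0.1),   # 0.71 -> trusted
                  "n9": (0.3, 0.4, 0.9)})) # 0.45 -> not trusted
```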


3 Proposed Methodologies Wireless sensor networks are dispersed at will and tasked with wide-area monitoring. Because of a WSN's restricted power and computing capabilities, data aggregation is a difficult task, and aggregated data may be passed to a hostile node. A secure, malicious-free environment is required to guarantee the security and longevity of any network, and despite the difficulty, strict safety procedures are required to keep this network operational. Since the nodes of a wireless sensor network are deployed in unsecured and potentially dangerous environments, they are vulnerable to attack. The primary functions of a wireless sensor network include monitoring, data gathering, and reporting, all of which necessitate a safe environment in which to transmit sensitive information. Route selection is just as crucial for ensuring safe data transfer: selecting a malicious node for a route is a potentially catastrophic event with serious consequences for the operation of the network. The deployment of wireless sensor networks has enabled the monitoring of physical and environmental conditions in remote and inaccessible areas. Since their resources are restricted and they are used in severe settings, they will experience breakdowns and malicious attacks, and if sensor nodes are compromised, the base station may receive erroneous information. Confident operation of networks therefore requires accurate and quick identification of hostile and malicious nodes; better network performance can be achieved by quickly identifying malicious nodes without penalizing healthy ones in the process. The literature survey identified several limitations of WSNs that need to be overcome for enhanced quality of service. The objectives are: to create an effective system for identifying trusted routes by keeping tabs on network node activity using an auditor node; to design a framework for detecting malicious nodes in WSNs by analyzing node behavior, data transmission levels, and energy consumption levels; and to design a model for node authentication using effective cryptographic schemes to secure the data during transmission. The proposed model's efficacy must be evaluated with the help of Detection Accuracy (DA) and False Alarm Rate (FAR), and the model's QoS factors, such as packet loss ratio, packet delivery rate, throughput, time taken to identify malicious nodes, and energy usage, should also be evaluated, as sketched below.
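A small sketch of the two named evaluation metrics follows, assuming the standard confusion-matrix definitions; the text does not give explicit formulas, so these definitions are an assumption.

```python
# Sketch of the two evaluation metrics named above, using standard
# confusion-matrix definitions (an assumption; the text gives no formulas).
# tp/fn count malicious nodes correctly/incorrectly classified;
# fp/tn count benign nodes flagged/cleared.
def detection_accuracy(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0   # share of attackers caught

def false_alarm_rate(fp, tn):
    return fp / (fp + tn) if (fp + tn) else 0.0   # share of benign nodes flagged

# Example: 18 of 20 malicious nodes detected, 3 of 80 benign nodes flagged.
print(detection_accuracy(18, 2))   # 0.9
print(false_alarm_rate(3, 77))     # 0.0375
```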

4 Conclusion

One of the most pressing challenges in WSNs is meeting application QoS requirements while still delivering a service abstraction at a high enough level to be useful. QoS in wireless sensor networks is an enormous research subject that presents a diverse array of challenges, which might be explored further to uncover new insights on how to advance this crucial field. The primary operating parameters of WSNs and their components can be enhanced through many techniques, taking into account the quality-of-service metrics required by each application. Providing an adequate level of security for data transfer is one of the primary problems of WSNs, mainly because of limitations in available battery life, processing power, and storage space. The routing protocol is one of the most important services needed in WSNs for making sensor nodes work together and communicate. Most existing WSN routing algorithms prioritize less-specific parameters like throughput, energy efficiency, and network robustness, while giving little thought to security. Because WSNs are frequently deployed in unattended or hostile contexts where private data and communication must be secured, neglecting security measures in WSN routing protocols is careless. This research therefore surveys a range of routing-related, data-transmission-related, and malicious-action-related security concerns in WSNs. Traditional security measures, as noted, cannot be directly transferred to WSNs without adaptation; instead, novel security approaches must be developed to account for the unique qualities of the sensor nodes, the fundamental security requirements of WSNs, and the various types of attacks that can be launched against them. This brief survey of the routing and security limitations of WSNs should help researchers identify the open issues and propose novel solutions for enhanced quality of service in WSNs.

References 1. Quoc DN, Liu N, Guo D (2022) A hybrid fault-tolerant routing based on Gaussian network for wireless sensor network. J Commun Netw 24(1):37–46. https://doi.org/10.23919/JCN.2021. 000028 2. Wang K, Yu C-M, Wang L-C (2021) DORA: a destination-oriented routing algorithm for energy-balanced wireless sensor networks. IEEE Internet Things J 8(3):2080–2081. https:// doi.org/10.1109/JIOT.2020.3025039 3. Jurado-Lasso FF, Clarke K, Cadavid AN, Nirmalathas A (2021) Energy-aware routing for software-defined multihop wireless sensor networks. IEEE Sens J 21(8):10174–10182. https:// doi.org/10.1109/JSEN.2021.3059789 4. Wang Z, Shao L, Yang S, Wang J (2022) LEMH: low-energy-first electoral multipath alternating multihop routing algorithm for wireless sensor networks. IEEE Sens J 22(16):16687–16704. https://doi.org/10.1109/JSEN.2022.3191321 5. Adil M, Khan R, Ali J, Roh B-H, Ta QTH, Almaiah MA (2020) An energy proficient load balancing routing scheme for wireless sensor networks to maximize their lifespan in an operational environment. IEEE Access 8:163209–163224. https://doi.org/10.1109/ACCESS.2020. 3020310 6. Hu H, Han Y, Yao M, Song X (2022) Trust based secure and energy efficient routing protocol for wireless sensor networks. IEEE Access 10:10585–10596. https://doi.org/10.1109/ACCESS. 2021.3075959 7. Wang C, Wang D, Tu Y, Xu G, Wang H (2022) Understanding node capture attacks in user authentication schemes for wireless sensor networks. IEEE Trans Dependable Secur Comput 19(1):507–523. https://doi.org/10.1109/TDSC.2020.2974220 8. Shin S, Kwon T (2020) A privacy-preserving authentication, authorization, and key agreement scheme for wireless sensor networks in 5G-integrated Internet of Things. IEEE Access 8:67555–67571. https://doi.org/10.1109/ACCESS.2020.2985719


9. Yang W, Wang S (2022) A privacy-preserving ECG-based authentication system for securing wireless body sensor networks. IEEE Internet Things J 9(8):6148–6158. https://doi.org/10. 1109/JIOT.2021.3109609 10. Moghadam MF, Nikooghadam M, Jabban MABA, Alishahi M, Mortazavi L, Mohajerzadeh A (2020) An efficient authentication and key agreement scheme based on ECDH for wireless sensor network. IEEE Access 8:73182–73192. https://doi.org/10.1109/ACCESS.2020.298 7764 11. Saleem MA, Shamshad S, Ahmed S, Ghaffar Z, Mahmood K (2021) Security analysis on “a secure three-factor user authentication protocol with forward secrecy for wireless medical sensor network systems.” IEEE Syst J 15(4):5557–5559. https://doi.org/10.1109/JSYST.2021. 3073537 12. Zhao R, Khalid M, Dobre OA, Wang X (2022) Physical layer node authentication in underwater acoustic sensor networks using time-reversal. IEEE Sens J 22(4):3796–3809. https://doi.org/ 10.1109/JSEN.2022.3142160 13. Kar J, Naik K, Abdelkader T (2021) A secure and lightweight protocol for message authentication in wireless sensor networks. IEEE Syst J 15(3):3808–3819. https://doi.org/10.1109/ JSYST.2020.3015424 14. Zou S, Cao Q, Wang C, Huang Z, Xu G (2022) A robust two-factor user authentication schemebased ECC for smart home in IoT. IEEE Syst J 16(3):4938–4949. https://doi.org/10.1109/ JSYST.2021.3127438 15. Kumar M, Mukherjee P, Verma K, Verma S, Rawat DB (2022) Improved deep convolutional neural network based malicious node detection and energy-efficient data transmission in wireless sensor networks. IEEE Trans Netw Sci Eng 9(5):3272–3281. https://doi.org/10.1109/ TNSE.2021.3098011 16. Ding J, Wang H, Wu Y (2022) The detection scheme against selective forwarding of smart malicious nodes with reinforcement learning in wireless sensor networks. IEEE Sens J 22(13):13696–13706. https://doi.org/10.1109/JSEN.2022.3176462 17. Li L et al (2020) A secure random key distribution scheme against node replication attacks in industrial wireless sensor systems. IEEE Trans Industr Inf 16(3):2091–2101. https://doi.org/ 10.1109/TII.2019.2927296 18. Pang B, Teng Z, Sun H, Du C, Li M, Zhu W (2021) A malicious node detection strategy based on fuzzy trust model and the ABC algorithm in wireless sensor network. IEEE Wirel Commun Lett 10(8):1613–1617. https://doi.org/10.1109/LWC.2021.3070630 19. Fang K, Wang T, Zhou X, Ren Y, Guo H, Li J (2022) A TOPSIS-based relocalization algorithm in wireless sensor networks. IEEE Trans Industr Inf 18(2):1322–1332. https://doi.org/10.1109/ TII.2021.3076770 20. Teng Z, Pang B, Du C, Li Z (2020) Malicious node identification strategy with environmental parameters. IEEE Access 8:149522–149530. https://doi.org/10.1109/ACCESS.2020.3013840 21. Gharib A, Ibnkahla M (2021) Security aware cluster head selection with coverage and energy optimization in WSNs for IoT. In: Proceedings of the IEEE international conference on communications, June, pp 1–6

A Mobile Application Model for Differently Abled Using CNN, RNN and NLP P. Rachana, B. Rajalakshmi, Sweta Leena, and B. Sunandhita

Abstract The inability to hear, speak, and see is considered among the major disabilities, making it difficult for people with these disabilities (differently abled people) to communicate with others. Differently abled people use other modes of communication, such as sign language, to communicate efficiently. But they often face difficulty even when communicating in sign language, for two main reasons: firstly, many people do not understand sign language, and secondly, even if people do understand it, long-distance communication remains difficult. So, a sign language converter is being developed to address these issues. In the modern world, sign language converters play a vital role in bridging the communication gap between the differently abled and others. This converter is being developed as an Android application to make it more convenient to use and to reach a wide audience. The application comes with a sign language conversion framework built using deep neural networks. The conversion model will be able to translate sign language to text (words or sentences)/speech and vice versa, as per the user's request. The sign language conversion model will be trained using CNN on a sign language dataset to accurately predict the output. An RNN model is also used here to reduce the time complexity involved and to handle speech inputs (audio files). The system will use a microphone (in case of speech input) and a camera (in case of sign language video input). The converter involves image capture, binarization, classification, hand shape edge recognition, and feature extraction. Keywords Sign language · Convolutional neural network (CNN) · Recurrent neural network (RNN) · Voice assistant · Android application · Differently abled

P. Rachana · B. Rajalakshmi · S. Leena (B) · B. Sunandhita Department of Computer Science and Engineering, New Horizon College of Engineering, Bangalore, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_23


1 Introduction

There are 700 million people with disabilities in the world, and there are 143 existing sign languages (with a variety of dialects). Sign language, as complex as spoken language, has thousands of characters formed by specific hand gestures and facial expressions, each representing subtle variations in hand movement, shape, position, and facial expression. People who have not learned sign language usually cannot communicate efficiently with the differently abled, which places the differently abled in a difficult situation in society. With the advent of multimedia, animation, and other computing technologies, it is becoming possible to bridge this communication gap. Sign language is a visual/gestural language that serves as a primary tool of communication for people with disabilities, similar to how spoken language is used for communicating with others; differently abled people encounter difficulties communicating with other people using spoken language. Machine learning-based projects help translate words and phrases into sign language for the differently abled, and from sign language into words and phrases for the general public. An effective way to recognize gestures will also be implemented, i.e., the system will be able to detect hand gestures effectively to produce the correct output for the user, so the application will prove useful even for users who do not know sign language properly. A voice assistant system will also be implemented, which will prove useful for visually impaired people. To make the application more efficient, the deep learning models will be pre-trained using a CNN (Convolutional Neural Network) on the sign language dataset. The dataset contains enough signs, gestures, numbers, and their corresponding meanings. This large dataset helps the system learn the language easily and predict the output for the input given by the user; it also improves the accuracy of predicting the correct output. NLP (Natural Language Processing) helps systems understand and communicate with humans in human language; NLP supports many languages, which enhances human-machine communication and makes it simpler for anyone to use the application. RNNs (Recurrent Neural Networks) help manage time complexity: to produce effective output, the system must be able to predict the output within seconds of a given input, and RNNs help the system predict outputs faster. RNNs are also very useful for handling audio inputs. Together, these technologies make the models work well and produce more accurate output.

2 Literature Survey

Badhe and Kulkarni developed an Indian Sign Language (ISL) translator system [1] that helps differently abled people communicate with others using ISL. Machines are trained and assisted in converting sign language using convolutional and recurrent neural networks. The whole system is based on 5 sets of data to make sign language conversion more efficient. This system converts the gestures, numbers, and alphabets of Indian Sign Language into English. The algorithm performs data collection and then gesture preprocessing for tracking hand movements; it uses a combinatorial algorithm and performs recognition using template matching. This model converts sign language into alphabets but cannot be used for producing complete sentences.

Kasukurthi et al. developed an American Sign Language (ASL) converter system based on American Sign Language [2]. NLP (Natural Language Processing) translates ASL into words and then into proper complete sentences. The paper proposes a model to recognize the ASL alphabet from RGB images; the images are resized and preprocessed by the deep neural network before it predicts the output.

Rao and Kishore developed a multifunctional selfie sign recognition model [3]. This model uses the webcam in selfie mode to record a video of a sign-language conversation in order to convert it into text, allowing people with hearing disabilities to operate the mobile app by themselves. The processing pipeline consists of keyframe extraction, face detection, hand search space identification, head and hand part extraction, and fuzzy hand-head shape segmentation for multiple features. Hand composition, grammar, form signatures, and orientation features are merged into one single dataset.

Eric and Mathiyalagan published research on bias-based neural networks [4]; this study shows how neural networks help to recognize data quickly. Bisani et al. developed a speech transcription system [5] that converts speech into text; the paper does not address conversion of speech or text into sign language. Keysers et al. published "Deformation models for image recognition" [6], which describes different deformation models for image recognition; Convolutional Neural Networks (CNNs) play a major role in image recognition.

Dreuw et al. published "Tracking using dynamic programming for appearance-based sign language recognition" [7], which presents a novel tracking algorithm that uses dynamic programming to determine the path of target objects and can track an arbitrary number of different objects. The traceback method used to track the targets avoids taking possibly wrong local decisions and thus reconstructs the best tracking paths using the whole observation sequence. The tracking method can be compared to the nonlinear time alignment in automatic speech recognition (ASR) and can analogously be integrated into a hidden Markov model based recognition process. The paper shows how the method can be applied to tracking the hands and face for automatic sign language recognition. Neidle et al. published a paper on the syntax of American Sign Language [8], containing a correct dataset of American sign languages along with images for small sentences.


Tripathi et al. published "Continuous Indian Sign Language and Gesture Recognition" [9]. This paper proposes a model that converts ISL and other common gestures into text, mainly useful for people with hearing and speaking disabilities. The proposed model involves a continuous gesture recognition system for sign language that requires both hands to perform gestures. Distinguishing sign language gestures from other commonly used gestures is a very challenging research topic; the model solves this problem using a gradient-based keyframe extraction method. The keyframes are useful for dividing sign language gestures into characters or numbers and removing non-informative frames. After splitting up the gestures, each character is treated as an isolated gesture, and features of the preprocessed gestures are extracted using an orientation histogram.

Deora and Bajaj published "Indian Sign Language Recognition" [10], which contains a study of Indian Sign Language and the concept of converting Indian Sign Language (ISL) into speech. Bantupally and Xie published "American Sign Language (ASL) recognition using Deep Learning and Computer Vision" [11], which describes a sign language recognition model that converts American Sign Language into text.

Chempavathy et al. published "An Exploration into Secure IoT Networks using Deep Learning methodologies" [12]. This paper describes deep learning methodologies that can be adopted to ensure secure communication between devices in IoT networks; Software Defined Networks (SDN) can help in handling security threats in IoT networks dynamically and adaptively. Chempavathy et al. also published "AI based chatbots using Deep Learning networks in Education" [13]. A chatbot is a software program designed to interact with humans using artificial intelligence in messaging platforms. The chatbot introduced in this paper mainly concerns college activities: if a person on a large campus has a query, they would otherwise have to go from one place to another to gather the required information, so the paper presents a college-friendly chatbot that would prove very useful for students. The tools used for this model are artificial intelligence techniques such as natural language input and deep learning methods like deep neural networks. The model uses LSTM (Long Short-Term Memory), an extension of RNN, to process user input and trains the chatbot using these techniques.

Kulkarni et al. published "Speech to Indian Sign Language (ISL) translator" [14], which presents a model that converts speech into ISL. Ankita et al. developed a model to convert audio into sign language [15]; this model recognizes speech and converts it into sign language, which helps differently abled people who are not able to hear.

A complete literature survey has been performed on almost 25 to 30 papers published on the topic of sign language conversion, and some common drawbacks are observed in almost all the existing models. A few of the existing models convert sign language to speech and text, and a few others convert speech and text into sign language, but it has been observed that no model has been developed that does both. Also, a mobile application for sign language conversion has not been implemented yet, and no existing model uses a voice assistant for sign language conversion. The use of voice assistants in sign language conversion will prove very useful for visually impaired people as well.

3 Concepts Used

3.1 Convolutional Neural Network (CNN)

A Convolutional Neural Network (CNN) is used to learn higher-order features in the data through classification. It can learn images, different types of signs, and many other elements from a given dataset. The efficacy of CNNs in image classification is higher than that of any other deep learning technique; as Fig. 1 illustrates, CNNs are good at identifying features using segments of the images. Architecture of CNN: A CNN classifies the input image based on the dataset on which it has been trained. There are many variants of CNN; however, three layers are common to all of them, as shown in Fig. 1: (i) an input layer, (ii) a feature-extraction layer, and (iii) a classification layer. The input layer accepts images as inputs, including the shape and size of the photograph; these images can be grayscale or RGB (an RGB image can be viewed as three different images stacked on top of each other). The feature-extraction layer is made up of successive sets of convolution, ReLU, and pooling layers. The output layer is a fully connected layer that uses the softmax activation function to produce the output.

Fig. 1 Architecture of CNN
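To make the three-layer structure of Fig. 1 concrete, the following is a minimal sketch in Keras; the 64x64 grayscale input, the filter counts, and the 26-class output are illustrative assumptions rather than the configuration actually used in this work.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),               # input layer: 64x64 grayscale sign image (assumed size)
    layers.Conv2D(32, (3, 3), activation="relu"),  # feature extraction: convolution + ReLU ...
    layers.MaxPooling2D((2, 2)),                   # ... followed by pooling, repeated
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(26, activation="softmax"),        # classification layer: e.g., one class per alphabet sign
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```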


Fig. 2 Architecture of RNN


3.2 Recurrent Neural Network (RNN)

A Recurrent Neural Network (RNN) is a type of neural network mainly used for handling time-series or sequential data; models that receive audio files as input are mainly designed using RNNs. An RNN differs from feed-forward neural networks in that the output from the previous step is fed as input to the current step. Architecture of RNN: A Recurrent Neural Network is a superset of the feed-forward neural network that adds recurrent connections; these recurrent edges span adjoining time steps (e.g., a preceding time step). Recurrent Neural Networks should be used for any sequential or time-series data; they can model an arbitrary number of input vectors and are not restricted to a single input vector. The architecture of an RNN is shown in Fig. 2.
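A comparable sketch of the recurrent part, again in Keras, where the previous step's hidden state is fed into the current step; the 40-dimensional per-step feature vectors (e.g., audio features) and the 26-class output are assumptions for illustration.

```python
from tensorflow.keras import layers, models

rnn = models.Sequential([
    layers.Input(shape=(None, 40)),          # variable-length sequence of per-step feature vectors
    layers.SimpleRNN(64),                    # hidden state carries the previous step's output forward
    layers.Dense(26, activation="softmax"),
])
rnn.compile(optimizer="adam", loss="categorical_crossentropy")
```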

4 Implementation Methodology

The sign language converter bridges the communication gap between the differently abled and others. The converter developed here is a mobile application built on CNN, RNN, and NLP. First, the deep learning models for converting sign language to speech/text and speech/text into sign language are trained and developed; these models are then integrated with Android to build a full-fledged application, and a voice assistant is added. The deep learning model is trained on the sign language dataset using CNN. OpenCV is used for webcam access (CamShift, in OpenCV, is used for hand tracking), and VideoWriter(), an OpenCV method, is used to record the video, which is then converted into text. Every recorded sign is converted into an alphabet, and these letters are grouped into words (based on the time gap between them). NLP is used for sentence generation. The text, when required, is used as it is or is converted into speech using the gTTS (Google Text-to-Speech) API, which supports several languages including English, French, and German, and saves the speech as an audio file. The user thus obtains both speech and text, and can download the audio file to send to whomever they want. RNN is used to handle the audio files that become the input when speech is being converted into sign language. The deep learning models are integrated with Android through a Flask API.

When the user opens the app, two options are displayed: convert sign language to speech/text, and convert speech/text to sign language. When the user selects the first option, two further options are provided: record a video, or start live streaming. Live streaming is used when the person to communicate with is sitting nearby, while the recording option is provided for long-distance communication. When the user selects the "record video" option, the webcam is switched on and the user can record his or her message in sign language; once recording is done and the user clicks the "ok" button, the recorded video is converted into speech or text depending on the option selected. When the user clicks the live streaming option, the webcam is switched on and the user is asked to choose between speech and text, after which the sign language is converted accordingly. When the user selects the second option, i.e., speech/text to sign language, the user chooses between "speech" and "text". With the "speech" option, whatever the user speaks is recorded; once the user finishes speaking and clicks the "ok" button, the recorded speech is converted to text and then into a sign language video. With the "text" option, the user types the text, which is then converted into sign language. The application is further enhanced by a voice assistant to make it more useful for visually impaired people as well.

The model is straightforward to use; anyone can use the application without much effort. The dataset performs an essential function in sign language conversion because it enables the system to predict new phrases or new signs. Users can also record their audio or video and send it to another person after conversion, if required. The process of conversion from sign language to speech is shown in Fig. 3.
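As an illustration of the text-to-speech step just described, the following is a minimal sketch using the gTTS package; the sentence and output filename are placeholders.

```python
from gtts import gTTS

def text_to_audio(sentence: str, filename: str = "message.mp3") -> str:
    """Convert the recognized sign-language text into a downloadable audio file."""
    tts = gTTS(text=sentence, lang="en")  # gTTS also supports French, German, and other languages
    tts.save(filename)
    return filename

text_to_audio("Hello, how are you?")
```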

5 Expected Results

The home page of the application displays two options to the users: convert sign to speech/text, and convert speech/text to sign. This is shown in Fig. 4. When the user selects one of these options, the user is directed to the corresponding next page.


Fig. 3 Process of conversion from sign language to speech

Figure 5 shows the "sign to speech/text" module. When the user chooses the first option on the home page, two more options are displayed: record a video, or live stream. Based on the user's choice, the user is redirected to the corresponding next page. Figure 6 shows the live streaming module for conversion of sign to speech and text. Here, the camera is switched on (after the user grants access to it); the input signs shown by the user are captured and, based on the user's choice, converted into speech or text. In Fig. 6, the user has chosen the text option, and the text corresponding to the user's sign has been displayed as output. For visually impaired people, a voice assistant is implemented as well. The voice assistant model uses built-in Python libraries such as pyttsx3, which converts text to speech so that the system can communicate in human-understandable language. The model also uses the PyWhatKit library, which helps the system search and extract data from the internet, connecting the model to YouTube and Google search. The built-in "datetime" library informs users of the current date and time on request. The figures below show sample code and outputs of how the voice assistant will work; these outputs are similar to what the voice assistant in the application is expected to produce but are not the exact outputs. Figure 7 shows the speech recognition technique used to recognize the voice; the model works on users' commands.
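A minimal sketch of such a voice-assistant loop, combining the libraries named above (speech_recognition for input, pyttsx3 for spoken replies, PyWhatKit for YouTube/Google/Wikipedia lookups); the command phrases are illustrative assumptions, not the application's exact code.

```python
import datetime
import pyttsx3
import pywhatkit
import speech_recognition as sr

engine = pyttsx3.init()

def speak(text):
    engine.say(text)
    engine.runAndWait()

def listen():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio).lower()

command = listen()
if "play" in command:
    pywhatkit.playonyt(command.replace("play", ""))         # opens and plays the song on YouTube
elif "who is" in command:
    pywhatkit.info(command.replace("who is", ""), lines=2)  # reads a short Wikipedia summary
elif "time" in command:
    speak(datetime.datetime.now().strftime("%H:%M"))
else:
    pywhatkit.search(command)                               # falls back to a Google search
```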


Fig. 4 Home page

In Fig. 8, the user has asked by voice for information about Albert Einstein; the system has collected the required information from Wikipedia and displayed it to the user. In Fig. 9, the user has given the model a command to play a particular song, so the model redirects the user to the YouTube page of that song and starts playing it; Fig. 10 shows the resulting output on YouTube.


Fig. 5 Sign to text/speech module

Fig. 6 Live streaming

6 Conclusion

The sign language converter is a clean and handy modern way to communicate with people who cannot hear or speak. With the help of this app, there would be no communication gap between the differently abled and others. The application offers a smart and convenient UI that makes it easy for all kinds of users, and a voice assistant is provided that is very useful for visually impaired people interacting with the app. Users can select either sign to text/speech or speech/text to sign conversion. Additionally, the application gives the user the choice to record audio or video; after recording, the corresponding conversion is done by the application, and the user can then send the converted video or audio to any other user, thus facilitating long-distance communication. Since it is a mobile application, users can use it anytime, anywhere, and they do not have to spend money on extra hardware to use the sign language converter: the conversion happens with the click of a button. This application can be used by the differently abled to communicate with staff in hospitals, employees in banks, officers in police stations, and with friends and family in any emergency, even if the other person does not understand sign language. It will also open up job opportunities for the differently abled in various fields.


Fig. 7 Voice assistant

Fig. 8 Accessing Wikipedia information using voice


Fig. 9 Accessing Youtube

Fig. 10 Output of the code in Fig. 9


References 1. Badhe PC, Kulkarni V (2015) ISL translator using gesture recognition algorithm. In: 2015 IEEE international conference on CGVIS, Bhubaneswar, pp 195–200 2. Kasukurthi N, Rokad B, Bidani S (2014) ASL alphabet recognition using deep learning 3. Rao GA, Kishore PVV (2018) Selfie sign language recognition with multiple features on adaboost multilabel multiclass classifier. J Eng Sci Technol 13(8):2352–2368 4. Eric PV, Mathiyalagan R (2021) An efficient intrusion detection system using improved bias based convolutional neural network classifier. Turk J Comput Math Edu 12(6). ISSN 1309-4653 5. Bisani M, Gollan C, Hoffmeister B (2006) The 2006 RWTH parliamentary speeches transcription system. In: ICSLP, Pittsburgh, PA, USA, September 6. Keysers D, Deselaers T, Gollan C, Ney H (2007) Deformation models for image recognition. IEEE Trans PAMI (to appear) 7. Dreuw P, Deselaers T, Rybach D, Keysers D, Ney H (2006) Tracking using dynamic programming for appearance-based sign language recognition. In: IEEE international conference on automatic face and gesture recognition, Southampton, April, pp 293–298 8. Neidle C, Kegl J, MacLaughlin D, Bahan B, Lee RG The syntax of American sign language. MIT Press 9. Tripathi K, Baranwal N, Nandi GC (2015) Continuous Indian sign language and gesture recognition. In: IMCIP 2015. Elsevier 10. Deora D, Bajaj N (2012) Indian sign language (ISL) recognition. In: The 21st international conference of emerging technology trends in computer science 11. Bantupally K, Xie Y (2018) American sign language recognition using deep learning and computer vision. In: IEEE international conference on big data 12. Chempavathy B, Deshmukh VM, Datta A, Shiva AT, Singh G (2022) An exploration into secure IoT networks using deep learning methodologies. In: 2022 international conference for advancement in technology (ICONAT), Goa, India, 21–22 Jan 2022 13. Chempavathy B, Prabhu SN, Varshitha DR, Vinita, Lokeswari Y (2022) AI based Chatbots using deep neural networks in education. In: Proceedings of the 2nd international conference on artificial intelligence and smart energy, ICAIS 2022, pp 124–130 14. Kulkarni A, Dhanush V, Singh PN (2021) Speech to Indian sign language translator. In: Proceedings of the 3rd international conference on integrated intelligent computing communication and security (ICIIC 2021) 15. Ankita H, Sarika N, Anita M (2020) Audio to sign language translation for deaf people. Int J Eng Innov Technol 9(10)

A Survey Paper: On Path Planning Strategies Based on Classical and Heuristic Methods Tryambak Kumar Ojha and Subir Kumar Das

Abstract Path planning algorithms are widely used in unmanned vehicles and autonomous robots, and the applications of these robots and vehicles are vast. Though the basic requirement of these algorithms is to find a path from source to goal, the implementation differs from algorithm to algorithm. Based on their characteristics, these algorithms can be divided into two types: classical methods and heuristic methods. In this paper, some popular classical methods, such as the Sub-Goal Method, Potential Field, Dijkstra's Algorithm, Rapidly-explore Random Tree, Cell Decomposition, and Probabilistic Road Map, and a few heuristic methods, such as Ant Colony Optimization, the Genetic Algorithm, and the Bug Algorithm, are briefly described. Keywords Path planning · Classical method · Heuristic method

T. K. Ojha (B) · S. K. Das
Department of CSE, Supreme Knowledge Foundation, Mankundu, West Bengal, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_24

1 Introduction

In the last few decades, problems related to autonomous vehicles and automated robots have become a promising field for researchers. In areas like the military, cleaning, and security environments, unmanned robots have taken on a prominent role. An autonomous vehicle with effective path planning technology not only saves a lot of time but also reduces the operating cost of the robot [1]. The main objective of an autonomous robot is to reach the goal from the source point while properly handling static or movable obstacles, and navigation plays a huge role in this. In the context of mobile robotics, navigation is defined as the process of choosing and maintaining a route or trajectory to a target point. The way of navigation may vary with the environment: 2D and 3D environments are the two major settings in which path planning problems are solved. The real-life environment is three-dimensional, and because of that a 2D map is unable to accurately depict the overlapping areas of the environment; different dimensional projections are applied to convert such maps into 3D planes, so planning in 3D spaces is becoming a more practical and significant task [2]. Environments can also be divided into two categories, static and dynamic. A static environment is one where all the obstacles are static in nature; in a dynamic environment, the obstacles might be static, dynamic, or a combination of the two.

Irrespective of static or dynamic environments, path planning can also be distinguished on the basis of navigation into global path planning and local path planning. In global path planning, the robot is aware of all environmental data prior to beginning: the starting point and the ending point are known to the robot along with the exact details of the environment and the related obstacles. In contrast, in local path planning the robot has virtually no knowledge of the surroundings before it starts. Local path planning is frequently carried out in unpredictable or dynamic environments, using information from nearby sensors while the robot is travelling; in this case, the robot has the capacity to produce a new course in reaction to environmental changes. The purpose of path planning is to identify a continuous path that connects a system from its original configuration to its desired state, which is a Non-deterministic Polynomial-time (NP) hard problem; as the system's degrees of freedom rise, the problem becomes more difficult [3].

The following points should be taken into consideration before executing any path planning algorithm. First, it has to be checked whether a feasible path exists between the source and the goal. Second, the robot's path must be directed to avoid objects in free space. Third, the optimized path among the probable paths must be found [4]. The principle of mobile robot global/local path planning is shown in Fig. 1.

Environmental Modelling: A proper environmental model helps to achieve better environmental variables; it reduces useless planning and significantly reduces the amount of computation before the mobile robot's global path planning.

Optimization Criteria: Optimization is used to reduce path length and increase smoothness and safety degree.

Path Search Algorithm: Heuristic and classical methods are the two types of path search algorithm that can be used to solve the robot path planning issue.

Fig. 1 Principle of mobile robot global/local path planning [1]



Fig. 2 Classification of path planning algorithm

Optimum Path: Normally, the optimum path is determined considering path length, smoothness, and safety degree [1, 5].

Navigational Technique: Since the 1970s, researchers have introduced various methodologies to obtain more appropriate and robust ways of navigating. Irrespective of global and local path planning methods, the solutions are mainly distinguished as classical approaches and heuristic/reactive approaches [3, 5]. The classification of path planning algorithms is shown in Fig. 2.

2 Classical Approaches

Because there were no artificial intelligence approaches earlier, classical approaches were particularly popular for resolving navigational issues. In this non-deterministic approach, no result is returned if and only if there is no solution [6]. Though this class of methods can provide the exact shortest path, it is not appropriate for complex or large environments [7, 8]; its primary drawbacks are computational complexity and the incapacity to deal with uncertainty. Among the multiple classical methods, the significant ones are the Sub-Goal Method (SG), the Potential Field Method (PFM), Cell Decomposition (CD), the Probabilistic Road Map (PRM), Dijkstra's Algorithm (DA), and the Rapidly-explore Random Tree (RRT) [9]. These strategies are chosen because they have a solid reputation in the field and are frequently and productively used to solve path planning issues; PFM, RRT, and PRM are also utilized as parts of local path planning. Map representation of the environment is the first requirement for classical path planning: grid maps, point cloud maps, Voronoi diagram maps, Euclidean signed distance fields, etc. are used to construct the map of the environment [10].

2.1 Sub-Goal Method

This method was introduced long ago. Its main advantage is that it reduces complexity. The sub-goals serve as pivots toward the main goal rather than being used to store data; since surroundings change over time, these sub-goals, known as dynamic sub-goals, are renewed in accordance with the changes. The Sampling-based Roadmap of Trees (SRT) is the pioneer of the sub-goal method, and in real-time applications the sub-goal method provides better results than SRT [8, 11].

2.2 Potential Field Method

This method is applicable to collision-free navigation. Collision-free navigation can also be handled by artificial intelligence; in contrast, the potential field approach offers freedom in choosing the potential field functions and is straightforward to realize and apply. Since a high degree of environmental information and perception is available using cutting-edge technologies like GPS, this method can be applied properly and smoothly. The method consists of two forces, an attractive force and a repulsive force: the goal attracts the robot towards it, while the repulsive force is generated by the obstacles [9].
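The following is a minimal 2D sketch of this idea, assuming the standard quadratic attractive and inverse-distance repulsive potentials; the gains k_att and k_rep, the influence radius d0, and the step size are illustrative choices.

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.1):
    force = k_att * (goal - pos)                  # attractive force pulls the robot toward the goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:                                # repulsion acts only inside the influence radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) * (pos - obs) / d**3
    return pos + step * force / np.linalg.norm(force)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.0])]
for _ in range(200):                              # descend the combined potential field
    pos = potential_step(pos, goal, obstacles)
print(pos)                                        # ends near the goal, having skirted the obstacle
```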

2.3 Cell Decomposition

The robot's free space is separated into discrete units called cells. The objective of this CD technique is to offer a collision-free route to the target by employing a cell-based representation of the environment [8, 9, 11]. The whole environment is divided into connected regions called cells, and a graph is constructed through adjacent cells: cells are represented by vertices, and edges link cells that share a border. Finally, the connection between the cell containing the starting point and the cell containing the goal is determined, and the sequence of collision-free cells from start to goal is calculated [9].

2.4 Dijkstra's Algorithm

This is a breadth-first-search-based algorithm applied to find the shortest path between the source vertex and the other vertices; this shortest path is also called the link distance. Here, the source point, end point, and obstacles or walls are considered vertices; an environment must have at least two vertices and may have any finite number n of them. The distance between two vertices is treated as an edge. The multi-layer dictionary approach is a modification of Dijkstra's algorithm that provides better results than the standard algorithm [12].
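A textbook sketch of the algorithm over a small adjacency-list graph; the vertex names and edge weights are illustrative (in the setting above, the vertices would be the start point, the goal, and obstacle corners).

```python
import heapq

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue                          # skip stale queue entries
        for v, w in graph[u]:
            if d + w < dist[v]:               # relax the edge u -> v
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist

graph = {"S": [("A", 2), ("B", 5)], "A": [("B", 1), ("G", 6)], "B": [("G", 2)], "G": []}
print(dijkstra(graph, "S"))                   # {'S': 0, 'A': 2, 'B': 3, 'G': 5}
```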

2.5 Rapidly-Explore Random Tree

This is a motion planning technique that constructs a graph to discover a viable route from the starting state to the desired state [13]. It is not an asymptotically optimal algorithm: the RRT algorithm examines pathways at random, leading to many repeated explorations. The graph G describes the possible movement from any node; X -> Y means that node Y is reachable from node X. A set of vertices V is defined first, containing the starting state x_s and the goal state x_g, along with an initially empty set of edges E. A random state x_r is then sampled, the distance between x_s and x_r is calculated, and by comparison the nearest existing state x_n is found. Continuing this process, the shortest distance is determined. This enables a multi-robot team to create path plans faster [13, 14]. Q-RRT and PIB-RRT are advanced forms of RRT [15]; this concept solves problems with kinematic constraints.
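The x_s/x_r/x_n loop above can be sketched as follows for a point robot in an obstacle-free unit square; the step size, goal tolerance, and iteration limit are illustrative, and a collision check would be added where marked.

```python
import math
import random

def rrt(x_s, x_g, step=0.05, goal_tol=0.1, max_iter=5000):
    tree = {x_s: None}                                    # node -> parent
    for _ in range(max_iter):
        x_r = (random.random(), random.random())          # random state
        x_n = min(tree, key=lambda v: math.dist(v, x_r))  # nearest existing node
        theta = math.atan2(x_r[1] - x_n[1], x_r[0] - x_n[0])
        x_new = (x_n[0] + step * math.cos(theta), x_n[1] + step * math.sin(theta))
        tree[x_new] = x_n                                 # a collision check would go here
        if math.dist(x_new, x_g) < goal_tol:              # goal reached: walk back to the root
            path = [x_new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

print(rrt((0.1, 0.1), (0.9, 0.9)))
```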

2.6 Probabilistic Road Map

This is a geometry-based methodology: the planner ultimately locates a solution through random sampling, if one exists. It is also called a local planner [15]. The first step is to learn the whole map; in the second step the shortest path is calculated. In the map learning process, initialization comes first, where obstacles and free area are distinguished and differentiated using two different colours (normally black and white) [16]. Then n collision-free points are randomly sampled, where n must be more than one; these points must not intersect the obstacles. The third step is domain calculation, where a predefined domain value is set and all sample points lying within that domain value of each other are connected; in this step, the distance between each point and the obstacles is determined. Edge connection and collision detection are then performed [17], and finally the shortest path is calculated using different algorithms.
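The learning phase just described can be sketched as below, here with NetworkX supplying the final graph search; the sample count, connection radius, and the trivially "free" sampler are illustrative stand-ins (a real planner would test samples and edges against obstacle geometry).

```python
import math
import random
import networkx as nx

def build_prm(n=100, radius=0.3, is_free=lambda p: True):
    nodes = []
    while len(nodes) < n:
        p = (random.random(), random.random())
        if is_free(p):                          # reject samples that fall inside obstacles
            nodes.append(p)
    g = nx.Graph()
    g.add_nodes_from(nodes)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if math.dist(a, b) < radius:        # domain check; an edge collision check is omitted
                g.add_edge(a, b, weight=math.dist(a, b))
    return g

g = build_prm()
start, goal = list(g.nodes)[0], list(g.nodes)[-1]
if nx.has_path(g, start, goal):
    print(nx.shortest_path(g, start, goal, weight="weight"))
```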

3 Heuristic Approach

In contrast to classical techniques, which generate a global path based on prior information (such as an environment map) and then execute it, heuristic methods (also called deterministic methods) create a subpopulation of potential movements in every iteration [9]. It is observed that the heuristic approach is becoming more popular than classical approaches [8]. This class of methods is less dependent on the environment and is dynamic in nature; it seldom produces a deadlock condition or irresolvable states [6, 18]. Here, the global minimum is calculated based on the local minima found through different heuristic approaches. The heuristic method is a path-breaking step in the field of path planning algorithms. Among the many heuristic methods, the Artificial Neural Network (ANN), Ant Colony Optimization (ACO), the Genetic Algorithm (GA), and the Bug Algorithm are very popular.

3.1 Artificial Neural Network

This method learns complex relationships between inputs and outputs. Three steps are involved in finding a proper path between source and goal: (i) interpreting the sensory data, (ii) obstacle avoidance, and (iii) path planning. The neural network is trained with the gradient descent approach to minimize the loss function in order to carry out the path planning procedure [18]. Reinforcement learning is a trial-and-error method that can be applied to the path planning problem; in addition, temporal difference learning and Q-learning can handle path planning problems efficiently [19]. The performance of the neural network can be improved by increasing the learning dataset and the number of sensors attached to the robot. Nine popular neural network training models are: (i) the Levenberg-Marquardt algorithm, (ii) BFGS quasi-Newton back-propagation, (iii) resilient back-propagation, (iv) scaled conjugate gradient back-propagation, (v) conjugate gradient back-propagation with Powell-Beale restarts, (vi) conjugate gradient back-propagation with Fletcher-Reeves updates, (vii) conjugate gradient back-propagation with Polak-Ribiére updates, (viii) one-step secant back-propagation, and (ix) gradient descent with momentum and adaptive learning rate back-propagation [20]. The flowchart of the neural network approach is shown in Fig. 3.

A Survey Paper: On Path Planning Strategies Based on Classical …

Fig. 3 Flowchart of neural network [20]

3.2 Ant Colony Optimization

The invention of population-based path planning optimization algorithms has drawn inspiration from nature, by both computer scientists and biologists. This algorithm is based on the behaviour of ants and the technique they use to identify the shortest path. When ants search for food from the source state, they try to find the shortest path: as an ant searches for the best route, it releases a chemical substance called pheromone along the road. Because more ants use the shorter path, it acquires a greater pheromone concentration, while pheromone on longer routes evaporates [21]; subsequent ants therefore follow the route where the pheromone concentration is higher, and the entire ant colony is finally focused along the optimal path [1, 22]. Ants use heuristic information in addition to pheromone trails; the heuristic information is defined as the inverse of the distance between two sites [23]:

N_ij = 1/d_ij    (1)

where N_ij is the heuristic information between nodes i and j, and d_ij is the distance between nodes i and j. From the above equation, it can be observed that as the distance d decreases, the heuristic value improves. The flowchart of ACO is shown in Fig. 4.
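Equation (1) enters the ant's edge-selection rule as sketched below, where the heuristic 1/d_ij is combined with the pheromone level tau_ij; the alpha/beta weighting is the conventional ACO form, and the example values are illustrative.

```python
import random

def choose_next(current, candidates, tau, dist, alpha=1.0, beta=2.0):
    # weight each candidate edge by pheromone^alpha * heuristic^beta, with heuristic = 1/d
    weights = [(tau[(current, j)] ** alpha) * ((1.0 / dist[(current, j)]) ** beta)
               for j in candidates]
    total = sum(weights)
    return random.choices(candidates, weights=[w / total for w in weights])[0]

tau = {("A", "B"): 1.0, ("A", "C"): 1.0}
dist = {("A", "B"): 2.0, ("A", "C"): 4.0}       # the shorter edge gets the larger heuristic value
print(choose_next("A", ["B", "C"], tau, dist))  # "B" is chosen with higher probability
```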


Fig. 4 Flowchart of ACO [21]

3.3 Genetic Algorithm

A GA-based path planning technique has been presented for small-scale robot mobility in a dynamic environment; this algorithm can handle static as well as dynamic obstacles [24]. The algorithm is inspired by biology: every gene on a chromosome is encoded with the appropriate parameters to enable the GA to determine the best route. Calculating the cost function is the first step in the GA:

Cost Function = w1 D + w2 O + w3 C    (2)

where w1, w2, and w3 are constants, D is the differential distance, O is the number of path coordinates that coincide with obstacles, and C is the number of changes in direction. A lower cost function indicates a better solution. The cost function also reveals collisions with obstacles, because in case of a collision the cost function increases drastically. Following initialization, selection is carried out using algorithms like key cells, clearness-based roadmap methods, random walk, potential field, and greedy approach methods; using the cost function, the fitness of each chromosome in the set is determined [25, 26]. Then, crossover and mutation operations are performed in order to create a new population, from which the best chromosomes are selected [27].
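Equation (2) translates directly into code; the sketch below is an illustrative transcription in which O counts path coordinates that coincide with obstacle cells and C counts changes of direction on a grid, with assumed weights.

```python
import math

def path_cost(path, obstacles, w1=1.0, w2=50.0, w3=0.5):
    D = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))  # path length
    O = sum(1 for p in path if p in obstacles)                              # obstacle hits
    C = sum(1 for i in range(1, len(path) - 1)                              # direction changes
            if (path[i + 1][0] - path[i][0], path[i + 1][1] - path[i][1])
               != (path[i][0] - path[i - 1][0], path[i][1] - path[i - 1][1]))
    return w1 * D + w2 * O + w3 * C

print(path_cost([(0, 0), (1, 0), (2, 0), (2, 1)], obstacles={(5, 5)}))  # 3.5: length 3, one turn
```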

3.4 Bug Algorithm

The Bug algorithms are well-known, straightforward, and complete algorithms used in local path planning and in navigation for mobile robots with few sensors [28, 29]. The A* algorithm has been a popular path planning algorithm because of its simplicity and optimality guarantee, but it is not appropriate for real-world robots because A* paths frequently change their heading and are confined to following grid edges [30]; despite its development, some drawbacks remain, and the Bug algorithm tries to overcome these limitations. The primary Bug algorithm is known as Bug0. It follows the basic rule: head towards the goal; if an obstacle is found, follow its wall; and once free space is reached, head towards the goal again. However, this can produce an infinite loop. To improve on it, the Bug1 algorithm was introduced, but it may take a very long time to reach the goal. The Bug2 algorithm is the modified version of Bug1: a middle line (m-line) is drawn between the starting point and the goal point, and the robot heads towards the goal following this m-line. If an obstacle is found on the way, the robot follows the wall until it reaches the m-line again, and then resumes following the m-line [31]. When an obstacle is sensed, the above algorithms are unable to pick the right direction around it, so the robot may take a huge amount of time to reach the goal. In the Tangent Bug algorithm, the robot can decide how to navigate around obstacles; a heuristic method is applied to identify the direction. Motion-to-goal and boundary following are the two behaviours that Tangent Bug switches between: in motion-to-goal mode, the robot advances directly towards the target or a particular obstacle vertex, and in boundary-following mode the local tangent graph moves the robot around an obstacle boundary. This algorithm can decide in which direction the robot should move [32]. There are also Bug algorithm applications used in dynamic situations to avoid moving obstacles, and for bio-inspired snake-like or centipede-like robots [33-35]. The flowchart of the Bug2 algorithm is shown in Fig. 5.
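The Bug2 leave condition described above (re-meeting the m-line closer to the goal than the hit point) can be checked as in the sketch below; the geometry helpers are standard and the coordinates are illustrative.

```python
import math

def on_m_line(p, start, goal, tol=1e-6):
    # p lies on the start-goal segment if the cross product is ~0 and p is between the endpoints
    cross = (goal[0] - start[0]) * (p[1] - start[1]) - (goal[1] - start[1]) * (p[0] - start[0])
    within = (min(start[0], goal[0]) - tol <= p[0] <= max(start[0], goal[0]) + tol and
              min(start[1], goal[1]) - tol <= p[1] <= max(start[1], goal[1]) + tol)
    return abs(cross) < tol and within

def should_leave_wall(p, hit_point, start, goal):
    return on_m_line(p, start, goal) and math.dist(p, goal) < math.dist(hit_point, goal)

start, goal, hit = (0.0, 0.0), (10.0, 10.0), (4.0, 4.0)
print(should_leave_wall((6.0, 6.0), hit, start, goal))  # True: on the m-line and closer to the goal
print(should_leave_wall((3.0, 3.0), hit, start, goal))  # False: on the m-line but farther than the hit point
```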


Fig. 5 Flowchart of Bug2 algorithm

4 Scope

Bio-inspired algorithms incorporate heuristic principles and are excellent at handling complex, dynamic, unstructured constraints as well as NP-hard problems [23]. Movable obstacles further increase the complexity, and there is a huge research area in this field around increasing accuracy and handling that complexity [36, 37]. It should also be taken into consideration that an algorithm should explore the environment as little as possible to achieve the goal. Furthermore, a quality algorithm can be created by taking into account four factors: time complexity, finding the shortest path, accuracy, and cost.

5 Conclusion

Path planning algorithms were and are an attractive research field. Applying these algorithms through robots can save time and solve many difficult tasks easily, and as a result many researchers are dedicating their work to this area. Planning the best route for mobile robots to take from one location to another while avoiding obstacles is the ultimate objective. Many classical approaches were previously developed to solve this problem; after the development of artificial intelligence, it too has been successfully applied to path planning. Different natural and biological phenomena, as in Ant Colony Optimization, the Bug Algorithm, and the Genetic Algorithm, have inspired researchers to introduce new ideas; in the Bug algorithm family in particular, successive versions have been developed to reduce complexity.


Acknowledgement All the esteemed faculty members of the institution (SKFGI) have generously shared their valuable suggestions. Dr. S. K. Das's continuous guidance has also been an inspiration for this paper.

References 1. Zhang H-y, Lin W-M, Chen A-X (2018) Path planning for the mobile robot: a review. Symmetry 10:450. https://doi.org/10.3390/sym10100450 2. Stoyanov T, Magnusson M, Andreasson H, Lilienthal AJ (2010) Path planning in 3D environments using the normal distributions transform. In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 3263–3268, https://doi.org/10.1109/IROS.2010.5650789 3. Karur K, Sharma N, Dharmatti C, Siegel J (2021) A survey of path planning algorithms for mobile robots. Vehicles 3:448–468. https://doi.org/10.3390/vehicles3030027 4. Zhi C, Xiumin S (2019) Research on path planning of mobile robot based on A* algorithm, International J Engineering Res Technol (IJERT) 08(11) 5. Patle BK, Ganesh Babu L, Pandey A, Parhi DRK, Jagadeesh A (2019) A review: On path planning strategies for navigation of mobile robot. Defence Technol 15(4), 582–606. ISSN 2214–9147 6. Wahab MN, Nefti-Meziani S, Atyabi A (2020) A comparative review on mobile robot path planning: classical or meta-heuristic methods? Ann Rev Control 50, 233–252. ISSN 1367–5788 7. Nagabhushan P, Manohara Pai MM (2001) Cognition of free space for planning the shortest path: a framed free space approach. Pattern Recognit Lett 22(9), 971–982. ISSN 0167–8655 8. Mac Thi T, Copot C, Tran DT, De Keyser R (2016) Heuristic approaches in robot path planning: a survey. Robot Auton Syst 86:13–28 9. Atyabi A, Powers D (2013) Review of classical and heuristic-based navigation and path planning approaches. Int J Adv Comput Technol (IJACT) 5:1–14 10. Dong L, Zichen H, Chunwei S, Changyin S (2021) A review of mobile robot motion planning methods: from classical motion planning workflows to reinforcement learning-based architectures. ArXiv preprint arXiv:2108.13619 11. Liu H, Wan W, Zha H (2010) A dynamic subgoal path planner for unpredictable environments. In: 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, pp. 994–1001, https://doi.org/10.1109/ROBOT.2010.5509324 12. Abdullah S, Iyal F, Makhtar S, Jamal M, Amri A (2015) Robotic indoor path planning using Dijkstra's algorithm with multi-layer dictionaries. https://doi.org/10.1109/ICISSEC.2015.7371031 13. Belkhouche F (2009) Reactive path planning in a dynamic environment. IEEE Trans Rob 25(4):902–911. https://doi.org/10.1109/TRO.2009.2022441 14. Jaillet L, Porta J (2012) Asymptotically-optimal path planning on manifolds. https://doi.org/10.15607/RSS.2012.VIII.019 15. Kavraki LE, Svestka P, Latombe J-C, Overmars MH (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans Robot Autom 12(4):566–580. https://doi.org/10.1109/70.508439 16. Van den Berg J, Stilman M, Kuffner J, Lin M, Manocha D (2009) Path planning among movable obstacles: a probabilistically complete approach. Springer Tracts Adv Robot 57. https://doi.org/10.1007/978-3-642-00312-7_37 17. Kavraki LE, Kolountzakis MN, Latombe JC (1996) Analysis of probabilistic roadmaps for path planning. In: Proceedings of IEEE international conference on robotics and automation, Minneapolis, MN, USA, vol. 4, pp. 3020–3025, https://doi.org/10.1109/ROBOT.1996.509171


18. Yu J et al (2020) The path planning of mobile robot by neural networks and hierarchical reinforcement learning. Front Neurorobot 14 19. Ullah Z et al (2018) RL and ANN based modular path planning controller for resource-constrained robots in the indoor complex dynamic environment. IEEE Access 6, 74557–74568 20. H, B. & E, V. (2018) Comparative study of neural networks in path planning for catering robots. Proc Comput Sci 133:417–423. https://doi.org/10.1016/j.procs.2018.07.051 21. Brand M, Masuda M, Wehner N, Yu X-H (2010) Ant colony optimization algorithm for robot path planning. 2010 International Conference On Computer Design and Applications, Qinhuangdao, China, 2010, pp. V3-436–V3-440, https://doi.org/10.1109/ICCDA.2010.5541300 22. Cong YZ, Ponnambalam SG (2009) Mobile robot path planning using ant colony optimization. In: 2009 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Singapore, 2009, pp. 851–856, https://doi.org/10.1109/AIM.2009.5229903 23. Yang L et al (2016) Survey of Robot 3D path planning algorithms. J Control Sci Eng 2016(2016):1–22 24. Ibrahim MF, Adilah Z, Bakar A, Hussain A (2023) Genetic algorithm-based robot path planning 25. Lamini C et al (2018) Genetic algorithm based approach for autonomous mobile robot path planning. Proc Comput Sci 127:180–189 26. Hu Y, Yang SX (2004) A knowledge based genetic algorithm for path planning of a mobile robot. IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04. 2004, New Orleans, LA, USA, 2004, pp. 4350–4355 Vol.5, https://doi.org/10.1109/ROBOT.2004.1302402 27. Pötter Neto CA, de Carvalho Bertoli G, Saotome O (2020) 2D and 3D A* algorithm comparison for UAS traffic management systems. In: 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 2020, pp. 72–76, https://doi.org/10.1109/ICUAS48674.2020.9214028 28. Dutta AK, Debnath SK, Das SK, Local path planning of mobile robot using critical-pointbug algorithm avoiding static obstacles. IAES Int J Robot Autom 5(3):182–187 29. Das S, Dutta A, Debnath S (2020) OperativeCriticalPointBug algorithm-local path planning of mobile robot avoiding obstacles. Indones J Electr Eng Comput Sci 18:1646. https://doi.org/10.11591/ijeecs.v18.i3.pp1646-1656 30. Bhanu Chander V, Asokan T, Ravindran B (2016) A new Multi-Bug Path Planning algorithm for robot navigation in known environments. In: 2016 IEEE Region 10 Conference (TENCON), Singapore, 2016, pp. 3363–3367, https://doi.org/10.1109/TENCON.2016.7848676 31. Yufka A, Parlaktuna O (2009) Performance comparison of BUG algorithms for mobile robots. https://doi.org/10.13140/RG.2.1.2043.7920 32. Mohamed EF, El-Metwally K, Hanafy AR (2011) An improved Tangent Bug method integrated with artificial potential field for multi-robot path planning. In: 2011 International Symposium on Innovations in Intelligent Systems and Applications, Istanbul, Turkey, 2011, pp. 555–559, https://doi.org/10.1109/INISTA.2011.5946136 33. Das S, Roy K, Pandey T, Kumar A, Dutta A, Debnath S (2020) Modified critical point – a bug algorithm for path planning and obstacle avoiding of mobile robot 351–356. https://doi.org/10.1109/ICCSP48568.2020.9182347 34. Dutta AK, Debnath SK, Das SK (2018) Path planning of snake-like robot in presence of static obstacles using bug algorithm. Advances in Computer, Communication and Control, Springer, pp. 449–458 35. Das SK, Dutta A, Debnath S (2019) Development of path planning algorithm of centipede inspired wheeled robot in presence of static and moving obstacles using modified Critical-SnakeBug algorithm. IAES Int J Artif Intell (IJ-AI) 8:95. https://doi.org/10.11591/ijai.v8.i2.pp95-106

A Survey Paper: On Path Planning Strategies Based on Classical …

341

36. van den Berg J, Stilman M, Kuffner J, Lin M, Manocha D (2009) Path Planning Among Movable Obstacles: A Probabilistically Complete Approach. Springer Tracts in Advanced Robotics. 57 https://doi.org/10.1007/978-3-642-00312-7_37 37. Nieuwenhuisen D, van der Stappen AF, Overmars MH (2005) Path planning for pushing a disk using compliance. In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS. 714–720. https://doi.org/10.1109/IROS.2005.1545603

Actual Problems and Analysis of Anti-avoidance Tax Measures in the EAEU and EU Countries in the Context of Digitalization of the Economy

Irina A. Zhuravleva, Natalia V. Nazarova, and Natalia V. Levoshich

Abstract The purpose of this research is to consider directions for improving the international tax policy of individual countries of the post-Soviet space, which have chosen, on the one hand, the path of integration into the EU and, on the other, the formation of the EAEU. The study is based on system analysis and a systemonomic and analytical approach, using the methods of generalization, comparison, deduction, modeling and induction. The findings demonstrate the desire of the countries under study to improve their domestic and foreign tax policies, but also reveal existing problems, in particular the risks of insufficient tax collection, erosion of tax bases, the use of aggressive tax planning, the impact of the political situation on the openness and friendliness of international tax policy, low managerial potential in the tax legal environment, and high tax administration costs.

Keywords BEPS plan · International taxation · Taxation of post-Soviet countries · Digitalization of the economy · Tax administration

1 Introduction

The relevance of the study is due to the trends in the formation of a multipolar world order with the predominant role of large integration spaces. After the collapse of the Soviet Union in 1991 (in fact, the largest geopolitical association in the world at that time), the republics that were part of it formed their own tax systems and international legislation; however, globalization processes led to imbalances in profits and to tax abuses, despite the integration of the post-Soviet states into various commonwealths


(CIS, EU, EAEU, etc.). To combat tax abuses, a plan was developed to prevent the erosion of tax bases and the shifting of profits to low-tax jurisdictions (the BEPS plan) [1]. Considering the implementation of the Base Erosion and Profit Shifting (BEPS) plan within the framework of the post-Soviet countries of the EAEU and the EU, it is necessary to note a number of problematic issues. At the present stage of development of national tax systems, the countries in question apply OECD practice in order to improve them; they implement the actions of the BEPS plan in their national tax legislation to varying degrees of adaptability, using digital technologies and creating new information spaces. The main problem for the countries considered in this study is the significant financial cost of introducing and administering new provisions in the national norms of tax and financial law, digital technologies, and information aggregates. Not all countries are ready for, or can financially and economically afford, implementing these BEPS actions (in whole or in part) on the basis of the world's digital platforms. These problems are common to all states and to interstate relations, and to some extent threaten the strategic, political, financial and economic interests of countries; it is therefore relevant to put forward a hypothesis of a nationwide approach and cooperation in this modern direction, based on systemic economic analysis and the creation of a single digital platform and adaptive budget information technologies. The theoretical significance of the study lies in the development of theoretical and methodological provisions concerning international tax policy and the role of international organizations in this respect. At present, the "cooperation" of states on the basis of international organizations to develop concepts for some aspects of international tax policy and the application of the provisions of the BEPS plan is more relevant than ever, given the seriousness and scale of the problems facing the governments of the countries under study, including the Russian Federation. The performed scientific work led to the formulation of the following tasks:

– to systematically analyze the anti-avoidance measures of the EAEU and EU member states, considering their national tax systems and the implementation of a number of BEPS actions;
– to study the Double Taxation Avoidance Agreements (DTAAs) in the countries under study, analyze their practical significance, and identify problems;
– to conduct a regression analysis of anti-avoidance measures using the example of Russia.

The concept of analyzing anti-avoidance tax measures in the context of the countries that were part of the USSR and have now chosen the EU or the EAEU is proposed in the scientific community for the first time; however, it is considered only partially owing to the format of the publication. Countries not included in these unions, occupying the position of an observer or included in local unsettled associations, are not considered in this article. Russia undoubtedly plays a significant role in the financial and economic structure of the EAEU, and its models and developments in terms of tax regulation and the neutralization of existing gaps in international cooperation are


taken into account both by the member countries of the EAEU and by other Eurasian communities; therefore, in this work the regression analysis is carried out on the example of the Russian Federation. Currently, most countries of the former USSR do not have confirmed up-to-date statistical data regarding permanent establishments and controlled foreign companies; these data are either not published or not available at all (for example, in a number of countries there are no CFCs). Therefore, regression analysis for different groups of countries is not possible in this study.

2 Materials and Methods

2.1 Descriptive Part

Within the framework of the countries under consideration, the EAEU is the main economic union, but there are states that have joined the EU: for example, the tax systems of Estonia, Latvia and Lithuania are moving closer to the tax systems of the EU countries, while the tax systems of Armenia, the Russian Federation, Belarus, Kyrgyzstan and Kazakhstan are being consolidated within the framework of the EAEU. To date, a number of changes are taking place, including in international taxation, aimed at improving the national legislation of countries [2]. The tax systems of the countries under consideration, although based on a systemonomic nature, remain imperfect. According to the OECD, countries annually lose between $100 billion and $240 billion in revenue due to tax avoidance by multinational companies (TNCs). As part of its mandate, the OECD has developed a plan to prevent base erosion and profit shifting to low-tax jurisdictions. Analysis by the Central Bank of the Russian Federation (CBR) and the International Monetary Fund (IMF) showed that the Russian Federation loses up to 1 trillion rubles per year to tax avoidance schemes. According to Janský and Palanský, about $85 billion of corporate profits are withdrawn from the Russian Federation, which amounts to $17 billion of shortfalls in the country's budget system [3]. Analyzing the tax systems of the EAEU member countries, we note that these countries have formed a unified tax administration system. Harmonization in terms of indirect taxation is gaining momentum on the basis of a single economic and digital space. According to the European Commission, in 2016 the EU lost 46 billion euros as a result of tax evasion at the international level [4]. Therefore, a common, systemic and systemonomic, unified approach is needed to prevent the various forms of BEPS [5]. To this end, at the initial stage of solving urgent problems, the OECD developed the BEPS plan, which so far consists of 15 actions.


The inclusive framework for the BEPS plan was created by the OECD to ensure that interested countries and jurisdictions, including developing countries, can participate equally in the development of standards on issues related to BEPS. The inclusive structure of the BEPS plan includes Armenia, Belarus, Georgia, Kazakhstan, Latvia, Lithuania, Russia, Ukraine and Estonia. All members of the inclusive BEPS plan structure are committed to meeting the four minimum standards of the BEPS plan, which are:

– Harmful tax practices (Action 5);
– Prevention of abuses under DTT (Action 6);
– CbCR (Action 13);
– MAP (Action 14).

The BEPS plan is implemented through its incorporation into national legislation. Action 5 of the BEPS plan also includes a transparency framework that involves the spontaneous exchange of information on tax rulings. Spontaneous or proactive digital exchange is the transfer of data, information and documents by the tax authority of one country to the tax authority of another country without a prior request, on its own initiative [6]. Action 6 of the BEPS plan aims to combat attempts by an individual or entity to access tax benefits under a DTT between two countries without being a resident of either country. Ratification of the MLI is an important element for the effective implementation of Action 6 of the BEPS plan. Implementation is also carried out through the Preamble, the Principal Purpose Test (PPT) and the Limitation of Benefits (LOB) article [7]. CbCR is a type of information exchange that aims to provide tax authorities with a general overview of the activities and tax risks of TNCs. CbCR applies to TNCs with annual consolidated revenue of more than 750 million euros [8]. According to the BEPS plan published by the OECD, TNCs must report annually, for each country in which they do business, the amount of revenue, profit before tax, income tax, number of employees, tangible assets, retained earnings, etc. Next, we consider the state of MAP implementation in the countries of the EAEU and the EU [9]. For example, MAP exists in the Russian Federation, but there is limited experience in resolving cases under this procedure and only a small list of MAP cases: a small number of new cases are filed each year, with 32 cases pending as of December 31, 2019. In general, the Russian Federation complies with half of the elements of Action 14 of the BEPS plan. MAP is in force in Lithuania, but there is likewise limited experience in resolving cases under this procedure and a small list of MAP cases. In order to fully comply with all areas of an effective dispute resolution mechanism, Lithuania needs to update and amend a certain number of its DTTs. The MLI is a new tool in Russian tax practice. The ratification and signing of the MLI complemented the trends that affect international taxation in the Russian Federation [10]. To date, the MLI covers 71 DTTs.


In Latvia, the MLI will not apply to the DTTs with North Macedonia and Germany. However, North Macedonia has added its tax treaty with Latvia under the MLI. It can be concluded that the actions that countries have not implemented, or have implemented only partially, reveal the imperfections, shortcomings, gaps and problems in the systemic functioning of states' tax systems. However, implementing these actions requires significant technical, informational, digital and financial resources, and not all countries have the means to implement the BEPS plan. For example, the tax authorities of the United States of America (USA) spent about $380 million to administer and implement the Foreign Account Tax Compliance Act (FATCA) [11]. Countries use two Model Tax Conventions (MTCs) to conclude DTTs: the OECD MTC and the UN MTC. The OECD MTC is mainly used by developed countries, while the UN MTC is used by developing countries. DTTs are a source of international law. Currently, more than 3,000 DTTs regulate a significant proportion of cross-border investments, most of them based on the OECD MTC. It should be noted that, in general, there is a significant number of existing DTTs between the EAEU member countries and the EU, and the problems in the international taxation of the countries under consideration concern a small number of DTTs and are related to the fact that:

– the treaties themselves do not exist;
– treaties have been signed but have not entered into force;
– tax treaties have been initialed but not signed.

Increasing transparency is a key element in the fight against tax evasion and tax avoidance. Automatic exchange of information covered 84 million financial accounts for a total of 10 trillion euros in 2019, during which 107 billion euros of additional revenues were revealed worldwide [12]. Developing countries received 30 billion euros in additional revenue from voluntary disclosure programs and offshore tax investigations. The IMF noted that offshore deposits fall by 8–12% after the conclusion of bilateral agreements on the exchange of information on request, while under agreements on the automatic exchange of information, offshore deposits are reduced by 25%. Further, as part of solving urgent problems of international tax relations and the ongoing digitalization in this direction, we consider the following types of information exchange: exchange based on FATCA, exchange of information on request (EOIR), exchange under the Common Reporting Standard (CRS), and exchange via CbCR. In addition to the inclusive framework for BEPS, there is the Global Forum on Transparency and Exchange of Information for Tax Purposes, which includes Azerbaijan, Armenia, Belarus, Georgia, Kazakhstan, Latvia, Lithuania, Moldova, Russia and Estonia. The main direction of its work is to increase the transparency of tax systems and establish an effective exchange of information for tax purposes. Almost all countries


of the EAEU and the EU exchange information on the basis of FATCA. In 2014, the United States suspended negotiations with the Russian Federation on a FATCA agreement. EOIR is an important tool for tax authorities to ensure that taxpayers pay their taxes in full and on time. Armenia, Estonia, Georgia, Latvia, Lithuania, the Russian Federation and Ukraine have the necessary domestic legal framework for the spontaneous exchange of information, whereas in Kazakhstan the necessary domestic legal framework for spontaneous exchange has not yet been created. The Russian Federation takes the exchange of information seriously and therefore supports most international initiatives: it participates in the Forum on Tax Administration, in the OECD Committee on Fiscal Affairs, in the Intra-European Organisation of Tax Administrations, etc. In 2017, the Russian Federation updated its tax legislation to implement CbCR. Summarizing the above, the following conclusions can be drawn:

– the EAEU and EU countries implement BEPS countermeasures (to varying degrees) in their international tax policies;
– the network of international tax treaties continues to develop and some DTTs are being revised, but problems remain in international tax relations between the countries of the former USSR; therefore, it is necessary to actively sign and implement agreements to further avoid double taxation;
– the exchange of information based on digital technologies remains a significant tool in the fight against tax evasion and for increasing tax transparency;
– the volume of information exchanged is still not significant, and it is necessary to continue increasing the volume of data exchange;
– open questions remain: whether developed countries will provide information in full, and whether the tax authorities of the countries in question will properly apply the data received.

To assess the introduced anti-evasion rules, it is necessary to propose a methodology that would reveal the dependence of income tax and personal income tax revenues in the budget system of each country. This article carries out a correlation-regression analysis, which reflects the direct dependence of budget revenues on the number of newly opened permanent representative offices and on the growth in the profits of these companies, and likewise for controlled companies. At the moment, there is no single methodology in the world that would reliably determine such a dependence, but this analysis shows a positive trend from introducing the provisions of the BEPS plan and implementing them in the national tax legislation of the EAEU member countries. The OECD purposefully developed these measures in order to implement anti-avoidance measures effectively, and they give positive results. In the future, the OECD may develop guidelines for evaluating the effectiveness of each provision of the BEPS plan, which would directly, not indirectly, prove the economic correctness of the chosen international tax policy in terms of anti-avoidance measures.


2.2 Research Part

The countries of the EAEU and the EU are implementing rules in their national tax systems that prevent tax evasion and ensure fair tax competition and transparency. It is therefore necessary to consider and evaluate how these implemented rules affect budget revenues. The introduction of the provision on the "Permanent Representation" (PR) in the Tax Code of the Russian Federation gave the budget additional revenues. Thus, Fig. 1 reflects the positive dynamics of income from PRs for the period 2017–2019. In 2020, receipts were 12% lower than in the previous year; this decline is explained by the pandemic that affected the whole world in 2020–2021. From Fig. 2 it can be seen that in 2021 the regional budgets received 15% less personal income tax from CFCs than in the previous year. The Boston Consulting Group has noted that Russian citizens hold assets abroad worth more than $400 billion, and the Federal Tax Service has stated that Russians keep more than 13 trillion rubles in accounts abroad. In 2021, 66% of personal income tax on CFCs was paid by residents of Moscow. The International Monetary Fund (IMF) has noted that the implementation of thin capitalization and transfer pricing rules negatively affects the investment of TNCs: with an average corporate tax rate of 27%, the thin capitalization rule reduces TNC investment by an average of 20%, and transfer pricing rules have reduced investment by TNCs by more than 11%. Let us now consider the impact of the number of PRs on corporate income tax receipts; this requires a regression analysis [13].

Fig. 1 Proceeds from Permanent Representations (PR) in the Russian Federation, 2017–2020: receipts from PRs, billion rubles, and number of PRs, thousand units (Source Compiled by the authors)

Fig. 2 Receipts from CFCs in the Russian Federation, billion rubles (Source Compiled by the authors)


Table 1 Initial data for regression analysis of the impact of the number of PRs on corporate income tax receipts for 2020 by constituent entities of the Russian Federation (Source Compiled by the authors)

District of the Russian Federation | Number of PRs, units (X) | Receipts from corporate income tax, billion rubles (Y)
Central Federal District | 1054 | 1788.68
Northwestern Federal District | 219 | 446.64
North Caucasian Federal District | 20 | 31.35
Southern Federal District | 119 | 185.00
Volga Federal District | 147 | 371.99
Ural Federal District | 87 | 472.08
Siberian Federal District | 54 | 351.00
Far Eastern Federal District | 84 | 371.38

Table 1 presents the initial data on the number of PRs and corporate income tax receipts for each federal district of the Russian Federation for 2020. Figure 3 shows the relationship between the two inputs (the number of PRs and income tax receipts) and gives an idea of whether there is a linear relationship between X and Y. From the figure, it can be seen that there is a direct linear relationship between the two variables, which can be represented by the equation:

y = 1.54x + 158.61    (1)
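The coefficients of Eq. (1) can be checked directly against the data in Table 1 with an ordinary least-squares fit. The following sketch is our own illustration (the values are copied from Table 1; the use of Python and scipy is our choice, not part of the original study):

# Minimal sketch: reproduce Eq. (1) from the Table 1 data.
from scipy.stats import linregress

# Number of PRs, units (X) and corporate income tax receipts,
# billion rubles (Y), per federal district (Table 1).
x = [1054, 219, 20, 119, 147, 87, 54, 84]
y = [1788.68, 446.64, 31.35, 185.00, 371.99, 472.08, 351.00, 371.38]

fit = linregress(x, y)
print(f"slope     = {fit.slope:.3f}")      # ~1.541, the 1.54 in Eq. (1)
print(f"intercept = {fit.intercept:.2f}")  # ~158.61, as in Eq. (1)
print(f"R         = {fit.rvalue:.3f}")     # ~0.974 (Multiple R in Table 2)

Running this reproduces the slope and intercept of Eq. (1) to rounding, confirming that the published equation follows from the published table.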

Based on the initial data, a regression analysis was carried out, the results of which are reflected in Table 2. From this table, the following conclusions can be drawn:

Fig. 3 Scatterplot for regression analysis of the impact of the number of PRs on corporate income tax receipts for 2020 (Source Compiled by the authors)

Table 2 Regression analysis of the impact of the number of PRs on corporate income tax receipts for 2020 (Source Compiled by the authors)

Regression statistics:
Multiple R: 0.974
R-square: 0.949
Normalized (adjusted) R-square: 0.941
Standard error: 131.236
Observations: 8

Analysis of variance:
Regression: df = 1, SS = 1,934,929, MS = 1,934,929, F = 112.346, Significance F = 0
Remainder: df = 6, SS = 103,338, MS = 17,223
Total: df = 7, SS = 2,038,266

Coefficients:
Y (intercept): coefficient = 158.608, standard error = 56.605, t-statistic = 2.802, p-value = 0.031, 95% interval [20.101, 297.114]
X (slope): coefficient = 1.541, standard error = 0.145, t-statistic = 10.599, p-value = 0, 95% interval [1.185, 1.897]
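For readers who wish to reproduce the full diagnostic output of Table 2, an ordinary least-squares fit yields the same statistics. The following minimal sketch uses statsmodels and scipy (our own tooling choice, not part of the original study); it also recomputes the critical values cited in the conclusions below:

import numpy as np
import statsmodels.api as sm
from scipy.stats import t, f

# Table 1 data: number of PRs (X) and income tax receipts (Y).
x = np.array([1054, 219, 20, 119, 147, 87, 54, 84], dtype=float)
y = np.array([1788.68, 446.64, 31.35, 185.00, 371.99, 472.08, 351.00, 371.38])

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.params)      # ~[158.608, 1.541]: intercept and slope of Table 2
print(model.rsquared)    # ~0.949 (R-square)
print(model.fvalue)      # ~112.35 (F)
print(model.conf_int())  # ~[20.1, 297.1] and [1.185, 1.897] (95% intervals)

# Critical values used in the text (alpha = 0.05, n = 8, one regressor):
print(t.ppf(0.975, df=6))  # ~2.45, the critical t
print(f.ppf(0.95, 1, 6))   # ~5.99, the critical F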


– The coefficient of determination is a summary measure of the overall quality of the regression equation. It varies from 0 to 1, and the closer it is to one, the better the model. In this case, the coefficient of determination is 0.95; therefore, the regression model is of good quality.
– The significance of the coefficients of the regression equation is checked using Student's test. The critical value of t is 2.45. The t-statistics are greater than the critical t; therefore, the coefficients 158.61 and 1.54 are statistically significant.
– Fisher's test: the critical value of F is 5.99. Since F > F critical, the regression equation is statistically significant.

In addition, it is possible to analyze the impact of CFCs on corporate income tax receipts in the Russian Federation, again using regression analysis. Table 3 shows the initial data on the number of CFCs and corporate income tax receipts within each federal district of the Russian Federation for 2020. The scatterplot in Fig. 4 shows that there is a direct linear relationship between X and Y, which is given by the equation:

y = 0.59x + 206.18    (2)

Table 3 Initial data for regression analysis of the impact of the number of CFCs on corporate income tax receipts for 2020 by constituent entities of the Russian Federation (Source Compiled by the authors)

District of the Russian Federation | Number of CFCs, units (X) | Receipts from corporate income tax, billion rubles (Y)
Central Federal District | 2647 | 1788.68
Northwestern Federal District | 589 | 446.64
North Caucasian Federal District | 12 | 31.35
Southern Federal District | 150 | 185.00
Volga Federal District | 151 | 371.99
Ural Federal District | 114 | 472.08
Siberian Federal District | 351 | 351.00
Far Eastern Federal District | 12 | 371.38

After conducting the regression analysis, as reflected in Table 4, we can draw the following conclusions:


Fig. 4 Scatterplot for regression analysis of the impact of the number of CFCs on corporate income tax receipts for 2020 (Source Compiled by the authors)

– The coefficient of determination, reflecting the quality of the model, is 0.94; therefore, the regression model is of good quality.
– Student's test (testing the significance of the model coefficients): the critical value of t is 2.45. The t-statistics are greater than the critical t; therefore, the model coefficients 0.59 and 206.18 are statistically significant.
– Fisher's test: the critical value of F is 5.99. Since F > F critical, the model equation is statistically significant.

Thus, after conducting the regression analysis and studying the statistical data of the countries, we can say that the implemented anti-avoidance measures have a positive effect on the dynamics of budget revenues, on tax transparency and on fair tax competition.

3 Results and Discussion

Starting from 2023, it is planned to implement a package of measures (Pillar 1 and Pillar 2) to change the taxation of TNCs. The goals of Pillar 1 and Pillar 2 are, respectively, to redistribute the excess profits of TNCs to source countries and to introduce a global minimum profit tax rate of 15%. The OECD has estimated the impact of Pillar 1 and Pillar 2 [14]. The global net increase in tax revenue is estimated at up to 4% of global corporate income tax revenue, or US$100 billion per year. Income tax revenue gains are broadly similar across high-, middle- and low-income countries. A significant reduction in profit shifting is expected as a result of the cumulative effect of Pillar 1 and Pillar 2. Pillar 1 will cover about 100 TNCs worldwide and the redistribution of about US$125 billion in taxable profits. Armenia, Belarus, Estonia, Georgia, Kazakhstan, Latvia, Lithuania and Russia have already joined Pillar 1 and Pillar 2. For CbCR, the OECD makes the following recommendations for improvement:

– Armenia needs to take measures as soon as possible to implement the domestic legal and administrative framework for CbCR;

3 Results & Discussion Starting from 2023, it is planned to implement a package of measures (Pillar 1 and Pillar 2) to change the taxation of TNCs. The goal of Pillar 1 and Pillar 2 is to redistribute the excess profits of TNCs to source countries and introduce a global minimum profit tax rate of 15%, respectively. The OECD has estimated the impact of Pillar 1 and Pillar 2 [14]. The global net increase in tax revenue is estimated at up to 4% of global corporate income tax revenue, or US$100 billion per year. Income tax revenue gains are broadly similar across high, middle, and low income countries. A significant reduction in profit shifts is expected as a result of the cumulative effect of Pillar 1 and Pillar 2. Pillar 1 will cover about 100 MNCs worldwide and will cover the redistribution of about US$125 billion in taxable profits. Armenia, Belarus, Estonia, Georgia, Kazakhstan, Latvia, Lithuania, Russia have already joined Pillar 1 and Pillar 2. For CbCR, the OECD makes the following recommendations for improvement: – Armenia needs to take measures as soon as possible to implement the domestic legal and administrative framework for CbCR.

Table 4 Regression analysis of the impact of the number of CFCs on corporate income tax receipts for 2020 (Source Compiled by the authors)

Regression statistics:
Multiple R: 0.967
R-square: 0.936
Normalized (adjusted) R-square: 0.925
Standard error: 147.884
Observations: 8

Analysis of variance:
Regression: df = 1, SS = 1,907,049.142, MS = 1,907,049.142, F = 87.201, Significance F = 0
Remainder: df = 6, SS = 131,217.293, MS = 21,869.549
Total: df = 7, SS = 2,038,266.435

Coefficients:
Y (intercept): coefficient = 206.177, standard error = 61.148, t-statistic = 3.372, p-value = 0.015, 95% interval [56.554, 355.800]
X (slope): coefficient = 0.588, standard error = 0.063, t-statistic = 9.338, p-value = 0, 95% interval [0.434, 0.743]

– Georgia does not have bilateral relations for CbCR exchange; therefore, Georgia needs valid competent-authority agreements with the countries of the inclusive framework;
– Kazakhstan is encouraged to take steps to put in place processes ensuring that information exchange is carried out under the terms of reference for the information exchange framework;
– Latvia is recommended to change or clarify its rule for calculating the annual threshold for consolidated group income so that it is applied in a manner consistent with OECD guidance on exchange rate fluctuations for TNCs whose ultimate parent company is not in Latvia;
– Estonia and Lithuania meet all the requirements of Action 13 of the BEPS plan and receive no recommendations for improvement from the OECD [15].

Regarding ways to improve the automatic exchange of information, the OECD makes the following recommendations:

– Azerbaijan should amend its domestic legal framework to ensure that the approach to identifying controlling persons under the automatic exchange of information standard is consistent with the approach to determining beneficial owners under its internal anti-money-laundering procedures;
– Estonia should remove supplementary funded pension insurance contracts from the list of excluded accounts, as they do not meet the requirements of the automatic exchange of information standard;
– Latvia should include the definition of "managed" in relation to the definition of an investment entity;
– Lithuania complies with all the requirements of the standard for automatic information exchange and receives no recommendations from the OECD.

On January 20, 2022, the OECD released a new version of the transfer pricing (TP) guidelines for TNCs and tax administrations. The OECD Transfer Pricing Guidelines provide guidance on the application of the "arm's length principle", the international consensus on assessing, for tax purposes, the profits of cross-border transactions between related companies [16]. Improvement within the framework of integration processes is an important issue. The processes of globalization and digitalization are fundamentally changing economic relations; therefore, the processes of tax administration and tax collection should also keep up with global trends. To this end, the OECD develops new projects and proposals every year to improve international taxation and to prevent tax avoidance, the erosion of tax bases, the withdrawal of profits, and elements of aggressive tax planning. It should be noted that the Russian Federation sets a certain vector of direction in international tax policy, and the other countries join it.


4 Conclusion

This study examined the tax systems of a number of countries of the EAEU and the EU. It can be noted that, in general, the countries are moving in two directions in terms of tax harmonization: some (such as Latvia, Lithuania and Estonia) are improving their tax systems within the EU, while others (Russia, Armenia, Kyrgyzstan, Kazakhstan and Belarus) are doing so within the framework of the EAEU. A study of general and special anti-avoidance measures in the post-Soviet countries led to the conclusion that it is necessary to continue introducing anti-avoidance measures and to improve the rules already introduced. It is also possible to implement targeted anti-avoidance measures, which are more narrowly focused rules than SAAR. The regression analysis of the impact of CFCs and PRs on revenues to the budgets of the Russian Federation showed that there is a direct linear relationship between the introduction of anti-avoidance measures and tax revenues to the country's budgets; in other words, the more CFCs and PRs, the more tax revenues flow to the budgets. However, IMF data showed that anti-avoidance rules could have a negative impact on investment. Therefore, the state, in its internal and external tax policy, should pursue not only fiscal goals but also stimulating ones. In terms of data exchange, FATCA-based information exchange, exchange of information on request (EOIR), exchange under the Common Reporting Standard (CRS) and CbCR exchange are already in use. At the present stage of digitalization of economies, countries need to create unified information system blocks/platforms/databases at the level of economic unions to provide access to such electronic servers/database storages, which will speed up the exchange of information between tax jurisdictions. This should be fixed in separate memorandums/regulations/directives of the countries, or in agreements on the avoidance of double taxation or similar cross-country agreements. Digitalization of the processes of inter-country exchange of tax information will, on the one hand, accelerate the process of legalizing aggressively planned income and money withdrawn to low-tax jurisdictions and, on the other hand, increase the effectiveness of anti-evasion measures, taking into account current geopolitical events. Thus, the countries of the EAEU and the EU are improving within the framework of international tax policy. However, open questions and problems remain regarding information exchange, the implementation of the BEPS plan, tax agreements, etc. The main obstacles to improving the tax system are the political situation in a country and significant government spending on tax administration and on the introduction of information technology. The proposals and results of the study can be applied to improving the model tax code developed by the CIS countries within the national tax systems of the countries under consideration, and to creating a supranational tax document that contributes to solving problems of tax administration.


References

1. Organisation for Economic Co-operation and Development, https://www.oecd.org/tax/beps/beps-actions/
2. Zhuravleva IA, Nazarova NV, Kozharinov AV, Levoshich NV (2021) The national tax system at the present stage of development. In: Socio-economic systems: paradigms for the future. Springer International Publishing, pp 1607–1615
3. Janský P, Palanský M (2019) Estimating the scale of profit shifting and tax revenue losses related to foreign direct investment. Int Tax Public Finance 26(5):1048–1103
4. The European Commission, https://commission.europa.eu/strategy-and-policy/reporting/annual-activity-reports_en
5. Zhuravleva IA (2020) The direction of reforming the tax system on the basis of the scientific systemonomic author's model: nalogonomy. In: Yonk RM, Bobek V (eds) Perspectives on economic development – public policy, culture, and economic development. IntechOpen, London, pp 125–145
6. Organisation for Economic Co-operation and Development, Action 5, https://www.oecd.org/tax/beps/beps-actions/action5/
7. Organisation for Economic Co-operation and Development, Action 6, https://www.oecd.org/tax/beps/beps-actions/action6/
8. Library of OECD, https://www.oecd-ilibrary.org/taxation/oecd-g20-base-erosion-and-profit-shifting-project_23132612
9. Organisation for Economic Co-operation and Development, MAP Review, https://www.oecd.org/tax/dispute/country-map-profiles.htm
10. Multilateral Convention to Implement Tax Treaty Related Measures to Prevent BEPS, https://www.oecd.org/tax/treaties/multilateral-convention-to-implement-tax-treaty-related-measures-to-prevent-beps.htm
11. International Monetary Fund, https://www.imf.org/en/Countries/USA
12. The Automatic Exchange of Information, https://www.oecd.org/tax/automatic-exchange/
13. What is Regression Analysis? Types, Techniques, Examples, https://www.knowledgehut.com/blog/data-science/regression-analysis-and-its-techniques-in-data-science
14. Zhuravleva IA, Lysenkova AI (2021) Global tax reform: Pillar I, Pillar II. Innovative development of the economy 5(65):142–149. LLC "Scientific Consulting Center", Yoshkar-Ola (in Russ.)
15. Organisation for Economic Co-operation and Development, Action 13, https://www.oecd.org/tax/beps/beps-actions/action13/
16. OECD Transfer Pricing Guidelines for Multinational Enterprises and Tax Administrations 2022, https://www.oecd.org/tax/transfer-pricing/oecd-transfer-pricing-guidelines-for-multinational-enterprises-and-tax-administrations-20769717.htm

An Alternative Approach to Smart Air Conditioner with Multiple Power Sources

Md. Rawshan Habib, W. M. H. Nimsara Warnasuriya, Md. Mobusshar Islam, Md. Apu Ahmed, Sibaji Roy, Md. Shahnewaz Tanvir, and Md. Rashedul Arefin

Abstract This project demonstrates the methodology and implementation of a thermoelectric air conditioner based on the Peltier effect. The advantages of thermoelectric coolers over traditional coolants include their small size and weight and the absence of moving components and working fluid. The suggested system has two sides: when DC power passes through it, heat is transferred from one side to the other, causing one side to become colder and the other hotter. The cold side drops below room temperature, whereas the hot side is connected to a heat sink to keep it near room temperature. Where lower temperatures are needed for special purposes, several coolers can be cascaded together. The electrical energy needed to power the cooler can be supplied from a solar panel, which is beneficial for rural areas. The proposed model contains multiple power sources for better efficiency. The proposed prototype is economical and was tested using appropriate components, with promising outcomes.

Keywords Thermoelectric · Air condition · Smart system · Alternative approach · Peltier



1 Introduction

An air-conditioning unit is a system that carries out four distinct tasks at once: controlling the air temperature, controlling humidity levels, controlling the airflow, and controlling air quality. A structure's heating, cooling, airflow and humidification are often provided by air conditioners. To avoid the development of dangerous bacteria in the airways and to guarantee efficient functioning, such devices must be meticulously maintained. Since open windows would interfere with the management device's efforts to keep the air quality constant, air-conditioned structures frequently feature closed windows, especially skyscrapers. In accordance with the needs and preferences of each user, there are nowadays many different kinds of air-conditioning systems available. Certain air conditioners, including window units, package modules, split systems, packaged terminal air conditioners (ACs) such as those found in resorts, or compact ductless air conditioners, utilize direct-expansion coils for cooling. Other models, usually commercial air conditioners for big commercial facilities, use chilled water (CHW) for air conditioning. The coils in the air-conditioning system must be set to a degree that is cooler than the air, regardless of the kind of air conditioner being utilized [1]. Figure 1 shows the basic structure of an air conditioner. The air-conditioning system is one of the main everyday electricity consumers in homes. In order to preserve power and consequently lower consumer expenses, it is essential to use energy-efficient air conditioners. According to a study presented in [2], inverter-type air conditioners typically use 33% less electricity compared to conventional air conditioners.

Fig. 1 Basic structure of an air conditioner [1]


A thermoelectric (TE) cooler, also known as a TE module or Peltier cooler, is an electrical device made of semiconductors which serves as a miniature heat pump. Heat is transferred across a TE module from one side to the other by applying a low-voltage DC power supply. As a result, one side of the unit is cooled, whereas the other side is heated. Thermoelectric devices can both generate electricity (Seebeck effect) [3] and be used for temperature control (Peltier effect). To date, in addition to being highly effective for particular uses such as satellites, thermoelectric power generation has also been used to provide power in remote locations. Very low-power applications, including travel coolers or compact freezers, have shown success with thermoelectric refrigeration, and for high thermal power, thermoelectricity can become a role model by achieving higher effectiveness and lower cost. Since they do not require any liquids or motors, thermoelectric camp freezers are the most durable and dependable type of portable unit. A single module provides sufficient power, so no additional electrical controllers or amplifiers are required, which lowers the cost further. Certain containers in resort coolers, which are compact and use little cooling power, need to be kept only a few degrees colder than ambient temperature (normally freezing temperatures are not needed), and durability is not crucial in such situations. In a world dominated by vapor-compression chilling as the predominant technique for air-conditioning open spaces and houses, it may appear that no thermoelectric air-conditioning option could thrive. Nevertheless, there are circumstances in which installing conventional air-conditioning equipment is challenging and a thermoelectric alternative may be preferable. In this prototype model, the main focus is an efficient smart air-conditioning system built around a thermoelectric device known as a Peltier module. This prototype is very useful for rural areas, where it is difficult to obtain a continuous supply from the electric grid: the system can generate power through a solar panel and contains no compressor-type elements, making it very convenient for such areas. Current ACs use coolants such as Freon, ammonia and so on to provide cooling. This project focuses on implementing an air-conditioning system using a Peltier device and on its benefits over the conventional system. While traditional coolants may reach optimal performance, one of their main drawbacks is the release of hazardous gases. Thermoelectric ACs are a solution to such issues.
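The cooling behavior described above can be approximated with the standard lumped heat-balance equations for a Peltier module. The sketch below is purely illustrative; the module parameters (Seebeck coefficient, electrical resistance, thermal conductance) are assumed values typical of a small 12 V module and are not measurements from the prototype described in this paper:

# Illustrative sketch of the standard Peltier module heat balance.
# ALPHA, R and K are assumed textbook-style values, not prototype data.
ALPHA = 0.05   # Seebeck coefficient, V/K (assumed)
R     = 2.0    # electrical resistance, ohm (assumed)
K     = 0.5    # thermal conductance, W/K (assumed)

def peltier(i_amps, t_cold_k, t_hot_k):
    """Return (cooling power Qc, input power Pin, COP) for one module."""
    dt = t_hot_k - t_cold_k
    # Peltier pumping minus half the Joule heating minus back-conduction:
    qc = ALPHA * i_amps * t_cold_k - 0.5 * i_amps**2 * R - K * dt
    pin = (ALPHA * dt + i_amps * R) * i_amps  # electrical input power
    return qc, pin, qc / pin

qc, pin, cop = peltier(i_amps=4.0, t_cold_k=288.0, t_hot_k=308.0)
print(f"Qc = {qc:.1f} W, Pin = {pin:.1f} W, COP = {cop:.2f}")

With these assumed values the COP comes out below 1, which is consistent with the remark later in this paper that the COP of the system is low.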

2 Technological Advancement Related to AC

Humans now depend on ACs far more than in the past for a higher standard and far more pleasant way of life. ACs raise the occupancy rate of buildings in both industrialized and developing nations, and this causes AC energy usage to increase quickly. The effectiveness, innovation, convenience of people, and power usage have all played a role in the development of air conditioners. Window, split, fixed- and convertible-frequency, and the recently introduced smart type are all examples


of how far ACs have come. Window-type ACs were converted into split ones in the 1990s by relocating the compressor outside to minimize noise and please users. Since 2000, the rising cost of oil and the need for energy efficiency have made it necessary to switch from fixed-frequency to convertible-frequency regulation of compressors. Cutting-edge technologies such as mobile phones, the Internet of Things and 4G communication systems are being utilized by experts to present smart ACs with advanced control schemes [4]. Figure 2 provides a simple illustration of how the air conditioner's operation evolved. Over time, several modeling methodologies have been used to investigate ways of reducing the electricity usage of air conditioners. Integrating a storage system is an option that should be taken into account, since passive thermal management may not be sufficient to accomplish the goals of lowering costs and peak shifting at variable fuel prices [5]. Ice storage is a potential option for the future development of ACs. Ice-storage ACs have a cooling supply storage device that may liquefy

Fig. 2 Technological advancement of ACs [4]


ice and discharge cold at peak times, preserve cold during the energy grid's troughs, and significantly lower costs. Ice-storage cooling systems are an excellent controllable load, and the use of ice storage to lower the expense of meeting refrigeration needs has received a lot of attention [6]. The fundamental construction and operating method of a versatile solar-aided air-source AC with a hot-water machine have been introduced in [7]. The assessment of methods that use heat pumps alone to produce hot water, heat pumps combined with solar energy, and conventional water heaters reveals that it may minimize energy use and cost; additionally, it can cut a significant amount of carbon emissions when used in combined operation. A promising method for storing energy and distributing it when needed is thermal energy storage (TES). A novel dual-circuit TES for AC rated at 3.5 kWh is proposed in [8]. The purpose of the research presented in [9] is to find the best atmospheric cooling for lowering heat stress while also improving thermal comfort in individuals. A transportable thermoelectric AC has been created to provide localized cooling for people. It is demonstrated that the prototype refrigeration capability is more than 26.7 W, and the novel system operates quietly even at high refrigeration capacities, opening up a wide range of uses for enhanced refrigeration, including ventilators, buildings' air-conditioning, and other areas. Renewable energies are quickly replacing existing energy production sources. Due to the high degree of uncertainty associated with renewable generation, including wind and solar, this change increases the difficulty of matching the demand and supply of electricity. By making use of the thermal inertia of dwellings, residential ACs can offer balancing functions. The majority of homes have more than one zone; in such homes, a single air conditioner cools numerous distinct spaces, each with its own thermostat, and the dynamics of air temperature are coupled between zones. An appropriate aggregate model is established, and a mathematical model is provided in [10] for the dynamical behavior of two-zone ACs with non-disruptive control methods. Thermoelectric power generation (TEG) has been utilized in [11] to harvest energy from ACs. In this work, eight TEGs are linked to an aluminum heat sink and heated on one side, while three aluminum water-cooled blocks support the TEGs' cooled side. According to the findings, surveillance systems can be operated using this harvested energy. To minimize energy use and increase energy efficiency in public facilities, experts developed a multi-energy-source air-conditioning system for big apartment buildings in China. The findings show that the system using numerous energy sources is more efficient and cost-effective: operational costs are reduced even though the start-up expenditure is higher compared to other methods [12]. By regulating airflow and area temperature, an Internet of Things platform is developed in [13] to reduce the power usage of ACs. The ambient temperature in the test zone and the supply temperature are taken as the governing inputs. The controller instructs the compressor to shut off if the supply air temperature indicates that the evaporator has become cool or if the test zone temperature reaches the desired level. As a result, the compressor runs less frequently and consumes less energy.
With this method, the average annual reduction in energy use is 18,496.74 kWh. Eco-cost analysis is essential for decreasing pollution and identifying practical solutions to the world’s energy shortage. Hence, a novel eco-cost evaluation technique


for split ACs is suggested in [14]. A novel approach is created to address undesirable missing data from the data collection procedure. Following that, frameworks for the environmental and non-environmental cost evaluation of split ACs are developed accordingly. The suggested method's key drawback is that it depends heavily on component data to estimate costs accurately. ACs may operate incorrectly due to improper installation or defects accumulated over the hardware's lifespan. A brand-new automatic fault detection method is introduced in [15] that can identify issues in household ACs and notify the owner. The suggested algorithm performs automated defect detection throughout the hardware lifecycle, including soon after deployment, using only the facility's thermostat and the external air temperature.

3 Design and Implementation of the Proposed System

The suggested smart AC is built with practical components: a microcontroller, temperature sensor, LCD display, GSM module, mobile device, IR sensor, Peltier devices, etc. This project is a smart air-conditioning system using multiple power sources and is also an alternative to the traditional system. The overall process flowchart is given in Fig. 3, which shows how the total system works, including the automatic behavior that makes the system smart. The suggested prototype's block diagram is shown in Fig. 4. Four Peltier devices are connected in series in the middle of the circuit diagram illustrated in Fig. 5. Two heat sinks cover the series of Peltier devices, one on each side, and a total of four exhaust fans, two on each side, are connected to carry the heat away. The microcontroller makes the whole process automatic: the Peltier devices are connected to the microcontroller through a TEC driver, and the exhaust fans are connected through a motor driver. The GSM module, remote-control transmitter and receiver, sensor, and user panel display are also connected to the central processing unit, which is an Arduino Mega. The power supply board is shown in separate blocks in the block diagram; as the system operates from multiple power sources, the power board plays a vital role in this project. The Peltier air-conditioning system has been partially automated with the help of the microcontroller. The temperature sensor output is shown on the LCD display via the microcontroller, and the GSM module attached to the microcontroller makes the project controllable through a mobile device. The system can recognize commands from the remote as well as from a mobile device when the user is not within reach of the installation area. With the help of the microcontroller and a switching network of several relays, multiple-source operation is achieved. Since the microcontroller only executes the commands written in its code, the whole operating procedure must be described in the microcontroller's code. An IR sensor is connected to the microcontroller for remote-control operation within the local area. The CPU drives the fans mounted on both sides of the Peltier devices through the motor driver, and the working of the Peltier modules likewise depends on it, as it drives the TEC driver.
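As a rough software illustration of the automatic process in the flowchart of Fig. 3, the following Python sketch mimics a thermostat-style control loop. The sensor reading, set point, hysteresis band and driver functions are hypothetical stand-ins; the actual prototype runs equivalent logic as Arduino Mega firmware:

import random, time

SET_POINT_C = 24.0   # assumed target room temperature
HYSTERESIS_C = 0.5   # assumed dead band around the set point

def read_temperature_c():
    """Hypothetical stand-in for the temperature sensor reading."""
    return 24.0 + random.uniform(-2.0, 2.0)

def set_peltier(on):
    """Hypothetical stand-in for the TEC driver output."""
    print("Peltier", "ON" if on else "OFF")

def set_fans(on):
    """Hypothetical stand-in for the motor driver output."""
    print("Fans", "ON" if on else "OFF")

cooling = False
for _ in range(10):                # a few iterations instead of an endless loop
    t = read_temperature_c()
    if t > SET_POINT_C + HYSTERESIS_C and not cooling:
        cooling = True             # room too warm: start cooling
    elif t < SET_POINT_C - HYSTERESIS_C and cooling:
        cooling = False            # cold enough: stop cooling
    set_peltier(cooling)
    set_fans(cooling)
    time.sleep(0.1)                # loop delay (longer on real hardware)

The hysteresis band prevents the Peltier modules and fans from chattering on and off when the room temperature hovers near the set point.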

An Alternative Approach to Smart Air Conditioner with Multiple Power …

Fig. 3 Flowchart of the whole process

Fig. 4 Block diagram of the proposed system

365

366

Md. R. Habib et al.

Fig. 5 Circuit diagram of the suggested device

The prototype model of the suggested device is illustrated in Fig. 6, whereas the output wave shape is shown in Fig. 7.

4 Discussion

The prototype of the proposed system was developed and observed carefully, and the concept of thermoelectric cooling through Peltier devices was strengthened after completing the project. The implemented system is a prototype model; thus, some minor errors arise that could not be avoided due to shortage of time. For maximum efficiency, four Peltier devices have been used. Short-circuit protection and a circuit breaker have been included for better safety. Because the cooling side of the system can condense moisture from the air and water may accumulate, protection against unwanted water is also provided, so short circuits can be avoided. The prototype model is successful in conditioning the air; thus, the primary objective has been achieved. Multiple sources are included, and the temperature can be finely controlled by the remote; thus, the secondary objective is also achieved. At the very beginning, several major problems arose; among them, power consumption and providing a sufficient temperature difference were the most challenging. Though there were some minor errors, the


Fig. 6 Prototype model of the suggested device

Fig. 7 Output wave shape

project was successfully implemented, and satisfaction was achieved after observing the output. Without making any significant modifications to the arrangements, the proposal was carried out in accordance with the listed procedures. The technique was first studied through theoretical investigation and competitive analysis; the practical refrigeration platform's design and operational execution followed; and the final stage of the primary proposal was the controller. Evaluation and the design of


further enhancements have been performed in order to complete the present project phase. This prototype can be implemented for commercial purposes in the future. With the passing of time, it is possible that the thermoelectric cooling method will entirely take the place of the traditional air-conditioning method. Several improvements are possible in the future, including improvements in airflow and in power consumption.

5 Conclusion

This research shows how cutting-edge technology is used to create an AC unit. Solar energy is used to energize the Peltier components used in the AC system. Upon examining the outcome of the preliminary design, it is believed that such an environmentally beneficial effort may be employed for local heating and cooling purposes. The suggested methodology is put into practice utilizing the proper components covered in the literature, and the outcomes are good. It can be observed that the suggested model can produce a fair amount of cooling. Even though the concept performs effectively in confined spaces, a more adaptable variant could be created for broad use. Air conditioning cannot be viewed as optional nowadays, since it is no longer considered a privilege but rather a daily essential for a healthier lifestyle. However, in light of current climate change, environmental protection cannot be ignored. Although the COP of the system is low, its other advantages outweigh this drawback, and further research can increase the COP. Thus, thermoelectric solar air conditioning is more effective and environmentally favorable than the present mechanical method.

References

1. Kubba S (2017) Impact of energy and atmosphere. In: Handbook of green building design and construction: LEED, BREEAM, and Green Globes, 2nd edn
2. Salleh SF et al (2018) Electricity and cost savings from utilization of highly energy efficient air conditioners in Malaysia. In: IEEE student conference on research and development, Selangor, pp 1–4
3. Ahmed K et al (2019) Modeling of a thermoelectric generator to produce electrical power by utilizing waste heat. In: 2nd international conference on innovation in engineering and technology, Dhaka, pp 1–4
4. Cheng CC, Lee D (2014) Smart sensors enable smart air conditioning control. Sensors 14:11179–11203
5. Sheha MN, Powell KM (2018) Dynamic real-time optimization of air-conditioning systems in residential houses with a battery energy storage under different electricity pricing structures. In: 13th international symposium on process systems engineering, San Diego, pp 2527–2532
6. Feng P et al (2021) Distributed scheduling of multiple ice-storage air conditioning systems. In: 7th international conference on information, cybernetics, and computational social systems, Guangzhou, pp 382–386
7. Chaoyang J, Jianbo C, Cong Z, Jiaxian Z (2011) The development of multifunctional solar assisted air source air-conditioner with hot water machine and its operation analysis. In: International conference on electric technology and civil engineering, Lushan, pp 1680–1683
8. Goyal A, Kozubal E, Woods J, Nofal M, Al-Hallaj S (2021) Design and performance evaluation of a dual-circuit thermal energy storage module for air conditioners. Appl Energy 292:116843
9. Ma K, Zuo Z, Wang W (2023) Design and experimental study of an outdoor portable thermoelectric air-conditioning system. Appl Therm Eng 219:119471
10. Nugroho SA, Granitsas IM, Mathieu JL, Hiskens IA (2022) Aggregate modeling and non-disruptive control of residential air conditioning systems with two-zone cooling capacity. In: American control conference, Atlanta, pp 4668–4675
11. Mona Y, Chaichana C, Rattanamongkhonkun K, Thiangchanta S (2022) Energy harvesting from air conditioners by using a thermoelectric application. Energy Rep 8:456–462
12. Zheng X et al (2016) Benefit analysis of air conditioning systems using multiple energy sources in public buildings. Appl Therm Eng 107:709–718
13. Thongkaew S, Charitkuan C (2018) IoT for energy saving of split-type air conditioner by controlling supply air and area temperature. In: 22nd international computer science and engineering conference, Chiang Mai, pp 1–4
14. Fu G et al (2020) Eco-cost analysis of split air-conditioner using activity-based costing method. IEEE Access 8:54952–54962
15. Chintala R, Winkler J, Jin X (2021) Automated fault detection of residential air-conditioning systems using thermostat drive cycles. Energy Buildings 236:110691

An Analysis on Daily and Sports Activities' Recognition Using Data Analytics and Artificial Intelligence Model
S. Maheswari and V. Radha
Department of Computer Science, Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore, India
e-mail: [email protected]

Abstract The understanding of collective tactical behavior has become an essential component of sports data analysis. Making use of ever-increasing quantities of complex information requires data analysis methods that are both interactive and automatic. The collection and analysis of sportsperson monitoring data are standard practice in professional team sports. Since a sedentary lifestyle has been found to be one of the primary risk factors for death, classifying the activities that people perform daily, as well as those they perform for sport, has become an important task that may improve the quality of human life. These actions comprise many different types of fundamental events. To construct a human–computer interaction system, it is necessary to identify the action being carried out by the human. A module for recognizing daily and sports activities performs feature extraction on the available sensor data in order to find relevant information. The suggested system centers on the automatic recognition of actions carried out by humans, using raw data collected from Internet of Things wearable sensors. This paper's goal is to analyze the existing systems related to daily and sports activity recognition.
Keywords Artificial intelligence · Daily and sports activities · Data analysis · Internet of Things · Sensors

1 Introduction Many games have undergone significant modernization as a result of the proliferation of new technologies introduced into the realm of sports. Goal-line tracking, player performance tracking, and video-assisted smart referee systems, collectively known as "tracking systems," require mounting many cameras and electromechanical components. Managers and coaches used to struggle to track player performance and game statistics [1].


Fig. 1 Framework of daily and sports activities’ recognition

However, AI and IoT have changed how fans play and enjoy games. By using several sensors to capture real-time data, the Internet of Things (IoT) simplifies data collection, and machine learning methods in AI simplify prediction. This research examines how these technological advances can quantify team performance to predict sports outcomes, analyzing team data using wearable technologies, sensors, tracking systems, and integration technology. Various machine learning systems can then forecast game outcomes more accurately. Figure 1 illustrates the recognition process: the sensors on the left provide the data, while the "activity recognition" chain on the right recognizes daily activities [2]. Currently, neural networks are the most successful technology for activity recognition, and the two methods most frequently employed for this purpose are ML models and DL models. Cloud storage is the best option for storing enormous amounts of data; in this work, Google Drive cloud storage was used to keep the data files.

2 Literature Review Inanc et al. (2018) validated recognition of daily and sports activities using statistics derived from log histograms of one-dimensional local binary patterns, recorded by wearable sensors; extreme learning machines identified the relevant traits. Activity and gender recognition require different feature extraction methods [3]. Mu et al. (2022) employed wearable sensors to automatically recognize three exercises (squatting, jumping jacks, and flicking back) that help the elderly with balance, strength, and endurance.


A wearable sensor and machine-learning-based recognition algorithm checks the workout conditions. A 100 Hz three-axis accelerometer sampled ten people nine times at four body locations (chest, wrist, waist, and ankle), with testing taking 20 s per workout. In a comparison study, decision tree, KNN, SVM, and Naive Bayes models determined the recognition outcomes; the study indicated that sensor position and classification model affect recognition, and the decision tree was most accurate at 92.52% across all four locations [4]. Hsu and colleagues (2018) reduced inertial signal feature dimensions using nonparametric weighted feature extraction and principal component analysis. Twenty-three lab participants wore a wearable inertial sensor network on their wrists and ankles while undertaking ten domestic tasks and 11 sports to test the network and algorithm. Tenfold cross-validation rates of 98.23% for the ten common domestic activities and 99.55% for the 11 athletic activities verified the proposed wearable inertial sensor network and activity detection algorithm [5]. Tuncer et al. (2020) presented the MK-LDP approach, which collects 2560 features from raw sensor signals, from which RFINCA picks the 512 most significant ones for an SVM-based HAR system. Three tests confirm the MK-LDP and RFINCA-based HAR: gender, activity, and gender-activity classification, with highest accuracy rates of 99.47%, 99.71%, and 99.36%, respectively; MK-LDP-based methods are compared against deep learning and other state-of-the-art methodologies [6]. Barshan et al. (2020) developed techniques for daily and sports activity detection that are invariant to the positioning of wearable motion sensor units: two sensor-based sequence representations allow each unit to be placed at any rigid body part, and these representations reduce activity recognition accuracy only slowly, unlike the reference system. Wearable system preprocessing can easily incorporate the proposed representations [7]. Rabbi et al. (2021) applied machine learning to smartphone motion recognition, analyzing a UCI Repository smartphone sensor dataset with 96.33% accuracy; SVM works well on large datasets but takes a long time to train [8]. Butchi Raju et al. (2022) used IoT and fog computing with cascaded deep learning models to forecast cardiac disease from glucose level, respiration rate, temperature, oxygen level, EMG, EEG, and ECG sensors, with the GSO-CCNN reaching 94% accuracy [9]. Xia et al. (2020) constructed a 95.85% accurate HAR system employing an LSTM-CNN architecture and the WISDM dataset; the approach identifies activities accurately with few model parameters [10]. The smartphone-sensor CT-PCA and SVM algorithms of Chen et al. (2017) reliably recognize human activities at 95%; however, OSVMs employ only a limited part of a dataset and cannot attain satisfactory accuracy for huge datasets [11]. Bashar et al. (2020) developed a 95.79% accurate NCA-based Support Vector Machine approach for smartphone-based human activity recognition with feature selection and a dense neural network. Feature selection simplifies model learning by reducing dimensionality, but the selection step itself takes time; avoiding a slower learning model therefore requires a more efficient training framework [12].
Sangeetha et al. (2021) used an online dataset and extracted the essential variables from the huge dataset using Neighborhood Component Analysis (NCA), covering seven workout types.


This reduces operational costs significantly. Random Forest and K-Nearest Neighbor revealed seven human activity patterns [13]. The best possible solutions are achieved by understanding the problem, analyzing the limitations of existing systems, finding alternate strategies, and defining the expected outcome: list all the possibilities for achieving the target and choose the best solution. The problem of activity classification is essentially a time-series problem. Time-series classification is a form of supervised learning; it may be used for forecasting and for transferring sensor data, and it can make predictions from historical data using analytical techniques.
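To make the time-series framing concrete, the sketch below segments a raw three-axis accelerometer stream into overlapping windows and summarizes each window with simple statistics; the window length, overlap, and synthetic signal are illustrative assumptions, not the configuration of any surveyed study.

```python
# A minimal sketch of framing activity recognition as time-series
# classification: a (samples, axes) stream is sliced into overlapping
# windows, and each window becomes one feature row for a classifier.
import numpy as np

def make_windows(signal: np.ndarray, win: int = 125, step: int = 62) -> np.ndarray:
    """Slice a (samples, axes) stream into overlapping windows."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

def window_features(windows: np.ndarray) -> np.ndarray:
    """Mean and standard deviation per axis -> one feature row per window."""
    return np.hstack([windows.mean(axis=1), windows.std(axis=1)])

rng = np.random.default_rng(0)
stream = rng.normal(size=(1000, 3))        # stand-in for raw x/y/z samples
X = window_features(make_windows(stream))  # shape: (n_windows, 6)
print(X.shape)
```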

3 Challenges in Existing System Computer vision research has long focused on identifying everyday and sporting activities. Background clutter, partial occlusion, and changes in scale, viewpoint, lighting, and appearance make classification from video and still photos challenging. Video surveillance, human–computer interaction, and robotics need activity recognition systems that can characterize human behavior [14]. The main challenges are: • Each technique's data type and amount depend on the algorithm's capacity to handle diverse and/or large data; models struggle to distinguish standing and walking, the most prevalent states. • Due to factors such as cluttered backgrounds, complex camera motion, significant intra-class variation, and difficulties associated with data collection, creating a completely automated activity identification system is not an easy undertaking. One of the most essential aspects to take into consideration is the quality of the raw material from which the information is compiled. A significant number of the datasets currently available for activity recognition were recorded in supervised settings, with participant actors carrying out predetermined tasks. In addition, a few of the datasets do not cover a general topic but rather concentrate on a particular activity or collection of activities, such as sports or straightforward behaviors that are often carried out by a single actor. These constraints make the scenario unrealistic: it does not take real-world circumstances into account and does not satisfy the requirements of an ideal human activity dataset, as described earlier [14]. Despite this, a number of datasets for activity recognition that do take the aforementioned requirements into account have been proposed. Table 2 lists the challenges. The rise of the Internet of Things (IoT) and wearable technologies has increased interest in sensor-based activity detection. This research, preceded by an analysis of various sensor applications, focuses on the automatic detection of people's actions using raw data obtained from Internet of Things wearable sensors, downloaded from the UCI repository.


The Internet of Things generates massive amounts of data with differing modalities, quality, and velocity. Designing a daily life and sports activity recognition system therefore requires intelligent data processing and analysis. This article analyzes machine learning and deep learning strategies for IoT data problems. The work's biggest contribution is a taxonomy of optimization, machine learning, and deep learning algorithms that describes how data are processed to obtain higher-level information.

4 Methods for Analysis Recent breakthroughs in advanced technology have made the routine collection and storage of vast amounts of data much simpler, and these developments can now support decision-making in a variety of applications. Nevertheless, in the majority of countries, there is an initial requirement to collect and consolidate data about persons in digital form; the data obtained are then evaluated for the purpose of detecting activity. For this study, the daily and sports activities dataset is utilized to make predictions about a variety of activities. This work advocates the use of faster pre-trained machine learning models for classifying activities from huge datasets. Because automated systems are so important, computationally intelligent methods such as data analytics and artificial intelligence (ML and DL) are being used to classify such activities [15]. This study examined optimization (feature selection), machine learning (ML), and deep learning (DL) methodologies for recognizing sports activities. The human behaviors examined include sitting, standing, lying on the back and on the right side, climbing and descending stairs, standing still or moving freely in an elevator, walking in a parking lot, walking on a treadmill at 4 km/h, running on a treadmill at 8 km/h, exercising on a stepper, a cross-trainer, and an exercise bike (in horizontal and vertical positions), rowing, jumping, and playing basketball. The experiment's conceptual diagram is shown in Fig. 2 below. The primary objective of this study is to investigate ML and DL approaches, both of which have a huge number of attributes, each of which has the potential to affect the suggested model's overall performance.

4.1 Feature Selection Algorithms By removing irrelevant or redundant features, a feature selection technique can reach an acceptable classification accuracy. Feature selection is vital to classification problems because it improves accuracy and convergence. These optimizers select the best characteristics from a dataset [16].


Fig. 2 Framework for proposed methodology

Metaheuristic algorithms use problem-independent methods despite having diverse internal processes. They are not greedy, and thus they can explore the solution space until the convergence condition is met to find the best solutions; their hyperparameters, such as the number of iterations, hidden layers, and neurons and the learning rate, must nevertheless be tuned. • The Genetic Algorithm (GA) finds the best possible solutions to problems by modeling the process of natural evolutionary selection, operating on every member of a population. • Particle swarm optimization (PSO) is another form of population-based optimization, developed by mimicking the collective behavior of birds in order to find the best solution. • Comparable to GA, the differential evolution (DE) algorithm is utilized in hybrid models for conducting global searches for optimal solutions. • Other metaheuristic algorithms, including cuckoo search (CS), the bat algorithm (BA), and the grey wolf optimizer (GWO), have also been widely used in recent years to improve prediction accuracy. Metaheuristic algorithms such as PSOGSA and the modified cuckoo search and differential evolution algorithm (MCSDE) can optimize parameters according to the hybrid model's complexity. DE's global optimization aids the hybrid method by strategically avoiding local optima; if fused with a DE-based algorithm, feature selection algorithms can achieve global optimization in the search space, narrowing the search boundaries. The selected features, as in Fig. 3, are converted into a form of input that is more dependable and appropriate for the classifier to use in categorizing the numerous feature classes. In this research, we developed chaotic-map-based GWO optimal characteristics for machine learning algorithms to categorize human actions; meanwhile, the accuracy of the recommended method has been evaluated against the PSO, GWO, and Whale algorithms [16]. Feature selection is a technique for reducing the input variables to an ML model by selecting only relevant data.

Fig. 3 Feature selection system (input features → feature selection → performance evaluation; results are compared using accuracy to assess the best feature selection method with ML algorithms)

Automatically selecting the important features for an ML model with a metaheuristic algorithm is the approach that leads to the best results.
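As one concrete (and deliberately simple) instance of such a metaheuristic wrapper, the sketch below evolves binary feature masks with a genetic algorithm around a kNN classifier; the dataset, population size, and mutation rate are illustrative assumptions, not the chaotic-GWO configuration discussed above.

```python
# A minimal sketch of genetic-algorithm feature selection: each individual
# is a boolean mask over the columns, and fitness is cross-validated
# accuracy of a kNN classifier restricted to the selected columns.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in tabular dataset
rng = np.random.default_rng(42)
n_feat, pop_size, gens = X.shape[1], 20, 15

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((pop_size, n_feat)) < 0.5              # random bit masks
for _ in range(gens):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep the fittest half
    cut = rng.integers(1, n_feat, size=pop_size // 2)   # one-point crossover
    kids = np.array([np.r_[parents[i % len(parents)][:c],
                           parents[(i + 1) % len(parents)][c:]]
                     for i, c in enumerate(cut)])
    kids ^= rng.random(kids.shape) < 0.02               # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", best.sum(), "accuracy:", fitness(best).round(3))
```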

5 Machine Learning-Based Approaches ML techniques can be broken down into three categories, all of which can be utilized for classification tasks: unsupervised learning, semi-supervised learning, and supervised learning. Classical classification methods are still commonly used in practice: although deep learning is a potentially fruitful field of study, standard machine learning approaches regularly beat deep learning techniques on smaller datasets. Examples of traditional supervised machine learning algorithms include Support Vector Machines (SVMs), Naive Bayes (NB), Logistic Regression (LogR), K-Nearest Neighbor (kNN), Random Forest (RF), and Decision Trees (DTs) [17].
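As a concrete illustration of how these six learners can be compared, the sketch below cross-validates each of them with scikit-learn; the digits dataset and the default hyperparameters are illustrative stand-ins, not the setup of any paper surveyed here.

```python
# A minimal sketch comparing the six classical supervised learners named
# above via 5-fold cross-validated accuracy.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)          # stand-in for activity features
models = {
    "LogR": LogisticRegression(max_iter=2000),
    "NB":   GaussianNB(),
    "SVM":  SVC(),
    "DT":   DecisionTreeClassifier(),
    "kNN":  KNeighborsClassifier(),
    "RF":   RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```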

5.1 Logistic Regression This well-known approach for classifying data is a representative of the category known as Generalized Linear Models. When modeling the likelihood of an experiment’s results, Logistic Regression is a useful tool. This algorithm is also known by its alternative name, Maximum Entropy.

5.2 Naive Bayes This efficient classification approach organizes data according to the probabilities involved. Even while processing millions of records, the algorithm maintains its excellent performance.


It organizes the data into categories by employing the relevant probabilities and the Bayes theorem: when using the Naive Bayes model, the class with the highest probability is the predicted class. Naive Bayes is also termed maximum a posteriori estimation. Naive Bayes has both benefits and drawbacks across domains: the technique is fast and highly scalable, it can be used for binary as well as multi-class classification, and it can be applied to relatively small datasets while still producing useful results.

5.3 Support Vector Machine Both classification and regression can benefit from this method, which creates a hyperplane as a means of dividing the classes. The method works exceptionally well for regression, and the impact of the SVM becomes more significant as the number of dimensions increases; SVM still functions effectively even when the number of dimensions exceeds the number of samples. Among its drawbacks, it does not perform well on massive datasets. Cross-validation is used frequently [17] to increase the computational efficiency of SVM models.

5.4 Decision Tree The primary objective of a DT is to produce a tree in a step-by-step manner while dividing the information into a number of distinct subsets. It can manage categorical as well as numerical data. The Gini index and the information gain measure can both be used to determine which attribute is used for the further division of the dataset. When the Gini index is utilized, the decision trees are referred to as Classification and Regression Trees (CARTs); when information gain is utilized, the method is ID3 [17].
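As a worked example of the two split criteria just mentioned, the sketch below computes the Gini index and the information gain of a hypothetical split; the label counts are invented purely for illustration.

```python
# A minimal sketch of the Gini index and information gain for one split.
from collections import Counter
from math import log2

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

parent = ["walk"] * 6 + ["run"] * 4                   # hypothetical node
left, right = ["walk"] * 5 + ["run"], ["walk"] + ["run"] * 3
# Information gain: parent entropy minus the weighted child entropies.
gain = (entropy(parent)
        - (len(left) / 10) * entropy(left)
        - (len(right) / 10) * entropy(right))
print(f"gini(parent)={gini(parent):.3f}  information gain={gain:.3f}")
```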

5.5 K-Nearest Neighbor (KNN) This method is easy to understand and may be utilized in a variety of contexts, including pattern recognition and intrusion detection. The value of k is set initially (e.g., 3 or 4). The distance from the new data point to the existing data points is then calculated, and the K-Nearest Neighbors decide the class of the new data point.


The winning class is determined by majority vote [17].

5.6 Random Forest It is an ensemble of classification and regression techniques based on decision tree models. Within this approach, growing additional trees frequently increases both effectiveness and efficiency. The bootstrap method is used to draw representative samples of data points from the established training set, and a decision tree is built on each sample. Repeating these two steps yields a large number of trees, and each tree casts a vote for a particular data point; the overall classification is the one that receives the majority vote of the decision trees [17].

6 Deep Learning Approach The term "deep learning" refers to a machine learning approach implemented using artificial neural networks (ANNs). A deep learning network is composed of an input layer, a number of hidden layers, and an output layer. Deep Neural Networks (DNNs), Deep Belief Networks (DBNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) networks are some of the common types. Given the prevalence of RNNs for sequence analysis, Long Short-Term Memory (LSTM), an upgraded form of RNN, is often chosen for prediction tasks. RNNs, which are utilized for text and sequence processing, make use of sequential data; they are referred to as recurrent networks because they perform the same operation for each component in a sequence, and the output of the network is determined using the results of preceding computations. RNNs are equipped with a memory that allows them to keep track of the computations they have performed in the past [18].

6.1 Convolutional Neural Network CNNs, also known as convolutional neural networks, use the convolution operation to decrease computation and the number of weights when training with multi-dimensional input. A convolutional neural network (CNN) is a multi-layer network that can process 2D input and minimize the number of necessary parameters to improve the training performance of the backpropagation technique.


CNNs are typically used in conjunction with image recognition tasks; in addition, various kinds of features may be extracted from data that have not been labeled [18].
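A minimal sketch of such a convolutional network applied to windowed sensor data follows; the window shape (125 samples x 3 axes) and the 19-class output are illustrative assumptions, not those of a specific dataset.

```python
# A minimal sketch of a 1D convolutional network for windowed sensor data.
import torch
import torch.nn as nn

class SensorCNN(nn.Module):
    def __init__(self, n_axes: int = 3, n_classes: int = 19):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_axes, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                 # halve the time dimension
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),         # global pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, axes, time)
        return self.classifier(self.features(x).squeeze(-1))

logits = SensorCNN()(torch.randn(8, 3, 125))  # one dummy batch
print(logits.shape)                           # torch.Size([8, 19])
```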

6.2 Recurrent Neural Network (RNN) An RNN is a form of ANN that displays temporal and dynamic properties through directed connections between the nodes of the network. In addition to input and output units, a recurrent neural network (RNN) contains storage components known as hidden units, which are able to recall previously learned information via directional loops; information flows from the input units to the hidden units. This variety of DNN allows both supervised and unsupervised learning to take place [18].

6.3 Deep Belief Networks (DBNs) A DBN, also known as a deep belief network, is a type of probabilistic neural network created by stacking several RBMs to produce multiple hidden layers. The RBMs are trained greedily, layer by layer, using either the Gaussian–Bernoulli strategy (for continuous data) or the Bernoulli–Bernoulli strategy (for binary data). Only the top two layers of the DBN have undirected connections, and the hidden layer of each RBM is linked to the visible layer of the subsequent RBM. DBN training consists of a pre-training phase and a fine-tuning phase: the first stage uses unsupervised, bottom-up learning, while the second stage uses supervised, top-down learning to fine-tune the network's parameters [18].

6.4 Deep Neural Networks (DNNs) "Deep neural network" (DNN) refers to a feedforward neural network with several hidden layers; its construction may incorporate fully connected, max-pooling, and convolutional layers. During the training phase, the parameters of the DNN are adjusted to reduce the misclassification error on the training dataset. One layer of the DNN can be trained at a time during this phase, and an auto-encoder can be trained for every hidden layer.


6.5 Auto-Encoder An auto-encoder is a sort of ANN that may be utilized to extract features and reduce dimensionality through unsupervised learning. It learns a coding of the data while transmitting the data from input to output: an encoder transforms the input into a code, and a decoder reconstructs the input from that code. The code layer of an auto-encoder has smaller dimensions than the other layers, including the input, hidden, and output layers, and the learning process relies on backpropagation.
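The sketch below shows a minimal under-complete auto-encoder in this spirit; the 64-to-8 bottleneck and the random training data are illustrative assumptions.

```python
# A minimal sketch of an under-complete auto-encoder for dimensionality
# reduction: the encoder compresses inputs to an 8-dim code, the decoder
# reconstructs them, and backpropagation minimizes reconstruction error.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())   # compress to a code
decoder = nn.Sequential(nn.Linear(8, 64))              # reconstruct the input
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
x = torch.randn(256, 64)                               # stand-in feature rows

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    opt.step()
codes = encoder(x)                                     # 8-dim learned features
print(codes.shape, float(loss))
```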

6.6 Long Short-Term Memory An LSTM is built from repeating modules, each a combination of four interacting neural network layers, together with memory cells. LSTM units contain a cell, an input gate, an output gate, and a forget gate: the three gates govern the flow of data into and out of the unit, and the cell can retain values over arbitrary time intervals. LSTM can classify, evaluate, and predict time series of arbitrary length, although training even simple models requires considerable time and resources [18].
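A minimal LSTM classifier over raw sensor sequences might look as follows; the hidden size, window shape, and class count are illustrative assumptions.

```python
# A minimal sketch of an LSTM classifier over raw sensor sequences: the
# final hidden state summarizes the sequence and feeds a linear head.
import torch
import torch.nn as nn

class SensorLSTM(nn.Module):
    def __init__(self, n_axes: int = 3, hidden: int = 64, n_classes: int = 19):
        super().__init__()
        self.lstm = nn.LSTM(n_axes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, axes)
        _, (h_n, _) = self.lstm(x)   # h_n: final hidden state per layer
        return self.head(h_n[-1])    # classify from the last hidden state

logits = SensorLSTM()(torch.randn(8, 125, 3))
print(logits.shape)                  # torch.Size([8, 19])
```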

7 Discussion and Analysis Table 1 presents an analysis of the publications that were read for this review. All of the publications examined on human daily activity recognition revealed that different algorithms suit different approaches according to their specific characteristics. An evaluation of the existing systems' performance is shown in Fig. 4 below. The kind of data used, the features extracted, and the expectations about the amount of power saved all play critical roles in selecting the best algorithm. Two of the articles examined dealt with authenticating users by observing either their hand motions or their behavioral patterns. Another publication presented a methodology for picking the most successful machine learning algorithms. The final piece of the puzzle for ensuring the safety of Internet of Things devices involved identifying and characterizing IoT devices that exhibited aberrant behavior. The performance of feature selection and ML models is evaluated through statistical parameters such as accuracy score, precision, recall, and F1-score on IoT sensor datasets [20–24].


Table 1 Examination of the current systems with sensor data

Author and year | Title | Methods | Dataset | Accuracy (%)
Ilisei et al. (2019) | Human-activity recognition with smartphone sensors (OTM Confederated International Conferences) | HMM | Smartphone sensor data | 90
Moreira et al. (2021) | Human activity recognition for indoor localization using smartphone inertial sensors | ConvLSTM | Smartphone sensor data | 73
Mu et al. (2022) | Wearable sensing and physical exercise recognition | DT | Sensor data | 92.5
Voicu et al. (2019) | Human physical activity recognition using smartphone sensors | MLP | Smartphone sensor data | 93
Nakano et al. (2017) | Effect of dynamic feature for human activity recognition using smartphone sensors | CNN | Smartphone sensor data | 98
8 Conclusion This study provides a comprehensive evaluation of recent and cutting-edge research developments in classifying daily and athletic activities. The physical exertion of players is a component of sports, and the behavioral interactions between athletes make it possible to capture data such as the duration, a description of the action, and a count of its occurrences. The emergence of competitive data has thus provided a basis for research in competitive sports and a stage for investigating the laws of human life and human tendencies. The proliferation of big data in sports drives the development of both the Internet and sports themselves, and artificial intelligence and other data analysis tools are becoming increasingly popular for analyzing enormous volumes of sports data. To address the issues posed by current systems, a new idea, known as smart data, is required for transforming raw data into smart data; these challenges can be met via the application of advanced techniques. In addition, this study offers a complete review of the preexisting, freely accessible daily activity and sports activity classifications. In conclusion, this study presents the characteristics of potential future research directions and discusses some unanswered questions about the recognition of daily and sporting activities. We did not propose any new study design categories, because we utilized those that already exist.


Table 2 Challenges in sports activities' classification

Challenge | Description | Solution
Acquisition of data | In sports that receive insufficient funding, adequate data are frequently lacking | Transfer learning from funded fields
Combining the AI and sportsperson communities | Insufficient AI researchers and developers seek sports-related applications | Provide sports data to AI students and researchers to build a network of sports professionals and scientists; sports science students and researchers can employ AI approaches
Ensuring that professionals maintain control at all times | The participants in a sport are the ones who need to maintain control of the decisions | Instead of automating decision-making, provide athletes with interactive plans that AI can help create; sportspeople are needed to test the AI
Describing the capabilities of the AI's results | Many people who work in the sports industry are concerned about the ability of AI systems to explain their outputs | "Explainable AI" could improve AI–sports relations and teach how to face unexpected problems; AI presentation should be grounded in sports science
Predictive models that are not fragile | Most models overfit because they have too many parameters for the available data; the descriptive models formerly used in sports science for data analysis are not generalizable | Predictive AI models that need less data

Fig. 4 Performance analysis of the existing systems (accuracy: HMM 90%, ConvLSTM 73%, DT 92.5%, MLP 93%, CNN 98%)

In experimental research, several researchers did not employ feature selection strategies prior to the classification process. Future work will establish the novelty of the proposed design, using a metaheuristic approach to improve the accuracy of the sports activity predictions.



References
1. Isaac LD, Janani I (2022) Team sports result prediction using machine learning and IoT. In: Chakravarthy VVSSS, Flores-Fuentes W, Bhateja V, Biswal B (eds) Advances in micro-electronics, embedded systems and IoT. Lecture Notes in Electrical Engineering, vol 838. Springer, Singapore
2. İnanç N, Kayri M, Ertuğrul ÖF (2018) Recognition of daily and sports activities. In: 2018 IEEE International Conference on Big Data (Big Data), pp 2216–2220
3. Mu X, Min C-H (2022) Wearable sensing and physical exercise recognition. In: 2022 IEEE World AI IoT Congress (AIIoT), pp 413–417
4. Hsu Y-L, Yang S-C, Chang H-C, Lai H-C (2018) Human daily and sport activity recognition using a wearable inertial sensor network. IEEE Access 6:31715–31728
5. Tuncer T, Ertam F, Dogan S, Subasi A (2020) An automated daily sports activities and gender recognition method based on novel multikernel local diamond pattern using sensor signals. IEEE Trans Instrum Meas 69(12):9441–9448
6. Barshan B, Yurtman A (2020) Classifying daily and sports activities invariantly to the positioning of wearable motion sensor units. IEEE Internet Things J 7(6):4801–4815
7. Rabbi J, Fuad MTH, Awal MA (2021) Human activity analysis and recognition from smartphones using machine learning techniques. [Online]. Available: http://arxiv.org/abs/2103.16490
8. Butchi Raju K, Dara S, Vidyarthi A, Gupta VMNSSVKR, Khan B (2022) Smart heart disease prediction system with IoT and fog computing sectors enabled by cascaded deep learning model. Comput Intell Neurosci 2022: Article ID 1070697
9. Xia K, Tang T, Mao Z, Zhang Z, Qu H, Li H (2020) Wearable smart multimeter equipped with AR glasses based on IoT platform. IEEE Instrum Meas Mag 23(7):40–45
10. Chen Z, Zhu Q, Soh YC, Zhang L (2017) Robust human activity recognition using smartphone sensors via CT-PCA and online SVM. IEEE Trans Industr Inf 13(6):3070–3080
11. Bashar SK, Al Fahim A, Chon KH (2020) Smartphone based human activity recognition with feature selection and dense neural network. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp 5888–5891
12. Sangeetha SK et al (2021) Machine learning-based human activity recognition using neighbourhood component analysis. In: 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), pp 1080–1084
13. Wang H, Zhao J, Li J, Tian L, Tu P, Cao T, An Y, Wang K, Li S (2020) Wearable sensor-based human activity recognition using hybrid deep learning techniques. Security and Communication Networks 2020: Article ID 2132138
14. Gupta S (2021) Deep learning based human activity recognition (HAR) using wearable sensor data. Int J Inf Manag Data Insights 1(2):100046
15. Fan C, Gao F (2021) Enhanced human activity recognition using wearable sensors via a hybrid feature selection method. Sensors 21:6434
16. Dirgová Luptáková I, Kubovčík M, Pospíchal J (2022) Wearable sensor-based human activity recognition with transformer model. Sensors 22:1911
17. Zhang S, Li Y, Zhang S, Shahabi F, Xia S, Deng Y, Alshurafa N (2022) Deep learning in human activity recognition with wearable sensors: a review on advances. Sensors 22:1476
18. Ehatisham-ul-Haq M, Azam MA (2020) Opportunistic sensing for inferring in-the-wild human contexts based on activity pattern recognition using smart computing, vol 106, pp 375–392
19. Shafiq M, Tian Z, Sun Y, Du X, Guizani M (2020) Selection of effective machine learning algorithm and Bot-IoT attacks traffic identification for internet of things in smart city. Futur Gener Comput Syst 107:433–442


20. Ketu S, Mishra P (2020) Performance analysis of machine learning algorithms for IoT-based human activity recognition
21. Sarbishei O (2019) A platform and methodology enabling real-time motion pattern recognition on low-power smart devices. In: 2019 IEEE 5th World Forum on Internet of Things, pp 269–272
22. Subasi A, Radhwan M, Kurdi R, Khateeb K (2018) IoT based mobile healthcare system for human activity recognition. In: 2018 15th Learning and Technology Conference (L&T), pp 29–34
23. Ehatisham-ul-Haq M, Azam M, Naeem U, Amin Y, Loo J (2018) Continuous authentication of smartphone users based on activity pattern recognition using passive mobile sensing. J Netw Comput Appl 109:24–35
24. Buriro A, Crispo B, Zhauniarovich Y (2017) Please hold on: unobtrusive user authentication using smartphone's built-in sensors. In: 2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), New Delhi, pp 1–8

Analysis of the Results of Evaluation in Higher Technological Institutes of Ecuador, Using Data Mining Techniques
Diego Cale, Verónica Chimbo, María Cristina Moreira, and Yeferson Torres Berru

Abstract The Technical and Technological Institutes are recognized as Institutions of Higher Education in Ecuador; they grant third-level degrees and increase the opportunity for academic training in the country. As higher education institutions, they have a firm commitment and responsibility to comply with the quality standards of the Higher Education Quality Assurance Council (CACES). This article proposes, through the use of data mining and unsupervised learning techniques such as hierarchical and K-means algorithms, to analyze the results of the external evaluation process implemented in 2020 for the Higher Technical and Technological Institutes of Ecuador, with the following indicators: Organization, Teaching, Research and Development, Social Projection, Resources and Infrastructure, and Students of the 110 Higher Education Institutes (IES) evaluated. Four groups of institutions were obtained (one group of accredited institutions, two groups of non-accredited institutions, and one mixed group of accredited and non-accredited institutions). From the results obtained, it is possible to demonstrate the impact of the indicators evaluated in the different evaluation criteria and their incidence on institutional development, as well as the commitments that institutions must assume to guarantee the quality of education through this permanent evaluation process and so maintain the accredited category.
Keywords Unsupervised learning · K-means cluster · Hierarchical · Quality of education

D. Cale · V. Chimbo
Instituto Superior Universitario Tecnológico del Azuay, Cuenca, Ecuador
M. C. Moreira · Y. T. Berru (B)
Instituto Superior Tecnológico Sudamericano, Loja, Ecuador
e-mail: [email protected]
Y. T. Berru
Universidad Internacional del Ecuador, Loja, Ecuador


1 Introduction The improvement of the quality of Higher Education Institutions is of utmost importance for providing an education service within reach of social needs, through the implementation of accreditation processes [1], which have been arousing more and more interest around the world. The modern world is abundant with information, and the academic reality tends to change what is already in disuse, modifying it in the best possible way and strengthening what gives efficient results, bringing technological education to the forefront of real, virtual, and current knowledge [2]. Hence the need to implement projects aimed at good management of skills, abilities, and aptitudes in both teachers and students, because academia must now reach a higher level of intellectual enrichment, the decisive factor in solving the problems that concern a society as complex as ours [3]. In Higher Education Institutions, the new generations acquire skills, knowledge, and values that will enable them to make appropriate decisions in their professional and personal lives. Therefore, the content of the values and skills with which the student is equipped must be relevant to the new social needs [4]. In this sense, relevance should be the main criterion when evaluating whether Higher Education Institutions (HEIs) are fulfilling their social function. Over the years, Higher Education Institutions have been concerned about their relevance and responsiveness to society, and social changes have accordingly shaped them. Relevance must be evaluated considering the original characteristics of the institution, its diversity, its different missions and objectives, and its organization. This first analysis makes it possible to link relevance with quality: efforts to improve the quality of higher education, such as accreditation systems, cannot and should not omit the evaluation of its relevance [5]. Quality is the result of a set of actions that respond to specific social needs existing at a very specific time [6]. Therefore, accreditation for the purpose of quality assurance cannot be based on a single, universal model and cannot arise only from theory and abstraction, nor follow market trends alone [7]. In general, accreditation is the external quality review process used in higher education to examine quality assurance and quality improvement in colleges and higher education programs [8]. The process usually results in the granting of recognition (yes or no, a score on a multi-level scale, a combined letter-and-number grading system, an operating license, or a deferred conditional recognition) for a limited period [9]. In Ecuador, the regulation of institutional quality improvement started on October 12, 2010, when Official Gazette No. 298 published the approval of the new Organic Law for Higher Education, which establishes that the National System of Higher Education is directed by the Higher Education Council (CES, by its Spanish acronym), while the National System for Evaluation and Accreditation of Higher Education (SNEAES, by its Spanish acronym) is directed by the Council for Evaluation, Accreditation, and Quality Assurance of Higher Education (CEAACES, by its Spanish acronym).


Both CES and CEAACES, being entities of public law, operate autonomously and independently, maintaining the principles of coordination, co-operation, and harmony, in accordance with the Constitution and the Law, as well as with the requirements of the college community. Ecuador has colleges and polytechnic institutes with nationally recognized academic excellence; however, a significant number of Higher Education Institutions are still in the process of accreditation to achieve the established standards. As shown in Fig. 1, the accreditation process of the IEES by CACES begins with the approval of the 2020 evaluation model, in which the criteria and indicators to be evaluated are specified; then the evidence for each indicator is uploaded to the CACES system. Next, the peer evaluators who will carry out the evaluation and the on-site visit for verification of the evidence are appointed; the preliminary evaluation reports are then prepared, and if there are no appeals, the final report communicating the results obtained by the HEI is sent. The Committee for Evaluation, Accreditation, and Quality Assurance of Higher Education (CEAACES) carried out the institutional evaluation process for 219 Higher Technical and Technological Institutes (ISTT, by its Spanish acronym), with the evaluation performed by expert auditors. From the results obtained in the evaluation, these institutions were categorized as follows: 47 institutes achieved the category of "Accredited," 64 "Accreditation Process Conditioned," 80 "Accreditation Process Strongly Conditioned," and 28 institutes "Non-Accredited." The objective of this work is therefore to apply clustering techniques (K-means and hierarchical unsupervised learning) to data modeling for the analysis of the results obtained from the external evaluation of indicators, criteria, and sub-criteria carried out by CACES, under which the HEIs were evaluated.

2 Related Works A large number of important research papers have contributed to this work, yielding results that have changed and encouraged the creation or transformation of public policies aimed at improving higher education. Among these stands out the research of Martinez Mosco and Vazques [10], which emphasizes the importance of implementing Higher Education Systems responsible for the evaluation process of HEIs, taking as the main concept the social component on which the educational institution is based and to which it owes its existence. It is evident that during the last two decades, Ecuador has had two Constitutions, which reflect different models, not only from the point of view of legal institutions but also in the way of understanding the State, going from a liberal one to a Constitutional one of Rights and Justice, a process in which higher education widely recovers its role in all its main concepts and important functions [11].


Fig. 1 Process of evaluation by CACES experts

Sanchez et al. [5] propose a reflection on the challenges of quality management in Higher Education Institutions in Ecuador, in close relation to the current accreditation model. Quality in Higher Education Institutions is a challenge worldwide and, specifically, in Ecuador. In this sense, there are two complementary ways to achieve the desired quality: on the one hand, external quality assurance, based on evaluation and accreditation models, and on the other hand, quality management within the educational institutions themselves. That paper is framed by the results obtained by the HEIs of Ecuador in the college accreditation process and by the international and national background of this process, and it reviews the evaluation of college quality in countries that have begun implementing standards and indicators to guarantee it. It has been found that there are no noticeable differences between these parameters and those established by the Evaluation Model for Ecuadorian Colleges and Polytechnic Institutions.



3 Materials and Methods In this study, the results obtained from the evaluation for accreditation purposes by CACES are processed, and the impact of each of the indicators evaluated according to the 2020 Evaluation Model for Higher Technical and Technological Institutes in the process of accreditation is analyzed; the model has 21 qualitative indicators and 11 quantitative indicators, and the CRISP-DM [12] data analysis methodology is used. The assessments of the qualitative indicators come from the analysis of the external evaluation committee, aligned with the fundamental elements that make up each qualitative indicator. The evaluations of the quantitative indicators come from a mathematical calculation, based on the validation of variables performed by CACES technicians, according to the formulas established in the evaluation model [13] proposed by the institution in 2020. In general, the indicators are of different natures, for example, the average salary of a full-time professor, the percentage of full-time professors, or strategic planning (which for each HEI takes one of the categories mentioned below). The next step is to make all indicators comparable; this is achieved by assigning to each indicator value a performance scale corresponding to a number between zero and one (or between zero and one hundred, as a percentage), where zero indicates non-compliance with a standard and one indicates total compliance with it. The mathematical function that allows this transformation is, in the context of decision theory, called the utility function, and it is usually also presented graphically. In the case of quantitative indicators, a utility function is defined for each of them; however, for all qualitative indicators, a single utility function has been determined, obtained by assigning a value to each of the four categories considered: Satisfactory = 1, Nearly Satisfactory = 0.7, Slightly Satisfactory = 0.35, and Poor = 0. The weightings of the indicators are not equal; they are expressed through a percentage obtained from the weighting assigned to each indicator in the global evaluation, and therefore the sum of all the weightings must equal 100%. Table 1 shows the overall percentage assessment that the IEES must obtain for each criterion. Table 2 shows the weights of each indicator of the criteria described above, established as WEIGHT A, WEIGHT B, and WEIGHT C; these weights were obtained through a workshop carried out with professors with deep knowledge of technical and technological training. The weights are expressed as percentages, so the sum of all weights must equal 100%. The weighting type to be considered for IEES accreditation depends on the results of the external evaluation process, based on the reality of the system.


Finally, a global score, called the performance index, is obtained for each HEI. This is achieved through the following method: the value assigned as the utility of an indicator (a value between zero and one) is multiplied by the corresponding weighting, and all the values thus obtained are added. To understand the consistency of this method, consider the two extreme cases: if all the indicators have a utility of zero, the global performance will also be zero; on the other hand, if all the indicators reach a utility of one, adding the corresponding products yields the sum of the weightings, which equals 100. Thus, the performance index takes percentage values between zero and one hundred. The Evaluation Model (Framework) for Higher Technical and Technological Institutes in the process of accreditation is composed of six criteria (Organization, Teaching, Research and Development, Social Outreach, Resources and Infrastructure, and Students), 14 sub-criteria, and 32 indicators. A brief description of the dimensions of institutional performance covered by each criterion is shown in Fig. 2.
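A minimal sketch of this calculation, with the utility mapping stated above, is shown below; the three example indicators and their weights are hypothetical, and because only part of the 32 indicators is included, the demo normalizes by the partial weight total before applying the 60% threshold discussed in the Results section.

```python
# A minimal sketch of the performance index: utilities in [0, 1] are
# multiplied by their percentage weightings and summed. With all 32
# indicators the weights total 100 and the index lies in [0, 100].
UTILITY = {"Satisfactory": 1.0, "Nearly Satisfactory": 0.7,
           "Slightly Satisfactory": 0.35, "Poor": 0.0}

def performance_index(indicators):
    """indicators: list of (utility in [0, 1], weight in %) pairs."""
    return sum(u * w for u, w in indicators)

hei = [                                       # hypothetical indicators
    (UTILITY["Satisfactory"], 6.0),           # strategic planning, type A
    (UTILITY["Slightly Satisfactory"], 3.0),  # inter-institutional relations
    (0.82, 4.0),                              # a quantitative indicator
]
# Normalize by the partial weight total, since only 3 of 32 indicators
# appear in this demo.
score = performance_index(hei) / sum(w for _, w in hei) * 100
print(f"performance index: {score:.1f}%, accredited: {score >= 60}")
```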

3.1 Data Sample The data were obtained by collecting the results of the evaluation by CACES, according to the 2020 institutional evaluation model for Higher Technical and Technological Institutes, from the website of resolutions [14] issued on July 29, 2021. The data are structured as follows: province, whether the institute was accredited or not, the type of weighting for the evaluation, criteria, indicators, the weighting of the model, and the weighting obtained by the institution. The data used for the evaluation are available at https://bit.ly/EvaluacionCACESInstitutos, organized according to the evaluation model with its different levels defined by origin, criteria, sub-criteria, and indicators, each with its model weight values as established in Tables 1 and 2.

3.2 Data Preparation The evaluation process of the Institutes of Higher Education was developed with the 2020 evaluation model [9], a fundamental resource for interpreting the evaluation model and implementing data analysis techniques. Since there are different indicators and weightings in the evaluation process, processing techniques were implemented for exploratory data analysis [15], and the Tableau tool was used for visualization.


Fig. 2 Items by model of the Technical and Technological Institutes [14] (evaluation of the ISTT learning environment): 1. Organization (Social Management; Institutional Management); 2. Teaching (Professor Management; Training Process Management; Training and Development; Salaries); 3. Research and Development (R&D) (Implementation and Results; Planning); 4. Social Outreach (Outreach Management); 5. Infrastructure Resources (Basic Infrastructure; Library; Virtual Interaction; Laboratory/Workshops and Practice Areas); 6. Students (Students and Graduates Support)

3.3 Data Modeling Data processing was carried out by implementing two clustering techniques for data modeling. The first is hierarchical clustering with the Ward method and the Euclidean distance, where the distance between two points p and q is defined as the length of the segment joining both points, d(p, q) = √(Σᵢ (pᵢ - qᵢ)²).

Table 1 Assessment by criteria

N | Criterion | Model weight (%)
1 | Organization | 20
2 | Teaching | 33
3 | Investigation and development | 12
4 | Link with society | 6
5 | Resources and Infrastructure | 21
6 | Students | 8
  | Total | 100

Hierarchical clustering combined with descriptive statistics [16] allowed us to determine the ideal number of centers for the K-means clustering. Once the dendrogram was created, the extent to which its structure reflects the original distances between observations was evaluated using the correlation coefficient between the cophenetic distances of the dendrogram (the heights of the nodes) and the original distance matrix. The second technique used is the K-means clustering method [17], which groups the observations into K clusters (centers). Based on the evaluations previously performed, the value K = 4 was determined using the elbow method, with 1000 iterations to ensure the reliability of the process.
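A minimal sketch of this modeling pipeline might look as follows; random data stand in for the institute-by-indicator matrix (the 110 x 32 shape mirrors the evaluated institutes and indicators, while everything else is an illustrative assumption).

```python
# A minimal sketch of the described pipeline: Ward-linkage hierarchical
# clustering, cophenetic correlation against the original distances, and
# K-means with the elbow method leading to K = 4.
import numpy as np
from scipy.cluster.hierarchy import cophenet, linkage
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = rng.random((110, 32))                 # 110 institutes x 32 indicators

Z = linkage(X, method="ward")             # dendrogram construction
coph_corr, _ = cophenet(Z, pdist(X))      # fidelity of the dendrogram
print(f"cophenetic correlation: {coph_corr:.3f}")

inertias = []                             # elbow method over candidate K
for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, max_iter=1000, random_state=7).fit(X)
    inertias.append((k, km.inertia_))
print(inertias)

labels = KMeans(n_clusters=4, n_init=10, max_iter=1000,
                random_state=7).fit_predict(X)
print(np.bincount(labels))                # institutes per cluster
```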

4 Results Through Resolution No. 036-SO-09-CACES-202 of July 19, 2021, the Regulations for determining the results of the evaluation process of Higher Technical and Technological Institutes in the accreditation process were approved. Under Article 8 of the regulation, if the result of the external evaluation process is greater than or equal to 60%, CACES determines that the Higher Technical or Technological Institute qualifies as accredited and provides for the continuity of its operation, with the accreditation remaining valid for three years from the notification of the results. Otherwise, Article 9 on non-accreditation applies: if the external evaluation process yields a value of less than 60%, CACES determines that the Higher Technical or Technological Institute qualifies as non-accredited and orders the formulation and implementation of an improvement plan of up to three years, accompanied by this organization, after which a new external evaluation is carried out. As can be seen in the result of the clustering application, the bar chart in Fig. 3 shows the percentage scores of accredited and non-accredited institutes by criterion: in most cases, the average of accredited institutes is 50% higher than that of non-accredited ones, except for the infrastructure criterion, whose values are biased by the nature of the premises, whether self-managed or donated.


Table 2 Weightings of indicators

N | Indicator | Type A (%) | Type B (%) | Type C (%)
1 | Strategic and operational planning | 6 | 6 | 6
2 | Inter-institutional relations for development | 3 | 3 | 3
3 | Affirmative action | 4 | 4 | 4
4 | Gender equity | 4 | 4 | 4
5 | Accountability | 3 | 3 | 3
6 | Internships | 4 | 4.30 | 4
7 | Curricular follow-up and update | 2 | 2.30 | 2
8 | Workload for FT professors | 3 | 3.30 | 3
9 | Professor selection | 2.80 | 3.10 | 2.80
10 | Professor evaluation | 2.80 | 3.10 | 2.80
11 | Teaching training affinity | 3.50 | 3.80 | 3.50
12 | Professional practice of PT professors | 2.40 | 2.70 | 2.40
13 | Average monthly salary of PT and FT professors | 3 | 3.30 | 3
14 | Average hourly wage of PT professors | 3 | 0 | 3
15 | Graduate education | 4 | 4.30 | 4
16 | Professional development | 2.50 | 2.80 | 2.50
17 | Research and development planning | 5 | 5 | 5
18 | Research or development projects | 3 | 3 | 3
19 | Publications | 4 | 4 | 4
20 | Outreach planning | 2 | 2 | 2
21 | Outreach execution | 4 | 4 | 4
22 | Library | 2.50 | 2.50 | 2.60
23 | Teaching positions for FT professors | 2.50 | 2.50 | 2.60
24 | Classrooms | 2.50 | 2.50 | 2.60
25 | Safety | 2 | 2 | 2.10
26 | Basic welfare conditions | 2 | 2 | 2.10
27 | Functionality 1 and sufficiency 1 | 1.50 | 1.50 | 0
28 | Functionality 2 and sufficiency 2 | 3.50 | 3.50 | 3.90
29 | Bandwidth | 1.50 | 1.50 | 2
30 | Virtual environment | 3 | 3 | 3.10
31 | Student support | 4 | 4 | 4
32 | Alumni's following | 4 | 4 | 4


Fig. 3 Scores by criterion

The data of the evaluated HEIs have also been organized by province, as shown in Fig. 4, with a total of 110 institutes. Among the highest percentages, the province of Pichincha has 22.73% of the evaluated institutes (25 institutions), Guayas has 11.82% (13 institutions), and the province of Chimborazo has 9.09% (10 institutions). It should be noted that in the eastern and border provinces the number drops drastically, which may indicate a certain location bias, since the provinces with the greatest economic impact have higher percentages. Figure 5 shows the distribution of the clusters over the institutes: Cluster 1 with 34 institutes, Cluster 2 with 19, Cluster 3 with 19, and Cluster 4 with 38. Each of these clusters represents a group of institutes that share more similarities among themselves according to the evaluation results and indicators, which suggests that the methodology proposed by the researchers allows focusing the evaluation according to the characteristics described above. Figure 6 shows the number of accredited institutes by cluster: in Cluster 1, 50% of the institutes were accredited; in Clusters 2 and 3, there are no accredited institutes (0%); and in Cluster 4, 95% of the institutes were accredited. This indicates that, depending on an institute's own characteristics and the results obtained on certain indicators, its chance of accreditation can rise or fall drastically, as in the two clusters where 100% of the institutions were not accredited.


Fig. 4 Institutes by province [14]

The results obtained per criterion should be evaluated with respect to the assigned clusters. Figure 7 shows the scores obtained (management, teaching, graduate follow-up, infrastructure, research, and outreach), highlighting close-to-zero values for Cluster 3, low values for Cluster 2, medium values for Cluster 1, and high values for Cluster 4. Cluster 1 has mid-range values in management, education, graduates, and research, as well as high values in infrastructure. Cluster 2, on the other hand, has average values in administration, teaching, and infrastructure and exceptionally low values in the other indicators. In Cluster 3, the institutes do not meet any of the aspects evaluated, because their numbers on all indicators are extremely low. Cluster 4 groups institutes where most of the indicators are met, with research being the only indicator still to be improved. All of this is shown in Table 3.
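Under the same assumptions, the per-cluster criterion means of Table 3 and the accreditation rates of Fig. 6 are simple aggregations over the clustered dataframe from the sketch above:

```python
# Continues the k-means sketch: average each criterion within each cluster
# (the layout Table 3 reports) and compute the share of accredited institutes
# per cluster (cf. Fig. 6). The boolean `accredited` column is an assumption.
print(df.groupby("cluster")[CRITERIA].mean().round(2))
print(df.groupby("cluster")["accredited"].mean().mul(100).round(1))
```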


Fig. 5 K-means cluster

Fig. 6 Number of accredited institutes by cluster



Fig. 7 Scores by cluster and criteria

Table 3 Distribution of the four general clusters with respect to the acceptance criteria

Cluster | Management | Teaching | Graduates | Infrastructure | Research | Outreach
Cluster 1 | 3.41 | 7.3 | 1.58 | 5.2 | 1.5 | 2.39
Cluster 2 | 1.33 | 2.28 | 0.55 | 2.57 | 0.51 | 0.8
Cluster 3 | 0.15 | 0.24 | 0.02 | 0.71 | 0.00 | 0.07
Cluster 4 | 5.6 | 9.10 | 2.15 | 6.25 | 3.21 | 4.85

5 Conclusions

In the context of this research, it can be determined that quality standards are an intrinsic reference of the Higher Education System, promoting the qualities and characteristics that every institution must possess for quality operation in the national context and based on international references. In this way, HEIs are active entities in national development. The Evaluation Model for Higher Technical and Technological Institutes in the process of accreditation is composed of six criteria (Organization, Teaching, Research and Development, Social Outreach, Resources and Infrastructure, and Students), 14 sub-criteria, and 32 indicators, thus allowing the performance of each institution to be measured. It is also important to point out that there is a certain bias in the results due to the realities of each HEI at the time of evaluation, in terms of facilities or infrastructure, self-management initiatives, among others.


By applying the clustering technique to the evaluation results, it is possible to differentiate, through segmentation, that the percentage of accredited HEIs is 50% higher than that of the non-accredited ones, except for the infrastructure criterion, showing that becoming an accredited institution requires strict compliance with each of the criteria mentioned in this study. It can be concluded that the process of evaluation of HEIs in Ecuador has promoted educational excellence, thus presenting a new training alternative in higher education; however, these are resurgent entities that need to be assisted with resources, infrastructure, and budget, according to the analysis of the results and evidence obtained. The evaluation process is a quality measure that benefits higher education; nevertheless, it is suggested that the evaluation model be adapted to the different realities of each HEI. It is also suggested that public policies for sustainable development in education be proposed, and that the contribution of science, technology, and innovation that the Technical and Technological Institutes make to the productive matrix of Ecuador be supported. These institutions ought to gradually foster a culture of ongoing internal assessment to ensure adherence to quality standards, alongside a steadfast dedication to training that aligns with the requirements of society at large and, notably, the rigorous demands of the job market. This should be achieved through the effective execution of key functions such as community engagement, education, and social and cultural scientific research.

References

1. Keiser JC, Lawrenz F, Appleton J. Technical education curriculum assessment
2. Levin M, McKechnie T, Khalid S, Grantcharov TP, Goldenberg M (2019) Automated methods of technical skill assessment in surgery: a systematic review
3. Castro Jaén A (2018) Calidad en la educación superior. Caso Ecuador
4. Lange SW (2020) Examining career and technical education practitioner preparation and professional development needs
5. Sánchez LJ, Chávez J, Mendoza CJ (2018) La calidad en la educación superior: una mirada al proceso de evaluación y acreditación de universidades del Ecuador
6. Dobozy E (2017) The pre-designed lesson: teaching with transdisciplinary pedagogical templates (TPTs)
7. Pérez IR: La acreditación de los programas educativos, ¿eleva la calidad de educación?
8. Cruz López Y, Escrigas C, López Segrera F, Sanyal BC, Tres J. La acreditación para la garantía de la calidad y el compromiso social de las universidades: ¿qué está en juego?
9. López YC (2009) La acreditación como mecanismo para la garantía del compromiso social de las universidades
10. Mosco AM, Vázquez P (2012) La importancia de la evaluación en las instituciones educativas conforme a la nueva Ley Orgánica de Educación Superior en el Ecuador. RIEE. Revista Iberoamericana de Evaluación Educativa
11. Orozco Inca EE, Jaya Escobar AI, Ramos Azcuy FJ, Guerra Bretaña RM (2020) Retos a la gestión de la calidad en las instituciones de educación superior en Ecuador
12. Wirth R, Hipp J (2000) CRISP-DM: towards a standard process model for data mining


13. CACES (2020) Modelo de evaluación institucional para los institutos superiores técnicos y tecnológicos
14. CACES (2021) Resoluciones 2021
15. Ouenniche J, Perez OJU, Ettouhami A (2019) A new EDAS-based in-sample-out-of-sample classifier for risk-class prediction
16. MacQueen J: Some methods for classification and analysis of multivariate observations
17. Kapil S, Chawla M (2017) Performance evaluation of K-means clustering algorithm with various distance metrics

Appointment Scheduler for VIP Using Python and Twilio N. Hari Krishna, V. Sai Hari Krishna, Shaik Nazeer Basha, Vasu Deva Polineni, and Akhil Vinjam

Abstract An appointment plays a typical role and acts as a link between many people. Scheduling an appointment is necessary whenever we need to meet a VIP. Nowadays, this scheduling is done through a phone call with a personal assistant, by booking a slot at a reception, etc. In the majority of cases, we are unable to get an appointment or need to wait a long time to have a meeting. So, the project “Appointment Scheduler for VIPs using Python and Twilio” aims at managing the appointments of VIPs. The system is open to the public, who may use it to inquire about appointments and reserve a time slot based on availability. The personal assistant of the VIP will manage the schedule seamlessly, with the ability to accept, reject, or assign another time for clients’ appointments. A message alert will be sent to the client when the appointment has been confirmed. As a result, this paper concentrates on a thorough analysis of the architecture and benefits of online appointment management and scheduling.

Keywords Appointment · Message · Scheduling · Application · Very Important Person (VIP) · Public · Twilio · Python

N. H. Krishna (B) · V. S. H. Krishna · S. N. Basha · V. D. Polineni · A. Vinjam
Department of CSE, KKR & KSR Institute of Technology and Sciences, Guntur, Andhra Pradesh, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_29

1 Introduction

In general, every person needs to schedule an appointment, which needs to be confirmed or verified. In today’s society, a person must visit the office and make an appointment at the reception or submit a letter to the VIP requesting the appointment. If a person has a good reputation, they can call the VIP’s personal assistant, learn the VIP’s availability, and confirm the appointment. The current methods of scheduling an appointment have some disadvantages, such as the request from the person taking a long time to reach the VIP, and the approval taking even longer. The person may need to wait a long time to meet the VIP within


a day, or sometimes, they do not even get a chance at an appointment. The confirmation of scheduled appointments is unknown, and due to the hectic schedule of VIPs, appointments also get canceled. The client is unaware of these circumstances. When it comes to PAs, managing all of the tasks is challenging because they need to keep a schedule by writing it down in a book. So, to overcome all these problems, we developed a public-facing application that mediates between VIPs and the public. The proposed work in this paper is an “Appointment Scheduler for VIP using Python and Twilio” that uses an Android application or web application platform to help users book an appointment with a VIP in the most reliable and easy way. The communication gap between the end users and the VIP is reduced. Users are regularly informed of changes to the real-time status of a scheduled appointment through messages and the user’s dashboard. The personal assistant enables the end user to be informed, scheduled, and rescheduled with ease. With the help of this application, maintaining and scheduling appointments is much easier than with the traditional method of scheduling, which is a manual approach. The application informs the user of the real-time status of the appointment without the intervention or effort of a personal assistant, making it simpler for the personal assistant to keep records. The application is developed using technologies like HTML, CSS, Bootstrap, Python with the Kivy or Flask frameworks, MySQL, and Twilio. The user interface is developed using frontend technologies like HTML, CSS, and Bootstrap. Backend computing is done with the Kivy and Flask frameworks in Python, and MySQL is used for maintaining databases. Twilio is used as a messaging platform to connect users informally. Figure 1 represents a complete overview of the process that takes place at the time of usage. Through the internet, the application is connected to the databases and the backend cloud. We use a messaging platform to send an SMS to the clients about the status of an appointment. MSG91 is a cloud-based messaging platform that allows businesses to send SMS, voice, and email messages to their customers. It is a transactional messaging service that can be used to send notifications, confirmations, reminders, and other types of messages.
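As a rough sketch of how the stack named above could be wired together (the route, table, and credential names here are illustrative assumptions, not the authors' code), a Flask backend over MySQL might expose the user dashboard like this:

```python
# Hedged sketch of the Flask + MySQL backend described above; all names are
# placeholders. Messaging (Twilio) is shown in a later sketch.
from flask import Flask, render_template, session
import mysql.connector  # provided by the mysql-connector-python package

app = Flask(__name__)
app.secret_key = "change-me"  # required for login sessions

def db():
    return mysql.connector.connect(host="localhost", user="scheduler",
                                   password="secret", database="appointments")

@app.route("/dashboard")
def dashboard():
    # List the logged-in user's appointments with their current status.
    cur = db().cursor(dictionary=True)
    cur.execute("SELECT slot_time, status FROM appointments WHERE user_id = %s",
                (session["user_id"],))
    return render_template("dashboard.html", appointments=cur.fetchall())

if __name__ == "__main__":
    app.run(debug=True)
```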

2 Literature Survey

[1] An online web application called Bookazor is used to schedule appointments with salons, event coordinators, household services, etc. The application shows all the categories in sequential order, and users can select any one according to their choice. Based on the user’s location, all the nearby companies are shown. Users can book an appointment or cancel it according to their requirements. Business persons can register their company on the website using “Create Business Profile.” An extra feature, a chatbot, has also been implemented to help customers get immediate answers to their queries. The major drawbacks of this system are that


Fig. 1 Complete overview of the application

every company needs to register on this website; otherwise, it would not be shown. Not every parlor would be busy enough to warrant booking a slot, so it could be a waste of money; providing these kinds of services would not be efficient for household workers, and many cannot handle this application. [2] The smart appointment booking method suggested in this paper gives patients a simple way to choose a doctor’s appointment online. Authentication of patients’ records is provided. Admins can view, edit, or add hospital and doctor details. Doctors can log in and check the requests made by patients; they may approve them or schedule another slot for the patient. An SMS alert is sent to the patients every time a change takes place in an appointment. Future enhancements to this project could provide video conferencing between doctors and patients. In this system, the administrator needs to enter all the details of hospitals and doctors and maintain the complete system. Any changes, such as the addition of a new doctor, must go through the administrator, who serves as an intermediary. Giving a patient the chance to cancel an appointment leads to wasted time for doctors, and this system cannot be used in emergency situations. [3] This study’s objective was to evaluate the advantages and difficulties of using Spring Boot to implement a vaccine registration system. The user can log in, or register if he is a new user. He can book a slot for vaccination in nearby locations and verify an already booked slot. A mail alert is sent to inform users of their appointment. The admin needs to maintain all the information on hospitals and vaccines; he can manage the slots by deleting booked slots and adding or removing hospital details. Keycloak was used for authentication and authorization purposes. The major drawbacks of the system are that a user can create multiple accounts and book multiple slots, there is no proper authentication of users, and the admin needs to contact the hospital to obtain and update details of the available vaccines.


A future enhancement of registering people as volunteers would help vaccination drives run smoothly across different places in the country. [4] This article offers a design paradigm for creating a college-wide integrated system for managing student appointments with professors. The privilege of canceling a schedule is granted to the student, who can also schedule an appointment with a lecturer and join any projects that the lecturer has assigned. Based on their free time, lecturers can accept or reject the appointments of students, and they can give a review at the end of the appointment. An additional feature of adding the scheduled appointment to Google Calendar could be included. Since a student can usually meet a lecturer directly, providing this type of system would not be very useful for every college. [5] Kalaycı and Arslan published a paper discussing the issues customers face by waiting in a queue until their turn comes. They used a dynamic programming algorithm and assortment optimization to solve the issues. Based on the number of employees at the branch, an online ticketing system was made available to customers within selected local branches of the bank. This system helps customers by reducing waiting time, reduces the workload of the branches, and increases customer priority. A few drawbacks of this system are that many of the people who come to banks are old and might not know how to use this type of technology, and not all people take the same amount of time to get their work done; in these cases, allotting a fixed time to a customer would not be possible. [6] According to Tirthkumar and Surajsinh, appointment scheduling for hospitals plays an important role. They developed a web application where users can book their appointments and where all the users’ details are maintained. Using a URL provided by healthcare providers, users can choose a slot and book an appointment with the respective doctor. Without any intervention from the staff, all the details of the patient are retrieved whenever an appointment is booked. Its main flaw is that there are security concerns: maintaining all the data in a web application is not ideal, although the hospital staff is skilled enough to maintain the data and update patient details in the portal. [7] This is a mobile application that users can download on their phones to schedule appointments with the necessary doctors. First-time users must register; otherwise, they can log in using the credentials given when registering. After successfully logging into the application, the user has three departments: appointment, emergency, and blood bank. The patient’s tasks are completed by choosing the doctor’s available time, after which the request is sent to the admin, who reviews the appointment and sets the time for the patient. Users can use GPS to plan a route to the hospital, and the blood bank module is used to locate donors and sign up new donors. The status of the patient’s appointment is updated to indicate that it has been successfully added to the list of appointments. The program makes use of an iPage server, Android 6.0 (Marshmallow) installed packages, Android Studio 2.1.1, and the SDK plug-in JDK.
[8] The system uses a two-stage stochastic mixed-integer linear programming (SMILP) model, a data-driven simulation model, and minimized resources to work


effectively by admitting patients to the clinics while also reducing their direct and indirect wait times. The system uses numerical algorithms for clinic patient flow simulation, call center simulation, data and study design, an optimal weekly scheduling template, computational results, and insights. The scheduling template’s robustness can be estimated using mean-risk performance metrics like Conditional Value-at-Risk (CVaR), which is the system’s future scope. [9] An innovative mathematical model and the Monte Carlo method are used to create a charging station management system for electric vehicles that satisfies the customer’s charge capacity and the transformer capacity, both of which should be constants. The system covers the primary analysis of lithium battery charging, the charging station mathematical model, exception handling, parameter setting, the optimal control algorithm, and simulation analysis. After these, the charging station for electric vehicles is developed to maintain a feasible amount of charging shared between the vehicles and transformers. The system’s flaw is that a cutting-edge mathematical model can be difficult to use when vehicles are damaged or altered. [10] In this system, the main goal is to calculate the approximate probability of a patient missing or attending a registered medical appointment. The system was primarily developed for King Faisal Specialist Hospital and Research Centre (KFSH&RC) using an appointment no-show algorithm, artificial intelligence, and prediction. In the future, research on a multi-modal, multistage machine learning platform will improve the prediction model; this will reduce the financial risk to the hospital. [11] This system uses an SMS scheduler to schedule an appointment with the doctor, providing all the additional information that will help the doctor arrange the patient’s requirements when the patient comes to the hospital. The system’s low-power Raspberry Pi needs a power supply of 5 V and 2 A, and the server automatically updates the patient with the doctor’s available time slots so that appointments can be scheduled. There are two types of appointments, static and dynamic, and the system avoids and resolves conflicts caused by patients. [12] This paper uses a new mathematical model to develop an appointment system, optimizing it with mixed-integer programming, the tabu search algorithm, and a simulation approach. The system first uses the tabu search algorithm to reduce the patient’s waiting time. These mathematical algorithms are used to manage and provide an appointment service that never causes conflicts between appointments; five scenarios are used to improve the test-demand waiting time and study waiting time. [13] The tourism industry has entered the e-commerce era: seventy percent of travel reservations are made through online channels. In 2014, international tourist arrivals reached 1133 million, and related receipts were about US$1245 billion. Two types of booking are available on an online reservation system (ORS): booking without prepayment and booking with prepayment. Booking without prepayment allows the customer to place a booking without making any payment, while a booking contract with prepayment requires the customer to pay the full price for the room when placing the booking.
This paper examines how customers’


utility models are used to investigate their hotel booking decisions and to model the hotel’s profit-maximizing problem. It is assumed that all consumers book one type of room through the online reservation system (ORS) of one hotel. The hotel and the customer are the two sides of a Stackelberg game, where the hotel first designs its contracts and announces the related price(s) based on its anticipation of the customer’s reply, and the customer then places a booking in response to the hotel’s initial offer. Using Stackelberg game analysis, the paper determines the hotel’s optimal pricing policy for two booking contracts with different payment requirements. The optimal strategy depends strongly on the hotel’s variable operational costs, and different types of hotels will employ booking contracts with different prepayment provisions. [14] Book the Room is an application that makes booking a conference room easy. It is developed on Flutter, an open-source UI development kit created by Google; Flutter apps are written in the Dart language. The application provides the user interface to log in and book a conference room, and Firebase is used to link the backend; only authorized users can use the application. In this case, a Flutter application was created in which users book rooms according to the available time slots; the booking details are stored on a server, and an available time slot is checked against the stored data. This app provides an easy way for users to book rooms in advance and can be used by both Android and iOS users. [15] With the rapid changes in the infrastructure industry, many technological advances are being made to make life easier, and developing smarter conference rooms in offices has great scope for innovation and digital transformation. With the help of automation, the authors created a Smart Meeting Room (SMR) architecture to address a number of pressing issues pertaining to the meeting room environment. The smart rooms can interact with users, provide the necessary arrangements according to the user, and communicate directly with other smart rooms. They used various sensors, like temperature sensors, humidity sensors, and PIR (Passive Infrared) motion detectors (thermal sensors commonly used in security alarms, motion detection alarms, and automatic lighting, with a range of about 10 m), together with Ericsson’s APPIoT framework for smooth discovery, attachment, and data sharing between nearby devices. The APPIoT framework operates over Wi-Fi and can be driven from a web or mobile application. This helps rooms communicate with one another, checking the availability or cancelation of any meetings to give an instant booking to other waiting employees, and all the data are stored in the cloud. Some of its features are a secured door-lock system, new meeting requests, an auto-hospitality system, an appliance management system, a smart sanitation system, and a pop-up notification system. [16] Here, the author mentions the frequent problem in hotel booking of not being able to find a suitable hotel, with customers often using hotel booking websites to find the most suitable one. Users compare hotels and their services, price per night, availability of public transport, hotel location, and reviews and ratings based on previous guests’ experiences.
They have taken on the problem of


booking systems not having a tool for planning activities near the hotel or in nearby cities intended for customers. Based on the problems mentioned above, they proposed an expert hotel booking system. The idea of the expert system is to interconnect the hotel with the different places, activities, and events taking place around it and, based on user requirements, suggest suitable hotel services and activities for the customer. The expert system is connected to a simple survey to find the most suitable hotel according to guest preferences, and it uses a database that contains IF-THEN rules for determining which hotel services and nearby activities are more suitable for the customer. [17] Here, the author developed a management platform for hotel rooms’ online booking and guest information (for example, whether the guest is currently occupying the room or has gone out, so that cleaning and food delivery services can be planned). They used the Kendo UI frontend framework and the Spring, Spring MVC, and MyBatis (SSM) framework for the background programs. The staff can use this to quickly complete hotel guest-room management tasks, providing better and faster service for guests. The system has three layers (the user interface layer, the business logic layer, and the data storage layer), and its users are divided into two categories: ordinary users and system administrators. The system consists of the administrator’s user management module, a room management module, a reservation module, a housing management module, and a checkout and consumption management module, and it solves the majority of the current problems. The first goal is to reduce the number of staff, followed by serving consumer groups; this can reduce booking hassle and shorten the entire housing process for both hotel managers and consumers. [18] The author addresses the problem that a meeting room may be underutilized, since the duration for which it is occupied is sometimes not exactly as booked. For example, a meeting may end earlier than expected but still reserve its spot in the scheduling system. Because such a system cannot update in real time, the author implements real-time occupancy detection using ad hoc devices and IoT, with PIR sensor fusion devices and microphones used to determine whether there are people in the meeting room, so that users can check the availability of the meeting room in the web application [19–21].

3 Proposed Work

The architecture in Fig. 2 represents the complete workflow of the system. In this application, users can log in to access the dashboard. In order to access the application, a new user must first register by providing all the required information. Users can check the availability of the VIP and, according to that availability, book an appointment by providing a message that contains the detailed purpose of the meeting. In this application, the VIP’s availability calendar is synced between


Fig. 2 System architecture of the proposed system

the user page and the VIP/PA page. This calendar can be updated whenever a user books a slot, or directly from the VIP page.

A user booking a slot: The “Book Appointment” option is available after the user logs in to the application. When the user chooses this option, he navigates to the booking page. The booking page consists of a calendar that shows the availability of the VIP for each day. Every day is divided into multiple slots, and each slot is represented by one of three colors: red, green, or yellow. Red indicates that the slot is not available and is restricted from booking. Green indicates that the slot is available and can be booked. Yellow indicates that an appointment for that slot is pending approval for another user, but you can still book the slot; if the purpose of your appointment has a higher priority than the other user’s, your appointment could be scheduled. If the PA accepts the appointment request, the slot is automatically updated to red. (A minimal sketch of this slot-status logic is given after the algorithms below.)

Allowing the VIP to update their availability: The availability of the VIP can also be updated by the VIP/PA themselves using the VIP page. The VIP’s complete schedule is maintained by his assistant, who can log in to his dashboard and check all appointment details for clients. He can read the purpose of the meeting and, based on the importance of the client’s message, approve or reject it. He also has the ability to assign some other time if he would like to change the requested slot. As soon as the client’s request is updated by the personal assistant, a message alert is sent to the client regarding the appointment, and it is updated in the user’s dashboard.

Message monitoring using Twilio: Twilio is a cloud-based communications platform that offers a range of voice and messaging services, including SMS and MMS messaging. To monitor and track messages sent through the Twilio platform, the following methods can be used:


1. Message logs: Twilio maintains a detailed log of all messages sent and received through the platform. Users can access these logs to view the status of individual messages, such as whether they were delivered successfully or if there was an error.
2. Twilio’s API: Twilio provides a set of APIs that allow users to interact with the Twilio platform programmatically. The APIs can be used to retrieve message logs and other information about messages, as well as to send messages, make phone calls, and perform other actions.

When using the Twilio platform to send messages or place calls, error logs keep track of any problems in great detail. These logs can be used for various purposes, including:

1. Troubleshooting: Error logs provide detailed information about the cause of an error, which can help users to troubleshoot and resolve issues quickly.
2. Monitoring: Error logs can be used to monitor the performance of the Twilio service and identify any recurring issues that may need to be addressed.
3. Auditing: Error logs can be used to audit the usage of the Twilio service, for example, to track the number of failed messages or calls over a specific period of time.
4. Performance Optimization: Error logs can be used to optimize the performance of the application by identifying bottlenecks or areas of inefficiency in the code or the service.

Overall, Twilio error logs provide valuable information that can be used to improve the reliability and performance of the Twilio service and to ensure that any issues that arise are resolved quickly and efficiently.

Algorithms:

Client

1. The user logs in to the application with their credentials.
2. Checks the availability of VIPs.
3. Enters a text message based on the reason for the appointment.
4. Books an appointment by choosing an available slot.
5. Waits for confirmation.

VIP/PA

1. The VIP or personal assistant logs in to the application with valid credentials.
2. Checks all the clients’ requests for an appointment.
3. Based on the request, the PA can:
   a. Approve the meeting.
   b. Reject the appointment.
   c. Assign some other free time according to the schedule.
4. A message notification is sent to the client, and the scheduled appointment is updated in the user’s and VIP’s dashboards.
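As referenced above, the three-colour slot logic and the PA's priority-based approval can be captured in a few lines. This is a minimal sketch under assumed names; the paper does not publish its data model.

```python
# Illustrative slot-status logic for the booking flow described above.
from enum import Enum

class SlotStatus(Enum):
    GREEN = "available"          # free slot: anyone can book
    YELLOW = "pending_approval"  # requested by someone, still bookable
    RED = "booked"               # approved: restricted from booking

def can_book(status):
    """Green and yellow slots accept new requests; red slots do not."""
    return status in (SlotStatus.GREEN, SlotStatus.YELLOW)

def approve(requests):
    """When several requests compete for a yellow slot, the PA approves the
    highest-priority purpose; the slot then turns red."""
    winner = max(requests, key=lambda r: r["priority"])
    winner["slot_status"] = SlotStatus.RED
    return winner
```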


The VIP might occasionally be tasked with additional urgent work or other crucial meetings; at these times, attending the scheduled appointments would not be possible, and the clients might be unaware of the situation. So, we added a feature where the personal assistant has the ability to alter the approved slots for a meeting. On a busy day for the VIP, the assistant can cancel all the appointments on that day at once by simply providing the reason for cancelation, and a message alert is sent to all the clients regarding the update. Features of the algorithm (a notification sketch follows this list):

1. Scalability: The algorithm can handle a large number of appointments and participants.
2. Efficiency: The algorithm can schedule appointments quickly and with minimal computational resources.
3. Flexibility: The algorithm makes it easy to reschedule appointments and to notify the user about them.
4. User preferences: The algorithm considers user preferences and allows users to schedule appointments based on VIP availability.
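The bulk-cancelation alert mentioned above maps naturally onto Twilio's published Python helper library. The sketch below uses only documented Twilio calls; the credentials, numbers, and appointment structure are placeholders, not the authors' values.

```python
# Hedged sketch: cancel a day's appointments and SMS every affected client,
# then read back delivery status from Twilio's message logs.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # from the Twilio console
AUTH_TOKEN = "your_auth_token"
FROM_NUMBER = "+15017122661"  # a Twilio-provisioned number

client = Client(ACCOUNT_SID, AUTH_TOKEN)

def cancel_day(appointments, reason):
    """Mark every appointment canceled and notify its client by SMS."""
    for appt in appointments:
        appt["status"] = "canceled"
        client.messages.create(
            body=f"Your appointment at {appt['slot_time']} was canceled: {reason}",
            from_=FROM_NUMBER,
            to=appt["client_phone"],
        )

# Delivery can later be audited via the message logs discussed above:
for record in client.messages.list(limit=20):
    print(record.sid, record.status)
```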

4 Results

The outlook of the proposed system is shown through the system interface of the appointment scheduler. It contains a login page, a register page, the user’s dashboard, the VIP’s dashboard, etc. Figure 3 shows the login page of the proposed system. The user can log in with his credentials; a new user can create a profile by clicking the “Sign Up” button. After logging in, the user is navigated to his dashboard. Here, the user gets the schedule-appointment option to book a slot for a meeting with the VIP, and he can check his booked appointment along with its status, as shown in Fig. 4. When a user decides to make an appointment, he is taken to the appointment booking page. The user can select a certain date and see all the available slots on that date. As mentioned earlier, the user can choose a time slot from the green or yellow list and, by briefly stating the meeting’s goals, set up an appointment. The appointment booking page is shown in Fig. 5. As shown in Fig. 6, all the clients’ requests are displayed in a table format. After reading the appointment’s purpose, the VIP/PA can select one of three options: approve the meeting, reject the appointment, or schedule a different time for that person. A message is delivered to the client each time the VIP modifies the approval, as demonstrated in Fig. 7.

Software Specifications:


Fig. 3 Login page

1. Platform support: The application is compatible with various platforms, including web, mobile, and desktop.
2. User interface: The application has a user-friendly interface that is easy for users to navigate and understand.
3. Calendar integration: The VIP’s calendar is integrated with the user and VIP pages to ensure that appointments are synced across the application.
4. Notifications and reminders: The application is able to send notifications and reminders to users via SMS or push notifications to ensure that they do not miss their appointments.
5. Security: The system has robust security measures in place to protect the data and privacy of the VIP.

Fig. 4 Users’ dashboard

Fig. 5 Appointment booking page



Fig. 6 VIP’s dashboard

5 Conclusion

The core reason for establishing a completely online appointment scheduler for VIPs is to serve the public in a fair, convenient, and timely manner. In this paper, we carried out a comprehensive, systematic procedure to enhance the appointment booking process. The application is developed using the Kivy and Flask frameworks of Python, HTML, CSS, Bootstrap, MySQL, and Twilio. The current system of scheduling an appointment is handled entirely through a consultant at an office or over a phone call. This application helps both users and VIPs, as it bridges the gap between them, removing bureaucratic intermediaries and making appointment booking easy. The VIPs can easily schedule, reject, and reschedule appointments. Along with VIPs, higher-level government officials can also profit from using this application. The user-friendly interface is one of the major advantages over previous systems; we developed the application in such a way that even users with little technical literacy can use it with ease. In the future, we plan to include a to-do list to maintain all the day-to-day activities of the VIP and to add multi-language support, including regional languages.


Fig. 7 Message generation

Acknowledgements We would like to express our gratitude to Dr. Venkata Kishore Kumar Rejeti and Asst. Prof. N. Hari Krishna for their support, suggestions, and interest in our work.

References

1. Akshay V, Kumar SA, Alagappan RM, Gnanavel S (2019) BOOKAZOR - an online appointment booking system. In: 2019 international conference on vision towards emerging trends in communication and networking (ViTECoN), pp 1–6. https://doi.org/10.1109/ViTECoN.2019.8899460
2. Sri Gowthem S, Kaliyamurthie KP (2015) Smart appointment reservation system. Int J Innov Res Sci Eng Technol. https://doi.org/10.15680/IJIRSET.2015.0406191
3. Sai VT, Kalyan D, Chaitanya U, Vijaya Lakshmi D (2022) Vaccine registration system integrated with Keycloak. Int J Trendy Res Eng Technol. https://doi.org/10.54473/IJTRET.2022.6501
4. Qaffas A, Barker T (2012) Online appointment management system. https://www.researchgate.net/publication/266171377
5. Kalaycı S, Arslan S (2017) A dynamic programming-based optimization approach for appointment scheduling in banking. In: 2017 international conference on computer science and engineering (UBMK), pp 625–629. https://doi.org/10.1109/UBMK.2017.8093482


6. Doshi SH et al (2022) E-appointment and scheduling system. Int J Recent Sci Res 13(07):1889–1891. https://doi.org/10.24327/ijrsr.2022.1307.0395
7. John SS, Jacob SE, Jacob R, Kurian RP, Salim ST (2018) DOC-ON. Int J Creative Res Thoughts (IJCRT) 6(2):193–197. ISSN 2320-2882. http://www.ijcrt.org/papers/IJCRT1892366.pdf
8. Anvaryazdi SF, Venkatachalam S, Chinnam RB (2020) Appointment scheduling at outpatient clinics using two-stage stochastic programming approach. IEEE Access 8:175297–175305. https://doi.org/10.1109/ACCESS.2020.3025997
9. Liu R, Zong X, Mu X (2017) Electric vehicle charging control system based on the characteristics of charging power. In: 2017 Chinese automation congress (CAC), pp 3836–3840. https://doi.org/10.1109/CAC.2017.8243449
10. Moharram A, Altamimi S, Alshammari R (2021) Data analytics and predictive modeling for appointments no-show at a tertiary care hospital. In: 2021 1st international conference on artificial intelligence and data analytics (CAIDA), pp 275–277. https://doi.org/10.1109/CAIDA51941.2021.9425258
11. Chimaladinne L, Sonti N (2017) Automatic token allocation system through mobile in primary care. In: 2017 international conference on energy, communication, data analytics and soft computing (ICECDS), pp 3836–3839. https://doi.org/10.1109/ICECDS.2017.8390181
12. Ala A, Chen F (2020) An appointment scheduling optimization method in healthcare with simulation approach. In: 2020 IEEE 7th international conference on industrial engineering and applications (ICIEA), pp 833–837. https://doi.org/10.1109/ICIEA49774.2020.9101995
13. Miao ZW, Wei T, Lan YQ (2016) Hotel's online booking segmentation for heterogeneous customers. In: 2016 IEEE international conference on industrial engineering and engineering management (IEEM), pp 1846–1850. https://doi.org/10.1109/IEEM.2016.7798197
14. Praveen et al (2020) Conference room booking application using Flutter. In: 2020 international conference on communication and signal processing (ICCSP), pp 0348–0350. https://doi.org/10.1109/ICCSP48568.2020.9182183
15. Saravanan M, Das A (2017) Smart real-time meeting room. In: 2017 IEEE region 10 symposium (TENSYMP), pp 1–5. https://doi.org/10.1109/TENCONSpring.2017.8070069
16. Walek B, Hosek O, Farana R (2016) Proposal of expert system for hotel booking system. In: 2016 17th international Carpathian control conference (ICCC), pp 804–807. https://doi.org/10.1109/CarpathianCC.2016.7501206
17. Shan-Shan M, Chun S, Jing-Feng X (2018) Design and implementation of hotel room information management system based on Kendo UI front-end framework. In: 2018 4th annual international conference on network and information systems for computers (ICNISC), pp 452–455. https://doi.org/10.1109/ICNISC.2018.00098
18. Tran DL et al (2016) A smart meeting room scheduling and management system with utilization control and ad-hoc support based on real-time occupancy detection. In: 2016 IEEE sixth international conference on communications and electronics (ICCE), pp 186–191. https://doi.org/10.1109/CCE.2016.7562634
19. Rejeti VKK (2021) Effective routing protocol in mobile ad hoc network using individual node energy. Int J Adv Res Eng Technol (IJARET) 12(2):445–453. ISSN Print 0976-6480, ISSN Online 0976-6499. https://doi.org/10.34218/IJARET.12.2.2020.042
20. Prasanna Kumar B (2021) Identification of piracy videos using accelerated KAZE features. Int J Anal Exp Modal Anal XIII(VI). ISSN 0886-9367
21. Rejeti VKK (2021) Distributed denial of service attack prevention from traffic flow for network performance enhancement. In: Proceedings of the second international conference on smart electronics and communication (ICOSEC), IEEE. ISBN 978-1-6654-3367-9

Artificial Intelligence Application for Healthcare Industry: Cases of Developed and Emerging Markets Olga Shvetsova, Mohammed Feroz, Sergey Salkutsan, and Aleksei Efimov

Abstract This paper discusses the trends, perspectives, advantages, and risks of artificial intelligence (AI) applications as a fast-developing trend in healthcare industries. A comparative case study of developed and emerging markets is chosen as the main research method in this study due to the high market demand and government support. The authors analyse external and internal factors of effective AI implementation in those markets and discuss problems and future directions. As a result, the authors develop some recommendations for entrepreneurs on how to choose and manage a successful AI tool and apply it in healthcare industries. The results of this research could be useful for global entrepreneurs, practitioners, and market research specialists.

Keywords Artificial intelligence (AI) · Case study · Healthcare industry · Developed market · Emerging market

O. Shvetsova
Department of Industrial Management, Korea University of Technology and Education, Cheonan, South Korea
e-mail: [email protected]

M. Feroz (B)
Vignan’s Foundation for Science, Technology and Research, Vadlamudi, Guntur, AP, India
e-mail: [email protected]

S. Salkutsan · A. Efimov
Peter the Great St. Petersburg Polytechnic University, St Petersburg, Russia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_30

1 Introduction

Recent technological and connectivity advancements have resulted in the growth of artificial intelligence (AI) applications across a variety of industries. Through a range of industrial, consumer, and public sector implications, AI is projected to offer value to all businesses. The concept and advancement of computer systems capable of doing tasks that normally require human intellect, such as visual perception, speech recognition, decision-making, and language translation, is known as artificial


intelligence. Artificial intelligence is assisting in all areas of the healthcare industry. In healthcare, AI capabilities are actively exploited, particularly in diagnosis, therapy, healthcare administration, and drug discovery. Healthcare is more conservative than other industries because the cost of error is significant. AI is expected to help increase access to quality healthcare, reduce errors, and improve medical worker training. AI is the study of teaching computers to perform tasks that humans currently do more successfully. A critical artificial intelligence concern in the industry is the patient research technique for diagnosis, treatment, and recovery. Telemedicine, robots (specifically robotic surgeons), and high-tech treatments are among the fastest-growing applications of AI in healthcare. Trends and mental health can be tracked using machine learning; it is simple to use AI to evaluate hundreds of online actions to uncover issues of suicidality and loneliness. The necessary infrastructure has undergone significant improvement and will continue to advance in the near future. By 2026, telehealth is anticipated to reach a value of $185.6 billion. Although most cloud storage systems are quite safe, they may not always adhere to regulatory requirements regarding protected health information (Fig. 1) [1, p. 94]. Artificial intelligence has gained a lot of attention as a practical technology in 2022 across several industries, particularly healthcare. Many different industries have seen the success of artificial intelligence, which continues to fascinate top-tier corporations [2, p. 23]. Even though there are many instances of AI benefiting many industries, it appears to still be feared (or at least disputed) in the healthcare industry [3, p. 23]. Patients might be open to AI being used in healthcare services, but there are concerns about data privacy and machine mistakes on the one hand. Another significant

Fig. 1 Percentage of AI used in healthcare (Source Davenport [1])


issue people now face is the lengthy wait in the lobby to see a doctor and receive counsel [4, pp. 262–267]. To foster trust, patients should be able to participate in the processes. These days, it is crucial to understand that going to the doctor does not just mean being sick; it also means being offered options that could help prevent illnesses entirely. Although technology is improving daily, very few patients choose to have robotic procedures because they are seen as high risk [5, p. 78]. The use of miniature surgical instruments and the ability to replace large incisions with a series of quarter-inch incisions make robotic procedures minimally invasive in every case. Utilising this invention may result in a 21% reduction in the amount of time a patient spends in the hospital after surgery [6, p. 25]. Once more, this can be a preferable choice for patients as well as hospital administrations [7, p. 23]. So, this paper addresses two research questions:

1. How does AI technology help the healthcare industry to develop treatment?
2. What are the strategies of AI implementation in developed and emerging markets?

As a consequence of the rapid development of new technology, new solutions will definitely emerge, while existing ones will probably improve. AI in healthcare offers prospects in a wide range of fields, including diagnostics, wearable technology, virtual assistants, and wellness management, as it goes beyond simply analysing medical information [8]. AI can sense, comprehend, and act; thus, it can assist people in both clinical and administrative tasks.

2 How Does AI Help Healthcare Industries

Medical decision-support instruments help clinical practitioners employ medical tools and software to carry out their work [9]. Robotic surgery, diagnostic imaging, treatment outcome prediction, and remote patient monitoring all make use of artificial intelligence. Tools that support patient decision-making are often the medical equipment and software that patients or caregivers utilise directly [10]. Examples include chatbots or other online self-diagnosis tools, as well as lifestyle products like fitness trackers. The main purpose of AI utilised in therapeutics development is to find novel medications and cures. Hence, these are the main AI application types being considered in today's fast-changing world.

1. Support in Clinical Decisions

Clinical decision support, commonly known as CDS, is an important area in which artificial intelligence may be beneficial, assisting physicians in gathering, analysing, and drawing conclusions from enormous volumes of patient data in order to make the best therapeutic judgements. As a consequence of advancements in medical image analysis, such as computer-aided diagnosis, and data processing, such as machine learning, our understanding of disease processes and management has grown [11]. Using natural language processing (NLP), doctors may more readily extract all pertinent information from patient reports [12, p. 51]. Massive volumes of data


may be stored and analysed by artificial intelligence, which can then be utilised to build knowledge bases, make patient evaluation and guidance easier, and ultimately improve clinical decision support [13].

2. Chatbots Can Improve Primary Care and Triage

Medical chatbots are conversational systems powered by AI that make it simple for patients, insurance providers, and medical staff to communicate with one another. These bots can also aid in providing pertinent healthcare information to the appropriate parties at the appropriate time. Chatbots for medicine or healthcare may be used for a number of purposes, including enhancing patient experiences, supporting medical staff, enhancing healthcare processes, and gaining insightful information. As one of the most developed and important AI-powered healthcare technologies to date, medical chatbots have the ability to completely change how payers, healthcare professionals, and patients interact with one another.

3. Surgical Robots

Robotic surgery is improving patient outcomes and physician performance in less invasive procedures. What was formerly a subjective activity is now becoming precisely quantifiable motion sequences thanks to this development. Autonomous robotic surgery may increase medical productivity, safety, and dependability. Research has found that autonomous anastomosis faces challenges with regard to intricate imaging, tissue tracking, and surgical planning.

4. Virtual Nursing Assistants

Virtual healthcare staff may employ AI technology to do everything from conversing with patients to routing them to the most appropriate and efficient care environment. These digital healthcare staff may check patients, react to requests, and give emergency assistance 24 hours a day, seven days a week. In order to reduce unneeded hospital stays, several AI-powered virtual nursing assistant systems now allow more frequent communication between patients and carers in between office visits. The first virtual nursing assistant in the world, Care Angel, can even do health examinations using AI and voice.

5. Aiding in Accurate Diagnosis

In terms of effectively detecting, anticipating, and diagnosing diseases, AI has the potential to outperform human doctors. Likewise, AI algorithms have been found not only to be accurate and precise in specialty-level diagnosis, but also to be cost-effective in detecting diabetic retinopathy. PathAI, for example, is working on machine learning algorithms that will assist physicians in providing more efficient treatment. The company's current objectives include lowering cancer diagnostic inaccuracy and creating ways for personalised medical therapy. Buoy Health is a symptom and cure checker powered by AI that employs algorithms to identify and treat sickness. It works like this: a chatbot listens to a patient's symptoms and health


problems, then directs the patient to the appropriate care based on its diagnosis. For example, the Indian SigTuple system performs all analyses normally conducted on blood samples, including differential counts and red cell and platelet morphology analysis.

6. Minimising the Burden of EHR

EHR deployment has presented a variety of difficulties, including cognitive overload, endless paperwork, and user burnout. EHRs have been a significant contributor to the digitalization of the healthcare business. AI is now being used by EHR developers to provide more user-friendly interfaces and to automate a few monotonous tasks that take up a lot of the users' time. Although dictation and speech recognition programmes help to improve clinical documentation, natural language processing (NLP) strategies might not be as effective. AI may also assist with routine inbox requests, such as reminders for medication refills. Additionally, it can aid in prioritising tasks that call for the clinician's attention, making it simpler for users to manage their to-do lists. AI technology not only gives more insight into patients' needs but also helps develop therapist techniques and training for mental health. For example, AI has improved mental health therapy in several ways: keeping therapy standards high with quality control, refining diagnosis and assigning the right therapist, and monitoring patient progress and altering treatment where necessary. AI pattern recognition using neural networks is currently the most popular method for pattern detection. Neural networks are based on parallel subunits, referred to as neurons, that simulate human decision-making. The nature of the implementation of AI could mean that such corporations, clinics, and public bodies will have a greater-than-typical role in obtaining, utilising, and protecting patient health information. This raises privacy issues relating to implementation and data security.

3 AI Healthcare Industry Applications in Different Countries

3.1 Case Study of India

There are several sectors that make up the Indian healthcare industry (Fig. 2). Following an examination of businesses creating AI-based health products, health professionals utilising AI, and researchers exploring the possibilities of AI and health, it was discovered that AI is used in many different ways across the various categories, including:

1. Hospitals

Healthcare organisations, mid-tier and upper-tier private hospitals and clinics, as well as government entities such as healthcare centres, district hospitals, and general hospitals, are included. AI, both descriptive and predictive, is being used in hospitals


Fig. 2 Top AI Indian companies and start-ups in healthcare industry (Source Iliashenko [14])

in India to help with a variety of activities. For example, the Manipal Group of Hospitals and IBM's Watson for Oncology have collaborated to help physicians diagnose and treat seven different forms of cancer. Artificial intelligence (AI) is used in this case to analyse data, gather evidence, and improve the report's quality, which increases patient confidence. Patients must express their consent while being fully informed about the procedure. Watson has recently faced criticism from doctors all around the world, however, for purportedly acting as "a mechanical turk", a human-driven engine disguising itself as artificial intelligence. Aravind Eye Care Systems formerly assisted Google in the development of its retinal screening system by providing photos to train its image parsing algorithms. Currently, Aravind Eye Care Systems is collaborating with Google Brain; after completing successful clinical studies to identify early indicators of diabetes-related eye disease, it is now aiming to implement the system in normal patient care. Private healthcare organisations including Fortis Healthcare, Apollo Hospitals, L V Prasad Eye Institute (LVPEI), Narayana Health, and Max Healthcare leverage tools like Microsoft Azure, machine learning, data analytics, CRM Online, and Office 365 to enhance patient care.

2. Pharmaceuticals

The pharmaceutical industry involves various processes, such as the manufacture, extraction, processing, purification, and packaging of chemical substances for use as human or animal medications. In India, pharmaceutical companies are using descriptive and predictive AI in their operations and are also developing and testing prescriptive AI prototypes. The most common application of AI in the pharmaceutical industry is drug discovery, where it is used to perform tasks that would otherwise be impossible for even a team of people to complete manually, like searching through all the literature on a specific molecule for a drug (targeted molecule discovery). Abbott Healthcare has utilised India as a trial ground for cutting-edge technological

Artificial Intelligence Application for Healthcare Industry: Cases …

425

breakthroughs including heart and liver applications and vertigo exercises (which use augmented and virtual reality). Pharmarack is a software-as-a-service (SaaS) program that uses AI to automate the administration of the pharmaceutical supply chain. 3. Diagnostics These are establishments and labs that provide analytical or diagnostic services. India is home to start-up businesses that specialise in using AI to diagnose disease in addition to larger corporations like Google and IBM. It indicates that diagnostics in India are using descriptive and predictive AI based on an evaluation of the solutions selected. For instance, Orbuculum utilises AI to forecast diseases including cancer, diabetes, neurological disorders, and cardiovascular diseases using genetic data, while Qure.ai employs deep learning technology to help diagnose disease and offer personalised treatment options from healthcare imaging data. Companies are tackling this problem by utilising technology to aid with mental health difficulties, typically in the form of chatbots that provide counselling while keeping privacy. AI is being used in India to improve mental wellness with chatbots like Wysa. Chatbots are also seen as a non-judgmental interface that is more sympathetic to patients’ issues, which may encourage people to speak openly about their concerns. For example, chatbots have provided a forum for displaced workers in India’s IT sector to express their worries about potential job losses in the current turmoil facing the industry [14, p. 25]. It sounds like Wysa and Woebot are both chatbots that use machine learning and data from smartphone sensors to help individuals manage their mental health. They may be able to detect changes in communication, activity, and sleep habits, and provide support and guidance to individuals who may be at risk for depression. It’s important to note that these chatbots are not a replacement for professional medical treatment and should not be used as a sole source of support for individuals with mental health concerns. If you are experiencing a mental health crisis, it is important to seek help from a qualified healthcare professional. 4. Medical supplies and Equipment This category includes establishments that produce surgical, dental, orthopaedic, ophthalmologic, laboratory, and other medical equipment and supplies. According to an analysis of solutions used, enterprises making medical supplies and equipment in India appear to be using descriptive and predictive AI. As with Philips IntelliSpace Consultative Critical Care, AI is also being utilised to monitor patients’ vital signs in ICUs and alert doctors in the event of any anomalies. In the implantable cardiovascular defibrillator (ICD), which monitors heart rhythms and automatically delivers shocks in the event of an anomaly, it can also be used therapeutically. 5. Medical Insurance This includes health insurance and medical reimbursement plans, which cover a person’s hospitalisation expenses incurred as a result of illness. Descriptive and

426

O. Shvetsova et al.

predictive AI are being utilised by Indian organisations that offer medical insurance, according to an examination of the solutions employed. Big data may also be used by insurers to spot diseases in their early stages and lower the likelihood of consequences from medical treatment. Insurance companies in India can only manage operations at the moment. Boing, a chatbot that responds to consumer inquiries about auto and health insurance, is used by Bajaj Allianz General Insurance. ICICI Lombard sells insurance using the MyRA chatbot platform. The email bot Spok from HDFC Life claims to be the first of its kind in India to automatically read, comprehend, categorise, and react. 6. Telehealth SigTuple can analyse blood slides and provide a pathology report without the assistance of a pathologist. This service may be used in remote regions for a fraction of the cost. Microsoft has collaborated with the Government of Telangana to harness cloudbased analytics for the Rashtriya Bal Swasthya Karyakram programme by utilising MINE (Microsoft Intelligent Network for Eyecare), an AI platform. The Philips Innovation Campus (PIC) in Bengaluru is utilising technology to make healthcare more accessible and affordable. Philips IntelliSpace Consultative Critical Care, which was created in collaboration with Fortis Escorts Heart Institute, Delhi, enables hospitals to oversee several intensive care units (ICUs) from a single command centre that may be situated in a different region. Telemedicine, however, currently faces infrastructural challenges. Technology is being used at the Bengaluru-based Philips Innovation Campus (PIC) to make healthcare more accessible and cheaper. They have created software (Mobile Obstetrics Monitoring) to track and manage high-risk pregnancies as well as solutions for TB diagnosis from chest X-rays.

3.2 Case Study of South Korea

A survey found that 42 tertiary hospitals in South Korea have consistently used electronic medical records (EMRs) since 2015, and that over 90.5% of hospitals in the country have adopted EMRs on average, in contrast to the adoption levels found in a 2016 study of OECD nations (Fig. 3). South Korea's higher-ranking hospitals had a higher proportion of EMR use. The World Health Organization has identified South Korea as having the highest level of medical IT infrastructure. This study was the first to attempt to analyse clinical research and assess the adoption rates of various data repository systems for the secondary use of clinical data, such as CDW, CDM, and CTMS. More than half of tertiary institutions have implemented at least one of these research support systems that allow for the secondary use of clinical data. In contrast to traditional technologies, which deal with physical domains, artificial intelligence (AI) technology has consequences in more psychological domains such as experience, cognition, and expert judgement. This is where artificial intelligence technology is forging new ground.


Fig. 3 Market size of AI healthcare South Korea 2018–2023 (Source Global Data Report [22])

With the advent of deep learning technology, the efficacy of machine learning algorithms for pattern recognition has substantially risen, and the capacity of AI technology to analyse data patterns has approached normal human competence for various jobs (e.g., image recognition and speech recognition) [15, 16, p. 90, 17, p. 85]. South Korea is well known for its inventive spirit, rapid population ageing, and technical advancement. The government confronts a significant challenge in increasing healthcare spending, which already accounts for 7.7% of GDP and might exacerbate healthcare disparities if wages remain flat. Others argue that the impact of ageing on healthcare spending has been minimal, but this is likely because GDP has been expanding rapidly, and GDP growth is predicted to decline in the coming years as the working class ages more rapidly. Only 10% of hospitals in the country, which has a 90% privatised healthcare system, are public. As a result, the out-of-pocket cost for some therapies may be substantial, perhaps approaching 50%. Despite this, the government covers the bulk of healthcare expenditures through the Medical Aid Programme, which pays 100% of qualified charges; the National Health Insurance Programme, which has co-payments of 14%; and the Long-Term Care Insurance Programme, which has co-payments of 20 per cent. Regional health inequalities occur despite the country's well-regulated and strong healthcare system, since 90% of doctors serve metropolitan regions while 80% of the population lives in urban areas. The key reasons for the healthcare system's inefficiency are its fragmentation and the rivalry amongst healthcare institutions rather than their cooperation [18, p. 334]. A number of South Korean healthtech companies that offer artificial intelligence (AI)-based imaging solutions are working more often with domestic and international firms to meet the growing demand in hospitals. The diagnostic imaging market in South Korea will develop between 2023 and 2030 at a compound annual growth rate of about 5%, according to GlobalData, a prominent data and analytics company [19, p. 30]. "Integrating AI and machine learning techniques into medical diagnostics would enable better and quicker diagnosis, hence advancing the nation's healthcare AI ecosystem", says Damini Goyal, Medical Devices Analyst at GlobalData [20, 21]. The potential for South Korea's innovation and global appeal is growing as a result of increased demand for high-precision medical equipment in military hospitals and quicker approval procedures. Military partnerships with businesses selling AI-powered medical equipment have demonstrably enhanced patient care in military settings, where there is a dearth of diagnostic specialists, hence raising the demand [22]. This is part of the Armed Forces Medical System AI Transformation initiative.

4 Results and Discussion: Comparison of AI Healthcare Applications Between India and South Korea

The Indian healthcare business is presently concentrating on a few major objectives, including universal access to public health services and the affordability of high-quality universal healthcare, both curative and preventive. The core goals of South Korean healthcare differ: they include enhancing institutional and environmental conditions for healthy living for all people, balancing justice and efficiency in healthcare, and improving quality of life. Both countries have separate goals for the future, but much depends on technological progress; quick adoption of technologies such as AI would benefit the whole healthcare business in both countries (Tables 1 and 2, Fig. 4) [22]. Cloud computing in healthcare has the ability to improve the quality of healthcare delivery while reducing budgetary burden, allowing governments to handle healthcare concerns efficiently and quickly. Japan, South Korea, and Singapore are successful instances of how cloud computing may be utilised to create nationwide databases of electronic health records, telemedicine, genetic databases to enable cutting-edge research and cancer treatment, real-time health monitoring for the aged population, and health cities that power the economy through the medical sector, tourism, and research. This study investigates these nations, highlights the motivators and impediments to cloud use in healthcare, and offers policy recommendations to support effective cloud adoption for public health advances. The authors identify several challenges for AI applications in the healthcare industry:

1. Architecture

Table 1 Healthcare system index (2022)

Index | India | South Korea
Healthcare System Index | 65.67 | 82.91

Source Global Data report [22]


Table 2 Comparative study of AI in healthcare industry: South Korea and India

Indicator | India | South Korea
Strategy | AI for timely epidemic outbreak prediction, remote diagnostics and treatment, and optimization of health resource allocation | Understanding the AI's base data, the AI's assumptions about patients and diseases, and how much decision weight can be placed on an AI recommendation
Sources | Digital infrastructure including the Healthlocker (an electronic national health registry and cloud-based data storage system); greater collaboration between government, technology companies, and traditional healthcare providers | Well-curated and labelled data; machine learning; datasets over algorithms
Limitations | Budget; cooperative strategies | Competencies; technology combination

Source made by authors

Fig. 4 Healthcare Index of India and South Korea, 2022. Source Global data report [22]

Lack of appropriate architecture is one of the main obstacles to integrating AI in healthcare in India. Many Indian start-ups have established themselves abroad, since cloud computing infrastructure is mostly available on servers located outside of India. The issue of software compatibility for the adoption of AI-driven healthcare is further complicated by the fact that much of the medical equipment used for diagnostic or therapeutic purposes is imported from other nations. By contrast, South Korea's healthcare system is more technologically sophisticated than India's: artificial intelligence is being used to guide surgical robots, identify connections between genetic codes, and increase hospital productivity.

2. Practicum Issues


Lack of AI-trained employees is another barrier preventing India from implementing AI in healthcare; the shortage of qualified personnel greatly constrains its use. The Korean government, by contrast, is heavily funding artificial intelligence: realising it needed more AI programmers, it now plans six further AI institutions. One of India's key challenges is the high initial expenditure required to implement AI in healthcare, and the Indian government is still investing little in fields such as artificial intelligence and research.

3. Liability and Accountability
In India, medical personnel are likely to be prosecuted in cases of medical negligence. However, it is unclear how accountability and culpability would be decided if a doctor makes a mistake due to a bug in an AI-based system. It is necessary to enact regulations governing liability and accountability for AI in healthcare, as well as to adopt guidelines outlining the boundaries of the healthcare system where AI should not be permitted to take control [23].

4. Assurity
Acceptance of the findings offered by AI algorithms is one of the challenges with the use of AI in healthcare in India. AI-based judgements made by doctors must be explicable, especially in India, where the doctor–patient relationship rests entirely on trust. Furthermore, there is a lack of understanding of AI, its applications, and its benefits, not just among medical and healthcare experts but also among the general public, making it incredibly difficult for the Indian government to move forward and seriously consider growth [24].

5. Data Protection and Privacy
AI can help secure patient data by quickly detecting and responding to data breaches, yet data privacy is one of the most significant barriers to AI adoption in healthcare [25, p. 3492]. A significant quantity of personal health information is available online throughout the cloud computing ecosystem, constituting a data security risk; for example, the medical information of more than 35,000 people was stolen from a diagnostic lab in Maharashtra, India.

6. Inequality Concerns
Inequality is another cause for concern. Data used to build AI systems, for example, may under-represent minority communities, resulting in unfair outcomes. Artificial intelligence algorithms can give unfair results based on skewed data (such as race, gender, age, and religion), which may suit particular Indian groups better than others. India has no regulatory structures in place to control AI technology development and ensure its quality, privacy, and security within the country. This is one of the most significant barriers to India's widespread adoption of artificial intelligence-based healthcare.


5 Conclusion

AI technology assists in the resolution of health concerns in India; however, it is constrained by the sort of medical information accessible and the lack of human attributes in some areas. AI programmes try to replace human intervention but are incapable of justifying and communicating information, and a few operations performed on medical institutions' clients have resulted in deception or fraudulent activity. AI applications are employed in patient education, medical research, diagnostics, medical treatments, and wellness care decision-making processes. AI systems will progress and become able to perform a broader range of activities without human control or input. The development and use of AI should promote and drive industry innovation in a manner that is transparent and in line with the public interest. High-quality de-identified images may be used by AI-based technologies to help machine learning models find biomarkers and enhance the results of cancer research; oncology could use such tools as well. The Comprehensive Archive of Imaging programme (India's first de-identified cancer image collection) was recently introduced by Tata Medical Centre and the Indian Institute of Technology. AI should not be used to automate healthcare decision-making; rather, it should complement it. Some of these risks can be reduced by implementing real human control, by allowing doctors to comment on proposed AI model architectures, and by establishing a continuous learning loop. To improve cooperation between academics, government, business, NGOs, and patient advocacy organisations, the government must also engage in and create public–private partnerships across the healthcare sector, and these partnerships should scale governance and regulatory processes to offer adequate monitoring for privacy, fairness, and openness. Predictive analytics combined with artificial intelligence (AI) for early detection can be a potent tool for targeted public health interventions, particularly in the setting of constrained healthcare resources and slow illness diagnosis outside of metropolitan centres. The Indian healthcare sector presents an opportunity to address these disparities and advance AI development through the use of AI-enabled solutions.

References

1. Davenport T, Kalakota R (2019) The potential for artificial intelligence in healthcare. Futur Healthc J 6(2):94
2. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H, Wang Y (2017) Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol 2(4):23–26
3. Topol E (2019) Deep medicine: how artificial intelligence can make healthcare human again. Hachette UK
4. Shaikh F, Dehmeshki J, Bisdas S, Roettger-Dupont D, Kubassova O, Aziz M, Awan O (2021) Artificial intelligence-based clinical decision support systems using advanced medical imaging and radiomics. Curr Probl Diagn Radiol 50(2):262–267
5. Dawoodbhoy FM, Delaney J, Cecula P, Yu J, Peacock I, Tan J, Cox B (2021) AI in patient flow: applications of artificial intelligence to improve patient flow in NHS acute mental health inpatient units. Heliyon 7(5):69–93
6. Jadczyk T, Wojakowski W, Tendera M, Henry TD, Egnaczyk G, Shreenivas S (2021) Artificial intelligence can improve patient management at the time of a pandemic: the role of voice technology. J Med Internet Res 23(5):22–59
7. Haleem A, Javaid M, Singh RP, Suman R (2022) Medical 4.0 technologies for healthcare: features, capabilities, and applications. Internet of Things and Cyber-Physical Systems
8. Global Report. India. https://www.anl.gov/cels/using-ai-to-enhance-robotic-surgery-and-improve-patient-outcomes (last accessed 12 December 2022)
9. Sarath Kumar Boddu R, Sreenivasa Chakravarthi D, Venkateswararao N, Chakravarthy SK, Devarajan A, Ramakant Kunekar P (2021) The effects of artificial intelligence and medical technology on the life of human
10. Science Report. https://senseaboutscience.org/activities/guides/ (last accessed 19 December 2022)
11. Global Forum. https://www.weforum.org/agenda/2022/10/ai-in-healthcare-india-trillion-dollar/ (last accessed 01 December 2022)
12. Liaw CY, Guvendiren M (2017) Current and emerging applications of 3D printing in medicine. Biofabrication 9(2):24–102
13. Song YJ (2009) The South Korean health care system. JMAJ 52(3):206–209
14. Iliashenko O, Bikkulova Z, Dubgorn A (2019) Opportunities and challenges of artificial intelligence in healthcare. In: E3S Web of Conferences, EDP Sciences, vol 110, pp 20–28
15. Korea AI Report. https://www.mohw.go.kr/eng/pl/pl0103.jsp?PAR_MENU_ID=1003&MENU_ID=100326 (last accessed 03 December 2022)
16. Park YR, Shin SY (2017) Status and direction of healthcare data in Korea for artificial intelligence. Hanyang Med Rev 37(2):86–92
17. Park CW, Seo SW, Kang N, Ko B, Choi BW, Park CM, Chang DK, Kim H, Kim H, Lee H, Jang J (2020) Artificial intelligence in health care: current applications and issues. J Korean Med Sci 35(42):56–96
18. Raghavan A, Demircioglu MA, Taeihagh A (2022) Public health innovation through cloud adoption: a comparative analysis of drivers and barriers in Japan, South Korea, and Singapore. Int J Environ Res Public Health 18(1):334
19. Shin SY (2018) Issues and solutions of healthcare data de-identification: the case of South Korea. J Korean Med Sci 33(5):23–42
20. Samsung website. https://www.samsunghospital.com/gb/language/m_english/about/newsView.do?bno=944&bbs_id=009001 (last accessed 12 December 2022)
21. Korea BioMed report. https://www.koreabiomed.com/news/articleView.html?idxno=2469 (last accessed 12 December 2022)
22. Global Data report. https://www.globaldata.com/media/medical-devices/south-korea-prioritizes-ai-transformation-of-diagnostic-imaging-devices/ (last accessed 12 December 2022)
23. India AI Report. https://www.india-briefing.com/news/how-high-tech-innovation-is-advancing-indias-healthcare-capabilities-26268.html/ (last accessed 12 December 2022)
24. India Economic Report. https://indiaai.gov.in/country/south-korea?standard=privacy (last accessed 12 December 2022)
25. Shvetsova OA, Lee JH (2020) Minimizing the environmental impact of industrial production: evidence from South Korean waste treatment investment projects. Appl Sci 10:3489–3510

Automated ADR Analysis from Twitter Data Using N-Gram-Based Feature Extraction Methods and Supervised Learning Classification

K. Priya and A. Anbarasi

Abstract A sizable dataset regarding customer preferences and experiences with a range of products is available on online discussion forums and review websites. This data can be utilized to draw illuminating conclusions using opinion mining techniques like sentiment analysis. This paper examines online consumer reviews in the pharmaceutical industry. Online user reviews in this field frequently include information on a number of subjects, such as drug efficacy and side effects, which makes automatic analysis both exciting and challenging. Assessing viewpoints on the many aspects of medication reviews, however, can provide informative insights, support decision-making, and enhance public health monitoring through the disclosure of accumulated knowledge. This research introduces a technique that automatically detects ADRs that are openly referenced in public data in order to identify the ADR impacts of COVISHIELD. Additionally, it offers a method that improves and visually illustrates how well sentiment analysis and machine learning algorithms predict outcomes. The work compared classical and quantum SVM algorithms in terms of performance metrics. Results clearly show that the quantum classifier outperformed the others in terms of accuracy.

Keywords Adverse drug reaction · COVISHIELD · COVID-19 · Sentiment analysis · Machine learning

K. Priya (B)
Department of Computer Science, AVP College of Arts and Science, Tirupur, Tamil Nadu 641652, India
e-mail: [email protected]

A. Anbarasi
Department of Computer Science, Govt. Arts and Science College for Women, Puliyakulam, Coimbatore, Tamil Nadu 641045, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_31


1 Introduction

Clinical studies and other types of testing are now commonly used to determine whether or not a pharmaceutical product is safe to use. These kinds of investigations are typically carried out with a limited number of research participants over the course of a constrained amount of time and under controlled conditions. Thus, patient selection and treatment environments can affect drug efficacy and adverse drug reactions (ADRs). Therefore, pharmacovigilance, also known as post-marketing drug surveillance, is extremely important for ensuring the safety of a drug after it has been made available [1]. The COVID-19 virus quickly dispersed to new locations all over the world; as of April 14, 2020, 1.99 million cases of COVID-19 had been reported across 210 countries and territories, and 128,000 people had lost their lives as a result of the virus. The COVID-19 epidemic, which has devastated the entire world, has resulted in the deaths of millions of people, and the virus could only be effectively treated by using the appropriate vaccine. Both the frequency of adverse drug reactions (ADRs) related to the COVISHIELD vaccination and the practical applicability of sentiment analysis, opinion mining, and various machine learning techniques were the primary focuses of this research. One of the most crucial steps in auto-processing and analyzing large amounts of unstructured data is converting inherent properties into numerical ratings. Many sentiment analysis methods use sentiment lexicons: they use textual units and opinion words from sentiment-polarity-annotated lexicons to identify sentiment keywords and patterns in natural language texts, and this project likewise seeks sentiment patterns in natural language texts. Sentiment analysis of patient data, which focuses on drug experience, is a difficult research field that is currently popular. The lack of annotated data, necessary for sentiment classification, is a major issue; tagged data, which can solve multiple problems at once, is an especially rare type of data [2]. In addition, the accessibility of labeled data is extremely domain-specific: patients who suffer from certain diseases are more likely to discuss their treatment experiences than patients who suffer from other types of disorders. This research investigates the application of sentiment analysis to drug reviews in order to determine both the efficacy of the COVISHIELD medicine and the type of adverse drug reaction caused by the drug, using the reviews of that drug. As a consequence, aspect-based sentiment analysis is believed to encounter challenges when attempting to categorize effectiveness and side effects. The block diagram of the system architecture used for this paper can be found in Fig. 1. The architecture gathers datasets, extracts features, and builds a classification model. The proposed model applies ML techniques to the various features that are gleaned from Uni-grams, Bi-grams, and Tri-grams, respectively.


Fig. 1 System architecture

2 Literature Review

El Rahman et al. [3] proposed a method for performing sentiment analysis using an unsupervised machine learning algorithm. Their article compares the popularity of McDonald's with KFC and provides facts about both; the data was fed into several models and evaluated using cross-validation and the f-score [3]. Wagh and Punde [4] surveyed many sentiment analysis methods in 2018. Prakruthi et al. (2018) suggested using the Twitter API for real-time sentiment analysis, with a pie chart comparing preprocessed tweets under a sentiment classification of actual Twitter data [4]; tweets are categorized as positive, negative, or neutral. The research by Geetika et al. aimed to provide a collection of techniques for classifying product reviews and sentences based on data from Twitter using semantic analysis and machine learning. Utilizing a tagged Twitter dataset, the research's main objective was to evaluate a substantial number of Twitter reviews; after continuing the semantic analysis on WordNet, the recalculated accuracy showed an increase of 1.7%, going from 88.2 to 89.9% [5]. Lardon et al. [6] covered a European setting by working in French: a list of 33 drugs under French pharmacovigilance supervision was used to query the Martindale and DRUGDEX databases. A study by Lin et al. focused on word representation strategies for ADR identification: it used word2vec and global vectors (GloVe), two cutting-edge word embedding methods, together with token normalization. The cluster tendencies of word2vec were discovered to be comparable to GloVe's after manually evaluating the clusters generated by these two techniques [6]. Zhang et al. [7] examined ADRs of cholesterol-lowering drugs using the WHO-UMC system, with manually labeled ADR data from 579 Chinese event notifications made searchable. These occurrences were mapped using WHO-ART, a framework for characterizing and classifying ADRs by severity, and 171 words, 129 from WHO-ART and 42 from self-description; word embeddings improve NLP and are widely used in research [7]. Ribeiro et al. improved tweet classification preprocessing with a labeled database and an ontology. Their study focuses on negative tweets about studies, research dissemination, and subject-matter experts on Twitter; despite extensive preprocessing challenges, social networks can be used to detect negative events. A database of 30,000 tweets from Twitter that mentioned four different drugs was analyzed, but no negative effects were discovered; to boost the accuracy of the classification model for ADRs, task monitoring is required. From a database of 1,000 posts, 50 opinions (5% of the total) were identified as ADRs [8]. Numerous studies have tried to advance cross-domain sentiment classification or domain adaptation, but among various entities like goods, entertainment, or dining venues rather than at the level of elements of pharmaceutical reviews. This section delivers a complete, organized analysis of the literature on sentiment analysis across domains (see Table 1).

3 Methodology

Today, patient reviews that specifically mention certain medications and the related adverse effects are the most often used information source for post-marketing drug care investigation. Feature learning-based approaches are the main sort of computational technique employed in this study to build content-based classifiers. This study extracts data attributes from user input using NLP techniques and then builds models for ADR categorization using standard machine learning techniques. Classification models, dataset gathering and preprocessing, and feature extraction make up the suggested architecture. The research proceeds in the following steps:

1. Dataset collection: the necessary dataset is collected using a Python script.
2. Preprocessing: Python provides high-level libraries to clean tweets, for example by removing links and special characters.
3. Feature extraction: the dataset provides the words used to indicate adverse reactions to COVID-19 vaccinations.
4. Classification models: supervised ML algorithms have been chosen for effective opinion classification.


Table 1 Comparative analysis of the existing system

S. no. | Author (year) | Dataset | Title | Methods | Accuracy (%)
1 | A. H. Sweidan et al. (2021) [15] | AskaPatient, WebMD, DrugBank, Twitter, n2c2 2018 and TAC 2017 | Sentence-level aspect-based sentiment analysis for classifying adverse drug reactions (ADRs) using hybrid ontology-XLNet transfer learning | Bidirectional long short-term memory (Bi-LSTM) networks | 98
2 | S. Liu et al. (2019) [16] | Drug reviews | Extracting features with medical sentiment lexicon and position encoding for drug reviews | Random forest | 62
3 | M. Zhang et al. (2019) [17] | AskaPatient | Adverse drug event detection using a weakly supervised convolutional neural network and recurrent neural network model | Combining CNN and LSTM | 86.72
4 | A. Cocos et al. (2017) [18] | Twitter posts | Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts | RNN | 75
5 | K. Lee et al. (2017) [19] | Twitter posts | Adverse drug event detection in tweets with semi-supervised convolutional neural networks | Semi-supervised CNN | 70.21

3.1 Dataset Collection

Research on public opinion is frequently published on the social networking platform Twitter. The Twitter API (Application Programming Interface) 1.0 in the R tool was utilized in order to collect data from Twitter; however, this tool was limited in that it could only search for recently published tweets. Users can access tweets in real time by using the streaming API provided by the Twitter API. We collected 24,748 tweets about COVISHIELD by using the Twitter API with the R tool (see Fig. 2).


Fig. 2 Extraction of Twitter data

This tactic, despite being conservative, makes tweets concerning drugs more relevant. Effects are physical or psychological signs and circumstances that drug users encounter; not all of the collected tweets about medications discussed adverse effects. This phase includes creating a Twitter API application and getting tweets in compliance with the guidelines, such as obtaining tweets from a certain person or tweets containing a particular keyword [9]. The extraction of tweets that are linguistically or geographically specific is possible with the help of the Twitter API. For convenience, the data can be accessed in any format, such as .txt, .csv, and .doc. This work used the Twitter API to collect tweets: all tweets referencing the drug name and its side effects were found by searching for "COVISHIELD", and the downloaded file was kept in .csv format. According to the findings of our investigation, the vast majority of relevant tweets were opinion tweets, i.e., tweets that narrate the author's experiences with the medication and how they reacted to it.
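The collection step can also be reproduced in Python. The sketch below is illustrative only: it assumes the tweepy library (version 4.x) with access to the v1.1 search endpoint, and the credential strings are hypothetical placeholders rather than values from this study.

# Minimal sketch: collect recent English tweets mentioning COVISHIELD
# and store them in a .csv file, mirroring the workflow described above.
import csv

import tweepy

# Hypothetical credentials -- replace with your own developer keys.
auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth, wait_on_rate_limit=True)

with open("covishield_tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["created_at", "text"])
    # Cursor pages through recent tweets matching the keyword.
    for tweet in tweepy.Cursor(
        api.search_tweets, q="COVISHIELD", lang="en", tweet_mode="extended"
    ).items(1000):
        writer.writerow([tweet.created_at, tweet.full_text])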

3.2 Preprocessing

To get decent results while analyzing tweets, it is crucial to use text preprocessing methods [10]. The preprocessing phase involves a number of steps, which are shown in the image below (see Fig. 3).

Fig. 3 Steps in preprocessing


Removal of URLs: URLs have nothing to do with opinion analysis, so for successful analysis they should be deleted from the tweets.

Converting to lower case: Tweets contain text that mixes upper- and lower-case letters, so Twitter data is converted to lower case to make it simpler to analyze.

User name removal: A user name appears in practically every tweet, and its presence carries no opinion, so removing it during the preprocessing stage is a crucial step.

Eliminating punctuation (#, @, etc.): Punctuation adds nothing to understanding a person's views, so it should be eliminated to make the analytical process easier.

Removing blank spaces: Unnecessary blank spaces are removed to help with the tokenization of the tweets; breaking a statement into words is known as tokenization.

Stop-word removal: This is a method for getting rid of terms that are frequently used but have no real significance or purpose in text classification. Stop words like "are," "is," and "am" carry no emotional emphasis and are removed to compress the dataset without losing vital data.

Lemmatization: The major goal of the lemmatization method is to remove inflectional endings and produce the lemma, which is a word's dictionary form. This approach uses lexical and morphological analysis to properly normalize the words in tweets.

Stemming: This term describes a simple heuristic technique that involves cutting off the ends of words.
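As a rough illustration, the steps above can be chained in Python with regular expressions and NLTK. This is a minimal sketch, not the study's exact cleaning script; it assumes the NLTK stopword and WordNet corpora are available for download.

# Sketch of the preprocessing pipeline described above.
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()
stemmer = PorterStemmer()

def preprocess(tweet):
    tweet = re.sub(r"http\S+|www\.\S+", "", tweet)  # removal of URLs
    tweet = tweet.lower()                           # converting to lower case
    tweet = re.sub(r"@\w+", "", tweet)              # user name removal
    tweet = re.sub(r"[^a-z\s]", " ", tweet)         # eliminating punctuation (#, @, etc.)
    tokens = tweet.split()                          # tokenization; also drops blank spaces
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop-word removal
    tokens = [lemmatizer.lemmatize(t) for t in tokens]   # lemmatization
    return [stemmer.stem(t) for t in tokens]             # stemming

print(preprocess("Got my #COVISHIELD dose @clinic today, mild fever http://t.co/x"))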

3.3 Feature Extraction

In order to use the learning system effectively, the feature extraction method is used to reduce unwanted fluctuations in the Twitter data and prevent computationally expensive processes. Improving the robustness of a feature often demands less work from the classifier and so improves the categorization system's efficiency. Starting with a base set of measured (preprocessed) data, the feature extraction procedure generates derived values that are intended to be informative [15]. This method expedites the subsequent learning stages and, in some cases, enhances human interpretation. Using n-grams is the best way to turn text into structured features: dividing lengthy content into word groups creates context. In order to increase accuracy, this paper develops an efficient n-gram-based feature extraction method.

a. N-Grams
The N-gram approach organizes the text in a matrix (term, weight), where each phrase is assigned a weight based on how frequently it appears. The terms can be fundamental units (tokens) or composed of 1, 2, or more tokens; a bag of n-grams is the corresponding generalization of a bag of words. Any contiguous combination of n tokens (words) constitutes an n-gram [11]. In the context of computational morphology, an "n-gram" is a contiguous series of "n" elements in a given sequence of input text. The elements can be syllables, phonemes, letters, words, or base pairs, depending on the application. Usually, a text or speech corpus is used as the source of the input data for this method. Under this scheme, an n-gram of size 1 is referred to as a "unigram," size 2 as a "bigram," and size 3 as a "trigram"; sometimes the value of "n" is used to refer to larger sizes, such as "4-gram" and "5-gram." Because this method captures more context around each word, it may be more beneficial than BoW. This has a price, too, as a bag of n-grams can generate a feature set that is considerably larger and sparser than a bag of words. Generally speaking, 3-grams are about as much as we need, because going above that rarely improves performance due to sparsity. An n-gram is called a unigram if we take the value of n to be 1, a bigram if n is 2, a trigram if n is 3, and so on. Take the sentence "COVISHIELD is a better vaccine" as an example: if N = 2, the results will be "COVISHIELD is," "is a," "a better," and "better vaccine."
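For illustration, scikit-learn's CountVectorizer implements exactly this (term, weight) matrix. The snippet below is a minimal sketch with made-up example tweets, not the study's actual corpus.

# Turning cleaned tweets into n-gram count features.
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["covishield is a better vaccine", "mild fever after covishield dose"]

# ngram_range=(2, 2) yields bigrams only; (1, 3) would combine uni- to tri-grams.
vectorizer = CountVectorizer(ngram_range=(2, 2))
X = vectorizer.fit_transform(tweets)

print(vectorizer.get_feature_names_out())  # the extracted bigrams
print(X.toarray())                         # term-count matrix, one row per tweet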

3.4 Supervised Classification

An association is discovered between a group of input variables X and an output variable Y during the supervised learning process. On the basis of real-world data, this association is then used to forecast the outcomes of new data. The majority of practically useful ML algorithms use supervised learning [12]. Following the completion of the data classification process, the algorithm then learns to make predictions based on the input data. Several different types of machine learning (ML) classification strategies are utilized in this investigation, namely Q-SVM (Quadratic SVM), L-SVM (Linear SVM), LDA (Linear Discriminant Analysis), and PNN (Probabilistic Neural Network), and the performance of each approach is shown using the extracted features.

SVM: SVM can be used for linear and nonlinear binary classification. Because datasets are frequently not linearly separable, SVMs provide the best surface to distinguish between positive and negative training feature samples based on the principle of reducing experimental risk (training set and test set error). In a high-dimensional feature space, a decision boundary is represented by this method using hyperplanes. The vectorized data is split into two categories by this hyperplane, which also identifies a result, allowing a decision to be made based on the support vectors. The SVM's operational methodology can be summarized as follows [12]: given a training set of "N" linearly separable elements with feature vectors "x" of "d" dimensions, the dual optimization problem is solved, where $\alpha \in \mathbb{R}^{N}$ and $y_i \in \{1, -1\}$.


Then the outcome of SVMs can be described as follows:

$$\alpha^{*} = \underset{\alpha}{\arg\min}\left\{ -\sum_{i=1}^{n}\alpha_{i} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}\,y_{i}y_{j}\,\vec{x}_{i}^{\top}\vec{x}_{j} \right\}$$

where

$$\sum_{i=1}^{n}\alpha_{i}y_{i} = 0; \qquad 0 \le \alpha_{i} \le C$$

SVM classification divides a linearly separable dataset into two classes using a single hyperplane. For nonlinear datasets and datasets with more than two classes, kernel functions map the data into a higher-dimensional linearly separable space. The algorithm accepts a variety of input features, classifies them, and then uses SVM with a linear and a quadratic kernel to specify the hyperplane; finally, it verifies the validity and correctness of the results.

LDA: LDA is a supervised classification method. The random variable X comes from one of K classes, with class-specific probability densities $f_i(x)$. To represent all classes, discriminant rules divide the data space into K separate regions, and discriminant analysis classifies x into class j if x falls in the corresponding region. Two allocation criteria are logically used to find the region of the data x.

Maximum likelihood rule: assuming that each class has an equal chance of occurring, assign x to class j if

$$j = \arg\max_{i} f_{i}(x)$$

Bayesian rule: if we are aware of the class prior probabilities $\pi_i$, assign x to class j if

$$j = \arg\max_{i} \pi_{i} f_{i}(x)$$

Linear discriminant analysis: if it is assumed that the data come from a multivariate Gaussian distribution, so that the distribution of X can be defined by its mean $\mu$ and covariance $\Sigma$, explicit versions of the aforementioned allocation rules can be derived. If the data x have the highest likelihood for class j among the K classes, for i = 1, ..., K, then we assign the data to class j according to the Bayesian rule:

$$\delta_{i}(x) = \log f_{i}(x) + \log \pi_{i}$$

The function above is referred to as a discriminant function; note the use of the log-likelihood. In other words, the discriminant function tells us the probability that data x belongs to each class. The decision boundary between any two classes, k and l, is therefore the set of x where two discriminant functions have the same value.


Any data point that lies on the decision boundary is equally likely to originate from either class, since we are unable to make a decision there. LDA arises when we assume that the covariance in each of the K classes is equal; in other words, instead of having a covariance matrix for each class, all classes share a single one. It is then possible to obtain the discriminant function stated below:

$$\delta_{k}(x) = x^{\top}\Sigma^{-1}\mu_{k} - \frac{1}{2}\mu_{k}^{\top}\Sigma^{-1}\mu_{k} + \log \pi_{k}$$

You can see that this is a linear function in x. The term "linear discriminant analysis" refers to the fact that each decision border between classes is also a linear function in x [13]. Without the equal covariance assumption, the quadratic term of the likelihood does not cancel out, hence the resulting discriminant function is a quadratic function in x:

$$\delta_{k}(x) = -\frac{1}{2}(x - \mu_{k})^{\top}\Sigma_{k}^{-1}(x - \mu_{k}) + \log \pi_{k}$$

In this case, the decision boundary is quadratic in x, and the method is called quadratic discriminant analysis (QDA).

PNN: PNNs use statistical Bayesian classification algorithms. The multilayered feedforward network organizes functions into input, pattern, summation, and output layers. Input layer nodes hold the measurements, and the pattern layer is fully coupled to the input layer, with one neuron per training set pattern. The pattern layer outputs are connected to the summation units according to the pattern class. The PNN model proceeds in the following stages [14]:

1. The input layer neurons pass the input measurements to all of the neurons in the pattern layer.
2. Using the provided set of data points, the second layer forms the Gaussian kernel function.
3. The third layer averages the outcomes for each review class.
4. The fourth layer conducts a vote, choosing the class with the highest value, and the class label is then decided.
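The four classifiers can be sketched in Python with scikit-learn. L-SVM and Q-SVM are taken here as SVMs with linear and quadratic (degree-2 polynomial) kernels, matching the description above; scikit-learn has no built-in PNN, so a class-wise Gaussian kernel density estimate with an arg-max output layer stands in for the pattern, summation, and output layers. This is an illustrative sketch under those assumptions, not the study's exact configuration, and it expects dense feature arrays.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVC

l_svm = SVC(kernel="linear", C=1.0)          # L-SVM: linear kernel
q_svm = SVC(kernel="poly", degree=2, C=1.0)  # Q-SVM: quadratic (degree-2) kernel
lda = LinearDiscriminantAnalysis()

class SimplePNN:
    # Parzen-window PNN: one Gaussian kernel per training pattern, summed
    # per class (equal priors assumed here), with an arg-max output layer.
    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        # Pattern/summation layers: a kernel density estimate per class.
        self.kdes_ = [
            KernelDensity(bandwidth=self.bandwidth).fit(X[y == c])
            for c in self.classes_
        ]
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # Output layer: pick the class whose averaged kernel response is largest.
        log_density = np.column_stack([k.score_samples(X) for k in self.kdes_])
        return self.classes_[np.argmax(log_density, axis=1)]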

4 Results and Discussion

The retrieved features are used in the training and testing processes: the data were split 80%/20% between training and testing for evaluation. The various algorithms are evaluated using the TP, FP, TN, FN, sensitivity, specificity, and classification accuracy metrics, which show how well the algorithms explored here could be tuned for this prediction task. This study uses Python, which provides the required functionality through a number of libraries.


Fig. 4 Confusion matrix format

4.1 Performance Measure Parameters

The training and testing processes use the characteristics that are chosen from the datasets. The data were split 80%/20% between training and testing for evaluation. The evaluation is done using the parameters TP, FP, TN, FN, Precision, Recall, F1-Score, and classification Accuracy for the various algorithms [12].

• TP: number of normal features correctly classified as normal features.
• TN: number of ADR features correctly classified as ADR features.
• FP: number of normal features wrongly classified as ADR features.
• FN: number of ADR features wrongly classified as normal features (see Fig. 4).

The accuracy value is the proportion of correct predictions. It can be determined using the equation below:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

In this research, the confusion matrix has been employed to describe the functionality of the FE algorithms. It enables the visualization of an algorithm’s execution. Additionally, it makes class-level ambiguity simple to identify. With the help of a few machine learning algorithms, this procedure aims to evaluate how well Uni-grams, Bi-grams, and Tri-grams work [12].
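As a brief illustration, scikit-learn can produce the confusion matrix and the accuracy value defined above; the labels below are hypothetical, not results from this study.

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # 0 = normal, 1 = ADR (hypothetical)
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]   # hypothetical classifier output

# For binary labels, ravel() unpacks the matrix as TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)  # identical to accuracy_score(y_true, y_pred)

print("TP:", tp, "TN:", tn, "FP:", fp, "FN:", fn, "accuracy:", accuracy)
print(classification_report(y_true, y_pred, target_names=["normal", "ADR"]))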

4.2 Experimental Setup

The entire model is run on a 64-bit macOS system with an Intel 2.6 GHz 8-core i7 processor, 16 GB of 2400 MHz DDR4 RAM, and a Radeon Pro 560X 4 GB GPU. The programs are all written in Python 3.8 and run in the Anaconda environment. This section demonstrates the superiority of the suggested architecture relative to the effectiveness of other current models. Below are the values of several performance evaluation metrics for predicting ADRs in COVISHIELD vaccination reviews, including Accuracy, Sensitivity, Specificity, Precision, and Recall.


Table 2 Performance evaluation with accuracy (%)

ML algorithm | Uni-gram | Bi-grams | Tri-grams
PNN | 80.5 | 78.9 | 89.3
LDA | 87.7 | 80.9 | 88.2
L-SVM | 90.7 | 91.0 | 92.4
Q-SVM | 91.5 | 92.8 | 93.5

[Bar chart "Performance Analysis with Evaluation Metrics": metric value (%) for the ML algorithms PNN, LDA, L-SVM, and Q-SVM, with Uni-gram, Bi-grams, and Tri-grams feature sets]

Fig. 5 Performance evaluation of FE methods with ML classifiers

Following the conclusion of the feature extraction processes, the extracted features are fed into the ML classifiers for classification, and the classifiers' performance is assessed. The classification accuracy of the various classifiers using the various feature sets, namely Uni-gram, Bi-grams, and Tri-grams, is evaluated as part of the performance evaluation process, and the choice of feature set is seen to noticeably affect classification accuracy (see Table 2). These findings show that, compared to the other algorithms, the Q-SVM classifier produces the best results for all feature sets. Figure 5 also shows that most machine learning algorithms produce their best results with the Tri-gram features. The feature extraction and ML algorithms of the Python machine learning toolbox are used to classify the ADRs found in the testing dataset and evaluate the effectiveness of the suggested methodology. The Q-SVM algorithm's accuracy rate was 93.5% for features based on trigrams.
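A sketch of this evaluation protocol, reusing the classifier objects and the SimplePNN class from the earlier sketches and assuming `tweets` and `labels` lists built from the collected dataset, might look as follows.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

feature_sets = {"Uni-gram": (1, 1), "Bi-grams": (2, 2), "Tri-grams": (3, 3)}
classifiers = {"PNN": SimplePNN(), "LDA": lda, "L-SVM": l_svm, "Q-SVM": q_svm}

# 80%/20% train/test split, as in the evaluation above.
X_train_txt, X_test_txt, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.2, random_state=42
)

for fs_name, ngram_range in feature_sets.items():
    vec = CountVectorizer(ngram_range=ngram_range)
    X_train = vec.fit_transform(X_train_txt).toarray()  # dense for LDA and the PNN
    X_test = vec.transform(X_test_txt).toarray()
    for clf_name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(fs_name, clf_name, "accuracy:", round(100 * acc, 1))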

5 Conclusion

The primary goal of this study is to investigate opinion analysis based on machine learning and its significance in the analysis of COVISHIELD vaccine ADRs. With the help of the Twitter dataset, the performance of N-gram-based feature extraction methods is compared across various ML approaches. To achieve the best outcomes, this research concentrated on the properties of the feature extraction and ML algorithms: several ML algorithms are applied, their performance measurements are obtained, and the results of the algorithms are compared. Compared to the other methods, Tri-grams with the Q-SVM method may predict outcomes more accurately. Future work could use various data domains and classification algorithms with a dataset of considerably more reviews to test the limitations of the proposed method.

References

1. Min Z (2019) Drugs reviews sentiment analysis using weakly supervised model. In: 2019 IEEE international conference on artificial intelligence and computer applications (ICAICA), pp 332–336. https://doi.org/10.1109/ICAICA.2019.8873466
2. Meškele D, Frasincar F (May 2020) ALDONAr: a hybrid solution for sentence-level aspect-based sentiment analysis using a lexicalized domain ontology and a regularized neural attention model. Inf Process Manag 57(3). Art. no. 102211
3. El Rahman SA, AlOtaibi FA, AlShehri WA (2019) Sentiment analysis of Twitter data. In: 2019 international conference on computer and information sciences (ICCIS). IEEE, pp 1–4
4. Wagh R, Punde P (2018) Survey on sentiment analysis using twitter dataset. In: 2018 second international conference on electronics, communication and aerospace technology (ICECA). IEEE, pp 208–211
5. Gautam G, Yadav D (2014) Sentiment analysis of twitter data using machine learning approaches and semantic analysis
6. Lardon J, Bellet F, Aboukhamis R, Asfari H, Souvignet J, Jaulent M-C, Beyens M-N, Lillo-Le Louet A, Bousquet C (2018) Evaluating twitter as a complementary data source for pharmacovigilance. Exp Opin Drug Safety 17(8):763–774. PMID: 29991282
7. Zhang Y, Wang X, Shen L, Hou Z, Guo Z, Li J (2018) Identifying adverse drug reactions of hypolipidemic drugs from Chinese adverse event reports. In: 2018 IEEE international conference on healthcare informatics workshop (ICHI-W). IEEE, pp 72–73
8. Ribeiro LAPA, Cinalli D, Garcia ACB (2021) Discovering adverse drug reactions from twitter: a sentiment analysis perspective. In: 2021 IEEE 24th international conference on computer supported cooperative work in design (CSCWD), pp 1172–1177. https://doi.org/10.1109/CSCWD49262.2021.9437783
9. Akhtyamova L, Alexandrov M, Cardiff J (eds) (2017) Adverse drug extraction in twitter data using convolutional neural network. In: 28th international workshop on database and expert systems applications (DEXA), Lyon, France. IEEE
10. Tayeb HF, Karabatak M, Varol C (2020) Time series database preprocessing for data mining using python. In: 2020 8th international symposium on digital forensics and security (ISDFS), pp 1–4
11. Rajesh P, Suseendran G (2020) Prediction of N-gram language models using sentiment analysis on e-learning reviews. In: 2020 international conference on intelligent engineering and management (ICIEM), pp 510–514
12. Xu W, Zhu Z, Wang L (2022) Comparative analysis of different machine learning algorithms in classification. In: 2022 international conference on big data, information and computer network (BDICN), pp 257–263
13. Sudibyo U, Rustad S, Nurtantio Andono P, Zainul Fanani A, Purwanto P, Muljono M (2020) A novel approach on linear discriminant analysis (LDA). In: 2020 international seminar on application for technology of information and communication (iSemantic), pp 131–136
14. Savchenko V (2020) Probabilistic neural network with complex exponential activation functions in image recognition. IEEE Trans Neural Netw Learn Syst 31(2):651–660
15. Sweidan H, El-Bendary N, Al-Feel H (2021) Sentence-level aspect-based sentiment analysis for classifying adverse drug reactions (ADRs) using hybrid ontology-XLNet transfer learning. IEEE Access 9:90828–90846. https://doi.org/10.1109/ACCESS.2021.3091394
16. Liu S, Lee I (2019) Extracting features with medical sentiment lexicon and position encoding for drug reviews. Health Inf Sci Syst 7(1):11
17. Zhang M, Geng G (2019) Adverse drug event detection using a weakly supervised convolutional neural network and recurrent neural network model. Information 10(9):276
18. Cocos A, Fiks AG, Masino AJ (2017) Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts. J Am Med Inform Assoc 24(4):813–821
19. Lee K, Qadir A, Hasan SA, Datla V, Prakash A, Liu J, Farri O (Apr 2017) Adverse drug event detection in tweets with semi-supervised convolutional neural networks. In: Proceedings of the 26th international conference world wide web, Melbourne, QLD, Australia, pp 705–714

Bayesian Algorithm for Labourer Job Performance at Home in the COVID-19 Pandemic

Bui Huy Khoi and Nguyen Thi Ngan

Abstract This study investigates the variables influencing labourer performance at work when working from home in Ho Chi Minh City during the COVID-19 pandemic. Based on theory and research from prior models, a model of employee work performance is offered with six characteristics that determine how labourers perform at work when working from home: Work–family balance, Leadership, Working at home, Labourer's gender, Concerns about the COVID-19 epidemic, and Facilities. The chapter's findings on labourer performance are in line with the features and circumstances of the operation: working from home is the only factor influencing work performance. Considering the findings, the authors offer management recommendations to enhance labourer performance when working from home in HCM City during the COVID-19 epidemic. The paper uses optimum selection by the Bayesian algorithm for labourers' job performance at home during the COVID-19 epidemic.

Keywords BIC Algorithm · Work performance · Work–family balance · Leadership · Work at home · Labourer gender · Concerns about the COVID-19 pandemic · Facilities

B. H. Khoi (B) · N. T. Ngan
Industrial University of Ho Chi Minh City, Ho Chi Minh City, Vietnam
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_32

1 Introduction

From 2019 to the present, the COVID-19 pandemic has continued to rage, causing huge losses, from the economy to daily life, for people around the globe. According to statistics from the Ministry of Health, as of July 9, 2022, Vietnam had recorded 43,089 deaths from COVID-19, ranking 12th out of 227 countries and territories. At present, the entire world, including Vietnam, is applying measures such as social distancing, and many businesses and schools have changed their form of working and studying from offline to online mode to reduce the spread of the pandemic. In this context, many companies have implemented government regulations and allowed employees to work from home. According to data from the International Labour Organization, the percentage of workers working from home shows an increasing trend because of the prolonged COVID-19 epidemic. It is estimated that before the crisis, 260 million people worked from home, accounting for 7.9% of total global employment. Those working from home include regular telecommuters and workers on digital applications that provide a variety of services, such as processing insurance claims, proofreading text, or annotating data used to train artificial intelligence systems, and this number is reported to be likely to keep increasing. Therefore, solving the problem of working from home and using workers effectively has become an urgent issue. In the research of Vuong [1], five factors impact work-from-home performance in the COVID-19 pandemic: (1) work efficiency; (2) the balance between work and family; (3) job satisfaction; (4) working from home; and (5) concerns about COVID-19. Research by Van Der Lippe and Lippényi [2] shows that three factors affect team performance: (1) working from each labourer's home; (2) working from the homes of colleagues in the group; and (3) teamwork from home. According to research by Daraba et al. [3], three factors affect the performance of employees while working from home: (1) the true leadership ability of the leader; (2) the psychological capital of labourers; and (3) the labourer's gender. A study by Guler et al. [4] gives six other factors that impact labourers' work-from-home performance during the COVID-19 epidemic: (1) demographic characteristics; (2) facilities; (3) working time; (4) the working environment at home; (5) the balance between rest time and work; and (6) health status. Finally, according to research by Vyas and Butakhieo [5], two primary factors influence labourers' work productivity during the COVID-19 pandemic: (1) organizational factors and (2) personal and family factors. In summary, across these studies the authors found 15 factors affecting the work performance of labourers working from home. Therefore, this article studies the factors affecting the job performance of labourers when working from home in the context of the COVID-19 epidemic in Ho Chi Minh City. The article uses optimum selection by Bayesian consideration for labourers' work performance at home in the COVID-19 epidemic.
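For readers who wish to reproduce the idea, the BIC-based selection named in the title can be sketched as follows. This is an illustrative sketch only, not the authors' script: the survey DataFrame, its column names (WFB, LEA, WAH, GEN, COV, FAC for the six constructs and WP for work performance), and the use of statsmodels are all assumptions.

from itertools import combinations

import statsmodels.api as sm

def best_bic_model(df, outcome, predictors):
    # Fit an OLS model for every non-empty predictor subset; keep the lowest BIC.
    best_bic, best_subset = float("inf"), None
    for k in range(1, len(predictors) + 1):
        for subset in combinations(predictors, k):
            X = sm.add_constant(df[list(subset)])
            model = sm.OLS(df[outcome], X).fit()
            if model.bic < best_bic:
                best_bic, best_subset = model.bic, subset
    return best_bic, best_subset

# Hypothetical usage on a survey DataFrame:
# bic, chosen = best_bic_model(survey, "WP", ["WFB", "LEA", "WAH", "GEN", "COV", "FAC"])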

2 Literature Review

2.1 Work Performance (WP)

According to Hoxha et al. [6], job performance is defined as the expected value that an individual achieves in a standard period; to achieve that value within the standard time, the individual must carry out many different discrete behaviours. Researchers in Vietnam, however, often compare efficiency with performance to make the distinction easier to understand: efficiency is doing the right thing. Work efficiency can be understood as the ability to avoid wasting time, labour, and money while still achieving the goal; that is, efficiency means doing the work to achieve the goal with the most reasonable resources, which requires the individual to work properly and with a plan. According to Rotundo and Sackett [7] and Viswesvaran and Ones [8], work performance is defined as the measurable activities and behaviours in employee decisions through which employees join, associate with, or support colleagues to achieve the common goals of the organization. According to Motowidlo and Van Scotter [9], job performance includes the following: (1) joining and staying in the organization; (2) meeting or exceeding the performance standards required by the organizational position; (3) performing work creatively and spontaneously beyond the specified requirements of the job; and (4) cooperating with other members, protecting the organization from harm, making suggestions for improvement, pursuing self-development, and representing the organization to customers. In short, job performance includes completing the assigned tasks well, performing the responsibilities specified in the job description, and performing the tasks expected by the superior when working from home.

2.2 Research Hypothesis and Research Model

2.2.1 The Relationship Between Work–Family Balance (WFB) and Labourer Job Performance (LJP)

According to Vuong [1], “the balance between work and family is defined as job satisfaction and family satisfaction. Separation of work and family can undermine both company goals and employee goals, reduce work performance, and negatively affect family life. Reorganizing the way we work toward harmony between work and home can yield positive and win–win results.” Vyas and Butakhieo [5] argue that working from home impacts flexibility and work commitment because it gives employees more flexibility in how they finish their work without being required to work during office hours; they can therefore arrange their time between completing work and taking care of their family. Previous related studies have all shown that the balance between work and family is positively related to the work performance of labourers [1, 5]. Therefore, hypothesis H1 is stated as follows: H1: Work–family balance has a positive relationship with labourer job performance.

2.2.2 The Relationship Between the Real Leadership of the Leader (LEA) and Labourer Job Performance (LJP)

According to Vyas and Butakhieo [5], when working from home, labourers need support from the leader: facility costs relate to the quality of home working, training in and use of science and technology, and organizational communication. Organizational trust and managers' trust are correlated with labourers' work-from-home performance, which shows that trust in the organization, colleagues, and managers is necessary for remote work; Daraba et al. [3] reach a similar conclusion. Therefore, hypothesis H2 is stated as follows: H2: Real leadership has a positive impact on labourer job performance.

2.2.3 The Relationship Between Working at Home (WAT) and Labourer Job Performance (LJP)

According to Vuong [1], “when labourers work from home, they do not need to waste time, money, and energy going to the office or going on a business trip. They also prefer not to have to dress up during work hours, which allows them to better match the work itself and their actual personality. Remote workers are less stressed and therefore less likely to change jobs; they are also more satisfied with their day-to-day work activities, improving employee performance.” Guler et al. [4] note that working from home means employees are not limited in their working or resting time: they can work or rest whenever they like and choose a workplace they prefer, such as a bedroom, balcony, study, living room, or anywhere they feel comfortable, which makes labourers more comfortable at work and also increases productivity. Previous studies have shown that working from home is directly proportional to employees' work performance [1]. Therefore, hypothesis H3 is stated as follows: H3: Working at home has a positive impact on labourer job performance.

2.2.4 The Relationship Between Labourer Gender (LG) and Labourer Job Performance (LJP)

According to Daraba et al. [3], the COVID-19 epidemic has caused many employees to work from home. Although this condition can stop COVID-19 from spreading, it can also make it difficult to separate work from family life, and burnout can result from the interaction of the personal and professional spheres. Due to gender stereotypes and the traditional responsibilities of housewives, women frequently struggle to manage work and family obligations; regardless of their employment status, women handle the majority of household tasks. Notably, women are more likely than men to experience resource loss as a result of work overload when working from home. Women are typically the primary caretakers and, as a result, are more prone to an unbalanced work–life balance. Previous related research has revealed that labourer gender has a positive connection with employee job performance [3]. Therefore, hypothesis H4 is stated as follows: H4: Labourer's gender has an impact on labourer job performance.

2.2.5 The Relationship Between Concerns About the COVID-19 Pandemic (CCP) and Labourer Job Performance (LJP)

According to Vuong [1], working from home during the COVID-19 pandemic can reduce negative emotions associated with health threats and uncertainties. Therefore, when looking at the job satisfaction of employees working from home, the authors suggest that concern about COVID-19 can moderate this positive relationship. Working from home during a pandemic can reduce the risk of illness, increase the time spent with family members, reduce perceptions of loneliness and depression, and avoid face-to-face contact; thus, continuing to chat with colleagues about the pandemic reinforces the link between working from home and job satisfaction. Previous related research has shown that concerns about the COVID-19 pandemic move in the same direction as labourer job performance [1]. Therefore, hypothesis H5 is expressed as follows: H5: Concern about the COVID-19 pandemic has a positive impact on labourer job performance.

2.2.6 The Relationship Between Facilities (FAC) and Labourer Job Performance (LJP)

According to Guler et al. [4], “when working from home, participants often use their own desktop computers, laptops, and tablets and meet their own needs. They use dining tables, study desks, height-adjustable desks, laptop stands, coffee tables, etc. as workplaces. As a result, they feel uncomfortable when working from home because the facilities are not like those in the company, which means employees cannot concentrate at work, affecting labourer performance.” Previous related studies have shown that facilities move in the same direction as labourer job performance [4]. Therefore, hypothesis H6 is stated as follows: H6: Facilities influence labourer job performance. All hypotheses and factors are shown in Fig. 1.


Fig. 1 Research model

3 Methodology

3.1 Sample Size

Tabachnick and Fidell [10] state that N ≥ 8m + 50 is the minimum sample size for an optimal regression analysis, where m is the number of independent variables and N is the sample size. With m = 6, this formula gives a minimum of 8 × 6 + 50 = 98 samples for the survey. The authors investigated staff working from home during the COVID-19 epidemic in Ho Chi Minh City. Research information was collected by submitting Google Forms and distributing survey forms directly to labourers who have been working online in Ho Chi Minh City. Of the 250 questionnaires sent out, 240 were returned, of which 211 were valid. The official survey was conducted online from May 15 to July 20, 2022. Respondents were selected by the convenience method, giving an official sample size of 211 labourers; the sample characteristics and statistics are shown in Table 1.

3.2 Bayes' Theorem

Let H be the hypothesis and D denote the actual data obtained from the collection. Bayes' theorem [17] states that the probability of H given that D occurs, denoted P(H|D), is

P(H|D) = P(H) × P(D|H) / P(D)    (1)


Table 1 Statistics of sample

Characteristics | | Amount | Percent (%)
Sex | Male | 88 | 41.7
 | Female | 123 | 58.3
Age | 18–25 | 130 | 61.6
 | 26–32 | 55 | 26.1
 | 32–40 | 8 | 3.8
 | >40 | 18 | 8.5
Income (VND to USD at https://vi.coinmill.com) | 639.93 USD | 38 | 18.0
Location in HCM City | District 1 | 2 | 0.9
 | District 2 | 5 | 2.4
 | District 6 | 4 | 1.9
 | District 7 | 5 | 2.4
 | District 8 | 4 | 1.9
 | District 9 | 4 | 1.9
 | District 11 | 1 | 0.5
 | District 12 | 18 | 8.5
 | Tan Binh | 18 | 8.5
 | Binh Tan | 9 | 4.3
 | Binh Thanh | 18 | 8.5
 | Go Vap | 80 | 37.9
 | Hoc Mon | 5 | 2.4
 | Phu Nhuan | 4 | 1.9
 | Tan Phu | 7 | 3.3
 | Thu Duc | 27 | 12.8

P(H) is the probability of the hypothesis before collecting the data (the prior); P(D|H) is the probability that the data occur given that hypothesis H is correct; and P(D) is the marginal distribution of the data in Eq. (1) [18].
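To make Eq. (1) concrete, the following minimal sketch evaluates a posterior probability; the input probabilities are hypothetical and are not taken from the survey.

```python
# Minimal numeric illustration of Eq. (1); all inputs are hypothetical.
def bayes_posterior(prior, likelihood, evidence):
    """P(H|D) = P(H) * P(D|H) / P(D)."""
    return prior * likelihood / evidence

p_h = 0.30          # P(H): prior probability of the hypothesis
p_d_given_h = 0.80  # P(D|H): probability of the data if H is true
p_d = 0.50          # P(D): marginal probability of the data

print(bayes_posterior(p_h, p_d_given_h, p_d))  # -> 0.48
```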

3.3 Bayes' Inference

According to Gelman and Shalizi [19], Bayesian inference based on Bayes' theorem combines three types of information: the information we want to know (the posterior), the information we already know (the prior), and the practical information from the data (the likelihood). Here, "information" can be understood as a probability or a distribution, as in Eq. 2. Bayesian inference can therefore be generalized as follows:

Posterior information = Prior information × Likelihood    (2)

3.4 Selection of the Model by Bayesian Model Averaging

Usually, to define a model for a research problem, one specifies only a single model (including all the collected variables), estimates it, and then draws inferences from it, as if that model were the one most suitable for the data. This approach ignores the other models that can be built from subsets of the collected variables, some of which may fit better. It is therefore necessary to survey and compare the candidate models of a research problem to find the model actually most suitable for the data (which can also be interpreted as the "best" model). The Bayesian model selection method used here is Bayesian Model Averaging (BMA), which uses posterior probabilities and the BIC index to score the models. The advantage of BMA is its ability to take model uncertainty into account by considering all models of the study.
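A minimal sketch of this kind of all-subsets search is given below: it scores every subset of predictors by BIC and converts the scores into posterior model probabilities. The variable names are placeholders for the survey columns, and the code is illustrative rather than the exact R/BMA routine used in the study.

```python
# Illustrative all-subsets BIC search in the spirit of BMA.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

def bic_model_search(y, X):
    """y: response vector; X: pandas DataFrame of candidate predictors."""
    fits = []
    names = list(X.columns)
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            design = sm.add_constant(X[list(subset)])
            res = sm.OLS(y, design).fit()
            fits.append((subset, res.bic, res.rsquared))
    fits.sort(key=lambda f: f[1])            # lower BIC = better model
    bics = np.array([f[1] for f in fits])
    w = np.exp(-(bics - bics.min()) / 2)     # posterior weight ~ exp(-BIC/2)
    probs = w / w.sum()
    return [(f[0], f[1], f[2], p) for f, p in zip(fits, probs)]
```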

4 Results

4.1 Reliability Test

Cronbach's Alpha is used to determine the reliability and quality of the observed variables for each factor. The test determines whether there is a close relationship of compatibility and concordance among the items belonging to the same major factor. The reliability of a factor increases with its Cronbach's Alpha coefficient, interpreted as follows: 0.8 to 1, very good scale; 0.7 to 0.8, good scale; 0.6 and above, qualified scale. An item is considered acceptable when its Corrected Item-Total Correlation (CITC) is greater than 0.3 [11]. The coefficient is computed as

α = [k / (k − 1)] × (1 − Σσ²(xᵢ) / σ²ₓ)

where k is the number of items, σ²(xᵢ) is the variance of item i, and σ²ₓ is the variance of the total score.

Table 2 shows that the Cronbach's Alpha coefficients of Work–family balance (WFB), Real leadership of the leader (LEA), Working at home (WAT), Concerns about the COVID-19 pandemic (CCP), and Facilities (FAC) for Labourer job performance (LJP) are all greater than 0.7. The Cronbach's Alpha coefficient of Labourer gender (LG), at 0.572, shows that this factor is not reliable, so it is rejected. According to Table 2, all corrected item-total correlations are above 0.3, demonstrating a high correlation between the items within each factor and their contribution to an accurate measurement of each component. Since all retained items met the criteria that Cronbach's Alpha be larger than 0.6 and the CITC greater than 0.3, all of them were included in the subsequent test step.
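The formula above translates directly into code; the sketch below computes α for one factor from an (n_respondents × k_items) response matrix (hypothetical input, not the survey data).

```python
# Cronbach's alpha for one factor; `items` is an (n x k) response matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sigma^2(x_i) per item
    total_var = items.sum(axis=1).var(ddof=1)  # sigma^2_x of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```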

4.2 BIC Algorithm

Many methods for finding association rules in transaction databases have been developed and thoroughly investigated, and additional mining algorithms have offered further capabilities, including incremental updating, generalized and multilevel rule mining, quantitative rule mining, multidimensional rule mining, constraint-based rule mining, mining with multiple minimum supports, mining associations among correlated or infrequent items, and mining of temporal associations [12]. Big data analytics and deep learning are two data science subfields attracting much attention, and Big Data has become more significant as people and organizations gather the vast amounts of data needed by deep learning algorithms for Labourer Job Performance (LJP) [13]. The R program used the BIC (Bayesian Information Criterion) to determine which model was best. BIC has long been employed for model selection and can be used with a regression model that estimates one or more dependent variables from one or more independent variables [14]. BIC is a significant and practical metric for selecting a complete yet parsimonious model: the model with the lower BIC is selected [14–16]. The R report displays each stage of the search for the ideal model, and Table 3 lists the top four models chosen by BIC. The models in Table 3 contain one dependent variable and five independent variables. Concerns about the COVID-19 pandemic (CCP) and Facilities (FAC) each have a posterior inclusion probability of 100%, while Work–family balance (WFB), Real leadership of the leader (LEA), and Working at home (WAT) have probabilities of 14.5%, 7.5%, and 5.4%, respectively.

4.3 Model Evaluation

Table 4 shows that Model 1 is the best choice because its BIC (−133.48007) is the minimum. Concerns about the COVID-19 pandemic (CCP) and Facilities (FAC) together explain 49.5% of the variance in Labourer job performance (LJP) (R² = 0.495) in Table 4. According to BIC, Model 1 is the best option, and the posterior probability of this two-variable model is 72.7% (post prob = 0.727).


Table 2 Reliability

Factor (α) | Item | Code | CITC
Work–family balance (WFB) (α = 0.889) | Separation of work and family undermines company and employee goals | WFB1 | 0.767
 | Harmony between work and family can bring positive and win–win results | WFB2 | 0.793
 | Flexible time to arrange work and take care of family | WFB3 | 0.787
Real leadership of the leader (LEA) (α = 0.870) | Need material support from the leader | LEA1 | 0.815
 | Leaders in training and using science and technology | LEA2 | 0.802
 | Leaders support communication with the organization | LEA3 | 0.436
 | Authentic leaders have a positive impact on labourer performance | LEA4 | 0.776
 | Having supportive leaders that are fair, transparent, and ethical can improve labourer performance | LEA5 | 0.795
Working at home (WAT) (α = 0.773) | Working from home saves time and travel costs | WAT1 | 0.532
 | Working from home is less stressful than being in the office | WAT2 | 0.512
 | Working from home makes you more satisfied with daily activities | WAT3 | 0.652
 | Working from home helps labourers to be flexible when working and resting | WAT4 | 0.624
Labourer gender (LG) (α = 0.572) | Women do most of the household chores regardless of their working status | LG1 | 0.465
 | Women are more likely to lose resources due to work overload than men | LG2 | 0.313
 | Women suffer from an unbalanced work–family life | LG3 | 0.371
Concerns about the COVID-19 pandemic (CCP) (α = 0.905) | Helps reduce negative emotions associated with health threats and uncertainties | CCP1 | 0.847
 | The COVID-19 emergency makes you feel scared | CCP2 | 0.835
 | Working from home makes you feel safe | CCP3 | 0.757
Facilities (FAC) (α = 0.709) | Technical means to meet the needs of the user | FAC1 | 0.516
 | Internet connection is stable | FAC2 | 0.586
 | Full office equipment | FAC3 | 0.487
Labourer job performance (LJP) (α = 0.749) | When working from home, you do well to complete the assigned tasks | LJP1 | 0.611
 | When working, you perform the responsibilities specified in the job description | LJP2 | 0.632
 | When working, you perform the tasks expected by your superiors | LJP3 | 0.511

Table 3 BIC model selection

LJP | Probability (%) | SD | Model 1 | Model 2 | Model 3 | Model 4
Intercept | 100.0 | 0.212109 | 1.24026 | 1.17128 | 1.32667 | 1.29504
WFB | 14.5 | 0.01696 | | 0.03859 | |
LEA | 7.5 | 0.009867 | | | −0.02436 |
WAT | 5.4 | 0.010429 | | | | −0.01555
CCP | 100.0 | 0.048965 | 0.39703 | 0.39298 | 0.39545 | 0.39731
FAC | 100.0 | 0.055688 | 0.29390 | 0.28735 | 0.29374 | 0.29443

The analysis above shows that the regression equation below is statistically significant:

LJP = 1.24026 + 0.39703 CCP + 0.29390 FAC

Coded: Labourer job performance (LJP), Concerns about the COVID-19 pandemic (CCP), and Facilities (FAC).

Table 4 Model test

Model | nVar | R² | BIC | Post prob
Model 1 | 2 | 0.495 | −133.48007 | 0.727
Model 2 | 3 | 0.500 | −130.25757 | 0.145
Model 3 | 3 | 0.497 | −128.92787 | 0.075
Model 4 | 3 | 0.495 | −128.26616 | 0.054
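As a simple usage sketch, the selected equation can be applied to a respondent's factor scores on the CCP and FAC scales; the scores below are hypothetical.

```python
# Applying the selected Model 1 regression to hypothetical scale scores.
def predict_ljp(ccp, fac):
    return 1.24026 + 0.39703 * ccp + 0.29390 * fac

print(predict_ljp(4.0, 3.5))  # -> 3.85703
```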

The BIC itself is computed as

BIC = −2 × LL + log(N) × k

where LL is the maximized log-likelihood of the model, N is the sample size, and k is the number of parameters.
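Converting the BIC values in Table 4 into posterior model probabilities via pᵢ ∝ exp(−BICᵢ/2) reproduces the "Post prob" column, as the following sketch shows:

```python
# Posterior model probabilities from the Table 4 BIC values.
import numpy as np

bic = np.array([-133.48007, -130.25757, -128.92787, -128.26616])
w = np.exp(-(bic - bic.min()) / 2)   # shift by the minimum for stability
print(np.round(w / w.sum(), 3))     # -> [0.727 0.145 0.075 0.054]
```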

5 Conclusions

This chapter applies optimum model selection with the BIC algorithm to the job performance (LJP) of labourers working in Ho Chi Minh City, Vietnam. The relationship between the dependent and independent variables when working from home during the COVID-19 epidemic can be clarified by the BIC algorithm. Finally, the differences among the factors affecting the work efficiency of labourers working from home during the COVID-19 pandemic in Ho Chi Minh City are analysed and evaluated. Through analysis of direct survey data from 211 employees drawn by non-probability sampling, the scales, theoretical models, and research hypotheses were tested. The results on labourer performance are consistent with the characteristics and operating situation in HCMC. Two factors affect labourer job performance: Concerns about the COVID-19 epidemic (CCP) and Facilities (FAC).

5.1 Implications of Concerns About the COVID-19 Pandemic (CCP)

According to the results, employees rated concern about the COVID-19 pandemic as the factor with the highest level of influence, with a β value of 0.39703. The descriptive analysis shows that workers care strongly about two aspects of working from home: it helps them feel safe and helps reduce negative emotions related to health threats and instability. Many workers nonetheless remain anxious because the COVID-19 emergency makes them feel scared. Some suggested solutions follow. In the specific case of a pandemic, to protect the health and safety of workers, organizations should switch to working from home immediately when comparable outbreaks occur, safeguarding staff and helping labourers feel more secure at work. Organizations should heed labourers' concerns about COVID-19 by providing clear information about the procedures the company has taken to prevent the spread of the virus at its locations and about personal protective measures. They should provide equipment to employees and avoid giving conflicting messages that would increase the fear of working from home. In addition, to prevent employees working from home from feeling isolated, individuals and businesses can use social tools and channels such as Zalo, Zoom, and Google Meet and organize online meetings to maintain social contact between labourers and supervisors and to support labourers facing difficult problems at work.


5.2 Implications for Facilities (FAC)

The BIC analysis shows that facilities also have a powerful impact, with a β value of 0.29390. The descriptive analysis shows that workers care strongly about two aspects: complete office equipment and a stable Internet connection, while many remain concerned about whether their technical means meet their needs. Some solutions proposed by the authors follow. Under today's work-from-home conditions, the Internet is an essential requirement, so companies need to develop policies that allow employees to be productive without network interruptions, since interruptions affect work efficiency and progress and can prevent employees from completing assigned work on time. Businesses should develop work-from-home processes and provide support in the event of technical issues with remotely connected devices, or administrative and technical issues that require a response, so that labourers can request timely help to keep work on schedule, with supporting feedback from colleagues or management. Companies should also provide equipment, and training in how to use it, for working from home, reducing concerns about technical failures and meeting labourers' needs for adequate office equipment; for example, the business could set a day each month to distribute stationery items to labourers, or convert this into a cash allowance, saving time and keeping labourers safe by sparing them trips to the company or door-to-door deliveries. Creating the most favourable working conditions lets labourers work conveniently and efficiently, makes employees feel safe and cared for even when working from home, and in turn increases dedication and work efficiency.

Limitations

The chapter has contributed to understanding the factors affecting the work performance of employees working from home in the context of the COVID-19 pandemic in Ho Chi Minh City. However, certain limitations remain. First, because of limited survey resources and non-probability sampling, the survey population is still narrow and not representative; future research may expand the survey to a more diverse range of occupations, regions, and ages, making the findings more representative. Second, the authors have not yet found secondary data to clarify the current status of employee performance. Third, the regression analysis yields an adjusted R² of 0.495, meaning the model explains 49.5% of the variation in the dependent variable; the research model is therefore still incomplete, as 50.5% of the variation is due to other factors affecting how employees perform when working from home during the epidemic.


References

1. Vuong BN (2021) Impact of working from home on work performance in the context of the COVID-19 pandemic: empirical evidence in Ho Chi Minh City (Vietnamese). Int J Manag Econ 139:120–140
2. Van Der Lippe T, Lippényi Z (2020) Co-workers working from home and individual and team performance. New Technol Work Employ 35(1):60–79
3. Daraba D, Wirawan H, Salam R, Faisal M (2021) Working from home during the corona pandemic: investigating the role of authentic leadership, psychological capital, and gender on employee performance. Cogent Bus Manag 8(1):1885573
4. Guler MA, Guler K, Gulec MG, Ozdoglar E (2021) Working from home during a pandemic: investigation of the impact of COVID-19 on employee health and productivity. J Occup Environ Med 63(9):731–741
5. Vyas L, Butakhieo N (2020) The impact of working from home during COVID-19 on work and life domains: an exploratory study on Hong Kong. Policy Des Pract 4(1):59–76. https://doi.org/10.1080/25741292.2020
6. Hoxha I, Fejza A, Aliu M, Jüni P, Goodman DC (2019) Health system factors and caesarean sections in Kosovo: a cross-sectional study. BMJ Open 9(4):e026702
7. Rotundo M, Sackett PR (2002) The relative importance of task, citizenship, and counterproductive performance to global ratings of job performance: a policy-capturing approach. J Appl Psychol 87(1):66
8. Viswesvaran C, Ones DS (2000) Perspectives on models of job performance. Int J Sel Assess 8(4):216–226
9. Motowidlo SJ, Van Scotter JR (1994) Evidence that task performance should be distinguished from contextual performance. J Appl Psychol 79(4):475
10. Tabachnick B, Fidell L (2001) Using multivariate statistics, 4th edn. HarperCollins, New York, pp 139–179
11. Nunnally JC (1994) Psychometric theory, 3rd edn. Tata McGraw-Hill Education
12. Gharib TF, Nassar H, Taha M, Abraham A (2010) An efficient algorithm for incremental mining of temporal association rules. Data Knowl Eng 69(8):800–815
13. Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E (2015) Deep learning applications and challenges in big data analytics. J Big Data 2(1):1–21
14. Raftery AE, Madigan D, Hoeting JA (1997) Bayesian model averaging for linear regression models. J Am Stat Assoc 92(437):179–191
15. Kaplan D (2021) On the quantification of model uncertainty: a Bayesian perspective. Psychometrika 86(1):215–238
16. Raftery AE (1995) Bayesian model selection in social research. Sociological Methodology, pp 111–163
17. Bayes T (1763) LII. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, FRS, communicated by Mr. Price, in a letter to John Canton, AMFRS. Philos Trans Royal Soc London 370–418
18. Thang LD (2021) The Bayesian statistical application research analyzes the willingness to join in area yield index coffee insurance of farmers in Dak Lak province. University of Economics, Ho Chi Minh City
19. Gelman A, Shalizi CR (2013) Philosophy and the practice of Bayesian statistics. Br J Math Stat Psychol 66:8–38

Comparative Analysis of Human Hand Gesture Recognition in Real-Time Healthcare Applications Archita Dhande, Shamla Mantri, and Himangi Pande

Abstract There is strong evidence that future human–computer interfaces (HCI) will let people communicate with various sensor-based devices more effortlessly and intuitively, creating a more human-like interaction. Human–computer interaction has advanced as a result, and new technologies have been created that allow humans to connect with computers in increasingly intuitive and natural ways. Computer systems that use these technologies exhibit improved efficiency, speed, power, and realism. Real-time gesture interpretation is utilized by a vision-based system to modify items in a medical data visualization environment. This study proposes a gesture recognition technique that uses a sequential convolutional neural network model built from a collection of near-infrared pictures obtained by a Leap Motion sensor. The CNN model is made up of 13 different layers. The study also discusses numerous hand gesture recognition applications in the field of healthcare as well as the methodologies employed by these applications. Keywords Human action recognition · Hand gestures · Human–computer interactions · Convolution neural network

1 Introduction

Recent advancements in computer hardware and software have offered users a value-added service. Physical movements serve as effective forms of communication in daily interactions and can effectively convey a wide range of information and emotions. One can convey a variety of messages by swaying a hand from side to side, such as "hello" or "goodbye". Nevertheless, the majority of human–computer conversations do not make full use of physical gestures. Recognition of hand gestures is a crucial and fundamental challenge in the field of computer vision.


The rise of computers and the accessibility of new technologies motivated the invention of innovative input devices such as the Kinect and Leap Motion. These devices can record human gestures, creating a new interface for interaction between humans and machines, and are used in a wide range of fields, including robotics, medicine, sign language interpretation, computer graphics, and augmented reality. According to [1], a 12-layer convolutional neural network provides the highest test accuracy in comparison to techniques such as logistic regression, KNN, and SVM. For this reason, we implemented the Sequential model from the CNN technique with the Leap Motion hand image dataset. The two main kinds of gesture recognition technologies are static and dynamic. Static gestures need only a single image to be processed at the classifier's input, which has the benefit of being less computationally expensive; dynamic gestures require image sequence processing and more advanced gesture detection techniques. In this study, we employed ten gestures, with segmentation methods and a convolutional neural network (CNN) for classification. Using the suggested technique, we show that it is feasible to classify static gestures with great results using a straightforward convolutional neural network design. Hand gesture recognition has a wide range of medical applications in modern society, from simple sign recognition to robotic surgery, so research on this strategy is important for developing new gesture detection technologies.

2 Related Work/Literature Survey

The literature review offers many strategies and methods for putting hand gesture recognition into practice, giving insight into the benefits and drawbacks of the various approaches. The previously employed techniques that attracted notice were as follows.

Smith et al. [2] examine innovative data collection and training methods for convolutional neural networks (CNNs) and improved classification accuracy of stationary (static) hand gestures using frequency-modulated continuous-wave (FMCW) millimeter-wave radars. They propose a technique consisting of a radar mounted over a 2D mechanical scanner for collecting large and diverse radar datasets; however, the CNN accuracy and robustness could be improved by capturing more sterile static and dynamic gestures.

Chen et al. [3] studied a compact CNN driven by surface electromyography signals. They created a convolutional neural network (CNN) model with fewer parameters that increases classification accuracy while being more compact, and validated it on the Myo Dataset and the Ninapro DB5 Dataset. The elimination of transition time between subsequent gestures could have been discussed in greater depth, so that readers would have a clear understanding of how to condense their CNNs to enhance accuracy.

Breland et al. [4] present the design of a complete end-to-end embedded system capable of correctly identifying hand gestures in 32 × 32 pixel low-resolution thermal pictures. The investigation used a lightweight CNN model on thermal pictures of extremely poor quality (size: 32 × 32); no attempt was made to experiment with intricate backgrounds.

Cheng et al. [5] wrote a survey paper on 3D hand gesture recognition, giving a comprehensive summary of the most recent advancements in the field. They discuss several applications for 3D hand gesture recognition and related subjects, investigating significant advances in 3D depth sensors, datasets, 3D hand modeling, hand trajectory gesture detection, continuous hand gesture recognition, associated applications, and representative systems.

van Amsterdam et al. [6] examine state-of-the-art strategies for automatically identifying fine-grained gestures in robotic surgery, as well as unsolved difficulties and potential future research topics, with a focus on current data-driven approaches. Their publication provides a thorough study and critical assessment of identifying surgical motions from information gathered intraoperatively.

Zholshiyeva et al. [7] conclude that artificial neural networks, which use state-of-the-art methods and designs for hand gesture recognition and identification, are the most effective classifiers for comprehending Kazakh Sign Language. Their article reviewed about 70 references from 2012 to 2021 to find prevalent methods for hand gesture detection.

Jiang et al. [8] conclude that wearable hand gesture recognition devices have great potential in a variety of human–computer interface areas, including rehabilitation, prosthesis control, and sign language identification, and are essential for VR/AR engagement. They also examine in detail the distinction between human–human and human–machine communication, as well as potential application fields based on hand function.

Oudah et al. [9] wrote a review of methods for computer-vision-based hand gesture recognition, presenting a comprehensive general analysis of hand gesture techniques along with a brief discussion of prospective applications. Several approaches for gesture detection are presented, including fuzzy c-means clustering, neural networks, and the histogram of orientation for feature representation; problems with gesture recognition are examined in detail, together with updated recognition techniques.

Mahmoud et al. [10] introduced a new technique whereby the performance of remote monitoring healthcare systems can be enhanced with the help of dependable gesture recognition (DGR); functional filtering of studies and activation-dependent gradient-descent unbiasing are employed to clear up confusion in gesture recognition.

Rehman et al. [11] studied a contactless platform designed for early COVID-19 indication diagnosis, monitoring human motion, and SDR usage. The capabilities of the device are investigated by monitoring breathing rate utilizing zero crossings, peak detection, and Fourier processing.

Based on existing research on HGR data and models, this research paper aims to explore healthcare applications and their models. The paper also explores the impact of the parameters used to pre-process the data and how they improve accuracy.

3 Details of Technology

Gesture recognition provides real-time data so that a computer can carry out the user's instructions. Since gestures can be tracked and understood by Leap Motion sensors on a device, they are used as the primary method of data input. Most gesture detection systems integrate infrared cameras, 3D depth-sensing cameras, and machine learning algorithms; in this case, however, we use a preset dataset from 2D depth sensing. Machine learning algorithms are taught to discern between the positions of the hands and fingers using labeled depth images of hands. Gesture recognition consists of three basic levels (see Fig. 1).

Fig. 1 HGR system


3.1 Detection

After the device recognizes hands and fingers with the aid of a dataset, a machine learning method splits the image to locate hand edges and positions.

3.1.1 Image Frame Acquisition

Image frame acquisition is the process of getting an image to be processed from a source, typically a hardware source such as a webcam or camera. It comes first in the workflow because, without a picture, the subsequent preparation steps are not possible. Here, the Leap Motion controller uses infrared stereo cameras as its tracking sensors.

3.1.2 Hand Segmentation

Hand segmentation entails breaking a digital image up into multiple segments. Segmentation is used to simplify or alter how an image is represented so that it may be more easily analyzed. Hand segmentation extracts the hand regions from the provided photographs and largely determines whether the gesture recognition process succeeds. To segment the images, the pipeline first detects the colour of the skin, then removes the background, extracts the hand region, applies an affine transformation, and finally normalizes the images.
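The sketch below illustrates one plausible version of such a pipeline with OpenCV; the HSV thresholds, morphology settings, and output size are illustrative assumptions, not the authors' exact parameters, and the affine-transformation step is omitted for brevity.

```python
# Illustrative skin-colour hand segmentation: threshold, clean up, crop, normalize.
import cv2
import numpy as np

def segment_hand(bgr, out_size=(64, 64)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range in HSV; needs tuning per illumination and skin tone
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 160, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    ys, xs = np.where(mask > 0)
    if xs.size == 0:
        return None                       # no hand region detected
    hand = bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    hand = cv2.resize(hand, out_size)     # normalize size for the classifier
    return hand.astype(np.float32) / 255.0
```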

3.2 Tracking

The device captures motion frame by frame in order to track every movement and provide exact input for data processing.

3.2.1 Hand Tracking

The method of hand tracking involves a computer identifying a hand from an input image and retaining information on the hand’s movement and position. We can create a variety of systems that use hand movement and orientation as their input due to hand tracking.

3.2.2 Feature Extraction

A feature is a piece of data extracted from the relevant application to carry out the computation. To extract gesture features, data in recognition systems can be gathered either through separate devices, such as instrumented wearable sensors [12], or by employing a virtual hand image [13]. Compared to sensor-based systems, which require the user to wear a specific device linked to the computer and which inhibit the naturalness of communication between the user and the computer, vision-based approaches are seen as simple, natural, and less expensive [14]. Numerous recent reviews have covered various methods and techniques for gesture recognition [14–17].

3.3 Recognition

The system tries to find patterns in the data that has been collected. When it finds a match and recognizes the gesture, the system performs the action associated with that motion. Recognition relies on feature extraction followed by classification; after the features are extracted, any classification method can be applied. CNNs, a class of deep learning neural networks, include fully connected layers that can be used for classification and can extract features on the fly. By combining these two phases, a CNN minimizes memory requirements and computational complexity while improving performance, and it can also capture the complex, non-linear relationships between images. We therefore tackle the problem with a CNN-based technique.

4 Analytical and Experimental Work

The experimental work for hand gesture recognition was performed in Google Colaboratory and involved data collection, data pre-processing, choosing an appropriate deep learning model, building and testing the model, and lastly evaluating the model to forecast its accuracy on the test dataset.

4.1 Dataset

The Hand Gesture Recognition Database is used in this study. It contains 10 folders labeled 00 to 09, one per subject, with subfolders for each gesture, for a total of 20,000 pictures of various hands and hand movements from 5 male and 5 female participants. The classification for every hand gesture is shown in Table 1. The infrared photos of a single individual are contained in each root folder (00, 01, etc.), and each folder name identifies its subject. After pre-processing the data, the input data were shuffled to check that all the categories are accepted as input. Since each of the subfolders in 00 contains 200 images, we load one image of each gesture (see Fig. 2).

Table 1 Classification for every hand gesture

Hand gesture | Label
Palm (horizontal) | 01
L | 02
Fist (horizontal) | 03
Fist (vertical) | 04
Thumbs up | 05
Index | 06
OK | 07
Palm (vertical) | 08
C | 09
Thumbs down | 10

The first thing to note is that this is not a difficult classification problem: there is no background to be concerned with, the gestures are quite distinct, and each gesture generally occupies only about 25% of the image, fitting comfortably inside a square box.
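A sketch of loading and shuffling such a subject/gesture folder layout is shown below; the root path, folder-name convention (e.g. '01_palm'), and image size are assumptions for illustration.

```python
# Load and shuffle a leapGestRecog-style dataset (hypothetical layout).
import os, random
import cv2
import numpy as np

def load_dataset(root="leapGestRecog", img_size=(64, 64)):
    data = []
    for subject in sorted(os.listdir(root)):          # '00' .. '09'
        subj_dir = os.path.join(root, subject)
        for gesture in sorted(os.listdir(subj_dir)):  # e.g. '01_palm'
            label = int(gesture[:2]) - 1              # '01' -> class 0
            g_dir = os.path.join(subj_dir, gesture)
            for fname in os.listdir(g_dir):
                img = cv2.imread(os.path.join(g_dir, fname),
                                 cv2.IMREAD_GRAYSCALE)
                data.append((cv2.resize(img, img_size), label))
    random.shuffle(data)                              # mix all categories
    X = np.array([d[0] for d in data], np.float32)[..., None] / 255.0
    y = np.array([d[1] for d in data])
    return X, y
```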

Fig. 2 Shuffled input data


Fig. 3 Example of convolutional neural network. https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148

4.2 Model Training

This paper studies hand gesture recognition based on a CNN Sequential model using the Hand Gesture Recognition dataset. To make the model's construction easier to understand, we use the idea of linear regression: a straightforward linear model can be symbolized by the equation

y = ax + b    (1)

We are looking for the parameters a and b, which stand for slope and intercept, respectively. By determining the optimal parameters, we can forecast y for any given value of x. Convolutional Neural Networks extend the same concept considerably. When given a picture as input, a convolutional neural network (ConvNet/CNN) can differentiate between different objects and components by assigning them learnable weights and biases. A ConvNet needs far less pre-processing than other classification techniques: while filters in older approaches are hand-engineered, with sufficient training CNNs can learn these filters or features themselves. With the input layer represented as x and the output layer as y, following Fig. 3, the analogy with the linear regression equation holds. Different models have different hidden layers, all employed to learn the model's parameters; although they each serve a different purpose, they all strive to find the ideal "slope and intercept".

4.3 Construction of Model

There are several algorithms in machine learning for classification, but we use deep learning with the aid of convolutional neural networks (CNNs) and Keras (an open-source neural network library written in Python). Keras supports the CNN algorithm, is simple and quick to use, and runs well on both CPU and GPU. CNNs apply a succession of filters to an image's raw pixel data to extract and learn higher-level features that the model can use for classification. The sequential CNN architecture is constructed as follows (see Fig. 4).

Fig. 4 Construction of sequential CNN model

Sequential: a linear stack of layers used to build deep learning models; a Sequential model is created and layers are added to it incrementally.

Conv2D: this 2D convolution layer forms a kernel (essentially a filter) that is convolved with the input layer (at the first level, with the picture) and generates a new array (tensor). When building this layer, the following arguments must be supplied: filters (number of convolution output filters), kernel size (dimensions of the kernel), and strides (an integer or tuple specifying the strides of the convolution along height and width).

MaxPooling2D: this layer downscales the input image, compressing a detailed input into a less detailed one once some features (such as boundaries) have already been identified. It also accepts arguments such as pool size (the size of the max-pooling window), strides, and padding.

Flatten: this layer flattens the inputs and is added after the convolutional and pooling layers. It is followed by a fully connected dense layer, which produces an N-dimensional vector, where N is the number of classes from which the model chooses the required class.

Dropout: by setting the weights of some redundant neurons in a given layer to 0, this method stops the model from overfitting.
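A minimal Keras sketch of such a sequential CNN follows; the layer counts and sizes here are illustrative and do not reproduce the exact 13-layer configuration used in the study.

```python
# Illustrative sequential CNN built with Keras (layer sizes are assumptions).
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 1), n_classes=10):
    model = models.Sequential([
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu",
                      input_shape=input_shape),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),          # zero out redundant activations
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```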


Fig. 5 Model loss of training and testing dataset per epoch

Fig. 6 Model accuracy of training and testing dataset per epoch

4.4 Testing Model

Now that the model has been assembled and trained, we need to determine whether it is valid. We first run "model.fit" with batch size 32 for 10 epochs to check the precision. Then, to verify everything, we generate predictions and plot the photos with both the predicted and actual labels, which lets us see how the algorithm is performing. The model loss and accuracy of the training and testing datasets per epoch are shown in Figs. 5 and 6.
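A sketch of this training and plotting step is shown below; `model`, `X_train`/`y_train`, and `X_test`/`y_test` are assumed to come from the earlier sketches.

```python
# Train as described (batch size 32, 10 epochs) and plot the curves of
# Figs. 5 and 6; the variables are assumed from the earlier sketches.
import matplotlib.pyplot as plt

history = model.fit(X_train, y_train, batch_size=32, epochs=10,
                    validation_data=(X_test, y_test))

for metric in ("loss", "accuracy"):
    plt.figure()
    plt.plot(history.history[metric], label="train")
    plt.plot(history.history["val_" + metric], label="test")
    plt.title("Model " + metric + " per epoch")
    plt.legend()
plt.show()
```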

4.5 Model Evaluation

A model accuracy of 99.97% with 0.17% loss was achieved after running the training dataset for 10 epochs. The test dataset was made up of a small collection of photos that included each sign folder. On the test data, the accuracy was 99.97% with a 0.35% loss, as shown in Fig. 7. We then create a confusion matrix (Fig. 8), a table arrangement that makes it possible to see how the algorithm performs per class.

Fig. 7 Calculating loss and accuracy on test data

Fig. 8 Confusion matrix
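A minimal sketch of producing the confusion matrix of Fig. 8, assuming `model`, `X_test`, and `y_test` from the earlier sketches:

```python
# Confusion matrix of the test-set predictions (Fig. 8).
import numpy as np
from sklearn.metrics import confusion_matrix

y_pred = np.argmax(model.predict(X_test), axis=1)
print(confusion_matrix(y_test, y_pred))
```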

4.6 Applications

Five main healthcare application fields rely heavily on hand gesture detection techniques: sign language recognition, daily assistance, the growing field of human–robot interaction, sterile human–machine interfaces, and robotic surgery.

4.6.1 Sign Language Recognition

A sign language recognition system can substantially assist deaf-mute persons in maintaining contact with others, and research has been done on ASL sign language detection using hand gestures. To gather Japanese Sign Language (JSL) data, Sun et al. [18] used a cutting-edge two-Kinect system: the PCL library was used to identify the JSL movements, with two Kinect sensors placed perpendicular to one another. A fully automated hand gesture detection system can feel natural to users, and to achieve this aim Kurakin et al. [19] developed a real-time system for recognizing hand gestures. The system was created for real-world ASL applications; it is totally automated and tolerant of differences in hand orientation, speed, and style, and it is the first data-driven system to automatically recognize hand motions.

4.6.2 Virtual Manipulation

The most common use of 3D hand gesture recognition is in virtual manipulation, because it offers a natural user interface for HCI tasks. Using an infrared camera, Lee et al. [20] presented methods for tracking hand rotation and other grasping actions. 3D hand gesture recognition and 3D stereoscopic projection have also been merged with a stereo-vision virtual reality system for an immersive human–computer interaction experience [21]; with this technique, the user may move a virtual object in the 3D stereoscopic scenario. The user's hand was tracked by the Kinect sensor, which also rendered the virtual items from the user's perspective. Notably, some recently developed systems for operating-theatre use incorporate Kinect-based touchless interaction with surgical pictures [22].

4.6.3 Human–Robot Interaction (HRI)

Human–Robot Interaction (HRI) is the key component of the recently created social robots that can interact with humans using natural movements [23]. Humans may engage with robots more effectively and readily using a hand gesture-based interface [24]. A posture recognition system was built by Yin et al. [25] on the actual humanoid service robot HARO-1, and its efficacy and resilience were shown in experimental findings. Pointing gestures are among the instinctively executed human motions that are particularly crucial for communication with robots.

4.6.4 Sterile Human–Machine Interface

This technology tsunami has also affected the healthcare industry. Wachs et al. [26] developed a gesture-based interface for sterilely exploring radiological images. A sterile human–machine interface is essential because it enables the surgeon to manage medical data without contaminating the patient, the operating room, or other surgeons. Gesture-based technologies might replace the touch displays currently common in operating rooms in many hospitals: these touch displays require smooth surfaces, yet they sometimes aren't thoroughly cleaned after each use. Hand motion recognition technology thus presents a potential solution to the excessively high infection rates that hospitals are now experiencing.

4.6.5 Robotic Surgery

Due to the diversity, intricacy, and fine-grained structure of surgical motion, automatic recognition of surgical motions is difficult. van Amsterdam et al. [27] studied the data-driven recognition models that have been developed as a result of the digitalization of surgical videos and the expansion of sensor data in robotic instrumentation: the Temporal Convolutional Network (TCN), Recurrent Neural Networks (RNN), the Temporal Convolutional and Recurrent Network (TricorNet), and Deep Reinforcement Learning (RL). Deep-learning-based models still face a number of challenges, made worse by the scarcity and limited size of surgical training datasets, and they remain insufficient for capturing the domain variability of healthcare practices across diverse, and occasionally procedure-specific, activities.

5 Limitations

1. A deep learning model is particularly challenging to comprehend because of its abstractions. Since all the parameters must be adjusted effectively, hard-coding the actual CNN model is challenging.

2. The delay between making a gesture and its classification must be eliminated by the gesture detection system. If gestures do not speed up and simplify the interaction, there is simply no justification for using them. To offer the user feedback truly instantly, one should aim for negative latency (classification even before the action is complete).

3. The sensors may be less compatible with other systems since they use specific hardware and software. Only fingertips that are parallel to the sensor plane can be detected; therefore, if fingers overlap, the sensor might not identify several fingertips (e.g., in sign language). This limits its capacity for recognizing complex motions.

4. Environmental constraints arise from ethnic and illumination variation. "Illumination" refers to changes in the environment's natural and artificial lighting conditions. The system should be built to classify people of various ethnic groups, as diverse skin tones affect skin saturation yet yield the same hue value.

5. Even when the parameters are precisely tuned to the model, reaching 100% accuracy is difficult because errors arise during translation, scaling, rotation, and background elimination of the gestures. This can be a drawback when employing these models in medical applications.

6 Conclusion

This research devised a gesture recognition technique that uses a sequential convolutional neural network model built from a collection of near-infrared pictures acquired by a Leap Motion sensor, achieving a strong accuracy of 99.97% with a loss of 0.35%. The depicted motions are quite distinct, and the images were clear and background-free; the fact that there are enough images also strengthens the model. The model's accuracy is determined by how its layers are arranged and how the batch size (number of input records) is adjusted. Live detection cameras are still new to the sector, because the majority of the data utilized in healthcare is captured from a Kinect or Leap Motion device, as well as from radiological images. Together with a number of other approaches, such as deep learning models for robotic surgery, stereo-vision virtual reality systems, humanoid service robots like HARO-1, and automatic hand gesture recognition systems, they have significantly advanced healthcare technology. Hand gesture recognition applications remain challenging because, at some point, they become difficult to use in actual medical care. There are still some trust concerns in the industry, since this subject is relatively new to healthcare and many innovations and tests remain to be done.

References

1. Shah P, Shah R, Shah M, Bhowmick K (2020) Comparative analysis of hand gesture recognition techniques: a review. In: Springer's advanced computing technologies and applications
2. Smith JW, Thiagarajan S, Willis R, Makris Y, Torlak M (2021) Improved static hand gesture classification on deep convolutional neural networks using novel sterile training technique. IEEE Access 9
3. Chen L, Fu J, Wu Y, Li H, Zheng B (2020) Hand gesture recognition using compact CNN via surface electromyography signals. MDPI J Sens 20(3). https://doi.org/10.3390/s20030672
4. Breland DS, Skriubakken SB, Dayal A, Jha A, Yalavarthy PK, Cenkeramaddi LR (1 May 2021) Deep learning-based sign language digits recognition from thermal images with edge computing system. IEEE Sens J 21(9)
5. Cheng H, Yang L, Liu Z (July 2015) A survey on 3D hand gesture recognition. IEEE Trans Circuits Syst Video Technol
6. van Amsterdam B, Clarkson MJ, Stoyanov D (June 2021) Gesture recognition in robotic surgery. IEEE Trans Biomed Eng 68(6)
7. Zholshiyeva LZ, Kokenovna T, Zhukabayeva ST, Berdiyeva MA (2021) Hand gesture recognition methods and applications: a literature survey. In: ICEMIS'21: the 7th international conference on engineering & MIS 2021
8. Jiang S, Kang P, Song X, Lo BPL, Shull PB (2021) Emerging wearable interfaces and algorithms for hand gesture recognition. IEEE Rev Biomed Eng 15
9. Oudah M, Al-Naji A, Chahl J (July 2020) Hand gesture recognition based on computer vision: a review of techniques. MDPI
10. Mahmoud NM, Fouad H, Soliman AM (2020) Smart healthcare solutions using the internet of medical things for hand gesture recognition system. ResearchGate
11. Rehman M, Shah RA, Khan MB, Ali NAA, Alotaibi AA, Althobaiti T (2021) Contactless small-scale movement monitoring system using software defined radio for early diagnosis of COVID-19. IEEE Sens J 15
12. Dipietro L, Sabatini AM, Dario P (2008) Survey of glove-based systems and their applications. IEEE Trans Syst Man Cybern 38(4):461–482
13. Wysoski SG, Lamar MV, Kuroyanagi S, Iwata A (2008) A rotation invariant approach on static-gesture recognition using boundary histograms and neural networks. In: IEEE 9th international conference on neural information processing, Singapore
14. Khan RZ, Ibraheem NA (2021) Survey on gesture recognition for hand image postures. Int J Comput Inf Sci 5(3):110–121. https://doi.org/10.5539/cis.v5n3p110
15. LaViola Jr JJ (1999) A survey of hand posture and gesture recognition techniques and technology. Master thesis, NSF Science and Technology Center for Computer Graphics and Scientific Visualization, USA
16. Moeslund TB, Granum E (2001) A survey of computer vision-based human motion capture. Comput Vis Image Underst 81:231–268
17. Erol A, Bebis G, Nicolescu M, Boyle RD, Twombly X (2001) Vision-based hand pose estimation: a review. Comput Vis Image Underst 108:52–73
18. Sun Y, Kuwahara N, Morimoto K (2013) Development of recognition system of Japanese sign language using 3D image sensor. In: HCI
19. Kurakin A, Zhang Z, Liu Z (2012) A real time system for dynamic hand gesture recognition with a depth sensor. In: European signal processing conference
20. Lee C-S, Chun S, Park SW (2013) Tracking hand rotation and various grasping gestures from an IR camera using extended cylindrical manifold embedding. Comput Vis Image Underst 117(12):1711–1723
21. Hoang V, Nguyen Hoang A, Kim D (2013) Real-time stereo rendering technique for virtual reality system based on the interactions with human view and hand gestures. In: Virtual augmented and mixed reality
22. O'Hara K, Gonzalez G, Sellen A, Penney G, Varnavas A, Mentis H, Criminisi A, Corish R, Rouncefield M, Dastur N, Carrell T (2014) Touchless interaction in surgery. Commun ACM 57(1):70–77
23. Waldherr S, Romero R, Thrun S (2000) A gesture based interface for human-robot interaction. Auton Robot 9:151–173
24. Yin X, Zhu X (2006) Hand posture recognition in gesture-based human-robot interaction. In: IEEE conference on industrial electronics and applications
25. Yin X, Xie M (2007) Finger identification and hand posture recognition for human-robot interaction. Image Vis Comput 25(8):1291–1300
26. Song Y, Tang J, Liu F, Yan S (2014) Body surface context: a new robust feature for action recognition from depth videos. IEEE Trans Circuits Syst Video Technol 24(6):952–964
27. van Amsterdam B, Clarkson M, Stoyanov D (2021) Gesture recognition in robotic surgery: a review. IEEE Trans Biomed Eng

Compression in Dynamic Scene Tracking and Moving Human Detection for Life-Size Telepresence

Fazliaty Edora Fadzli and Ajune Wanis Ismail

Abstract Human detection, tracking and other forms of human identification have much practical applicability, including in telepresence systems. The advent of depth cameras such as the Microsoft Kinect has made human detection and tracking much faster and more efficient. With telepresence technology, a local user can interact freely with a user at a remote location, even while moving. This paper therefore describes the proposed method, and its phases, for producing real-time moving human detection integrated with life-size telepresence. The proposed method starts with the real-time moving human detection phase, followed by the data transmission phase. The paper also presents life-size telepresence on a holographic display, configured for remote communication, and the experimental setup for the life-size telepresence. The implementation process is explained, and the results for the amount of data in bytes transmitted per frame are discussed. This paper aims to discuss real-time moving human detection captured using depth sensors, which has been successfully integrated with a life-size telepresence application.

Keywords Life-size telepresence · Real-time · 3D reconstruction · Kinect sensor

1 Introduction

Long-distance communication has become an integral part of our daily lives and professional endeavours, especially after the global pandemic caused by Covid-19 [1]. To live and work in new areas, families and friends leave their current residences, and companies often send their employees on international business trips.

F. E. Fadzli (B) · A. W. Ismail
Mixed and Virtual Reality Research Lab, ViCubeLab, Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
e-mail: [email protected]
A. W. Ismail
e-mail: [email protected]
Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia


Fig. 1 a TeleSuite in 1993 [4]. b TED telepresence robot in 2014 [5]

As the cost and inconvenience of travel continue to rise [2], telepresence has emerged as a growing field of study, motivated by its potential for lowering these costs [3]. In addition, the development of this technology aims to cut costs in terms of both time and money while leading to the next wave of technological advancement. With telepresence technology, as seen in Fig. 1, a local user can communicate with a remote user; it is therefore important to consider how the user's representation can be recorded and transmitted, even while the user is moving. The expensive cost of three-dimensional (3D) reconstruction technology over the previous two decades meant that virtual environments were largely confined to the realms of research, medicine and other expert fields [6]. Advances in consumer-grade depth sensors in recent years have brought this technology closer to the consumer market and have led to a corresponding growth in the amount of research conducted in this area. Therefore, this paper discusses related works and the proposed method of real-time moving human detection integrated with life-size telepresence. The implementation and the results are also presented, and the paper ends with a conclusion.

2 Related Works

This section highlights research related to real-time moving human detection integrated with life-size telepresence systems; several notable works in this field are discussed. One example is Room2Room, a telepresence system that enabled two remote participants to communicate through life-size projection. Pejsa et al. [7] recreated the perception of a face-to-face conversation by capturing the local user in 3D with colour and depth cameras and projecting the user's virtual copy into the remote location. This provides a perception of each person's distant physical presence in the local environment and a shared understanding of verbal and nonverbal cues. Córdova-Esparza et al. [8] produced a low-cost, low-bandwidth visual telepresence system. Their system combined multiple views from commodity


depth cameras to represent remote participants as a full 3D model projected into a simulated holographic display. The pipeline uses four Kinect cameras to capture images and perform background removal, followed by point-cloud extraction and fusion at the local site. Each client collects colour and depth images from the cameras and transmits the data to the server. Three data-reduction steps were implemented to reduce the amount of data transmitted to the server: foreground extraction, lowering the colour image resolution, and data compression. Camera synchronization is accomplished using network timestamps, while the remote site generates virtual views of the transmitted data to be projected in the holographic display simulation. Zioulis et al. [9] introduced a tele-immersion (TI) system that integrates 3D reconstruction, motion capture, activity detection and analysis, 3D compression, and networking technologies to enable immersive interactions between users in distant locations. The real-time 3D reconstruction was extended from [10] and performed using multiple Kinect cameras to generate a full 3D geometry-based representation of the users. The final triangulated mesh was extracted as the iso-surface using the Marching Cubes algorithm; the method, however, ran only at near-real-time rates. The Professor Avatar Holographic Telepresence Model, a research work by Luevano et al. [11], defines a setup and criteria for remote holographic telepresence communication consisting of a projector, computer, webcam and holographic screen. Setting up their hardware to demonstrate the system was time-consuming, and the difficulty might demotivate potential users. It is therefore important to determine the optimal setting for a remote holographic telepresence system, one that makes it feasible to humanize communication over large distances.

3 Proposed Method

This section explains the proposed system for producing real-time moving human detection for life-size telepresence. As shown in Fig. 2, the planned phases and processes, real-time moving human detection followed by data transmission and life-size telepresence, are explained in detail in the following subsections.

3.1 Real-Time Tracking Method

This research utilizes the Microsoft Kinect to detect and capture a moving human and reconstruct the user's 3D representation, which is then integrated into a life-size telepresence system in real time. The Kinect has a depth sensor and a built-in human detection feature; the device achieves this because it can accurately capture the depth information of the human body. The sensor is equipped with an infrared (IR) emitter


Fig. 2 The framework of real-time moving human detection for life-size telepresence

and camera that can detect depth, as well as a red–green–blue (RGB) camera for colour video. Depth measurements are returned in millimetres. The Kinect's IR emitter and IR depth sensor are essential for acquiring human depth data: the Kinect generates a depth image of a human positioned in front of it, and the IR depth sensor works in combination with the IR emitter to determine the X, Y and Z coordinates of a detected human's actual location. The Kinect's tracking method generates a hierarchical skeleton formed of joint objects. It can track six individuals at once, each with up to twenty-five movable skeleton joints, as shown in Fig. 3. The tracked joints are


Fig. 3 The skeleton joints tracked by the Microsoft Kinect

also more precise and stable than in the previous Kinect iteration, and the range over which the skeleton can be tracked is much wider. To reduce jitter, the average position of each joint is computed in real time using data from a 300 ms memory buffer (about 10 valid frames at 30 Hz) [12]. When a joint cannot be tracked by the sensor due to occlusion or other factors, the Microsoft software development kit (SDK) can estimate its position using information from neighbouring joints. Real-time moving human detection comprises a few phases, as shown in Fig. 4. Human detection depends on the system being able to select its human targets, so tracking human targets is an essential component of the operation. The Kinect's skeleton recognition technique was used to choose the moving human target. To complete the initial phase of gathering data about the human target, the system must obtain information on all five body parts: the torso, the left and right arms, and the left and right legs. The flowchart has two parts: before the identification process and after the tracking process. First, using the player identification (ID) data provided by the Kinect, the tracking status of the human target is identified. By detecting overlapping user IDs, if the identical human-target player ID is found in the succeeding image frame, re-identification is not performed, which keeps computation time to a minimum. The second part, after the tracking process, is explained in the next section.
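As a concrete illustration of the buffered averaging just described, the following is a minimal sketch of a sliding-window joint smoother in the spirit of the 300 ms buffer (about 10 frames at 30 Hz); the class and joint names are hypothetical helpers, not part of the Kinect SDK.

```python
from collections import deque

class JointSmoother:
    """Average each joint's position over a short sliding window
    (~10 valid frames at 30 Hz, i.e. roughly 300 ms) to reduce jitter."""

    def __init__(self, window=10):
        self.window = window
        self.history = {}            # joint name -> deque of (x, y, z) tuples

    def update(self, joint_name, position):
        buf = self.history.setdefault(joint_name, deque(maxlen=self.window))
        buf.append(position)
        n = len(buf)                 # average each coordinate over the buffer
        return tuple(sum(p[i] for p in buf) / n for i in range(3))

smoother = JointSmoother()
smoother.update("HandRight", (0.10, 1.20, 2.05))
print(smoother.update("HandRight", (0.12, 1.18, 2.07)))   # smoothed position
```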

3.2 Point-Cloud Extraction in the Triangulation Process

The body index image is derived from the depth image and consists of a two-dimensional (2D) grid in which each coordinate holds a simple integer from 0 through 5


Fig. 4 Human tracking flowchart

representing the body that the sensor associates with that coordinate. The second part begins with the capture of colour and depth images, following the tracking of the human target. The body index image contains the instance segmentation map for each individual body captured by the depth camera: each pixel's value indicates the body to which it belongs, and each pixel maps to the corresponding pixel in the depth or IR image. In some cases a pixel indicates a detected body, while in others it is simply background. Human body segmentation masking is necessary for point-cloud extraction, achieved by mapping this body index image onto the colour image. Using a perspective projection where z = depth, we determine the 3D coordinates (x, y, z) of each pixel in the human body segmentation mask and so extract the point cloud. The corresponding colour of each 3D point is then retrieved by projecting the (x, y, z) coordinates onto the colour frame using the colour camera's intrinsic and extrinsic parameters. After the point cloud has been extracted from the depth image, additional work is needed to generate 3D meshes of the moving human; in this research, we carried out a process known as triangulation. The extracted point-cloud data is used in the triangulation process, which connects the points in the point cloud to form a collection of triangles that closely approximate the surface of interest. Real-time generation of polygons from the 3D point cloud is accomplished with the Marching Square algorithm, as suggested by Jang et al. [13]. This algorithm produces a polygonal mesh surface representation by applying an image grid to the depth-map image that stores the point-cloud data. To generate a triangulation mesh from the point-cloud data, there must be a minimum of three vertex candidates. The data produced includes a list of vertices and triangles, along with colour images and UV indices used for texture mapping.
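A minimal sketch of the unprojection and grid-based meshing just described, assuming pinhole-camera intrinsics fx, fy, cx and cy (the values below are placeholders, not the paper's calibration); the simple two-triangles-per-cell rule stands in for the Marching Square algorithm of [13].

```python
import numpy as np

def depth_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Unproject a depth image (millimetres) into 3D points with the
    perspective model: x = (u-cx)*z/fx, y = (v-cy)*z/fy, z = depth."""
    v, u = np.nonzero(mask)                        # pixels of the tracked body
    z = depth[v, u].astype(np.float32) / 1000.0    # mm -> metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def grid_triangles(mask):
    """Connect valid pixels into triangles: each 2x2 cell whose four
    corners are valid yields two triangles approximating the surface."""
    h, w = mask.shape
    idx = -np.ones((h, w), dtype=np.int64)
    idx[mask] = np.arange(mask.sum())
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            a, b, p, q = idx[r, c], idx[r, c + 1], idx[r + 1, c], idx[r + 1, c + 1]
            if min(a, b, p, q) >= 0:
                tris += [(a, b, p), (b, q, p)]
    return np.array(tris)

depth = np.full((4, 4), 1500, dtype=np.uint16)     # toy 4x4 depth patch
mask = depth > 0
points = depth_to_point_cloud(depth, mask, fx=365.0, fy=365.0, cx=2.0, cy=2.0)
print(points.shape, grid_triangles(mask).shape)    # (16, 3) (18, 3)
```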


3.3 Data Transmission and Compression Over the Network

The network is essential because it enables data transmission and allows the telepresence setup to function [14]. It is also required for users to communicate with one another, both locally and remotely. People are fully immersed in the sights and sounds of human connection when they can communicate in real time, which keeps them engaged without difficulty. The Agora Video SDK makes it simple to integrate real-time communication by relying on the sophisticated, worldwide Software Defined Real-time Network (Agora SD-RTN™) provided by Agora [15]. The advantage of using the Agora.io Video SDK is that only a minimal amount of code is required to incorporate high-quality, low-latency video features into the system. Agora.io can also enable people to watch an event in real time while not physically present by broadcasting live streaming video. As shown in Fig. 5, a network framework using the Agora.io SDK was utilized to transmit the view of the rebuilt user from the virtual camera in the scene, together with the local user's voice input; the network is used for real-time communication. A user must first obtain a token from the server before being allowed to join any channel. Once both users hold the token ID, they can enter the channel simply by using the same room channel ID. To achieve life-size telepresence integrated with real-time moving human detection, the hypothesis is that if data captured from multiple Kinect devices in a server-client setup are generated, more features can be extracted from different viewpoints; consequently, more data needs to be transmitted from each client to the server, particularly when a moving human is involved. Data compression is therefore required during data transmission. According to the SDK provided by Microsoft Kinect, the software limits each PC to running a single Kinect. In this case, the multiple Kinect devices are connected using a distributed client-server approach. Each client node tracks the user within the field of view of its local depth sensor. In addition, clients are responsible for supplying the data obtained by the data acquisition module and performing a double lossless compression to speed up data transmission. We performed lossless compression twice in order to speed up the data transmission by reducing the large data produced,

Fig. 5 Data transmission module architecture


such as the 1024 × 720 texture frame, the vertices array and the triangle indices. The lossless compression method used in this research is Lempel-Ziv-Free (LZF) compression, as recommended by Waldispühl et al. [16]. It is a fast compression algorithm requiring minimal code space and working memory.
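The double compression step can be sketched as below; LZF itself is not in the Python standard library, so zlib stands in for it here, and the helper names and buffer sizes are illustrative only.

```python
import zlib
import numpy as np

def double_compress(buf: bytes) -> bytes:
    # Two passes of a lossless compressor, mirroring the double
    # lossless compression described in the text (zlib in place of LZF).
    return zlib.compress(zlib.compress(buf))

def double_decompress(buf: bytes) -> bytes:
    return zlib.decompress(zlib.decompress(buf))

texture = np.zeros((720, 1024, 3), dtype=np.uint8).tobytes()   # 1024 x 720 frame
once = zlib.compress(texture)
twice = double_compress(texture)
print(len(texture), len(once), len(twice))
assert double_decompress(twice) == texture                     # lossless round trip
```

Note that a second lossless pass can also enlarge a payload whose entropy is already high, which is consistent with the behaviour reported for the vertices data in Sect. 5.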

4 Implementation

This section explains the implementation and results of the proposed method for real-time moving human detection, implemented in the holo-professor telepresence to capture real-time human movement tracking. The setup shown in Fig. 6 illustrates the placement of the Kinect devices on tripods, connected to desktop computers. The hardware specification used in this research is shown in Table 1. The first step of the real-time moving human detection phase is to generate the depth and colour streams acquired from the Kinect devices. The captured data were then used as input for the next process to produce a point cloud from the depth image; the result of triangulating the point-cloud data is shown in Fig. 7. The acquired and processed data from each RGB-D camera were then compressed using LZF compression before being transmitted to the server. Once the transmitted data were received and decompressed, the synchronization algorithm was applied. Once the system was synchronized, the surface mesh

Fig. 6 The setup for dynamic scene tracking and moving human detection

Table 1 The hardware specification

Hardware                       | Characteristic
System model                   | Desktop
Central Processing Unit (CPU)  | Intel i7 8th gen
Graphic Processing Unit (GPU)  | Nvidia RTX 1060 6 GB
Random Access Memory (RAM)     | 16 GB
System type                    | 64-bit operating system, x64-based
Display device                 | Life-size holographic frame display
Audio device                   | Headphone with microphone attached


Fig. 7 The triangulation result using the Marching Square algorithm

rendering for each received frame was generated using each client's vertex and triangle data. Using the texture sent from the client, texture mapping was then applied to the surface mesh of the reconstructed model. The dynamic scene of the reconstructed moving human per frame is shown in Fig. 8. The textured mesh data, along with the local user's audio, are then transmitted over the network to the remote site. A virtual camera placed in the scene viewed the reconstructed user and was used to live-stream to the remote site once the user had entered the same room channel for the telepresence. The reconstructed model is transferred to the remote location using the existing network framework; the Agora Video SDK provides video transmission of each user's generated video and captured audio streams. Instead of delivering video from the user's webcam, as typically occurs, this research customized the video source to be a virtual camera in the scene that captures the virtual 3D reconstructed mesh model, as shown in Fig. 8. First, the user is placed in front of the Kinect, as shown in Fig. 9. All the required devices are integrated and placed in the appropriate positions. The experimental setup for capturing the user requires two Kinect devices placed on tripods for full-body tracking and two desktop computers connected to the Kinect devices


Fig. 8 The dynamic scene tracking and moving human detection per frame

for data acquisition. Each of the two Kinect devices is connected to a client PC that processes and sends frames to the server through asynchronous socket APIs; each Kinect camera together with its computer acts as a client. A server node on a PC runs the synchronization unit. Coordinated Universal Time (UTC), the primary time standard by which the world regulates clocks and time, was used for time synchronization between all clients and the server. For the local site, the RGB-D cameras were placed to enable 3D acquisition, gathering the 3D data along with the depth used to reconstruct the user's full body, as shown in Fig. 10a. Both cameras were set at a range of 2.1 m from the user and placed at a height of 1.1 m so that they could capture the human movement.

Fig. 9 The life-size measurement setup

Fig. 10 a A local user. b A remote user. The local user interacting with the remote user at the local site

Both cameras sent their data to the computers, and the two cameras communicated over the network. Figure 10b shows the remote user, whose data is transmitted over the network from a different location.
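The client-side framing implied by this setup can be sketched as follows; the wire format (a UTC timestamp followed by length-prefixed compressed payloads) is an assumption for illustration rather than the system's actual protocol, and zlib again stands in for LZF.

```python
import struct
import zlib
from datetime import datetime, timezone

def pack_frame(vertices: bytes, triangles: bytes, texture: bytes) -> bytes:
    """Frame one capture for sending to the server: a UTC timestamp
    (for server-side synchronization) plus three compressed payloads,
    each announced by its length in the header."""
    ts = datetime.now(timezone.utc).timestamp()
    parts = [zlib.compress(p) for p in (vertices, triangles, texture)]
    header = struct.pack("!dIII", ts, *(len(p) for p in parts))
    return header + b"".join(parts)

message = pack_frame(b"\x00" * 1000, b"\x00" * 600, b"\x00" * 50000)
print(len(message))   # bytes actually placed on the wire for this frame
```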

5 Results

Data transmission was evaluated to test whether the implemented compression method could speed up the transmission of the data captured from the multiple RGB-D cameras, which in turn could speed up the 3D reconstruction. Table 2 summarizes the amount of data transmitted in bytes per frame without compression, with single compression, and with double compression. The number of frames sent from the client and received at the server in 1 s was also measured and is tabulated in Table 3. The data transmission results show that the double compression method reduces the number of bytes to be transmitted to the server, which in turn helps to speed up the 3D reconstruction. With double compression, a greater number of frames could be received by the server in one second, with the largest byte reductions for the texture, triangles, and UV mapping indices, where the letters "U" and "V" refer to the axes of the 2D texture


Table 2 Amount of data in bytes transmitted per frame

Method             | Colour (Texture) | Vertices | Triangles | UV
No compression     | 868,352          | 56,426   | 100,080   | 37,620
Single compression | 441,343          | 50,287   | 52,211    | 21,604
Double compression | 433,613          | 51,499   | 27,766    | 10,790

Table 3 Number of frames transmitted in 1 s

Method             | Frames sent by client | Frames received by server
No compression     | 10                    | 1
Single compression | 9                     | 3
Double compression | 10                    | 7

data to be transmitted to the server. However, double compression appears unsuitable for the vertices data: after single compression was performed, the byte size increased rather than decreased when compression was applied a second time.

6 Conclusion

This paper has presented the design of real-time moving human detection for life-size telepresence, where we executed a method to reconstruct a moving user from data captured by Kinect devices. Existing 3D reconstruction for telepresence either targets static scenes or involves a non-moving user, such as capturing the remote environment, and does not address real-time 3D reconstruction of a moving human. The proposed method in this research was successfully integrated with life-size telepresence. In conclusion, the design of the proposed method involved three phases, real-time moving human detection, data transmission, and life-size telepresence, each of which has been discussed throughout this paper. The data capture process was explained in the first phase. The data transmission phase then introduced the network framework, Agora.io. The data compression method in this research was implemented to suit the data involved, namely the vertex, triangle and colour data, and the conversion of these data into byte format was achieved. The results of the double compression method, a lossless compression executed twice to ensure that large data were reduced and transmitted in real time, have been discussed as well.


We have elaborated the life-size telepresence configured for remote communication and the experimental setup, as well as the results of the implementation. For future work, we plan to extend this work by implementing a two-way life-size telepresence setup in which both the local and the remote user can be detected and reconstructed into 3D representations. With the proposed compression method, the data generated from reconstructing local and remote users can also be displayed and viewed at life size. Results addressing network requirements, such as the most suitable compression method, will also be investigated in the future. In summary, this paper has described life-size telepresence with moving human detection performed in real time.

References

1. Haucke E, Walldorf J, Ludwig C, Buhtz C, Stoevesandt D, Clever K (2020) Application of telepresence systems in teaching–transfer of an interprofessional teaching module on digital aided communication into the block training "internal medicine" during the Covid-19 pandemic. GMS J Med Educ 37(7)
2. Ishigaki SA, Ismail AW, Kamruldzaman MQ (2022) MR-MEET: mixed reality collaborative interface for HMD and handheld users. In: 2022 IEEE global conference on computing, power and communication technologies (GlobConPT). IEEE, pp 1–7
3. Fadzli FE, Ismail AW, Aladin MY, Othman NZ (2020) A review of mixed reality telepresence. In: IOP conference series: materials science and engineering, vol 864, no 1. IOP Publishing, p 012081
4. Lichtman HS (2006) Telepresence, effective visual collaboration and the future of global business at the speed of light. HPL Hum Prod Lab Mag
5. Snowden E (2023) TED talk Edward Snowden (telepresence, live from TED2014). https://www.ted.com/speakers/edwardsnowden. Accessed 2 Feb 2023
6. Zollhöfer M, Stotko P, Görlitz A, Theobalt C, Nießner M, Klein R, Kolb A (2018) State of the art on 3D reconstruction with RGB-D cameras. In: Computer graphics forum, vol 37, no 2, pp 625–652
7. Pejsa T, Kantor J, Benko H, Ofek E, Wilson A (2016) Room2Room: enabling life-size telepresence in a projected augmented reality environment. In: Proceedings of the 19th ACM conference on computer-supported cooperative work & social computing, pp 1716–1725
8. Córdova-Esparza DM, Terven JR, Jiménez-Hernández H, Herrera-Navarro A, Vázquez-Cervantes A, García-Huerta JM (2019) Low-bandwidth 3D visual telepresence system. Multimed Tools Appl 78(15):21273–21290
9. Zioulis N, Alexiadis D, Doumanoglou A, Louizis G, Apostolakis K, Zarpalas D, Daras P (2016) 3D tele-immersion platform for interactive immersive experiences between remote users. In: 2016 IEEE international conference on image processing (ICIP). IEEE, pp 365–369
10. Alexiadis DS, Zarpalas D, Daras P (2013) Real-time, realistic full-body 3D reconstruction and texture mapping from multiple Kinects. In: IVMSP 2013. IEEE, pp 1–4
11. Luevano L, de Lara EL, Quintero H (2019) Professor avatar holographic telepresence model. Hologr Mater Appl 25:91
12. Manghisi VM, Uva AE, Fiorentino M, Bevilacqua V, Trotta GF, Monno G (2017) Real time RULA assessment using Kinect v2 sensor. Appl Ergon 1(65):481–491
13. Jang GR, Shin YD, Yoon JS, Park JH, Bae JH, Lee YS, Baeg MH (2013) Real-time polygon generation and texture mapping for tele-operation using 3D point cloud data. J Inst Control Robot Syst 19(10):928–935


14. Ishigaki SAK, Ismail AW (2023) Real-time 3D reconstruction for mixed reality telepresence using multiple depth sensors. In: Shaw RN, Paprzycki M, Ghosh A (eds) Advanced communication and intelligent systems. ICACIS 2022. Communications in computer and information science, vol 1749. Springer, Cham
15. Video calling SDK quickstart | Agora documentation. https://docs.agora.io/en/video-calling/get-started/get-started-sdk?platform=android. Accessed 26 Dec 2022
16. Waldispühl J, Zhang E, Butyaev A, Nazarova E, Cyr Y (2018) Storage, visualization, and navigation of 3D genomics data. Methods 1(142):74–80

Deep Learning-Based Facial Emotion Analysis

M. Mohamed Iqbal, M. M. Venkata Chalapathi, S. Aarif Ahamed, and S. Durai

Abstract Human facial expressions are a fundamental and simple way to convey feelings. The automatic analysis of these unspoken sentiments has been a fascinating and difficult undertaking in the field of computer vision, with applications in a variety of fields, like process automation, product marketing, psychology, etc. The task is challenging because people differ considerably in how they express their feelings. Deep learning in particular has been crucial to the advancement of several branches of research that make use of computer vision. This paper introduces an implemented convolutional neural network architecture that addresses the problem of facial emotion analysis. We have utilised the FER-2013 dataset for training and testing the proposed method. The outcome of the investigation was a very positive 61%, representing an advancement in the field of automatic facial sentiment analysis.

Keywords Convolution neural network · Emotion analysis · Neural network · Image processing

M. Mohamed Iqbal (B) · M. M. Venkata Chalapathi
School of Computer Science and Engineering, VIT-AP University, Amaravati, Andhra Pradesh, India
e-mail: [email protected]
M. M. Venkata Chalapathi
e-mail: [email protected]
S. Aarif Ahamed
Department of Computer Science and Engineering, Presidency University, Bangalore, Karnataka, India
S. Durai
Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India


1 Introduction

Emotions play a significant role in any interpersonal contact. They can be expressed in several ways, such as through facial expressions [1], speech [2, 3], gestures and even posture. Facial expressions are among the options that can be used for sentiment analysis, as they are the most noticeable and information-rich; furthermore, compared to other forms of expression, faces are simpler to collect and process. A facial expression is a complex movement of the facial muscles that communicates the subject's feelings to anyone watching [4]. Expressions are essentially a message about how someone is feeling on the inside. For these reasons, researchers in the fields of animation [5], psychology, linguistics, neurology and medicine [6, 7], human–computer interaction [8], and security have started to place greater emphasis on human–computer interaction systems. Computer-assisted facial expression analysis is an emerging field today. Associating a sentiment with a facial image is the basis of sentiment analysis; the purpose is therefore to infer an individual's inner emotions from their face. Facial sentiment analysis software is essential for making human–machine interaction simpler, but this is not an easy task. For some time now, many characteristics of facial emotions have been extracted and processed for effective emotion analysis using convolutional neural networks (CNNs) [9], which were inspired by the study of biology. This has motivated us to pursue this research, with the goal of developing a deep learning-based technique for facial emotion investigation. Based on facial characteristics, a neural network (NN) architecture is presented that classifies sentiments into a common set of seven emotions [10]. The remainder of the paper is organised as follows: Sect. 2 reviews the related works that have come before; Sect. 3 presents our suggested method and provides pertinent background information; Sect. 4 states the experiment and its findings; and Sect. 5 concludes the study.

2 Related Work

Earlier sentiment analysis research used two-stage approaches: in the first stage, features are detected in the raw pictures, and then a classification algorithm (such as Naive Bayes [11], SVM, decision trees, vector field convolution [12], etc.) is used to categorise the images. These methods performed admirably in controlled settings on smaller image datasets, but they encountered problems on larger datasets with greater intra-class variability. Six main emotions, happiness, sadness, disgust, surprise, anger and fear, were recognised in an influential study on emotion analysis by Ekman [13]. Recent studies show that deep learning models can be used for sentiment analysis. One example is the Boosted Deep Belief Network of [14], trained across three iterative training rounds. In their article [15], Zhang


et al. suggest a new deep neural network-based facial characteristics analysis technique to determine an association between SIFT features and high-level semantic information. The top-position winners in the 2014 ImageNet competition all used CNNs to address the problem. The GoogleNet methodology stands out among them with a phenomenal 6.66% classification error rate [16]. Another notable design that achieved exceptional results is AlexNet [17], which was one of the first to advocate using dropout layers in the network to solve the problem of over-fitting.

3 Proposed Approaches

To improve facial sentiment analysis systems, a classification mechanism employing a CNN architecture is suggested. Because deep network training requires a lot of data, the freely accessible FER2013 dataset is used here. The characteristics of the selected dataset are stated in the part that follows, followed by a description of our network design and, lastly, the performance metrics used to assess it.

3.1 Dataset 2013’s version of facial expression recognition, i.e. FER2013 during the 2013 ICML competition, database [18] Learning from representations. This dataset contains a staggering 35,887 portraits of people, primarily in natural environments. The photos have a resolution of 48 × 48 and a practise set of more than 28 k pictures, plus every testing and validation set each include 3.5 k pictures. This database was created and was carried out utilising the GoogleAPI for face-based picture search labelled with one of the seven fundamental emotions. Compared to other datasets, FER2013 is superior and has greater variety in veiled faces and photos with little contrast all make it better for generalising our model during training and guard against oversizing. Some sample images from this data set are shown in Fig. 1.

Fig. 1 FER2013 sample images
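A minimal loading sketch for the dataset, assuming the commonly distributed fer2013.csv layout (columns emotion, pixels and Usage); the file path is a placeholder.

```python
import numpy as np
import pandas as pd

# Assumed CSV layout: 'emotion' holds labels 0-6, 'pixels' holds 2304
# space-separated grey values, and 'Usage' splits the data into
# Training / PublicTest / PrivateTest.
df = pd.read_csv("fer2013.csv")

def to_arrays(frame):
    x = np.stack([np.array(p.split(), dtype=np.uint8).reshape(48, 48)
                  for p in frame["pixels"]])
    y = frame["emotion"].to_numpy()
    return x[..., np.newaxis] / 255.0, y        # normalise, add channel axis

x_train, y_train = to_arrays(df[df["Usage"] == "Training"])
x_val, y_val = to_arrays(df[df["Usage"] == "PublicTest"])
print(x_train.shape, x_val.shape)               # roughly 28 k and 3.5 k images
```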


3.2 Network Architecture

Our design (see Fig. 2) is rather simple: a 48 × 48 input flows through convolution layers, each followed by a pooling layer, before terminating in dense layers with a dropout layer in between. The class with the maximum value is chosen from the output layer, which offers probabilities for the seven classifications. Figure 3 provides precise information on the layers of our proposed method. The various layers and activation functions utilised in the proposed work are explained below.

Convolution layer

The convolution layer's job is to extract the image's prominent characteristics. Traditionally, low-level characteristics like gradient orientation, colour, edges, etc.

Fig. 2 Architecture diagram

Fig. 3 Layer details of the proposed neural network


are analysed using the leading convolution layer, and the architecture starts to recognise high-level characteristics along the successive layers.

Pooling layer

The spatial size of the convolved feature is constrained by pooling layers. Through dimensionality reduction, this lowers the computing requirements for processing the data. In this study, the two prominent types of pooling, max and average pooling, have been employed.

Dense layer

The outputs of the convolution and pooling layers are 2D learned features. These features are flattened and fed as 1D features into the fully connected dense network in order to predict the facial expression. Activation functions are used to maintain non-linearity in the output of every layer.

Activation functions

Activation functions come in numerous varieties; the following are the most popular ones. The rectified linear unit (ReLU) follows the straightforward rule relu(y) = max(0, y); it is employed because it is computationally cheap and does not saturate for positive inputs. The Softmax function takes M real numbers as input and normalises them into M probabilities proportional to the input values, generating the likelihood of each of the potential results. The operation of the CNN algorithm is described in Fig. 4.

Fig. 4 General architecture diagram of CNN


Categorical cross entropy

Categorical cross entropy is utilised as the loss function to calculate the loss during the training phase of our CNN model. It has the following mathematical formula, where $y$ denotes the true labels and $\hat{y}$ the predicted probabilities:

$$L(y, \hat{y}) = -\sum_{j=0}^{M} \sum_{i=0}^{N} y_{ij} \cdot \log\left(\hat{y}_{ij}\right)$$
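Since the exact filter counts live in Fig. 3, the following is a minimal Keras sketch of a network of this shape; the layer widths are illustrative stand-ins rather than the authors' exact configuration, and the labels are assumed to be one-hot encoded to match the categorical cross-entropy loss.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Convolution + pooling blocks followed by dense layers with dropout,
# ending in a 7-way softmax, as described in the text.
model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),                 # 48 x 48 grey input
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.AveragePooling2D(2),                      # both pooling types are used
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                             # dropout between dense layers
    layers.Dense(7, activation="softmax"),           # probabilities for 7 emotions
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",       # the loss defined above
              metrics=["accuracy"])
model.summary()
```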

4 Discussion of Outcomes

This section describes the resources utilised for the proposed work and the outcomes achieved. The computing system used for this experiment has the technical specifications shown in Table 1. On this system, Jupyter Notebook 6.0.1 runs with Python 3.7.4 and Anaconda 1.9.7. Additionally, Google Colab notebooks were utilised, since they offer free GPUs and up to 8 GB of RAM for twelve-hour sessions, all of which was helpful for training. The distribution of the dataset is shown in Fig. 5. After being trained and tested on the training and testing datasets respectively, the model was evaluated using the following metrics:

Table 1 System details

CPU | Intel Core i5-12400 CPU with clock speed 2.50 GHz
OS  | 64-bit Windows 10
GPU | Quadro T400 4 GB from NVIDIA
RAM | 16 GB

Fig. 5 FER2013 dataset distribution


Fig. 6 Classification distribution of testing set

Table 2 Performance results

Emotion  | Precision (%) | Recall (%)
Angry    | 53.2          | 46.2
Disgust  | 59.2          | 43.4
Fear     | 42.4          | 41.5
Happy    | 77.5          | 78.5
Sad      | 50.2          | 44.5
Surprise | 78.1          | 74.0
Neutral  | 44.6          | 52.2

Loss in training phase: 0.2210; accuracy in training phase: 91.92%. Loss in testing phase: 2.3144; accuracy in testing phase: 60.00%. For a multi-class classification challenge like this one, accuracy alone does not convey the full picture. The confusion matrix for this technique is shown in Fig. 6. Using the confusion matrix in Fig. 6, the precision and recall of the proposed investigation are determined and reported in Table 2. A graphical representation of the performance across the 7 classes is given in Fig. 7.
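To make the metric computation concrete, here is a small sketch of how per-class precision and recall are derived from a confusion matrix; the matrix below is filled with illustrative numbers, not the actual values behind Fig. 6.

```python
import numpy as np

labels = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

# Synthetic confusion matrix: rows are true classes, columns are predictions.
rng = np.random.default_rng(0)
cm = rng.integers(5, 40, size=(7, 7)) + np.diag([150] * 7)

precision = np.diag(cm) / cm.sum(axis=0)   # TP / (TP + FP), per predicted class
recall = np.diag(cm) / cm.sum(axis=1)      # TP / (TP + FN), per true class
for name, p, r in zip(labels, precision, recall):
    print(f"{name:8s} precision={p:.1%} recall={r:.1%}")
```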

5 Conclusion

This study investigates facial sentiment analysis. A convolutional neural network model is described for the job of classifying face photos into the seven common emotions: fear, happiness, sadness, anger, surprise, disgust and neutral. Images from the FER2013 dataset were utilised for the investigation.


Fig. 7 Performance of the proposed method: precision and recall (%) for each of the seven emotions

The accuracy of the proposed work is 60% across the dataset, which is excellent given that the winning technique in the FER2013 competition had an accuracy of 34%. This demonstrates its efficacy in the field of facial sentiment analysis and shows that it is superior to other methods. In the future it will be applied to real-time analysis with reduced latency. Facial sentiment analysis has various uses, and there will be more new and improved contributions to the discipline than ever before.

References

1. Kabir MM, Anik TA, Abid MS, Mridha MF, Hamid MA (2021) Facial expression recognition using CNN-LSTM approach. In: 2021 international conference on science & contemporary technologies (ICSCT), pp 1–6. https://doi.org/10.1109/ICSCT53883.2021.9642571
2. Zhang S, Pan X, Cui Y, Zhao X, Liu L (2019) Learning affective video features for facial expression recognition via hybrid deep learning. IEEE Access 7:32297–32304. https://doi.org/10.1109/ACCESS.2019.2901521
3. Chung C-C, Lin W-T, Zhang R, Liang K-W, Chang P-C (2019) Emotion estimation by joint facial expression and speech tonality using evolutionary deep learning structures. In: 2019 IEEE 8th global conference on consumer electronics (GCCE), pp 221–224. https://doi.org/10.1109/GCCE46687.2019.9015558
4. Wiens AN, Harper RG, Matarazzo JD (1978) Nonverbal communication: the state of the art. Wiley
5. Lakhani MI, McDermott J, Glavin FG, Nagarajan SP (2022) Facial expression recognition of animated characters using deep learning. In: 2022 international joint conference on neural networks (IJCNN), pp 1–9. https://doi.org/10.1109/IJCNN55064.2022.9892186
6. Verma GK, Tiwary US (2014) Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals. Neuroimage 102:162–172. https://doi.org/10.1016/j.neuroimage.2013.11.007
7. Kumar S, Ansari MD, Gunjan VK, Solanki VK (2020) On classification of BMD images using machine learning (ANN) algorithm. In: ICDSMLA 2019. Springer, Singapore, pp 1590–1599
8. Cowie NT, Douglas-Cowie E, Roddy WF, Votsis G, Kollias S, Taylor JG (2001) Emotion recognition in human-computer interaction. IEEE Signal Process Mag 18(1):32–80
9. Singh SK, Thakur RK, Kumar S, Anand R (2022) Deep learning and machine learning based facial emotion detection using CNN. In: 2022 9th international conference on computing for sustainable global development (INDIACom), pp 530–535. https://doi.org/10.23919/INDIACom54597.2022.9763165
10. Moore S, Bowden R (2011) Local binary patterns for multi view facial expression recognition. Comput Vis Image Underst 115(4):541–558
11. Mohamed Iqbal M, Latha K (2023) A parallel approach for sentiment analysis on social networks using spark. Intell Autom Soft Comput 35(2):1831–1842
12. Mliki H, BenAbdallah H, Hammami M, Fourati N (2013) Data mining based facial expressions recognition system. In: SCAI, pp 185–194
13. Ekman P, Friesen WV (1971) Constants across cultures in the face and emotion. J Pers Soc Psychol
14. Liu P, Han S, Meng Z, Tong Y (2014) Facial expression recognition via a boosted deep belief network. In: 2014 IEEE conference on computer vision and pattern recognition, pp 1805–1812. https://doi.org/10.1109/CVPR.2014.233
15. Zhang T, Yan K, Cui Z, Tong Y, Yan J, Zheng W (2016) A deep neural network-driven feature learning method for multi-view facial expression recognition. Trans IEEE Multimed 18(12):2528–2536
16. Harshitha S, Sangeetha N, Shirly AP, Abraham CD (2019) Human facial expression recognition using deep learning technique. In: 2019 2nd international conference on signal processing and communication (ICSPC), pp 339–342. https://doi.org/10.1109/ICSPC46172.2019.8976876
17. Agarwal A, Patni K, Rajeswari D (2021) Lung cancer detection and classification based on Alexnet CNN. In: 2021 6th international conference on communication and electronics systems (ICCES), pp 1390–1397. https://doi.org/10.1109/ICCES51350.2021.9489033
18. Carrier PL, Mirza M, Courville A, Goodfellow IJ, Bengio Y (2013) FER-2013 database. Université de Montréal

Electronic Health Records Rundown: A Novel Survey

A. Sree Padmapriya, B. Sabiha Sulthana, A. Tejaswini, P. Snehitha, K. B. V. Brahma Rao, and Dinesh Kumar Anguraj

Abstract Over the years, huge amounts of data have been generated in various sectors. The healthcare industry is one of the largest, and patient medical record data is growing exponentially with time. Medical records are used to track interactions between patients and healthcare professionals. Electronic Health Records (EHR) are the computerized version of a patient's health information, and they have many benefits. Storing such large volumes of big data is a challenge, and it is equally important to provide security and privacy for the data. Healthcare can be provided accurately only when the health information is accurate. In this paper, a few data analytics methodologies are used to produce effective insights.

Keywords Electronic health records · Health records · Data analytics · Data prediction · Disease prediction

1 Introduction

Technology today is mostly about making our work simpler and easier to deal with. Medical records contain information about the patient's medical history, diagnostic tests, laboratory results, preoperative treatment, surgical notes, postoperative care, and regular notes on progress and prescriptions. Hospital record management and administration is a significant undertaking. Records used to be maintained physically, but they are now being moved to electronic systems and stored in databases. Health records that were once on paper have evolved into digitalized health records.

A. Sree Padmapriya · B. Sabiha Sulthana · A. Tejaswini · P. Snehitha · K. B. V. Brahma Rao · D. K. Anguraj (B)
Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India
e-mail: [email protected]
K. B. V. Brahma Rao
e-mail: [email protected]


This conversion and evolution were not easy tasks; they took many ups and downs through implementation, testing, and finally producing the output. Implementing an electronic health record involves key factors such as data collection, data analytics, storage and security, data retrieval, and reporting [1]. While each factor has its own significance, data analytics can be considered the prominent one. The main objective of storing medical records is to provide accurate, timely statistical information to the department and, more importantly, to provide the best and most effective healthcare services to patients. The three major categories of healthcare record digitization are the PHR (personal health record), EMR (electronic medical record), and EHR (electronic health record). A personal health record (PHR) is a compilation of a person's medical records maintained by the person himself. An electronic medical record (EMR) is a computerized version of a chart that contains patient data. An Electronic Health Record stores an electronic version of a patient's medical history over time and serves the same objective more efficiently: while Electronic Medical Records can only be used to track a patient's record in a specific hospital, Electronic Health Records can be shared with labs, facilities, other hospitals, etc. Electronic Medical Records are used by almost all hospitals familiar with digitalization, small hospitals in rural areas being a possible exception, but they are confined to a particular location and cannot be shared freely. Electronic Health Records are being used to give many people better healthcare services and have been adopted in various parts of the globe. The Ministry of Health and Family Welfare announced Electronic Health Record Standards for India in 2013 to establish a universal standards-based framework for the creation and maintenance of Electronic Health Records by healthcare providers; except for a handful of large healthcare organizations in India, it is yet to be implemented in some parts of the country. Adoption of Electronic Health Records in hospitals helps to improve healthcare delivery by maintaining structured healthcare information. It aids in predicting people's health issues over time and delivering appropriate treatment. Online prescriptions, patient portals, online communication, and lab and chart analysis are a few of the features of Electronic Health Records, and several other features make doctors' and patients' lives easier. Data analytics involves processes like obtaining or extracting the data, data preprocessing (which includes cleaning of data, data integration, redundancy reduction, etc.), and data mining. Data transformation is also required in order to transform the different forms of data into one single form that can be operated on. Data analytics helps provide concrete evidence to support decision-making, so that assumptions and guesses can be eliminated in the process of acquiring accurate outcomes.

2 Literature Survey

Several papers concerning Electronic Health Records, data analytics, and cloud computing, ranging from 2018 to 2022, have been reviewed and are displayed in


Table 1. Different types of methodologies were used in each paper, drawing on a wide range of operable datasets containing patient information collected from hospitals, universities, data warehouses, etc.

3 Discussion

The Electronic Health Record is one of many remarkable healthcare technologies that draw on knowledge acquired from domains like data science and big data analytics as well as cloud computing. Implementing such an Electronic Health Record involves several complex tasks and methodologies. Different methodologies and machine learning algorithms have been used by different groups across the world, according to the available data and the purpose of the application. Some of the methodologies that caught our attention while reviewing journals in the course of exploring Electronic Health Records are highlighted in this review paper.

3.1 Why Electronic Health Records?

As specified earlier, the Electronic Health Record is the digitalized form of the health records stored in a hospital. It is not used solely for storage but can be utilized in several other ways when used appropriately. The big data collected should contain several determinant attributes like age, gender, region, and the season in which patients visit the hospital most frequently. These determinant factors can help in predicting seasonal diseases region-wise, considering age and gender, so that precautionary measures can be followed to avoid sickness. They can also be used to restock the most-utilized medication in a particular pharmacy accordingly. Implementation of an Electronic Health Record begins with collecting appropriate data, which may be structured, unstructured, or of different forms, and which is finally reshaped into structured data that can serve the purpose of systematic analytics. Performing analytics on big data is a strenuous task, as numerous complications may occur due to its large volume and variety; overcoming these complexities and obtaining the results are the prime objectives. Data preprocessing is the primary step of dataset formation after collecting data from various sources [18]. Since the collected data may not all be in a similar form, data preprocessing helps convert data of different formats into a single format, making the data consistent so that analytics can be performed. Accurate, reliable, and less redundant data are the outcomes of data preprocessing. This step is beneficial as well as impactful in building efficient electronic health records that contain the right information. Data cleaning is less complex than data preprocessing. Reducing noise, removing wrong data, and replacing missing values are part of the data-cleaning process.


Table 1 Survey papers on electronic health records

Author and year | Methodology | Dataset | Remarks
Hossain et al. (2021) [2] | Statistical methods, regression model, ML, and data mining | Electronic health datasets | Digital prediction models are best for effective healthcare
Claudio et al. (2022) [3] | Data analysis | FAPESP COVID-19 data sharing/BR repository | Proposed and evaluated clinical path
Qamar (2022) [4] | Classification and feature selection by DL methods | CPRD | Boosting neural network architecture
Tsang et al. (2020) [5] | Ensemble deep neural networks and machine learning methodologies | Taken from Welsh NHS, UK | Prediction of hospitalization potential for patients suffering from dementia
Liu et al. (2020) [6] | Statistical shape model, PCA, and interpolation algorithm | Paper-based medical records | Automatically saving, structuring, and analyzing the paper-based medical records
Sachdeva et al. (2020) [7] | Space and time efficiency analysis of data models | Considered 50 K instances of standard EHRs | Critically analyzed the adoption of various models (Relational, EAV, DT, OEAV, and OCOM)
Neamah (2020) [8] | Software-defined networking and ML algorithms | Medical datasets | EHR-based smartphone apps associated with a data warehouse
Souiki et al. (2020) [9] | UML formalism | Medical information | Mobile application
Auefuea et al. (2019) [10] | Stakeholder and workflow analysis | Taken from Ramabodi hospital, Thailand | Comparing the efficiency and effectiveness of the system
Nazir et al. (2019) [11] | ML algorithms and analytical methods | Collected from studies from 2009 to 2018 (10 years) | Endeavor to study the visualization of medical big data in cardiology
Polychronidou et al. (2019) [12] | Multi-scaling data analysis and visualization | Collection of 200 patients with 79 features | Visualization of healthcare data
Zhang et al. (2019) [1] | Sample and data collection variables and instruments, measurement, validity, and reliability | Surveyed 580 nurses at a large hospital in China | Adaptation of analytical tools to EHRs from the different knowledge modes
Shickel et al. (2018) [13] | Deep learning and NLP | From MIMIC, i2b2 | Predictive models are improved by using deep learning methodologies
Bian et al. (2018) [14] | Data collection, task modeling, and task taxonomy | EMR datasets | Taxonomy summarization
Pitoglou et al. (2018) [15] | SVM, KNN, Naive Bayes, logistic regression, and deep learning neural network | Provided by the Athens Municipal Hospital "ELPIS" | Measuring the performance of the emergent models
Bernard et al. (2018) [16] | Participant, procedure visualization, and novel method | 2,000 patients | Three-tier visualization structure
Rabieh et al. (2018) [17] | System bootstrap, on-vehicle sensory data processing, and declaring distress mode | Medical records | Re-encryption process
Jin et al. (2018) [18] | ARIMA, RNN, LSTM, risk prediction, and preprocessing methods | EHR data from real-world datasets related to congestive heart disease | Prediction of heart failure diagnosis
Joshi et al. (2018) [19] | Attribute-based encryption (ABE) | Not specified | Development of attribute-based, field-level, document encryption
Zeng et al. (2018) [20] | NLP, computational phenotyping, keyword search and rule-based system, supervised statistical ML algorithms, unsupervised, and deep learning | ICD-9, radiology reports and doctors' notes | Extract relations between medical concepts

The data-cleaning process is achieved through the following sequence of steps: removing unwanted data, converting the data to a single structure, removing outliers (the odd ones out), filling missing values with the mean, and verifying the result. Digital prediction models are best for effective healthcare. With the help of predictive analysis, an optimized diagnosis can be obtained, which helps prevent misdiagnosis; unnecessary readmission of patients can be reduced; and the clinic can be promoted to the right target audience, so that patients know whom to approach for their condition and be treated. To improve the predictive models, the latest and largest amounts of data need to be used so that better results are acquired.
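A minimal pandas sketch of these cleaning steps on a toy patient table; the column names and values are made up, and imputation is done before the outlier check here so that every z-score is defined.

```python
import numpy as np
import pandas as pd

# Toy records containing the flaws the steps target: a duplicate row,
# a missing value, and one wildly wrong glucose reading.
records = pd.DataFrame({
    "age": [34, 51, 51, 47, 29, 30],
    "glucose": [110.0, 95.0, 95.0, 140.0, np.nan, 900.0],
})

cleaned = records.drop_duplicates()              # remove unwanted (duplicate) rows
cleaned = cleaned.astype("float64")              # one consistent structure
cleaned = cleaned.fillna(cleaned.mean())         # fill missing values with the mean
z = (cleaned - cleaned.mean()) / cleaned.std()   # spot the "odd ones out"
cleaned = cleaned[(z.abs() <= 1.5).all(axis=1)]  # loose bound for this tiny table
assert cleaned.notna().all().all()               # verify
print(cleaned)
```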


Fig. 1 Electronic health record implementation example

The type of data used to perform predictive analysis must be collected according to the problem to be worked on. An example Electronic Health Record implementation is shown in Fig. 1.

3.2 Methodology
Data analytics is commonly divided into five main types: predictive, discovery, descriptive, diagnostic, and prescriptive, which play divergent roles, each with its own significance. Various survey papers presenting techniques in data analytics have been reviewed. Prediction is one significant type that is very useful in the field of medical science. With the help of data obtained from electronic medical records and electronic health records, diverse methodologies can be applied to predict different outcomes and bring upgraded applications to the medical science field. New drugs and formulas can be produced as solutions to diseases with the help of prediction. Prediction-assisted diagnosis is a boon to doctors and can be regarded as disease prediction. Statistical methods, machine learning, and data mining are the prime approaches to disease prediction within data analytics. The basic block diagram of data analytics in disease prediction is shown in Fig. 2.
Statistical Method: Purposeful information can be obtained from patterns and the latest trends with the help of statistical methods; statistical analysis applies these methods to obtain the outcomes described above. Of the several models used in statistical analysis, one is discussed below. The regression model is a sub-model of the statistical method. When two different variables (dependent and independent) are taken in a scenario, the relation between


Fig. 2 Basic block diagram of data analytics in disease prediction

those variables can be determined through regression analysis. In the case of electronic health records, the following example can be considered: a person's age is the independent variable and glucose level is the dependent variable, and regression analysis models how the glucose level changes as age increases.

Machine Learning and Data Mining: Machine learning and data mining often intersect in the field of data science to extract knowledge from large volumes of data such as electronic health records. Supervised algorithms such as Support Vector Machine, decision tree, and random forest, as well as unsupervised algorithms such as association analysis, are discussed below.

Both linear and non-linear data can be classified with the Support Vector Machine. The Support Vector Machine is a classification algorithm that can be used to classify diseases and thereby help in disease prediction [15]. It also helps in identifying complex relations between data while reducing the number of transformations required.

A tree-like structure of a root node, internal nodes, leaf nodes, and connecting branches makes up a decision tree. Dealing with the large volumes of data obtained from electronic health records is a challenging task; a decision tree helps split this large volume of data into smaller segments. The decision tree exposes a hierarchy while classifying the data and is also capable of performing regression, so it can be used in disease prediction and estimation. A group of decision trees combine to form a random forest. Random forests can be used in the process of disease prediction to obtain more accurate predictions.
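To make the supervised methods above concrete, here is a small, self-contained scikit-learn sketch that trains the three classifiers on synthetic stand-in features; it illustrates the workflow only and is not the survey's own experiment (real work would use features extracted from EHRs).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for EHR features (e.g. age, glucose, blood pressure)
# and a binary disease label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(max_depth=5),
    "Random forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    model.fit(X_train, y_train)                  # train on the split
    print(name, "accuracy:", model.score(X_test, y_test))
```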


Diseases can be identified by analyzing patients' medical records with a random forest, which is efficient, easy to use, beginner friendly, and versatile. Association analysis is an unsupervised machine learning technique. Interesting relationships can be identified within the large datasets obtained from electronic health records with the help of association analysis; these relationships take the form of frequent item sets or association rules. A collection of items that frequently occur together is known as a frequent item set. Hidden patterns can be extracted from data collected in an emergency department with the help of association rule mining. Figure 3 displays the disease-prediction accuracy of a few of the data analytical methods discussed earlier in this paper, applied to data obtained from electronic health records; the accuracy values are taken from the papers reviewed [2]. As several data analytical methods play a vital role in the implementation of electronic health records and in the predictive results obtained from them, cloud storage and security play a key role in storing these analytical metrics and large volumes of data with strong encryption [19]. The safe and secure transfer of data is particularly important in the implementation of Electronic Health Records.
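To illustrate the association analysis described above, the following sketch assumes the mlxtend library; the one-hot "visit" table and its column names are invented for demonstration, not drawn from any surveyed dataset.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot encoded "transactions": each row is a patient visit, each
# column a recorded finding (illustrative values, not real data).
visits = pd.DataFrame(
    [[1, 1, 0], [1, 1, 1], [0, 1, 1], [1, 1, 0], [1, 0, 1]],
    columns=["hypertension", "diabetes", "obesity"],
).astype(bool)

# Frequent item sets: findings that co-occur in >= 40% of visits.
itemsets = apriori(visits, min_support=0.4, use_colnames=True)

# Association rules derived from those frequent item sets.
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```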

Fig. 3 Different methods in data analytics and their accuracy in disease prediction


4 Conclusion Electronic health records are crucial in the medical analysis domain because they record an electronic version of a patient's medical history, and the data, which contains the medical data of an individual person, is maintained by the collector for a very long time. From this data, a patient's medical history, the medications one has used or is using, one's allergies, previous X-rays, laboratory records, test analyses, and so on can be identified, so access to evidence-based patient medical analysis can be provided. Some very effective segmentation techniques are recognized for factors such as demographics, progress data, problems one has suffered, medications used, earlier vital signs, past medical history, immunizations, laboratory data, and radiology reports. Several well-known components are Patient Management, the Clinical Component, Secure Messaging and Alerts, Financial Dashboards, and Revenue Cycle Management (RCM). Additionally, there are several efficient strategies for electronic health records rather than just one. Data science is used to collect the patient's medical history as data, and it also helps to record one's analysis and track the patient's medications. Cloud computing also plays a key role, as it is used to retrieve the information or data that has been collected; by retrieving data, a clear image of the patient's health condition can be observed. When implementing electronic health records, all major components that help manage workflows and improve patient care must be noted; in some cases, this may also increase clinical revenue. Prediction is a crucial part of data analytics and can be obtained by performing different methodologies based on the type of data collected and the outcome to be acquired. Random forest has higher accuracy than the other methods used in disease prediction; this conclusion could be drawn from the accuracy values taken from the survey papers reviewed.

References

1. Zhang C, Ma R, Sun S, Li Y, Wang Y, Yan Z (2019) Optimizing the electronic health records through big data analytics: a knowledge-based view, vol 7, pp 136223–136231
2. Hussain ME, Khan A, Moni MA, Uddin S (2021) Use of electronic health data for disease prediction: a comprehensive literature review, pp 745–758
3. Linhares CDG, Lima DM, Ponciano JR, Olivatto MM, Gutierrez MA, Poco J, Traina C, Traina AJM (2022) Clinical path: a visualization tool to improve the evaluation of electronic health records in clinical decision-making
4. Qamar S (2022) Healthcare data analysis by feature extraction and classification using deep learning with cloud based cyber security, vol 104
5. Tsang G, Zhou SM, Xie X (2020) Modeling large sparse data for feature selection: hospital admission predictions of the dementia patients using primary care electronic health records, vol 9, pp 1–13
6. Liu N, Wang C, Miao X, Bai H, Wang Y, Yang L, Lei Y, Zhang W, Wang H (2020) A new data visualization and digitization method for building electronic health record, pp 2980–2982


7. Sachdeva S, Batra D, Batra S (2020) Storage efficient implementation of standardized electronic health records data, pp 2062–2065
8. Neamah AF (2020) Flexible data warehouse: towards building an integrated electronic health record architecture, pp 1038–1042
9. Souiki S, Hadjila M, Moussaoui D, Ferdi S, Rais S (2020) M-health application for managing a patient's medical record based on the cloud: design and implementation, pp 44–47
10. Auefuea S, Nartthanarung A, Pronsawatchai P, Soontornpipit P (2019) Comparing a conceptual model of the electronic record system applying in home health care unit, pp 1–4
11. Nazir S, Nawaz M, Anwar S, Adnan A, Asadi S, Shahzad S, Ali S (2019) Big data visualization in cardiology: a systematic review and future directions, pp 115945–115958
12. Polychronidou E, Kalamaras I, Votis K, Tzovaras D (2019) Health vision: an interactive web based platform for healthcare data analysis and visualisation, pp 1–8
13. Shickel B, Tighe PJ, Bihorac A, Rashidi P (2018) Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis, vol 22, no 5, pp 1589–1604
14. Bian X, Kharrazi H, Caban JJ, He G, Feng Z, Chen J (2018) Towards a task taxonomy of visual analysis of electronic health or medical record data, pp 281–286
15. Pitoglou S, Koumpouros Y, Anastasiou A (2018) Using electronic health records and machine learning to make medical-related predictions from non-medical data, pp 56–60
16. Bernard J, Sessler D, Kohlhammer J, Ruddle RA (2018) Using dashboard networks to visualize multiple patient histories: a design study on post-operative prostate cancer, pp 1615–1628
17. Rabieh K, Akkaya K, Karabiyik U, Qmruddin J (2018) A secure and cloud-based medical records access scheme for on-road emergencies, pp 1–8
18. Jin B, Che C, Liu Z, Zhang S, Yin X, Wei X (2018) Predicting the risk of heart failure with EHR sequential data modeling, vol 6, pp 9256–9261
19. Joshi M, Joshi KP, Finin T (2018) Attribute-based encryption for secure access to cloud based EHR systems, pp 932–935
20. Zeng Z, Deng Y, Li X, Naumann T, Luo Y (2019) Natural language processing for EHR-based computational phenotyping, vol 16, no 1, pp 139–153

ENT Pattern Recognition Using Augmented Bounding Boxes P. Radha, V. Neethidevan, and S. Kruthika

Abstract This work detects disorders in medical patterns related to the ear, nose, and throat of human beings, marked by augmented bounding boxes as an object detection technique. Ear disorders such as hearing loss, cochlear implant surgery, tinnitus, presbycusis, and Usher syndrome; throat-oriented problems such as laryngitis, croup, and tonsillitis; and nose-related disorders such as nasal polyps, deviated septum, rhinitis, nosebleeds, and nasal fractures are identified using this technique, making it possible to diagnose ENT disorders. The disorder patterns were collected using an endoscopic camera and used as training samples for an augmented bounding box algorithm in the experimental stage of this work. The trained visual patterns were projected with the use of box label data. With the help of this additional data, a thorough analysis could be conducted to provide the ENT hospital's patients with a helpful diagnosis. This research especially focuses on ear disorders and uses image segmentation; foreground and background masks are used for lazy snapping-based segmentation as well. Keywords Image object detection · Augmented bounding box · ENT disorders

1 Introduction Most children, middle-aged individuals, and older persons are affected by ear, nose, and throat diseases [1, 2]. With the help of various ENT equipment, the doctor can examine the patient and provide the right medications to treat the condition [3]. In the usual situation, the doctor and patient are under pressure to interact physically. Endoscopes, like the one shown in Fig. 1, are used to collect symptoms related to the ear, including cochlear implant surgery, hearing loss, tinnitus, presbycusis, and Usher syndrome, as well as throat-related issues like
P. Radha (B) · V. Neethidevan · S. Kruthika Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_37


Fig. 1 Endoscope 30°, 3 mm, 140 mm

laryngitis, croup, and tonsillitis and nose-related issues like nasal polyps, deviated septums, rhinitis, nosebleeds, and nasal fractures. The patterns of illnesses are recorded in the form of photographs [4–8]. Figure 2 reports sample images of disorders of the nose, throat, and ears. Object detection algorithms are used increasingly often across various industries and applications [9–11], but in the ENT field there is no comparable process or method, and no pertinent study has yet been done in this area. To identify disorders in the ear, nose, and throat, ENT pattern recognition uses augmented bounding boxes for object detection [12–15]. Image augmentation is the process of changing existing training data to increase the dataset size; fundamentally, it involves fabricating synthetic data that appears genuine. Ear problems are also addressed using an image segmentation approach.


Fig. 2 Sample disorder ENT patterns

2 System Methodology The system's general architecture is shown in Fig. 3. The unprocessed images of the ENT pattern abnormalities were first gathered. The bounding boxes and image patterns were then scaled, cropped, and warped using the image processing and computer vision capabilities of MATLAB, and finally the evaluation method was applied. As a result, this work involves several sub-phases: data collection, loading data, reading and projecting bounding boxes, resizing images and bounding boxes, cropping images and bounding boxes, warping images and bounding boxes, applying augmentation to the training data of the datastore, and pattern segmentation.


Fig. 3 Overall architecture of the system

Data collection: Images of ear, nose, and throat disorders are gathered through a number of data centers that offer a variety of datasets, such as Google and Kaggle.
Reading and displaying bounding boxes: Bounding boxes and example pictures are read and displayed. Each transformation must employ the same input pattern and bounding box so that the effects of the different augmentation methods can be compared. Finally, the example picture and its bounding boxes are shown.
Image resizing process: Scaling an image is referred to as image resizing. Many image processing and machine learning applications benefit from scaling: it lowers the number of pixels in an image, which has various benefits. It can shorten the time required to train a neural network, since an image's pixel count increases the number of input nodes, which in turn raises the model's complexity. It also facilitates image zooming.


Resizing bounding box: The resize tool scales the image down by a factor of 2. The same scaling factor is applied with bboxresize when the picture has a bounding box, so that the box tracks the resized image.
Cropping image and bounding box: Cropping is a typical pre-processing procedure for making input data fit an intended size; cropping a picture and its bounding box is also known as image trimming. When using the randomWindow2d or centerCropWindow2d functions from the Image Processing Toolbox, the crop window's size and location are specified first in order to produce output pictures of the required size. A cropping window must be chosen by the user so that the desired portion of the image is included; imcrop is then used to crop the picture and pixel label image to the same window. Similarly, the bounding boxes are cropped using bboxcrop with the same crop window. When the crop window does not entirely surround the bounding box, an OverlapThreshold of less than 1 prevents the function from discarding the bounding boxes; instead, it clips them to the crop window. By adjusting the overlap threshold, the user can decide which objects to accept after clipping.
Warping image and bounding box: Using rotation, translation, scaling (resizing), reflection, and shearing, the randomAffine2d function (Image Processing Toolbox) generates a randomized 2-D affine transformation. imwarp is then used to apply the warping to a picture (Image Processing Toolbox), and bboxwarp is used to warp the bounding boxes. Finally, the user may use the affineOutputView function (Image Processing Toolbox) to control the warped output's spatial boundaries and resolution.
Applying augmentation to training data: Image data augmentation is the process of creating new, modified copies of the pictures in the provided image dataset in order to broaden its variety. To a computer, images are nothing more than a two-dimensional collection of numbers; these numbers stand for pixel values, which the user may change in a variety of ways to create brand-new, enhanced images. Datastores make it simple to access and enhance data collections. A picture and bounding box are stored in a datastore, and numerous procedures are used to enhance the data. The picture and box label file names are duplicated to expand the size of the sample datastores. Data augmentation is applied to the training data with the transform function. The initial augmentation jitters the hue of the picture before randomly rotating and reflecting it horizontally on each image and box label pair. The helper function jitterImageColorAndWarp defines these operations: it randomly jitters the color of the image data and then transforms the box label and image data with the same affine transformation. The transformation comprises random rotation and horizontal


reflection. The input data and output data are both represented by two-element cell arrays, where the first element contains the picture data and the second element contains the box label data.
Segmenting data based on graphs: The foreground and background of a picture can be separated using graph-based segmentation techniques such as lazy snapping. MATLAB can carry out this segmentation on a raw image either programmatically (lazy snapping) or interactively (using the Image Segmenter app).
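The operations above are performed with MATLAB's bboxresize/bboxcrop/bboxwarp; as a rough, language-neutral illustration of the same bounding-box geometry, here is a minimal Python/OpenCV sketch. The function names and the crop window are our own; the box [2, 90, 200, 130] matches the initialization reported in Sect. 3.

```python
import cv2
import numpy as np

def resize_with_box(img, box, scale):
    """Resize an image and scale its [x, y, w, h] box by the same factor."""
    h, w = img.shape[:2]
    out = cv2.resize(img, (int(w * scale), int(h * scale)))
    return out, [int(v * scale) for v in box]

def crop_with_box(img, box, window, overlap_threshold=0.5):
    """Crop to window=[x, y, w, h]; clip the box to the window and discard
    it if less than overlap_threshold of its area survives the crop."""
    wx, wy, ww, wh = window
    crop = img[wy:wy + wh, wx:wx + ww]
    bx, by, bw, bh = box
    x1, y1 = max(bx, wx), max(by, wy)
    x2, y2 = min(bx + bw, wx + ww), min(by + bh, wy + wh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter < overlap_threshold * (bw * bh):
        return crop, None                                   # box dropped
    return crop, [x1 - wx, y1 - wy, x2 - x1, y2 - y1]       # clipped box

img = np.zeros((300, 400, 3), dtype=np.uint8)   # stand-in for an ENT frame
small, small_box = resize_with_box(img, [2, 90, 200, 130], 0.5)
crop, clipped = crop_with_box(img, [2, 90, 200, 130], [0, 50, 250, 200])
print(small.shape, small_box, clipped)
```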

3 Results and Discussion The images are stored in PNG format, and the bounding box is initialized as [2 90 200 130]. The augmentation related to throat disorders is shown in Figs. 4 and 5, the augmentation related to nose disorders in Figs. 6 and 7, and the augmentation related to the ear in Figs. 8 and 9. The segmentation results in one or two bounding rectangles: the selected objects are marked in green, and finally the image pattern inside the bounding rectangle is selected for visualization. These results are given in Figs. 10, 11, and 12.
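Lazy snapping itself is an Image Processing Toolbox feature; a closely related graph-cut foreground/background separation can be sketched in Python with OpenCV's GrabCut (the file names here are hypothetical, and the rectangle reuses the paper's initial box):

```python
import cv2
import numpy as np

# Graph-cut foreground/background separation, initialized with the
# bounding rectangle obtained from detection.
img = cv2.imread("ear_pattern.png")             # hypothetical ENT image
mask = np.zeros(img.shape[:2], dtype=np.uint8)
rect = (2, 90, 200, 130)                        # the paper's initial box
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)

cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the segmented pattern.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
segmented = img * fg.astype(np.uint8)[:, :, None]
cv2.imwrite("segmented_ear.png", segmented)
```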

Fig. 4 Throat augmentation of medical pattern


Fig. 5 Results of Throat augmentation

Fig. 6 Nose augmentation of medical pattern


Fig. 7 Results of Nose augmentation

Fig. 8 Ear augmentation of medical pattern



Fig. 9 Results of Ear augmentation

Fig. 10 Segmentation of Ear pattern


Fig. 11 Visualizing segmented Ear pattern

Fig. 12 Segregating selected EAR pattern



4 Conclusion This paper presents the augmented bounding box as an object identification technique for finding ENT pattern problems, utilizing MATLAB's image processing and computer vision technologies. This will assist the ENT specialist, or otolaryngologist, in accurately assessing ENT medical patterns. Additionally, segmentation is used to isolate the necessary patterns. Future work may incorporate automated mechanisms by integrating emerging technologies such as IoT, machine learning, and cloud computing. Under pandemic conditions, otolaryngologists are unable to care for their patients physically: since they must contact the patients' throats and noses, the coronavirus could be transmitted regularly. Using an integrated strategy built on these tools, the otolaryngologist may offer an intelligent diagnosis. Domain specialists and the medical equipment are connected over the Internet using IoT; from the raw data obtained, machine learning can forecast the irregular patterns; and cloud computing allows medical professionals and distant patients to communicate safely. With these technologies, ENT experts will be able to provide smart diagnosis for remote patients.

References

1. Sun F, Li H, Liu Z, Li X, Wu Z (2021) Arbitrary-angle bounding box based location for object detection in remote sensing image, vol 54, p 15. https://doi.org/10.1080/22797254.2021.1880975
2. Ibrahim MS, Badr AA, Abdallah MR, Eissa IF (2012) Bounding box object localization based on image superpixelization, vol 01, no 1877-0509, p 12
3. Zhou Y, Suri S (2022) Analysis of a bounding box heuristic for object intersection, vol 46, no 33-857, p 26
4. Ravi N, Naqvi S, El-Sharkawy M (2022) An improved bounding box regression for object detection, p 16. https://doi.org/10.3390/jlpea12040051
5. Ha J, Haralick RM, Phillips IT (2002) Document page decomposition by the bounding-box projection, no 5637736, p 01. ISBN 0-8186-7128-9
6. Things for uninterrupted, ubiquitous, user-friendly, unflappable, unblemished, unlimited health care services (BC IoMT U6 HCS). IEEE Access 8:216856–216872. https://doi.org/10.1109/ACCESS.2020.3040240
7. Cha D, Pae C, Seong S-B, Choi J, Park H-J (2019) Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database. EBioMedicine 45. https://doi.org/10.1016/j.ebiom.2019.06.050
8. Viscaino M, Maass JC, Delano PH, Torrente M, Stott C, AuatCheein F (2020) Computer-aided diagnosis of external and middle ear conditions: a machine learning approach. PLoS One 15(3):e0229226. https://doi.org/10.1371/journal.pone.0229226


9. https://blog.paperspace.com/data-augmentation-for-bounding-boxes/
10. https://www.researchgate.net/publication/2500978_Analysis_of_a_Bounding_Box_Heuristic_for_Object_Intersection
11. https://ieeexplore.ieee.org/document/9400416
12. https://www.sciencedirect.com/science/article/pii/S1877050915028574
13. https://www.researchgate.net/publication/335109611_Image_Segmentation_A_Review
14. https://ieeexplore.ieee.org/document/1674602
15. https://ieeexplore.ieee.org/document/8204282

Evaluation of Bone Age by Deep Learning Based on Hand X-Rays R. G. V. Prasanna, Mahammad Firose Shaik, L. V. Sastry, Ch. Gopi Sahithi, J. Jagadeesh, and Inakoti Ramesh Raja

Abstract Bone age assessment provides clinical information about maturity in individuals. It is mostly helpful in investigating growth abnormalities in children and in identifying several endocrinological and genetic diseases; it also helps pediatricians estimate puberty onset and growth, which in turn supports the prediction of genetic disorders. Generally, the Tanner-Whitehouse or the Greulich and Pyle techniques are used in the radiological evaluation of hand X-ray images, and several algorithms such as ResNet 50 and Inception v3 have already been proposed. This study suggests a deep learning algorithm to determine bone age automatically: a ResNet 50 model trained with a dataset of 12,611 images and tested with 200 images, aiming at the most accurate results. Keywords Bone age · ResNet 50 · VGG19 · Genetic disease · X-ray images

1 Introduction Throughout the course of a person's life, their bones might alter in size and shape. In the forensic profession, gender identification with age estimation is a crucial and difficult undertaking. In order to forecast growth limitations in young patients and to identify and treat endocrine diseases, bone age assessments are commonly performed. The assessment of hand and wrist bone growth has traditionally been performed using visual evaluation methods such as the Tanner-Whitehouse and Greulich and Pyle atlas techniques. These methods rely on a comparison of left-hand digital images to standardized atlases, performed by a trained expert, to determine the age and gender
R. G. V. Prasanna · M. F. Shaik (B) · L. V. Sastry · Ch. G. Sahithi · J. Jagadeesh EIE Department, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India e-mail: [email protected] I. R. Raja ECE Department, Aditya College of Engineering and Technology, Surampalem, Kakinada, AP, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_38


Fig. 1 Hand structure

of the subject. However, manual prediction can be time-consuming, and its accuracy may vary due to observer bias and reliance on laboratory experience. Human bones frequently undergo alterations, which are amplified throughout the growing phase. Through the radius and ulna bones, the hand is tightly connected to the lower arm. It consists of 27 tiny bones: 8 carpal bones, 5 metacarpal bones, and 14 phalanges. Figure 1 shows the arrangement of the wrist and hand bones, i.e., the bone anatomy. For the creation of the generic bone age prediction model in Fig. 2, a large dataset of hand scan images must be assembled. The data is augmented through various techniques to increase the quantity of information available. The data pre-processing block performs image reshaping, equalization, normalization, and flattening before the data is fed into a convolutional neural network. The network is composed of multiple layers with varying parameters, which enable the formation of a hypothesis from the input image pixels and the prediction of bone age.
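As a rough sketch of the pre-processing block just described (reshaping, equalization, normalization, and flattening), assuming OpenCV and a grayscale X-ray input; the target size and file name are illustrative:

```python
import cv2
import numpy as np

def preprocess(path, size=(224, 224)):
    """Reshape, equalize, normalize, and flatten one hand X-ray,
    mirroring the pre-processing block of the prediction model."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # X-rays are single channel
    img = cv2.resize(img, size)                    # reshape to a fixed size
    img = cv2.equalizeHist(img)                    # histogram equalization
    img = img.astype(np.float32) / 255.0           # normalize to [0, 1]
    return img.reshape(-1)                         # flatten to a feature vector

# x = preprocess("hand_xray_0001.png")  # hypothetical file name
```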

2 Literature Survey Identifying a person's gender and age is a difficult and time-consuming operation; consequently, fully automatic identification solutions have been created. This section summarizes the fundamental concepts, motivations, and methods employed


Fig. 2 General bone age prediction model

by other researchers and gives a brief overview of their contributions to forensic medicine utilizing computer vision, AI, machine learning, and deep learning. Zulkifley et al. [1] developed an automated bone age prediction approach based on wrist radiographs using an image regression technique in 2020. The model employed separable residual convolution as its foundation. It requires a 199 × 199 pixel image, which is processed by Xception network modules for prediction; the image must first be pre-processed through a separable CNN for normalization and standardization. The results showed an average absolute error of 8.21 months and an average squared error of 122 months, with errors below a year considered acceptable


for age prediction. This approach may serve as a backup technique for additional age prediction recommendations. In 2020, Booz et al. [2] evaluated a method for determining bone age through AI, comparing the accuracy and effectiveness of an AI-based program to the GP method. In their study, they analyzed 524 radiographs and had four experts perform bone age estimation using the GP technique and AI software; a comparison of the results of three blinded experts showed improved accuracy and time efficiency. The study demonstrated that software-based methods can be beneficial in various fields. In 2020, Scendon et al. [3] examined carpal bones using MR scans to estimate human age. The objective of that study was to create an age assessment method based on the average growth of the carpals' developing surface and nucleus ossification. They looked at 58 magnetic resonance imaging (MRI) radiographs that were free of any pre-existing disorders and ranged in age from 11 to 19 years. Using the program ImageJ, parameters of nucleus ossification were extracted and the growing surface was measured. Their results show that five characteristic predictors, including nucleus ossification and the growth surfaces of the capitate, scaphoid, trapezium, trapezoid, and pisiform, yielded the best and most accurate results. In 2020, Lee et al. [4] proposed a bone age assessment method using digitized wrist photos and deep learning algorithms. They developed a system with a set of deep learning architectures and approximately 3000 computer-based photos for marking feature points on the wrist image; these points were taken as the foundation for age calculation in order to preserve the area of interest. By choosing a suitable portion of the wrist photos, the factors that are not important for estimating age can be discarded: since the background area in digital photos is useless for determining age, it can be deleted without negatively impacting system performance. In 2019, Dallora et al. [5] conducted research on using machine learning techniques for wrist bone age assessment. They conducted a comprehensive evaluation to provide evidence for their findings and address questions regarding gender and age estimation. Their research started with a literature review of three databases (Scopus, Web of Science, and PubMed) to identify studies related to bone age estimation using ML techniques. A small number of studies were chosen to analyze bias in lower-impact research and to evaluate the quality of the assessments. For their final estimation, the authors analyzed and compared 25 papers, many of which presented automated systems for gender and age estimation.

3 Proposed Work This work uses two different algorithms, VGG19 and ResNet 50, which produce better results than other algorithms such as Inception v3, MobileNet, and XceptionNet. A significant number of hand scan images must be obtained to create the generic bone age prediction model depicted in Fig. 3. The dataset is enlarged by utilizing


a data augmentation technique for each image. Prior to entering the convolutional neural network [6], the picture is pre-processed through reshaping, equalization, normalization, and flattening. The convolutional neural network comprises multiple layers with varying parameters, which allow for the creation of a hypothesis from the picture’s pixel values and the prediction of the bone age.

Fig. 3 Basic block diagram of bone age assessment


Fig. 4 VGG19 architecture

3.1 VGG19 The VGG, an acronym for Visual Geometry Group, is a prevalent deep convolutional neural network (CNN) architecture characterized by multiple layers: the depth of the network is reflected in its name, with VGG16 and VGG19 having 16 and 19 weight layers, respectively [7], as depicted in Fig. 4. The VGG architecture is used to construct novel object identification models. VGGNet, a deep neural network, outperforms benchmarks in a variety of tasks and datasets beyond ImageNet and remains a widely utilized image recognition architecture. The dataset downloaded from Kaggle [8] is large, with 12,811 images, of which 12,611 are used for training and 200 for testing. The VGG19 architecture features 19 weight layers (16 convolutional layers and 3 fully connected layers) along with 5 pooling layers. Both variants of VGGNet include two fully connected layers with 4096 channels each, followed by another fully connected layer with 1000 channels for predicting 1000 labels; the final layer for categorization is a fully connected softmax layer.

3.2 ResNet 50 ResNet 50, a convolutional neural network with 50 layers, is a common foundation for many computer vision applications. ResNet, short for Residual Network, allows the training of deep neural networks with over 150 layers, as shown in Fig. 5, thanks to its main innovation, the residual connection. The concept was first introduced in the 2015 computer vision paper "Deep Residual Learning for Image Recognition" by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. The approach to training a system using the VGG19 and ResNet 50 models involves initializing the model with pre-trained weights while keeping the same architecture. This allows the model to reuse pre-learned features, adapt to a new task, and achieve better performance while saving time and computing resources. Only the last layers


Fig. 5 Execution flow of ResNet 50

need to be trained, because the first set of layers is frozen. In place of the final softmax layer, Batch Normalization, Global Average Pooling, and Dropout layers are added before the fully connected layer, and the regression output is generated by a single neuron in the last layer. All models utilize the Adam optimizer with mean squared error (MSE) as the loss function and mean absolute error (MAE) as the metric, the only difference being the training methods used. Each model undergoes 10 epochs with a patience of 100; here, patience is the number of epochs without improvement in validation MAE after which training is stopped. The bone age assessment result is then obtained by using the trained model to make predictions on the test dataset, which consists of 200 photographs.
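The training scheme just described can be sketched with TensorFlow/Keras as follows; this is a minimal illustration under the stated assumptions (frozen ImageNet backbone, added BatchNorm/GAP/Dropout head, single-neuron regression output, Adam with MSE loss and MAE metric), not the authors' exact code.

```python
import tensorflow as tf

# Frozen ImageNet ResNet 50 backbone; only the new head is trained.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),   # single neuron -> bone age regression
])

model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Early stopping monitors validation MAE, per the described scheme.
stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_mae", patience=100, restore_best_weights=True)

# model.fit(train_ds, validation_data=val_ds, epochs=10, callbacks=[stop])
```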

4 Results and Discussion In Fig. 6, a convolutional neural network is utilized in a feature extraction and segmentation procedure. The RSNA Pediatric Bone Age Challenge made use of the RSNA Bone Age dataset, provided by a collaboration of Stanford University, the University of California-Los Angeles, and the University of Colorado. The dataset consists of 12,811 hand scans, with 12,611 used as training images (6833 for males and 5778 for females) and 200 used as test images, representing a range of ages and stages of bone growth over 19 years [9]. Figure 6 gives a graphical representation of the images in the dataset, i.e., the number of male images versus the number of female images. In addition, the network in question used a composite function consisting of the rectified linear unit, convolution layer, and batch normalization operator [10], unlike the typical flow of other networks, which is convolution layer, then rectified linear unit, then batch normalization operator. Figure 7 plots the number of children against the bone age z-score.


Fig. 6 Distribution of images in dataset (male, female)

Fig. 7 Relation between no. of children and bone age z score

4.1 VGG19 Results The mean absolute error of the algorithm is around 19.5 months, as shown in Fig. 8, which is lower than that of the remaining models; the accuracy can be further improved by increasing the number of epochs.


Fig. 8 Result of VGG19

Fig. 9 Result related to ResNet 50

4.2 ResNet 50 For ResNet 50, the mean absolute error is 31.80 months, as shown in Fig. 9; Table 1 compares this value with the other pre-trained models (Inception v3, MobileNet, and XceptionNet). Figure 10 plots the mean absolute error against the number of epochs for ResNet 50. The results can be improved further by increasing the number of epochs, which eventually increases the accuracy and decreases the absolute error. The proposed work's results are compared with existing works in Table 1.

5 Conclusion The VGG19 architecture achieved an MAE (mean absolute error) of 19.5 months; on a similar dataset, the results are comparable to other whole-image bone age assessment models. The bones at the center of the hand and wrist play a crucial role in determining an individual's bone age. Future work may involve experimenting with various filters, developing architectures that combine gender information with consideration for the varying rates of bone growth between the sexes, and assessing the effectiveness of the resulting designs.


Fig. 10 MAE in months versus Epochs

Table 1 Comparison of MAE (mean absolute error) in months for the trained models

Trained model  | MAE in months
Inception v3   | 29.13
MobileNet      | 29.56
XceptionNet    | 26.65
VGG19 [11]     | 19.58
ResNet 50 [12] | 31.80

References

1. Zulkifley MA, Abdani SR, Zulkifley NH (2020) Automated bone age assessment with image registration using hand X-ray images. Appl Sci
2. Booz C, Yel I, Wichmann JL, Boettger S, Al Kamali A (2020) Artificial intelligence in bone age assessment: accuracy and efficiency of a novel fully automated algorithm compared to the Greulich-Pyle method. Euro Radiol Exp 4:6
3. Scendon R, Mariano C, Andrea G, Marco F, Piergiorgio F (2020) Analysis of carpal bones on MR images for age estimation: first results of a new forensic approach. Forensic Sci Int
4. Lee JH, Kim YJ, Kim KG (2020) Bone age estimation using deep learning and hand X-ray images. Biomed Eng Lett 10:323–331
5. Dallora AL, Anderberg P, Kvist O, Mendes E, Diaz Ruiz S, Sanmartin Berglund J (2019) Bone age assessment with various machine learning techniques: a systematic literature review and meta-analysis
6. Cheng CF, Huang ET, Kuo JT, Liao KY, Tsai FJ (2021) Report of clinical bone age assessment using deep learning for an Asian population in Taiwan. BioMedicine 11(3), Article 8. https://doi.org/10.37796/2211-8039.1256


7. https://www.researchgate.net/figure/Modified-VGG-19-modelarchitecturefig1_344398328
8. https://www.kaggle.com/code/aswinge0119062/boneage-prediction-deeplearning-project
9. Kim J et al (2017) Computerized bone age estimation using deep learning based program: evaluation of the accuracy and efficiency. Am J Roentgenol 209(6):1374–1380. https://doi.org/10.2214/ajr.17.18224. Accessed 18 Apr 2021
10. Poojary NB et al (2021) A novel approach for bone age assessment using deep learning. Int J Sci Res Comput Sci Eng Inf Technol (IJSRCSEIT) 7(3):67–75. https://doi.org/10.32628/CSEIT21731
11. Saranya N, Kanthimathi N, Boomika S, Bavatharani S, Karthick Raja R (2022) Classification and prediction of lung cancer with histopathological images using VGG-19 architecture. In: Kalinathan LRP, Kanmani MSM (eds) Computational intelligence in data science. ICCIDS 2022. IFIP advances in information and communication technology, vol 654. Springer, Cham. https://doi.org/10.1007/978-3-031-16364-7_12
12. Zhang A, Lipton ZC, Li M, Smola AJ (2021) Dive into deep learning. arXiv:2106.11342

Image Processing-Based Presentation Control System Using Binary Logic Technique Sheela Chinchmalatpure, Harshal Ingale, Rushikesh Jadhao, Ojasvi Ghule, and Madhura Ingole

Abstract Delivering a polished presentation is challenging because of factors such as changing the slides and remembering the correct keys to press while maintaining composure in front of the audience, and using the keyboard to control slides causes many interruptions. Therefore, this paper proposes a smart presentation system using hand gestures, which provides an easy way to change or operate the slides. The proposed system uses various algorithms to detect different hand gestures and perform the corresponding operations. Keywords Gesture recognition · Python · Presentation

1 Introduction A more practical and user-friendly interface for managing a presentation display is provided by interactive presentation systems, which employ cutting-edge human-computer interaction techniques. These hand-motion approaches substantially enhance the presentation experience compared to conventional mouse and keyboard control. A gesture is a type of nonverbal or non-vocal communication that uses body movement to express a specific message. The system is primarily created using the Python framework, along with tools such as OpenCV, cvzone, NumPy, and MediaPipe. This approach aims to increase the effectiveness and usefulness of presentations. The system uses gestures to get the pointer, write on slides, and undo annotations. The proposed system aims to enable people to control the slideshow with hand gestures, to improve the flow of the presentation, and to create a

S. Chinchmalatpure · H. Ingale (B) · R. Jadhao · O. Ghule · M. Ingole Vishwakarma Institute of Technology, Pune, Maharashtra 411037, India e-mail: [email protected] R. Jadhao e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_39


user-friendly interface. The system greatly reduces dependence on external interfaces, which makes the presentation smoother and more portable. Machine learning helps detect gestures with subtle differences and maps them to fundamental slideshow-control functions using Python. Different gestures, such as swiping left, swiping right, thumb up, and the stop sign, are used to control and operate the slides. Thanks to technological advancements in human-machine interaction, people can now interact and engage with machines (laptops, PCs) using a wide range of gestures. Twenty-one specific landmark points on the human hand are used to recognize hand movements and thereby carry out a task.

2 Literature Review There are many hand gesture-driven systems based on different algorithms; however, the proposed concept differs in that it uses binary logic rather than a database. Zeng et al. [1] developed a hand gesture-based presentation system integrating a thermal camera and a web camera. The system offered a comfortable and natural interactive interface for presentations, and quantitative experimental data demonstrated its effectiveness. The gesture interaction can only manage one hand at the moment, but the authors intend to extend the work to handle two hands to enhance the system's interaction capabilities. The web camera is used only for calibration, but additional data, such as the shape of the hand (e.g., a palm or fist), might be extracted for interactivity; future studies will also examine further HCI uses, including gaming. Cohen et al. [2] suggested that the use of gesture inputs can significantly improve the control of computer programs like PowerPoint. Gestures can communicate a person's full intentions and go beyond basic mouse movements and point-and-click commands. The use of gestures as a human-computer interface has various advantages over other kinds of input, such as speech or the keyboard, including greater hygiene, reduced system degradation in noisy environments, the removal of language barriers, and a more user-friendly command interface. Additionally, gestures outlast input devices with mechanical parts. The system showed how to use gestures for human-computer interaction while utilizing the system's state to select the appropriate action. Gadekallu et al. [3] presented an innovative convolutional neural network-based framework for hand gesture identification. Deep learning and CNN-based modeling techniques are particularly popular for identifying gestures, and choosing the right hyper-parameters for a CNN is crucial to getting correct classification results. The aim of the study was to choose the best CNN hyper-parameters for categorizing a publicly available hand gesture dataset from Kaggle. Categorical values were first one-hot encoded and converted to binary format; the crow search meta-heuristic technique was then used to choose the appropriate hyper-parameters, and the CNN was trained with them.


Zhang et al. [4] suggested a short-term sampling neural network for the recognition of dynamic hand gestures. Every hand movement was recorded as a video input, and each video was split into a predetermined number of frame groups. One sample frame was chosen at random from each collection of color and optical-flow frames; a ConvNet was used to extract features from the samples, and the combined features were passed to an LSTM to predict the type of input hand gesture. Tan et al. [5] noted that human interaction can be facilitated by a variety of hand gestures: while hand gestures are important in human-computer interaction, they also reduce communication barriers and make it easier for the public to connect with the hearing-impaired community. The research presented CNN-SPP for vision-based hand gesture recognition, combining a Convolutional Neural Network (CNN) with Spatial Pyramid Pooling (SPP). To address the problems with traditional pooling, SPP stacks multi-level pooling to expand the features fed into a fully connected layer. Paper [6] contributed to doing away with tradition entirely: the only equipment needed to capture images is a webcam. This ushers in a new era of computer-human interaction in which no direct physical contact is required. The system makes it simple for anyone to use a computer by employing gesture commands via the CSA method on the output dataset, and the obtained categorization results were measured against the most recent models [7–18].

3 Proposed Methodology Initially, a folder called "Presentation" is prepared, and the slides, saved as PNG files, are added to it. The PNG files are named in ascending sequence: 1.png for the first slide, 2.png for the second slide, and so on.
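A minimal sketch of this loading step, assuming OpenCV; the numeric sort keeps 10.png after 9.png, which a plain alphabetical sort would not:

```python
import os
import cv2

FOLDER = "Presentation"  # folder of 1.png, 2.png, ... as described

# Sort numerically, not lexicographically, so 10.png follows 9.png.
slides = sorted(
    (f for f in os.listdir(FOLDER) if f.endswith(".png")),
    key=lambda name: int(os.path.splitext(name)[0]),
)
images = [cv2.imread(os.path.join(FOLDER, f)) for f in slides]
print(f"loaded {len(images)} slides:", slides)
```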

3.1 System Architecture When one hand is shown to the system camera as input, the system compares 21 landmark points on the hand and determines whether the hand is right or left. The system then employs binary logic to perform a variety of actions: a closed finger indicates 0 and an open finger indicates 1. In Fig. 1, the system begins by accepting user input by hand (only one hand at a time). A gesture is acted on only when the hand is shown above the green threshold line, as in the block diagram. The image is then processed, and the corresponding action is taken in accordance with the results. Recognition of hand gestures is shown in Fig. 2.


Fig. 1 Block diagram of the proposed system

Fig. 2 Recognition of hand gestures (Gestures 1–5)

3.2 Binary Algorithm The system takes input from only one hand at a time and determines whether it is the right or left hand. Binary logic is then applied, and the system responds to the input according to the following rules.
1. The system employs a logic of 1s and 0s.
2. The system uses 21 landmark points of a hand for gesture recognition, and the user can only use one hand at a time.


Table 1 Event handling with binary logic

Event handling  | Binary logic | Description
Next slide      | [0,0,0,0,1]  | Only the little finger is open; the other four fingers are closed
Previous slide  | [1,0,0,0,0]  | Only the thumb is open; the other four fingers are closed
Getting pointer | [0,1,1,0,0]  | The forefinger and middle finger are open; the other fingers are closed
Moving pointer  | [0,1,1,0,0]  | The forefinger and middle finger are open; the other fingers are closed
Writing part    | [0,1,0,0,0]  | Only the forefinger is open; the other four fingers are closed
Undo process    | [0,1,1,1,0]  | The forefinger, middle finger, and ring finger are open; the other fingers are closed

3. Because the camera mirrors the scene, if the user raises the right hand it appears as the left hand on screen.
4. If the user brings all the fingers together to form a fist, then the thumb, forefinger, middle finger, ring finger, and little finger are represented as [0,0,0,0,0].
5. Additionally, the system draws a green threshold line; gestures are recognized only above it.
Table 1 displays the binary logic used to handle events; the description column briefly describes the logic and gesture of the proposed system. A sketch of this finger-state mapping follows.
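A minimal sketch of the binary mapping, assuming cvzone 1.5 or later (where HandDetector.findHands returns the hand list along with the annotated frame) and its fingersUp helper; the threshold value and the action names are illustrative choices, not the authors' exact code.

```python
import cv2
from cvzone.HandTrackingModule import HandDetector

# Gesture codes follow Table 1: [thumb, index, middle, ring, little].
ACTIONS = {
    (0, 0, 0, 0, 1): "next slide",
    (1, 0, 0, 0, 0): "previous slide",
    (0, 1, 1, 0, 0): "pointer",
    (0, 1, 0, 0, 0): "draw",
    (0, 1, 1, 1, 0): "undo",
}
THRESHOLD_Y = 150  # gestures count only above this green line (pixels)

detector = HandDetector(detectionCon=0.8, maxHands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hands, frame = detector.findHands(frame)
    if hands and hands[0]["center"][1] < THRESHOLD_Y:
        # fingersUp returns 0 (closed) or 1 (open) per finger
        fingers = tuple(detector.fingersUp(hands[0]))
        action = ACTIONS.get(fingers)
        if action:
            print(action)
    cv2.line(frame, (0, THRESHOLD_Y), (frame.shape[1], THRESHOLD_Y),
             (0, 255, 0), 2)  # draw the green threshold line
    cv2.imshow("presenter", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```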

4 Results The smart presentation-control system reduces the need for external interfaces such as the mouse and keyboard, which can cause disturbances during presentations; with fewer external devices, the system is also more portable. It manages the slides solely through gestures performed in front of the camera. A Python wrapper is used to communicate with COM objects and control Windows applications, allowing any required task in a Microsoft application to be performed from Python. The tools required are PyCharm, Python 3.10, and a webcam.
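The paper does not name the COM wrapper; pywin32 is a common choice, and the following Windows-only sketch shows how standard PowerPoint COM calls could drive a slideshow (the file path is hypothetical).

```python
# Windows-only sketch using pywin32 (the paper's wrapper is unnamed;
# pywin32 is assumed here). Drives a PowerPoint slideshow from Python.
import win32com.client

app = win32com.client.Dispatch("PowerPoint.Application")
pres = app.Presentations.Open(r"C:\slides\demo.pptx")  # hypothetical path
show = pres.SlideShowSettings.Run()

show.View.Next()      # what the "next slide" gesture would trigger
show.View.Previous()  # what the "previous slide" gesture would trigger
```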


Fig. 3 Distance versus accuracy

Distance and Accuracy: The relationship between distance and accuracy was determined using the camera's range and is illustrated in Fig. 3. When the user is closer to the camera, the distance is short and the accuracy is high; as the user moves away, the distance increases and the accuracy decreases.
Implementation: Figure 4 shows the opening slide of the presentation; when the system recognizes the user's hand, it indicates whether it is right or left.

Fig. 4 Showing hand gesture


Fig. 5 Hand gesture for the next slide

After the gesture with the little finger open is shown, the system moves to the next slide: the first slide is shown in Fig. 4, and after the next-slide gesture is applied, the system moves to the second slide, shown in Fig. 5. The system moves to the previous slide when the gesture with the thumb open is shown: the second slide is shown in Fig. 5, and after the previous-slide gesture is applied, the system returns to the first slide, shown in Fig. 6.

Fig. 6 Hand gesture for the previous slide


When the gesture with the index and middle fingers open is shown, the system gets the pointer, as shown in the highlighted part of Fig. 7. The system then moves the pointer according to the motion of the hand while the index and middle fingers remain open, as shown in Fig. 8. As seen in Fig. 9, the system writes on slides according to the movement of the index finger.

Fig. 7 Hand gesture for getting the pointer

Fig. 8 Hand gesture for moving the pointer


Fig. 9 Hand gesture for writing

When the system recognizes the index, middle, and ring fingers open at the same time, then the written part gets erased as shown in Fig. 10. Table 2 shows the accuracy of every event with its binary logic. These accuracies were calculated using the random forest algorithm.

Fig. 10 Hand gesture for undo process

Table 2 Accuracy of events

Event handling  | Binary logic | Accuracy (%)
Next slide      | [0,0,0,0,1]  | 100
Previous slide  | [1,0,0,0,0]  | 100
Getting pointer | [0,1,1,0,0]  | 95
Moving pointer  | [0,1,1,0,0]  | 95
Writing part    | [0,1,0,0,0]  | 90
Undo process    | [0,1,1,1,0]  | 95

5 Future Scope Gestures are used in various fields and are of great importance; they are the future of real-time interaction. In the current scenario, there is a need for a more natural way of interacting with computers and machines, and the system supports human-computer interaction through gestures. By adding more gestures, the system could operate common computer commands such as cut, copy, and paste, and it can be enhanced further to control the PowerPoint application itself. A single technique can thus serve various purposes rather than requiring a different technique for each one.

6 Conclusion The proposed system aims to remove the assistance required during presentations, by allowing the presenter to control the slides himself using hand gestures. The suggested system will help to overcome the limitations of the traditional presentation process completely. By using gesture commands, the technology makes it simple for anyone to use a computer. The shortcomings of the past systems are overcome by gesture recognition technology with a user-friendly interface. By employing this technique, the user can use the application from a distance without the use of a keyboard or mouse.

References

1. Zeng B, Wang G, Lin X (2012) A hand gesture based interactive presentation system utilizing heterogeneous cameras. Tsinghua Sci Technol 17(3):329–336
2. Cohen CJ, Beach G, Foulk G (2001) A basic hand gesture control system for PC applications. In: Proceedings 30th applied imagery pattern recognition workshop (AIPR 2001). Analysis and understanding of time varying imagery. IEEE, pp 74–79
3. Gadekallu TR, Alazab M, Kaluri R, Maddikunta PK, Bhattacharya S, Lakshmanna K (2021) Hand gesture classification using a novel CNN-crow search algorithm. Complex Intell Syst


4. Zhang W, Wang J, Lan F (2021) Dynamic hand gesture recognition based on short-term sampling neural networks. IEEE/CAA J Autom Sinica 8:110–120
5. Tan YS, Lim KM, Tee C, Lee CP, Low CY (2021) Convolutional neural network with spatial pyramid pooling for hand gesture recognition. Neural Comput Appl 33:5339–5351
6. Shinde V, Bacchav T, Pawar J, Sanap M (2014) Hand gesture recognition system using camera. Int J Eng Res Technol
7. Baudel T, Beaudouin-Lafon M (1993) Charade: remote control of objects using free-hand gestures. Commun ACM 36(7):28–35
8. Mujahid A, Awan MJ, Yasin A, Mohammed MA, Damaševičius R, Maskeliūnas R, Abdulkareem KH (2021) Real-time hand gesture recognition based on deep learning YOLOv3 model. Appl Sci 11:4164
9. Qi W, Ovur SE, Li Z, Marzullo A, Song R (2021) Multi-sensor guided hand gesture recognition for a teleoperated robot using a recurrent neural network. IEEE Robot Autom Lett 6:6039–6045
10. Kelly SD, McDevitt T, Esch M (2020) Brief training with co-speech gesture lends a hand to word learning in a foreign language. Lang Cognit Process 24:313–334
11. Jaramillo-Yánez A, Benalcázar ME, Mena-Maldonado E (2020) Real-time hand gesture recognition using surface electromyography and machine learning: a systematic literature review. Sensors (Basel, Switzerland) 20
12. Ceolini E, Frenkel C, Shrestha SB, Taverni G, Khacef L, Payvand M, Donati E (2020) Hand-gesture recognition based on EMG and event-based camera sensor fusion: a benchmark in neuromorphic computing. Front Neurosci 14
13. Rehman ZU, Zia MS, Bojja GR, Yaqub M, Jinchao F, Arshid K (2020) Texture based localization of a brain tumor from MR-images by using a machine learning approach. Med Hypotheses 109705
14. Shen Y, Li J, Zhu Z, Cao W, Song Y (2015) Image reconstruction algorithm from compressed sensing measurements by dictionary learning. Neurocomputing 151:1153–1162
15. Shin HK, Ahn YH, Lee SH, Kim HY (2019) Digital vision based concrete compressive strength evaluating model using deep convolutional neural network. CMC-Comput Mater Contin 61(3):911–928
16. Skaria S, Al-Hourani A, Lech M, Evans RJ (2019) Hand-gesture recognition using two-antenna Doppler radar with deep convolutional neural networks. IEEE Sens J 19(8):3041–3048
17. Srivastava G, Deepa N, Prabadevi B, Reddy MPK (2021) An ensemble model for intrusion detection in the internet of softwarized things. In: Adjunct proceedings of the 2021 international conference on distributed computing and networking, pp 25–30
18. Tan M, Zhou J, Xu K, Peng Z, Ma Z (2020) Static hand gesture recognition with electromagnetic scattered field via complex attention convolutional neural network. IEEE Antennas Wirel Propag Lett 19(4):705–709

Intelligent Fault Diagnosis in PV System—A Machine Learning Approach R. Priyadarshini, P. S. Manoharan, and M. Niveditha

Abstract This work describes a methodology for detecting and classifying faults in large-scale photovoltaic (PV) systems using a machine learning algorithm. Automated fault diagnosis is required because manual inspection of large-scale PV systems is practically impossible, and undetected faults lead to performance loss in solar PV. The fault detection system was developed using the Classification Learner in the MATLAB/Simulink environment. Features such as voltage and current were recorded from the real-time PV system installed at Thiagarajar College of Engineering, Madurai. This method indicates faults in the panels of the system based on the variations of the voltage and current values. The SVM algorithm yields an accuracy of 99.8% on the real-time dataset using training and validation. Keywords Machine learning · Support vector machine · Renewable energy · MATLAB/Simulink

1 Introduction Photovoltaic energy is a promising and reliable source of energy in place of fossil fuels. Concern over the use of fossil fuels is increasing due to their environmental effects, and hence PV energy is a better replacement [1–3]. As a result, the number of large-scale solar PV installations is increasing day by day. The installations require periodic monitoring for reliable power generation [4]. Manual monitoring of large-scale PV systems is not possible due to the large deployment area, so we require automated


systems for condition monitoring [5]. In this paper we use a machine learning algorithm for condition monitoring of the PV system. Visual and electrical failures are two broad categories of faults associated with the PV system [6]. Electrical faults have the potential to permanently damage the PV system, while visual faults may cause performance degradation, permanent damage, or both. Electrical faults that commonly occur in the PV system are partial shading faults, unbalanced faults, faults associated with line or module disconnections, etc. [7, 8]. Visual faults, also known as non-electrical faults, include hotspots, microcracks, snail trails, etc. [9]. The most commonly occurring electrical fault scenarios, namely shading conditions, short circuiting of lines, and line or module disconnections, are considered in this work. In general, fault detection methods depend on measuring and analyzing data collected on electrical and environmental features. The electrical data include the voltage, current, irradiance, generated power, etc. [10]. In addition, the P-V and I–V curves are also considered for fault detection analysis in solar PV [11, 12]. Much research is being carried out on fault diagnosis in solar PV systems, on both the generation and distribution sides. This work was done on the PV system's generation side, mainly focusing on the solar panels. Artificial intelligence techniques require historical data for training the machine learning model. In general, the collected data can be satellite data, simulation data, or real-time data collected from the PV system using sensors [13–15]. This work focuses on suggesting a reliable technique for solar PV fault identification and categorization. The real-time data is collected from the PV system available at Thiagarajar College of Engineering, Madurai.

2 Methodology The representation of the fault detection and classification system is shown in Fig. 1.

2.1 Collection of Dataset A real-time dataset is collected from two WSM-145 PV modules connected in series. The DC current and DC voltage generated are measured using ammeters and voltmeters, with a 100-W DC bulb acting as the load during the measurement. A total of 4656 samples have been collected.


Fig. 1 Representation of fault detection and classification system

2.2 Training Phase The experimental setup available at Thiagarajar College of Engineering is used for data collection. It consists of two WSM-145 PV modules connected in series. The technical specification of each panel is provided in Table 1.

Table 1 Panel specification

S. No | Specification | Value
1 | Maximum power (Pmax) | 145 W
2 | Voltage at maximum power | 17 V
3 | Current at maximum power | 8.53 A
4 | Open circuit voltage (VOC) | 21 V
5 | Short circuit current (ISC) | 9.21 A
6 | Number of cells | 36

The Support Vector Machine (SVM) is a supervised machine learning algorithm that uses data points and a decision boundary for classification and regression applications, as in Fig. 2. The decision boundary sets the margin for every class by using the support vectors [16, 17] (Fig. 3).

Fig. 2 Representation of the SVM algorithm

Fig. 3 Scatter plot for input dataset

The predictors, voltage and current, measured from the installed PV system are used to train the SVM algorithm. This study uses the linear SVM since there exists a linear relation between the predictors (i.e., voltage and current); linear SVMs also involve lower computational complexity and provide better accuracy, making them suitable for classification on the available dataset. The SVM model is trained using 3738 samples, as shown in the confusion matrix in Fig. 5. In the presented work, the Classification Learner tool available in MATLAB is utilized for training the SVM model. The dataset collected from the real-time system is fed into the Classification Learner tool, and the model is trained upon selection of the algorithm; here SVM is chosen. The learner provides a better understanding of supervised machine learning models. The dataset, consisting of the predictors and labels, is split into training and validation data. The chosen SVM model is trained using the training data, and the accuracy, ROC curve, and parallel coordinates plot are obtained. The trained model can be imported into the MATLAB workspace for predicting new data, and the imported model can be deployed in Simulink Coder using a function block.


Fig. 4 Trained SVM block

The real-time dataset consists of 4656 samples used to train and test the SVM model under both normal and faulty conditions. The dataset is first divided into eighty percent for training and twenty percent for testing.
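For readers who wish to reproduce this split outside MATLAB, the following is a minimal sketch of the same workflow in Python with scikit-learn. It is an illustration under stated assumptions: the file names and the exact label encoding are hypothetical, and the paper itself uses the MATLAB Classification Learner app.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical file names; each row is one sample of the two predictors
X = np.load("pv_voltage_current.npy")   # shape (4656, 2): voltage, current
y = np.load("pv_fault_labels.npy")      # 0 normal, 1 open, 2 short, 3 shading

# Eighty/twenty split, as described in the paper
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = SVC(kernel="linear")              # linear SVM, as chosen in the paper
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("testing accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
```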

2.3 Testing Phase The trained SVM model, shown in Fig. 4, is deployed in the real-time PV system scenario simulated using MATLAB/Simulink, and the testing accuracy is calculated. The real-time scenario is shown in Fig. 10.

3 Results and Discussion The Support Vector Machine model was trained using the Classification Learner app in MATLAB. The training dataset, consisting of 3738 current-voltage observations, was given as input to the learner app, and a training accuracy of 99.7% was obtained. The confusion matrix for training is shown in Fig. 5; it contains the true class and the predicted class during training. In the matrix, for example, partial shading is classified as normal in 1 instance, open circuit in 2 instances, and short circuit in 1 instance, while the remaining 930 samples are correctly classified as partial shading. Figure 6 shows the scatter plot obtained as a result of training the SVM model; the cross marks in the plot are misclassified data and the dot marks are correctly classified data. The Receiver Operating Characteristic (ROC) curve illustrates how well the classification model performs under various circumstances. The ROC curve area is 1, as shown in Fig. 7, for the trained SVM model, indicating that it performs well under any condition of data. The confusion matrix for the real-time test dataset is illustrated in Fig. 8. From Fig. 8 it can be inferred that only two instances of partial shading are misclassified, one as normal and one as short circuit, while the remaining normal and fault conditions such as open circuit and short circuit are classified without any misclassification, thus yielding a higher accuracy. The true positive rate against the false positive rate was plotted for the SVM classifier under various threshold conditions; the generated ROC curve for the testing dataset is shown in Fig. 9. The trained SVM model is then deployed in the Simulink model of the PV system available at Thiagarajar College of Engineering, Madurai. The system consists of a


Fig. 5 Confusion matrix obtained after training

Fig. 6 Scatter plot for training


Fig. 7 ROC curve for training

Fig. 8 Confusion matrix for testing


Fig. 9 ROC curve for testing

string with two 145 W panels connected in series. The trained model is subjected to testing using the real-time dataset consisting of 934 observations of voltage and current. The testing accuracy obtained was 99.8%, showing that the chosen model is suitable for the considered real-time dataset. A PV system consisting of two panels connected in series is modeled in the MATLAB/Simulink environment and is shown in Fig. 10. The modeled photovoltaic system's generated voltage and current can be regulated through the MPPT block to maintain optimum DC current and DC voltage values in case of any discrepancies. The generated DC current and DC voltage values are monitored with the help of current and voltage sensors, and the P-V and I–V curves can be visualized simultaneously. Under the normal state, that is, when the system is free of any electrical or non-electrical discrepancies, normal values of current and voltage are obtained and the SVM model diagnoses the state of the system as normal. From Fig. 11 it can be observed that the current during the normal state is 0.4 A and the voltage is 50.3 V. Based on these values, the prediction model identifies the condition of the system as "0", referring to the normal state. When a current-carrying conductor is disconnected, the open circuit condition arises; here the same scenario was created by disconnecting a line. Through the SVM prediction model the fault is diagnosed as "1", referring to the open circuit state. During this state, as can be seen from Fig. 12, the system's voltage remains


Fig. 10 Simulink model of the 2 WSM-145 PV modules connected in series

Fig. 11 System under normal state

normal but the current drops to 0 A, thereby clearly indicating the open circuit condition. Upon connection of two points at different potentials, the short circuit condition is evoked, and during this state the system is subjected to the short circuit fault. Observations for the short circuit state were carried out in the same way as for the normal state, and from Fig. 13 it can be seen that during this state the voltage drops to 0 V while the current

Fig. 12 Detection of open circuit state of the system


Fig. 13 Detection of short circuit condition of the system

Fig. 14 Detection of the system being subjected to partial shading conditions

value remains unchanged. The prediction model detects the state of the system as "2", referring to the short circuit condition. Shading conditions are caused by the presence of shadows from buildings or trees; these can reduce the power generated by the PV system. From Fig. 14 we can see that the voltage has reduced to 22.5 V and the current to 0.2 A due to the variation of the irradiation value to 100 W/m2. The shading condition is detected by the SVM prediction model, indicated as "3".
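As a hedged illustration of how the class codes above relate to the measured signatures, the sketch below maps (voltage, current) pairs to the states 0–3 with simple threshold rules. The thresholds and function names are assumptions for illustration only; the actual decision boundaries are learned by the SVM, not hand-coded.

```python
# Rule-of-thumb illustration (not the trained SVM): mapping the
# (voltage, current) signatures described above to the class codes 0-3.
V_NOM, I_NOM = 50.3, 0.4   # nominal operating point from the normal state

def classify(voltage: float, current: float) -> int:
    if current < 0.05 and voltage > 0.8 * V_NOM:
        return 1               # open circuit: voltage normal, current ~0 A
    if voltage < 0.05 * V_NOM:
        return 2               # short circuit: voltage ~0 V
    if voltage < 0.8 * V_NOM or current < 0.8 * I_NOM:
        return 3               # partial shading: both values reduced
    return 0                   # normal state

print(classify(50.3, 0.4))    # 0 - normal
print(classify(50.1, 0.0))    # 1 - open circuit
print(classify(0.0, 0.4))     # 2 - short circuit
print(classify(22.5, 0.2))    # 3 - partial shading
```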

4 Conclusion With the rising concern for sustainable power generation, people are increasingly adopting renewable energy. As the usage of PV systems increases day by day, it is essential to monitor their health and performance. Hence the incorporation of artificial intelligence methods, such as machine learning algorithms, has become essential due to their efficiency, and the detection of severe faulty conditions on the PV system's DC side has been implemented in this paper. A real-time dataset from a PV system has been extracted, a fault detection model has been trained through the Classification Learner in the MATLAB/Simulink environment, and a training accuracy of 99.7% has been obtained. About 20% of the data was used for the testing phase, yielding an accuracy of about 99.8%. The trained model has been deployed in the Simulink environment, where a model of the real-time PV system has been designed. The SVM model was able to correctly detect the different faulty states invoked in the PV system.


Acknowledgement The authors express their gratitude to Thiagarajar College of Engineering (TCE) for supporting this research work. The financial support from TCE under the Thiagarajar Research Fellowship scheme (File no: TRF/Jul-2022/03) is also gratefully acknowledged.

References

1. Harrou F, Saidi A, Sun Y, Khadraoui S (2021) Monitoring of photovoltaic systems using improved kernel-based learning schemes. IEEE J Photovolt 11:806–818
2. Rashini S, Manoharan PS, Valan Rajkumar M (2013) Interfacing PV system to the utility grid using a voltage source inverter. J Emerg Technol Electr Eng (ICBDM 1.1) 124–129
3. Yang NC, Ismail H (2022) Voting-based ensemble learning algorithm for fault detection in photovoltaic systems under different weather conditions. Mathematics 10:285
4. Yang NC, Ismail H (2022) Robust intelligent learning algorithm using random forest and modified-independent component analysis for PV fault detection: in case of imbalanced data. IEEE Access 11:41119–41130
5. Eskandari A, Milimonfared J, Aghaei M (2020) Fault detection and classification for photovoltaic systems based on hierarchical classification and machine learning technique. IEEE Trans Ind Electron 68:112750–112759
6. Adhya D, Chatterjee S, Chakraborty AK (2022) Performance assessment of selective machine learning techniques for improved PV array fault diagnosis. Sustain Energy Grids Netw 29:100582
7. Badr MM, Hamad MS, Abdel-Khalik AS, Hamdy RA, Ahmed S, Hamdan E (2021) Fault identification of photovoltaic array based on machine learning classifiers. IEEE Access 9:159113–159132
8. Hojabri M, Kellerhals S, Upadhyay G, Bowler B (2022) IoT-based PV array fault detection and classification using embedded supervised learning methods. Energies 15:2097
9. Venkatesh SN, Sugumaran V (2022) Machine vision based fault diagnosis of photovoltaic modules using lazy learning approach. Measurement 191:110786
10. Ghoneim SS, Rashed AE, Elkalashy NI (2021) Fault detection algorithms for achieving service continuity in photovoltaic farms. Intell Autom Soft Comput 30:467–469
11. Mellit A, Kalogirou S (2022) Assessment of machine learning and ensemble methods for fault diagnosis of photovoltaic systems. Renew Energy 184:1074–1090
12. Liu Y, Ding K, Zhang J, Lin Y, Yang Z, Chen X, Li Y, Chen X (2022) Intelligent fault diagnosis of photovoltaic array based on variable predictive models and I–V curves. Sol Energy 237:340–351
13. Swathika S, Manoharan PS, Priyadarshini R (2022) Classification of faults in PV system using artificial neural network. In: 2022 7th international conference on communication and electronics systems (ICCES). IEEE, pp 1359–1363
14. Li P, Zhang H, Guo Z, Lyu S, Chen J, Li W, Song X, Shibasaki R, Yan J (2021) Understanding rooftop PV panel semantic segmentation of satellite and aerial images for better using machine learning. Adv Appl Energy 4:100057
15. Ioannou K, Myronidis D (2021) Automatic detection of photovoltaic farms using satellite imagery and convolutional neural networks. Sustainability 13:5323
16. Dhimish M (2021) Defining the best-fit machine learning classifier to early diagnose photovoltaic solar cells hot-spots. Case Stud Thermal Eng 25:100980
17. Wang J, Gao D, Zhu S, Wang S, Liu H (2019) Fault diagnosis method of photovoltaic array based on support vector machine. Energy Sour Part A: Recov Utiliz Environ Effects 1–6

Literature Survey on Empirical Analysis on Efficient Secured Data Communication in Cloud Computing Environment P. V. Shakira

Abstract Cloud computing (CC) provides accessible computer resources for the storage of data. CC comprises remote servers for storing information and allows users to store their information with a third party. CC security protects the cloud environment, data, information, and applications against unauthorized access. Storage security secures the data storage systems. User authentication establishes the user's identity before granting fast access to network resources by verifying the user's authenticity. Many researchers have carried out research on protecting data within the cloud, but the data confidentiality rate was not enhanced and the time taken for secured communication in the cloud was not minimized. The major contribution of this study is to review different secured data communication methods in the cloud environment. Keywords Cloud computing · User authentication · Network resource · Storage security · Computer system · Secured data communication

1 Introduction CC is the process of delivering information technology services such as storage, databases, networking, tools, and software. CC permits companies to utilize resources without maintaining computing infrastructures and provides access to computer resources without active management by the user. Cloud computing gives users the ability to access information anytime from anywhere and offers computing and communication services over network resources. Cloud security includes the measures and technology that preserve cloud computing environments against threats. Cloud security and security management prevent unauthorized access to the data and applications in the cloud server. Storage security is the convergence of the storage and security disciplines for securing digital assets.


1.1 Research Objective This article reviews approaches to efficient protected data communication in the cloud environment, aiming at a better data confidentiality rate and minimal time. In particular, data confidentiality is increased by using different authentication techniques. The article is organized as follows: Sect. 1 gives the introduction. Section 2 reviews protected data methods on the cloud platform. Section 3 describes protected data communication techniques in the cloud environment. Section 4 explains the simulation setup. Section 5 discusses limitations of existing protected data methods. Section 6 provides the conclusion of the article.

2 Literature Review The GGH cryptosystem was applied in [1] to reconstruct aggregated blocks from aggregated tags without data indices, but the data confidentiality level was not improved. An encrypted data deduplication scheme termed SPADE was introduced in [2] to resist compromised key servers. A proactivization mechanism was introduced for MLE to substitute the key server and to maintain encrypted data deduplication, but the encryption time was not minimized by the SPADE scheme. The lightweight privacy-preserving DPOS scheme was introduced in [3] as a private POS scheme. Though efficiency was enhanced, the space complexity was not minimized by the lightweight privacy-preserving DPOS scheme. Accountable-authority and revocable CP-ABE based cloud storage with white-box traceability and auditing, termed CryptCloud+, was presented in [4]. The method was designed to decrease misuse; however, CryptCloud+ failed to provide entirely public traceability without compromising performance. A privacy-preserving online fingerprint authentication scheme termed e-Finga was designed in [5] over encrypted data. A user fingerprint was registered with a trust authority and outsourced to many servers, but the computational cost was not minimized. A patient-centric PHR sharing framework was introduced in [6] to preserve patient privacy and to guarantee patient control. Though data integrity was improved, the time consumption was not reduced by the patient-centric PHR sharing framework. A provable dynamic revocable three-factor MAKA scheme was designed in [7] to achieve dynamic user management with Schnorr signatures, but the authentication accuracy level was not improved. A secure authentication protocol was developed in [8] for big data using a hierarchical attribute authorization structure. The designed method improved the security level; however, while protection was increased, the computational complexity was not minimized.


A secure data group sharing and conditional dissemination method with multi-owner support was introduced in [9]. The data owner shared data with a user group via the cloud in a secure way; however, it failed to improve the performance of keyword search over ciphertext. A cloud-backed storage system termed Charon was designed in [10] to store data reliably, employing multiple cloud providers and storage repositories, but the computational cost was not minimized by Charon. An efficient and secure data sharing method for mobile devices was introduced in [11], improving the protection and authorized access of data, but the encryption time was not reduced. A privacy-preserving and untraceable model was designed in [12] for maintaining users within data sharing; group members and proxies employed a key exchange phase for obtaining keys and limiting multi-party approval. A federated learning based secure data sharing method termed FL2S was introduced in [13] for IoT. A hierarchical asynchronous FL construction was performed based on sensitive task breakdown using secure data sharing, but the computational complexity was not reduced by FL2S. A hierarchical security system with encryption was introduced in [14] to increase the security level. A shuffle standard cryptography key policy with optimized KDSP was employed for improving access control performance over cloud data. The designed system was fast and secured files; however, the time complexity was not reduced. A secure distributed data storage model was introduced for blockchain-enabled edge computing. The BLS Homomorphic Linear Authenticator was employed for verifying the data, and a data dynamics mechanism was employed with the application of CBF; however, the data confidentiality rate was not enhanced. BHDS-BCDA was designed for service-level agreements with lower space complexity and improved accuracy. The server generated a public key and a private key for each registered user using the Benaloh cryptographic function. Although space complexity was reduced, the processing time was not minimized by BHDS-BCDA. Key-aggregate proxy re-encryption was introduced to provide decryption capabilities with aggregate keys. The key-aggregate cryptosystem was employed in a highly dynamic environment for improving user access capabilities, but the memory consumption was not decreased. An MRSS scheme with minimal storage overhead was introduced for privacy-preserving data; the Haar discrete wavelet transform (DWT) was employed for disease diagnosis, but the data confidentiality rate was not improved. The SecACS scheme was designed for data dynamics. SecACS minimized time consumption through lightweight cryptographic operations and achieved higher data integrity for outsourced data, but the processing time was not sufficiently reduced.


CryptDICE was designed to support various data encryptions, supporting trade-offs and encryption at different data granularities, but the memory consumption was not reduced by this distributed data protection system. An auditing scheme was introduced as a secured procedure for reviewing outsourced data, imposing minimal computation overhead on users for the data verification procedure. Though the data confidentiality rate was improved, the processing time was not minimized. An intelligent cryptographic approach was introduced for cloud service operators. The designed method partitioned files and data across distributed servers, and an alternative method was performed for maintaining data packets to reduce the operation time.

3 Secured Data Communication in Cloud Environment Cloud computing provides data storage, communication, and computation. CC is the method of delivering computing resources as a service, where the resources are owned and managed by the cloud provider. CC employs a network of remote servers hosted on the Internet for storing data, with cloud service providers managing the remote data centers required to host the hardware resources. It is a cost-friendly system that enables the networking system to function smoothly, and cloud storage alleviates the user's burden of local data storage.

3.1 A Compressive Integrity Auditing Protocol for Secure Cloud Storage Given the wide adoption of the cloud, the reliability of user data has received great attention in the cloud environment. An equation is formed from aggregated blocks, tags, and indices for verifying the reliability of cloud storage. The compressive secure cloud storage protocol was introduced with GGH. The cloud storing the data offers an integrity proof from which aggregated blocks are reconstructed without data indices. Communication and storage costs are minimized, and user data are hidden. The algorithmic process of the compressive integrity auditing protocol is given as Algorithm 1.

Algorithm 1 Compressive integrity auditing protocol for secure cloud storage


Algorithm 1 outlines the step-by-step process of the compressive integrity auditing protocol. The main objective of the protocol is to combine the tags of blocks and achieve remote integrity auditing of the data. Cloud storage reduction and privacy preservation are achieved because only tags are stored on the cloud. The data owner invokes encryption within GGH for combining data, and the cloud combines tags into integrity evidence. The verifier ensures cloud storage reliability by decrypting the cloud's proofs under the GGH cryptosystem. The GGH cryptosystem is highly efficient because it relies on algebraic operations, and it is protected against forge, restore, and replay attacks. Its security does not depend on the hardness of NP-hard problems.
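To make the audit flow concrete, the following is a minimal, generic sketch of a homomorphic-tag challenge-response audit in Python. It is not the paper's GGH construction (which is lattice-based); the linear tag, the modulus P, and all names are illustrative assumptions, intended only to show how a cloud can answer with one aggregated block and one aggregated tag instead of the raw blocks.

```python
# Illustrative sketch (assumption, not the GGH protocol of [1]): a generic
# homomorphic-tag integrity audit with aggregated responses.
import random
import secrets

P = (1 << 127) - 1  # a Mersenne prime used as the tag-arithmetic modulus

def tag(block: int, secret: int) -> int:
    # Linear homomorphic tag: t_i = secret * b_i mod P
    return (secret * block) % P

# Data owner: split the file into blocks and tag each block
secret = secrets.randbelow(P)
blocks = [random.randrange(P) for _ in range(8)]
tags = [tag(b, secret) for b in blocks]

# Verifier: issue random challenge coefficients for a sample of blocks
challenge = {i: random.randrange(P) for i in random.sample(range(8), 4)}

# Cloud: aggregate the challenged blocks and tags into a constant-size proof
agg_block = sum(challenge[i] * blocks[i] for i in challenge) % P
agg_tag = sum(challenge[i] * tags[i] for i in challenge) % P

# Verifier: the homomorphic relation holds only if the storage is intact
assert agg_tag == (secret * agg_block) % P
print("integrity proof verified")
```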

3.2 Secure Password-Protected Encryption Key for Deduplicated Cloud Storage Systems The encrypted data deduplication scheme termed SPADE was introduced to resist compromised servers. A proactivization mechanism was employed for server-aided MLE to renew


privacy and retain data deduplication. Server-aided password hardening was employed to resist dictionary guessing attacks (DGA). A password-based layered encryption method and a password-based authentication method allow users to access their data using their passwords, through a secure password-protected MLE key method. SPADE builds on server-aided MLE to eliminate the single-point-of-failure issue: time is partitioned into fixed intervals of pre-determined length, and a handoff mechanism permits transferring an active call or data session; a well-implemented handoff is important for delivering uninterrupted service to the user. The server-side secret is spread over various key servers so that it cannot be compromised at a single point. The user password plays an essential role in SPADE: the user validates herself/himself to the key servers and the cloud server with the same password. Plain MLE is susceptible to brute-force attacks, and the success of a DGA against the password is the main threat; the server-aided password hardening protects the password against DGA.
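The sketch below illustrates plain message-locked encryption (MLE) with deduplication, the primitive that SPADE hardens. It is an assumption-level illustration, not SPADE's construction: the content-derived key and deterministic tag shown here are exactly what makes plain MLE vulnerable to the offline guessing attacks that SPADE's server-aided password hardening is designed to resist.

```python
# Plain MLE/deduplication sketch (an assumption, not SPADE itself): the key
# is derived from the file content, and a deterministic tag lets the cloud
# detect duplicates without seeing the plaintext.
import base64
import hashlib
from cryptography.fernet import Fernet

def mle_key(data: bytes) -> bytes:
    # K = H(M): identical plaintexts yield the same key
    return base64.urlsafe_b64encode(hashlib.sha256(data).digest())

def dedup_tag(key: bytes) -> str:
    # T = H(K): deterministic index the server uses for deduplication
    return hashlib.sha256(key).hexdigest()

cloud_store = {}  # tag -> ciphertext (the server's deduplicated storage)

def upload(data: bytes) -> str:
    key = mle_key(data)
    tag = dedup_tag(key)
    if tag not in cloud_store:          # duplicate files are stored once
        cloud_store[tag] = Fernet(key).encrypt(data)
    return tag

t1 = upload(b"quarterly-report.pdf contents")
t2 = upload(b"quarterly-report.pdf contents")  # deduplicated: same tag
assert t1 == t2 and len(cloud_store) == 1
```

Because the key here is derivable from the content alone, predictable files can be brute-forced offline by hashing candidate plaintexts; server-aided MLE addresses this by involving key servers in the key derivation.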

3.3 Lightweight and Privacy-Preserving Delegable Proofs of Storage with Data Dynamics in Cloud Storage The lightweight privacy-preserving DPOS scheme was introduced to support a third-party auditor. Its tag generation procedure runs several hundred times faster than in existing schemes, without loss of efficiency elsewhere. The scheme maintains dynamic operations with better efficiency and minimal computation and communication costs. A new POS scheme was introduced for lightweight operation and privacy preservation: an efficient private-key POS scheme for authentication tag generation. The scheme supports a third-party auditor and can revoke the auditor, and the authentication tag generation prevents data leakage to the auditor during the auditing procedure.
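As a rough illustration of private-key tag generation and auditor delegation, the sketch below computes MAC-style tags over indexed blocks that a delegated auditor holding the tag key can verify. This is a hedged simplification, not the DPOS construction of [3]; a real POS scheme avoids retrieving whole blocks, and the names here are assumptions.

```python
# Illustrative private-key tag generation with delegation (an assumption,
# not the DPOS scheme): HMAC tags over (index, block) pairs.
import hashlib
import hmac
import os

tag_key = os.urandom(32)          # owner's private tag-generation key

def make_tag(index: int, block: bytes) -> bytes:
    return hmac.new(tag_key, index.to_bytes(8, "big") + block,
                    hashlib.sha256).digest()

blocks = [os.urandom(64) for _ in range(4)]
tags = [make_tag(i, b) for i, b in enumerate(blocks)]

# Delegation: handing tag_key (or a derived audit key) to a third-party
# auditor lets it check stored blocks without the owner being online.
def audit(index: int, block: bytes, tag_value: bytes) -> bool:
    return hmac.compare_digest(make_tag(index, block), tag_value)

assert all(audit(i, b, t) for i, (b, t) in enumerate(zip(blocks, tags)))
```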

4 Performance Analysis of Secured Data Communication Techniques in Cloud Experimental evaluation of the existing data protection techniques in the cloud is performed using Java software. The result comparison is carried out for three different methods, namely the Goldreich-Goldwasser-Halevi (GGH) cryptosystem, SPADE, and the lightweight privacy-preserving DPOS scheme, with respect to the following parameters:
• Data confidentiality rate
• Processing time
• Memory consumption

4.1 Impact on Processing Time Processing time is the time taken to perform protected data communication in the cloud. It is the product of the number of data points and the time used to perform one secured communication, measured in milliseconds (ms) and formulated as

PTime = N × (time consumed to perform secured communication)  (1)

In Eq. (1), the processing time (PTime) is calculated. Table 1 reports the processing time for different numbers of data points varying over 10–100, measured for the existing Goldreich-Goldwasser-Halevi (GGH) cryptosystem, SPADE, and the lightweight privacy-preserving DPOS scheme. For example, with 60 data points the processing time of the GGH cryptosystem is 38 ms, while SPADE and the lightweight privacy-preserving DPOS scheme take 43 ms and 52 ms respectively. The graphical illustration of the processing time is shown in Fig. 1.

Table 1 Tabulation for processing time

Number of data points | GGH cryptosystem (ms) | SPADE scheme (ms) | Lightweight privacy-preserving DPOS scheme (ms)
10 | 25 | 32 | 39
20 | 28 | 34 | 42
30 | 31 | 37 | 45
40 | 33 | 39 | 47
50 | 35 | 41 | 50
60 | 38 | 43 | 52
70 | 40 | 46 | 55
80 | 42 | 48 | 58
90 | 45 | 50 | 60
100 | 47 | 52 | 62


Fig. 1 Measurement of processing time

the encryption algorithm in GGH for combining the data. Cloud joined the tags within reliability evidence within the confirmable manner. The verifier ensured cloud storage reliability via decryption with cloud proofs. Accordingly, the processing time of the GGH cryptosystem is minimized as 14% compared with SPADE as well as 29% compared with the light weight privacy-preserving DPOS scheme.

4.2 Impact on Data Confidentiality Rate The data confidentiality rate is the proportion of data points that are correctly accessed by an authorized user out of the total number of data points. It is computed as a percentage (%) and mathematically formulated as

DC Rate = (Number of data points correctly accessed by authorized user / Total number of data points) × 100  (2)

From Eq. (2), the data confidentiality rate (DC rate) is determined. Table 2 reports the data confidentiality rate for different numbers of data points ranging over 10–100, evaluated for the existing Goldreich-Goldwasser-Halevi (GGH) cryptosystem, SPADE, and the lightweight privacy-preserving Delegable Proofs of Storage (DPOS) scheme. For example, with 30 data points the data confidentiality rate of the GGH cryptosystem is 73%, whereas SPADE and the lightweight privacy-preserving DPOS scheme achieve 84% and 79% respectively. The graphical illustration of the data confidentiality rate is shown in Fig. 2.

Table 2 Tabulation for data confidentiality rate

Number of data points | GGH cryptosystem (%) | SPADE scheme (%) | Lightweight privacy-preserving DPOS scheme (%)
10 | 68 | 80 | 75
20 | 70 | 82 | 77
30 | 73 | 84 | 79
40 | 71 | 83 | 76
50 | 74 | 86 | 78
60 | 77 | 88 | 80
70 | 79 | 91 | 82
80 | 81 | 92 | 83
90 | 78 | 90 | 81
100 | 80 | 93 | 85

Fig. 2 Measurement of data confidentiality rate

Figure 2 plots the data confidentiality rate for different numbers of data points. The blue cylinders represent the data confidentiality rate of the existing Goldreich-Goldwasser-Halevi (GGH) cryptosystem, while the yellow and green cylinders denote the data confidentiality rates of SPADE and the DPOS scheme respectively. The data confidentiality rate using SPADE is higher than that of the GGH cryptosystem and the lightweight privacy-preserving DPOS scheme. This is due to the application of the server-aided password-hardening protocol for resisting dictionary guessing attacks; the password-based layered encryption method and the password-based authentication method allow users to store their data using their passwords, which helps to enhance the data confidentiality rate. Consequently, the data confidentiality rate of SPADE is increased by 16% compared with the GGH cryptosystem and by 9% compared with the lightweight privacy-preserving DPOS scheme.

4.3 Impact on Memory Consumption Memory consumption is the amount of space used to perform protected data communication. It is the product of the number of data points and the space utilized to perform one secured communication, measured in megabytes (MB) and calculated as

MC = N × (memory utilized to perform secured communication)  (3)

In Eq. (3), the memory consumption (MC) is determined. Table 3 reports the memory consumption for different numbers of data points varying from 10 to 100, evaluated for the existing Goldreich-Goldwasser-Halevi (GGH) cryptosystem, SPADE, and the lightweight privacy-preserving DPOS scheme. For example, with 90 data points the memory consumption of the GGH cryptosystem is 65 MB, while SPADE and the lightweight privacy-preserving DPOS scheme consume 54 MB and 45 MB respectively. The graphical illustration of memory consumption is shown in Fig. 3.

Table 3 Tabulation for memory consumption

Number of data points | GGH cryptosystem (MB) | SPADE scheme (MB) | Lightweight privacy-preserving DPOS scheme (MB)
10 | 45 | 32 | 25
20 | 48 | 35 | 29
30 | 50 | 38 | 31
40 | 52 | 41 | 33
50 | 55 | 43 | 35
60 | 58 | 46 | 38
70 | 61 | 48 | 40
80 | 63 | 51 | 42
90 | 65 | 54 | 45
100 | 66 | 57 | 48


Fig. 3 Measurement of memory consumption

Figure 3 plots the memory consumption for different numbers of data points. The blue cylinders represent the memory consumption of the existing Goldreich-Goldwasser-Halevi (GGH) cryptosystem, while the yellow and green cylinders denote the memory consumption of SPADE and the lightweight privacy-preserving DPOS scheme respectively. The memory consumption of the lightweight privacy-preserving DPOS scheme is lower than that of the GGH cryptosystem and SPADE. This is due to its lightweight tag generation process; the authentication tag generation prevents data leakage during the auditing procedure, and in this way memory consumption is reduced effectively. Consequently, the memory consumption of the lightweight privacy-preserving DPOS scheme is reduced by 36% compared with the GGH cryptosystem and by 18% compared with SPADE.

5 Discussion and Limitations on Secured Data Communication in Cloud Environment The compressive secure cloud storage method was employed with GGH, allowing data owners to store tags on the cloud while user private data remain hidden from the server. The designed method reduced the storage cost and communication cost of data outsourcing while maintaining better efficiency, but the data confidentiality level was not improved by the GGH cryptosystem.


The encrypted data deduplication model resisted compromised servers and users despite the key management issue. A proactivization mechanism was introduced for server-aided MLE to protect the key servers; SPADE renewed privacy and retained encrypted data deduplication. However, SPADE is not compatible with PEKS for protecting keywords, and its processing time was not minimized. Delegable Proofs of Storage (DPOS) determined the authentication tags for every data block. The scheme supported fully dynamic operation with better efficiency and minimal computation. Though the efficiency was improved, the space complexity was not reduced by the lightweight privacy-preserving DPOS scheme.

5.1 Future Direction Future work will employ machine learning methods to increase the performance of protected data communication in the cloud environment with lower processing time and memory consumption.

6 Conclusion An analysis of various protected data communication methods in the cloud environment has been carried out. From the survey, it is observed that the data confidentiality level was not increased by the GGH cryptosystem, the processing time was not reduced by the SPADE scheme, and, though efficiency was improved, the space complexity was not reduced by the lightweight privacy-preserving DPOS scheme. Experiments on several existing secured data communication methods were performed. Future research will apply deep learning methods to increase secured data communication performance.

References

1. Yang Y, Chen Y, Chen F (2021) A compressive integrity auditing protocol for secure cloud storage. IEEE/ACM Trans Netw 29(3):1197–1209
2. Zhang Y, Xu C, Cheng N, Shen X (2021) Secure password-protected encryption key for deduplicated cloud storage systems. IEEE Trans Dependable Secure Comput 1–18
3. Yang A, Xu J, Weng J, Zhou J, Wong DS (2021) Lightweight and privacy-preserving delegable proofs of storage with data dynamics in cloud storage. IEEE Trans Cloud Comput 9(1):212–225
4. Ning J, Cao Z, Dong X, Liang K, Wei L, Choo KK (2021) CryptCloud+: secure and expressive data access control for cloud storage. IEEE Trans Serv Comput 14(1):111–124
5. Zhu H, Wei Q, Yang X, Lu R, Li H (2021) Efficient and privacy-preserving online fingerprint authentication scheme over outsourced data. IEEE Trans Cloud Comput 9(2):576–586
6. Zhang L, Ye Y, Mu Y (2021) Multi-authority access control with anonymous authentication for personal health record. IEEE Internet Things J 8(1):156–167
7. Li W, Li X, Gao J, Wang H (2021) Design of secure authenticated key management protocol for cloud computing environments. IEEE Trans Dependable Secure Comput 18(3):1276–1290
8. Shen J, Liu D, Liu Q, Sun X, Zhang Y (2021) Secure authentication in cloud big data with hierarchical attribute authorization structure. IEEE Trans Big Data 7(4):668–677
9. Huang Q, Yang Y, Yue W, He Y (2021) Secure data group sharing and conditional dissemination with multi-owner in cloud computing. IEEE Trans Cloud Comput 9(4):1607–1618
10. Mendes R, Oliveira T, Cogo V, Neves N, Bessani A (2021) CHARON: a secure cloud-of-clouds system for storing and sharing big data. IEEE Trans Cloud Comput 9(4):1349–1361
11. Lu X, Pan Z, Xian H (2020) An efficient and secure data sharing scheme for mobile devices in cloud computing. J Cloud Comput: Adv Syst Appl (Springer) 9(60):1–13
12. Shen J, Yang H, Vijayakumar P, Kumar N (2021) A privacy-preserving and untraceable group data sharing scheme in cloud computing. IEEE Trans Dependable Secure Comput 1–13
13. Miao Q, Lin H, Wang X, Hassan MM (2021) Federated deep reinforcement learning based secure data sharing for internet of things. Comput Netw (Elsevier) 197:1–18
14. Devi KG, Devi RR (2021) S2OPE security: shuffle standard one time padding encryption for improving secured data storage in decentralized cloud environment. In: Materials today proceedings. Elsevier, pp 1–16

Multi-scale Avatars in a Shared Extended Reality Between AR and VR Users Shafina Abd Karim Ishigaki and Ajune Wanis Ismail

Abstract This paper introduces multi-scale avatar sizes in Extended Reality (XR), allowing avatars to shrink to a small size or enlarge to a giant size. Collaborative interfaces arise when one or more interfaces merge into one shared environment with a stable network connection, and cloud-based interaction happens between users; users from different locations or separate platforms join one room to interact. Multi-scale in collaborative MR is significant in situations that require precise avatar measurements to interact with and manipulate the content. However, correct or appropriate scaling for multi-scale remains unsolved, and very few works explore the method. Therefore, this chapter aims to produce a correct multi-scale technique between VR and AR users. This paper discusses the development of multi-scale interaction between VR and AR users and its implementation in a shared or remote application; the remote MR application involves different interaction techniques. Keywords Virtual reality · Multi-scale interaction · Network · Augmented reality · Extended reality

1 Introduction Developing a multi-scale feature for this project requires correct scaling. Using the correct scaling is crucial for applications that require precise measurements, such as in the medical, architecture, and automotive industries. It is critical to align precisely the way humans perceive the virtual world with the ways humans perceive


the real world [1]. Humans tend to use their natural bodies as a ruler to measure distance or object size in everyday life. Because of that, a virtual avatar in Virtual Reality (VR) that does not use proper measurement can make it difficult to judge distance or object size: a virtual object may seem smaller if the virtual avatar is too big, and bigger if the virtual avatar is too small. For example, given a controllable child body instead of an adult body scaled down to the size of a child, participants overestimate object sizes more than in the small adult body and have faster reaction times on an implicit association test for child-like attributes [2]. This concept also applies to object distance: if an avatar is too big, an object may appear near, and if the avatar is too small, it may appear far away. Either way, finding the correct scaling measurement based on the real world is a must so that humans have the same perception in the virtual world. Piumsomboon et al. [3] conducted a similar study that used multi-scale interaction where the action changed the avatar to either a giant or a miniature mode. Scaling the avatars should sustain the sense of size relative to the surroundings. To determine the correct scaling, a fixed avatar of the Augmented Reality (AR) user was used as a placeholder for reference, giving a better scaling measurement for the VR user avatar. A trigger method was used in the application to change the virtual avatar scaling, either by using a handheld controller or by entering the snow dome area. Figure 1 shows a study conducted by [4] on a dominant-size avatar and a common-size avatar, where the avatar size influences the perceived distance of an object: even though the object is placed at the same position for both the dominant and common scale avatars, the perceived distance differs. Fig. 1 An experiment shows that the size of the avatar influences object distance and size [4]


While research into collaborative systems involving either AR or VR users has received a lot of attention, research into collaboration in an MR environment involving both AR and VR has been relatively scant [5]. Hence this chapter proposes a method for developing multi-scale interaction between VR and AR users to produce a correct multi-scale technique. A correct multi-scale technique helps users acquire an understanding and feeling of the virtual environment's size and scale [4, 6, 7].

2 Related Works This section highlights research works related to this field. A shared environment means sharing a real-world environment with a remote environment. Our research on multi-scale avatars involves the collaboration experience between an AR local user and a VR remote user. One situation is where the VR user, with redirected gaze and movement, consistently gazes and points at the same position in a shared room [3] whenever the AR user is not facing the VR avatar. Another example is when the tiny avatar can disappear or snap back to the original size of the VR avatar if the AR user is facing the VR avatar. In this condition, the size of the avatars is significant, and multi-scale interactions happen between the two different sizes of avatars. As shown in Fig. 2a, an adaptive avatar imitates the VR user with redirected gaze even when the AR user cannot see the normal-size avatar. The Snow Dome application is an extension of the previous research on the Mini-Me adaptive avatar. The research focuses on increasing the collaboration experience between an AR local user and a VR remote user by enabling multi-scale interaction for VR users [8]. The Snow Dome application was made to let a miniature VR avatar interact with small virtual objects inside the dome [8]: VR users can become small and teleport inside the virtual dome to interact with the small virtual objects, or become a giant and interact with the reconstructed space. In another study by [9], the VR user can scale up into a giant mode or down into

Fig. 2 a VR user in miniature and giant mode [8]; b Superman versus giant mode [9]


a superman mode to view the project space from a different perspective, as shown in Fig. 2b. Figure 2b shows two situations, the VR user in a miniature mode and the VR user in a giant mode, helping the user experience a different viewing perspective. The Superman versus Giant research was conducted by [9] to study a multi-scale MR interface. They succeeded in addressing certain issues in MR collaboration, such as small-scale collaboration, and they proposed a solution where users remotely collaborate at a wider scale, such as on a building construction, by combining MR, Unmanned Aerial Vehicles (UAV), and multi-scale collaborative virtual environments. The UAV is used to fly around the building to map a 3-dimensional (3D) reconstruction of the building and place it inside the virtual environment. The UAV is an autonomous drone synchronized with the VR user's head and body movement, flying at the eye height of the user. The VR user can scale up to a giant, with the UAV adjusting itself to match the giant's eye height, or change to a superman mode to fly around the building. This helps users change their viewing perspective of the environment [9]. The chronology of research on remote MR environments, especially on remotely combining AR and VR into an Extended Reality (XR) shared environment, is presented in Table 1. The works started with the 2018 Snow Dome, and by 2022 researchers had explored avatars for communication in the XR environment.

Table 1 Previous works

Year | Research/project
2018 | Snow dome: a multi-scale interaction in mixed reality remote collaboration [8]
2018 | Superman versus giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface [9]
2019 | Improving information sharing and collaborative analysis for remote GeoSpatial visualization using mixed reality [10]
2020 | A mobile game SDK for remote collaborative between two users [11]
2020 | MR-Deco: mixed reality application for interior design [12]
2020 | 3D telepresence for remote collaboration in extended reality (XR) [13]
2021 | Multi-scale mixed reality collaboration for digital twin [5]
2022 | Duplicated reality for co-located augmented reality collaboration [14]

Mahmood et al. [10] created an immersive visualization system for studying multi-attribute and geospatial data using intuitive multi-modal interactions, emulating the impact of co-located cooperation. They provided a use scenario and discussed the findings to illustrate their system's capacity to enable users to complete a range of immersive and collaborative analytics tasks efficiently. In the study by [11], remote collaboration is enabled between two users in AR and VR with a mobile game setup. The authors in [12] discussed the user interaction technique implemented in the MR environment for an interior design application called MR-Deco, enabling AR and VR to integrate using holographic projection in the MR environment. Fadzli et al. [12] suggested remote multi-user collaboration as future work, to enable multiple users to collaborate in designing a virtual space together from different places. Meanwhile, the authors in [13] described human teleportation in the XR environment using advanced RGB-D sensor devices, explaining the phases to develop real human teleportation in XR and discussing the proposed collaborative XR system that successfully actualized user teleportation. Other than that, the authors in [5] describe an MR system based on digital twins for remote collaboration with user and space-size scalability; their approach facilitates communication between an AR host user and a VR remote user by sharing the AR host user's 3D digital twin. Moreover, in [14], the authors propose the notion of Duplicated Reality (DR) to improve co-located collaboration in AR by replicating the actual environment, including augmented information, into a distinct and interactive version.

3 Proposed Method This section explains the multi-scale avatar, user interaction, multi-scale features used in this research experiment, and the specification of software and hardware.

3.1 Multi-scale Avatar The multi-scale avatar for the VR user has three modes: giant, normal, and miniature. The crucial mode is the normal mode, which reflects the user's real height. The scaling was done by comparing the default height of the virtual avatar with the user's real height. To avoid complications, the avatar uses a scale of 1 for the default size. After computing the ratio between the user's real height and the default avatar height, this scale ratio replaces the default avatar scale. Table 2 shows the scaling values used for the three multi-scale modes.

Table 2 Multi-scale mode and scaling value definition

Multi-scale mode | Scaling value
Giant | Multiply the original scale by 4
Normal | Based on the user's real height
Miniature | Divide the original scale by 10

At this stage, the overall progress on both multi-scale interaction and the MR collaboration space is integrated so that both parts work together. To allow multi-scale to synchronize in the MR collaboration space, the scaling synchronization uses the Photon Transform View component. The Photon Transform View is a component that allows synchronization of the position, rotation,


and scaling of the local client across the network. The Oculus VR avatar is composed of three components: the head, the left hand, and the right hand. After the multi-scale feature is used, the local VR client needs to recalculate and reposition the offset of the head object relative to the newly scaled VR body. Since the remote VR avatar automatically synchronizes its transformation from the local VR avatar, the remote VR avatar did not recalculate and reposition the new head offset, so the remote VR body and Oculus head component kept using the previous offset. As a result, the local AR user could not see the remote VR body after the VR user used the multi-scale feature: even though the VR user sees the remote AR user, the AR user does not see the remote VR user, although the remote VR user is actually present in the AR local client at a different location. The scaling in the Photon Transform View component was therefore disabled to prevent automatic scaling synchronization across the network; instead, a Photon RPC is used to trigger recalculation and repositioning of the head offset of the remote VR avatar (as in Fig. 3).

Fig. 3 Multi-scale triggered by VR user
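The two calculations described here, deriving the normal-mode scale from the user's real height and re-deriving the head offset after a rescale, can be summarized in the short sketch below. It is a hypothetical Python illustration: the project itself implements this in Unity with Photon, and the default rig height of 1.75 m and the function names are assumptions.

```python
# Hypothetical sketch of the multi-scale calculations (not the Unity code).
DEFAULT_AVATAR_HEIGHT = 1.75   # metres; illustrative default rig height

def normal_scale(user_height_m: float) -> float:
    # Scale ratio that replaces the default avatar scale of 1
    return user_height_m / DEFAULT_AVATAR_HEIGHT

def mode_scale(mode: str, user_height_m: float) -> float:
    base = normal_scale(user_height_m)
    return {"giant": base * 4, "normal": base, "miniature": base / 10}[mode]

def head_offset(default_offset, scale):
    # The head offset is defined relative to the body, so it must be
    # recalculated (locally, and via an RPC on remote copies) on rescale
    return tuple(c * scale for c in default_offset)

s = mode_scale("giant", 1.68)          # a 1.68 m tall user in giant mode
print(round(s, 3), head_offset((0.0, 1.75, 0.0), s))
```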


was placed on top of the physical table. The marker height from the floor must be the same for both spaces. The physical table first was measured to find the height from the floor to the table surface. The measured height then was used to resize the virtual table to achieve the same height as the physical table. The same concept was used for calculating the virtual avatar size to have the same height as the user’s real height. Table 3 shows the software and hardware specifications set up for developing the project application. As shown in Table 3, the application was developed using Unity3D Game Engine in a windows 10 64-bit operating system. Unity3D is a game engine that supported 25 platforms and was used for developing AR and VR for the MR space for the project’s prototype development. By using Unity, system development kit (SDK) such as Vuforia SDK and Oculus SDK can be integrated into the project prototype. Vuforia is an AR SDK that enables the creation of AR applications. Oculus SDK is used to support the Oculus device with Unity. Visual Studio Community is a free version of Visual Studio integrated development environment (IDE) software and is used as a programming language script editor. Oculus Desktop is used to run the Oculus hardware for testing purposes and android OS is used for the android application for the AR user. Meanwhile, for the hardware part, during the development process, the project prototype was developed using an AMD Ryzen 7 core processor. Using total randomaccess memory (RAM) with the size of 16 gigabytes (GB) is more than enough for high performance. The minimum RAM size that can be recommended for running the project application is 6 GB while 4 GB might cause noticeable latency in the program. The graphic card that was used is NVIDIA GeForce GTX 1660 TI with a Virtual RAM (VRAM) with a size of 6 GB. The earphone is used for awareness cues for the AR user. Any smartphone with a camera can be used for the android application where the camera was used for tracking and the screen for displaying the superimposed virtual object. The Oculus Quest 2 HMD is used to render the display Table 3 Software and hardware requirements

Software | Hardware
Windows 10 Home 64-bit OS | AMD Ryzen 7 4800H CPU
Visual Studio Community 2019 | 16.00 GB DDR4 RAM
Unity 2019.4.18f1 game engine | NVIDIA GeForce GTX 1660 Ti
Vuforia SDK | Oculus Quest 2 HMD
Oculus SDK | Oculus Quest 2 Touch controller
Oculus Desktop | Earphone
Android OS | Smartphone with display and camera
 | ZapBox HMD
 | ZapBox controller

Table 4 The features of Oculus Quest 2

Features | Oculus Quest 2
MR content supported | No
VR content supported | Yes
Head rotation movement | Yes
Positional tracking | Yes
Controller | Yes

The features of the Oculus Quest 2 are shown in Table 4.

3.2 Tangible Interaction Using ZapBox

The ZapBox controller (as in Fig. 4a) allows tracking of and interaction with virtual objects for the AR user [15]. Using a 2-dimensional (2D) flat marker would cause the marker not to be detected while the AR user rotates the marker in all directions. To solve this problem, the Vuforia object scanner was used to scan all sides of the ZapBox controller for reference points, enabling tracking from all of the ZapBox's 3D surfaces for more stable tracking. A virtual line, or very thin cylinder object, was attached to the front end of the virtual ZapBox controller (as in Fig. 4b). At the end of the line, an invisible collider was attached to enable interaction with virtual objects. Upon contact with an interactable virtual object, a specific function is triggered, such as opening a door or disabling a trap. An invisible virtual collider surrounding the virtual ZapBox controller was also attached to enable the AR user to parry the virtual falling rock object. For the mobile-based head-mounted display (HMD), the AR user uses a smartphone camera to scan the image target, with the ZapBox HMD serving as the smartphone holder. The physical main image target anchors the desired virtual objects to the real world. The smartphone is also used as the processing unit for running the application. Figure 5 shows the workspace setup for the AR user. The smartphone is placed inside the mobile HMD and the user uses the ZapBox. The earphone is inserted into the right hole of the ZapBox HMD to connect with the smartphone. The user wears the HMD, puts the earphones on both ears, and finally looks at the physical marker so the virtual object overlaps on top of the marker. The AR application was built and pre-installed onto the smartphone before testing began. The printed marker was placed on top of a physical table with a height of 80 cm from the floor to reflect the virtual table used in the virtual environment. The ZapBox controller was placed, facing up, in the right hand of the AR user. The AR user is required to input their information and connect to the network before placing the smartphone inside the ZapBox HMD. The earphone was connected to the smartphone inside the HMD, with both earphone speakers placed at the AR user's ears. Lastly, the AR user is required to wear the HMD in order to begin the testing.


Fig. 4 Tangible ZapBox for mobile-based HMD interaction: a physical ZapBox, b virtual ZapBox

Fig. 5 ZapBox input device in AR


Fig. 6 Quest 2 controllers in VR

Figure 5 shows the smartphone with the earphone connected while the user holds the ZapBox controller for virtual object interaction.

3.3 Interaction in VR

The VR user also needs to interact with the interface; however, the VR user uses the Oculus Quest 2 device, and the Touch controllers are used for interaction with the virtual objects. A ray pointer was used to interact with the user interface (UI) in 3D space. By overlapping the ray pointer with a UI element, such as a UI button, the VR user can interact by clicking the controller button. The VR user can also use the multi-scale feature by clicking specific Oculus controller buttons. The VR application was built and pre-installed onto the Oculus Quest 2 HMD using the SideQuest application. The VR user is required to wear the HMD and hold a Quest 2 controller in each hand, as in Fig. 6. The earphone was connected to the Quest 2 HMD and both earphone speakers were placed at the VR user's ears. The application was executed, and the VR user is greeted inside the initial VR lobby scene to wait for further instructions.

4 Test Application: A Networked Maze Game

This section explains the implementation and the results of the proposed method. In order to allow remote collaboration between two users, the MR maze has been developed, and both users need to be in the same shared interface. Two users from different interfaces connect to the same space, the MR maze. Once the number of players in the room equals two, both users are transferred to the main scene.


Fig. 7 a Miniature AR user. b Giant size of VR user

Figure 7a shows the shared space for the AR user and Fig. 7b shows the shared space for the VR user, in normal mode, after successfully connecting to the same network room. Figure 7b shows the VR user in giant mode, where the VR user can see the AR user in miniature mode. To start the game, the VR user needs to change to miniature mode. After changing to miniature mode, the VR user is automatically teleported into the maze and the game timer starts. The VR user must protect himself from the enemy by shooting at the enemy. The AR user, in normal size, must help the VR user by disabling traps, opening hidden doors, parrying falling rocks, and giving directions to the VR user through verbal communication. The game ends if the VR user's health reaches zero, if all of the hidden chests are successfully collected, or simply when the multi-scale feature is toggled; the VR user is then automatically forced out of the maze. Figure 8 shows how the AR user sees the VR user in giant mode. Figure 8a shows the AR user helping the VR user open the hidden door and Fig. 8b shows the VR user seeing the AR user's hand open the hidden door. Figure 8c, d shows the AR user ray pointing using the ZapBox. In the maze, three walls and one door cover all sides of each chest. The chest is hidden from the VR user, and only the AR user knows the chest's position by looking at the maze from above. The AR user must point at and touch one of the four walls using the ZapBox controller to open the door and give access to the VR user. Figure 9a shows the chest hidden by the wall and Fig. 9b shows the door opened to uncover the chest behind it. A similar approach is used for disabling the spike trap: by pointing the ZapBox controller at the spike trap, it is disabled and gives way to the VR user. Figure 10a shows the spike trap before disablement and Fig. 10b shows it after disablement. The VR user receives damage if the spike trap is triggered. During maze gameplay the VR user is in miniature mode, and the AR user has no indicator of the VR user's current health. A UI was therefore placed on the wall outside of the maze showing the number of chests collected, the best time, the timer, and the health of the VR user, allowing the AR user to be aware of the game and collaboration information. The best time is updated if the timer is less than the last best time.


Fig. 8 AR user sees VR user in the real world

Fig. 9 Hidden chest hit using ZapBox controller

After the VR user exits the maze game by toggling the multi-scale feature, loses because health reaches zero, or successfully collects all chests, the timer, VR health, and total chests are reset. The timer automatically starts after the VR user enters the maze. Each time the VR user collects a chest, the total chests UI is updated to show the chests collected so far. The VR user can also view his current health by looking at the left hand of the avatar.


Fig. 10 Hidden chest open to enable obstacles using ZapBox controller

Figure 11a shows the shared space for the AR user and Fig. 11b shows the shared space for the VR user, in normal mode, after successfully connecting to the same network room. Figure 11c shows the AR user, in normal mode, seeing the VR user in miniature mode, and Fig. 11d shows the VR user, in miniature mode, seeing the AR user in normal mode. The AR user tries to protect the VR user from the falling rock by parrying it with the ZapBox controller. The AR user is the only one who can manipulate the network objects; the VR user needs to request ownership of a network object to interact with it. In AR space, only the virtual objects under the image target, such as the virtual table, are shown to the AR user. A couple of other virtual objects exist in the same scene but are disabled; if the application detects a VR interface, all the hidden virtual objects are enabled. Data synchronization is needed in a shared environment: if the VR avatar moves, the movement should also be reflected in the XR space so the AR user can see the VR user's movement in real time. The AR application acts as the server or master client, where the AR user has ownership of the network objects, but if the AR application is closed or disconnected while the VR user is still in the MR space, the VR user is automatically disconnected from the room. Figure 12 shows the diagram of the workflow for AR and VR in the XR shared space. With remote collaboration, the two users need to be able to communicate with one another. To allow online communication, Photon Unity Networking (PUN) Voice was used to enable verbal communication between the two users, giving both users more efficient collaboration. During the users' interactions with one another, 3D spatial sound was utilised in order to provide a greater sense of presence within the shared space between the users.


Fig. 11 AR and VR users in shared space

Fig. 12 Network to connect AR and VR users in a shared space



5 Conclusion

This paper has presented the design of multi-scale interaction between AR and VR users in the XR shared environment. In a remote collaboration, connecting the AR and VR users to the MR space requires a stable network connection. PUN supports cross-platform connections between the Android operating system (OS) and Windows OS. PUN can create either public or private rooms, which is suitable for this application. For the sake of simplicity, the PUN room for connecting users into the MR space used a public connection with a maximum of two users: one AR user and one VR user. In this application, the AR application acts as the server or master client, where the AR user has ownership of the network objects, but if the AR application is closed or disconnected while the VR user is still in the MR space, the VR user is automatically disconnected from the room. For a multi-user remote connection between AR and VR, data synchronization is needed: if the VR avatar moves, the movement should also be reflected in the MR space so the AR user can see the VR user's movement in real time. The Photon Transform View component was used to synchronize position, rotation, and scaling to all clients, and also to synchronize the network objects used for interaction in the MR space. The remote client also needs to identify AR and VR users to either give or receive collaboration. The Remote Procedure Calls (RPC) component from PUN was used to sync data of a virtual object in one client to the remote client like a mirror. For example, if a cube in the local client is tagged as AR mode, the same cube in the remote client shows the same tag; the same applies if the local client is tagged as VR mode. The RPC was also used to update the shared UI content so both users can see collaboration data such as the time taken to complete a task. With remote collaboration, the two users need to be able to communicate with one another, so PUN Voice was used to enable verbal communication, giving both users more efficient collaboration. To increase presence between the users in the shared space, 3D spatial sound was used during their communication. The application is integrated into both AR and VR, producing the MR collaboration space: it runs on both AR and VR spaces as a single application that automatically detects which interface to execute. The multi-scale feature is enabled through the Oculus Touch controller for scaling purposes in the VR space, while in AR it is disabled. Both AR and VR users need to input their height and name into the respective graphical user interface (GUI) input fields, and the virtual space is then superimposed on the physical space of the AR user, producing the MR collaboration space. Lastly, the result could be more polished by implementing a different approach for the collaboration and the multi-scale feature. Future work is to apply the multi-scale avatars to more serious tasks, such as in the architecture or medical fields, where users can collaborate using the multi-scale feature. Urban planning, or a dive into the virtual anatomy of the human body, is also a good starting point for future development of the collaboration and multi-scale features.



References

1. Ogawa N, Narumi T, Hirose M (2018) Object size perception in immersive virtual reality: avatar realism affects the way we perceive. In: 2018 IEEE conference on virtual reality and 3D user interfaces (VR). IEEE, pp 647–648
2. Lin L, Normovle A, Adkins A, Sun Y, Robb A, Ye Y et al (2019) The effect of hand size and interaction modality on the virtual hand illusion. In: 2019 IEEE conference on virtual reality and 3D user interfaces (VR). IEEE, pp 510–518
3. Piumsomboon T, Day A, Ens B, Lee Y, Lee G, Billinghurst M (2017) Exploring enhancements for remote mixed reality collaboration. In: SIGGRAPH Asia 2017 mobile graphics & interactive applications, pp 1–5
4. Langbehn E, Bruder G, Steinicke F (2016) Scale matters! Analysis of dominant scale estimation in the presence of conflicting cues in multi-scale collaborative virtual environments. In: 2016 IEEE symposium on 3D user interfaces (3DUI). IEEE, pp 211–220
5. Kim HI, Kim T, Song E, Oh SY, Kim D, Woo W (2021) Multi-scale mixed reality collaboration for digital twin. In: 2021 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct). IEEE, pp 435–436
6. Kopper R, Ni T, Bowman DA, Pinho M (2006) Design and evaluation of navigation techniques for multiscale virtual environments. In: IEEE virtual reality conference (VR 2006). IEEE, pp 175–182
7. LaViola Jr JJ, Feliz DA, Keefe DF, Zeleznik RC (2001) Hands-free multi-scale navigation in virtual environments. In: Proceedings of the 2001 symposium on interactive 3D graphics, pp 9–15
8. Piumsomboon T, Lee GA, Billinghurst M (2018a) Snow dome: a multi-scale interaction in mixed reality remote collaboration. In: Extended abstracts of the 2018 CHI conference on human factors in computing systems, pp 1–4
9. Piumsomboon T, Lee GA, Ens B, Thomas BH, Billinghurst M (2018b) Superman versus giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface. IEEE Trans Visual Comput Graph 24(11):2974–2982
10. Mahmood T, Fulmer W, Mungoli N, Huang J, Lu A (2019) Improving information sharing and collaborative analysis for remote geospatial visualization using mixed reality. In: 2019 IEEE international symposium on mixed and augmented reality (ISMAR). IEEE, pp 236–247
11. Ong YH, Ismail AW, Iahad NA, Rameli MRM, Dollah R (2020) A mobile game SDK for remote collaborative between two users in augmented and virtual reality. IOP Conf Ser: Mater Sci Eng 979(1):012003 (IOP Publishing)
12. Fadzli FE, Ismail AW, Talib R, Alias RA, Ashari ZM (2020a) MR-deco: mixed reality application for interior planning and designing. IOP Conf Ser: Mater Sci Eng 979(1):012010 (IOP Publishing)
13. Fadzli FE, Kamson MS, Ismail AW, Aladin MYF (2020b) 3D telepresence for remote collaboration in extended reality (XR) application. IOP Conf Ser: Mater Sci Eng 979(1):012005 (IOP Publishing)
14. Yu K, Eck U, Pankratz F, Lazarovici M, Wilhelm D, Navab N (2022) Duplicated reality for co-located augmented reality collaboration. IEEE Trans Visual Comput Graph 28(5):2190–2200
15. Kuhail MA, ElSayary A, Farooq S, Alghamdi A (2022) Exploring immersive learning experiences: a survey. Informatics 9(4):75 (MDPI)

Pattern Recognition of Durian Foliar Diseases Using Fractal Dimension and Chaos Game

Mia Torres-Dela Cruz, V. Murugananthan, R. Srinivasan, M. Kavitha, and R. Kavitha

Abstract Like all cultivated plants, durian is vulnerable to diseases all year round. Early detection of these diseases in the durian plant is essential to a productive harvest, because earlier preventive measures can then be applied. A fractal dimension model is being developed for the early detection of diseases in durian foliage so immediate actions may be taken. This is a preliminary study using fractal dimension and the chaos game for pattern recognition. A durian leaf inflicted with advanced disease is examined according to the fractal dimension of its image using the box-counting dimension algorithm. First, the self-similarity of the durian leaf's disease pattern is identified; second, the fractal is recreated with the chaos game random walk, based on the fractal principle that a pattern's self-similar fractal at the first stage of the disease is identical to that at full infection. The paper proposes this pattern recognition algorithm for the early detection of durian foliar diseases as decision support for a disease management system. The proposed model for pattern recognition is developed with a fractal dimension and chaos game algorithm set.

Keywords Chaos game theory · Fractals · Fractal dimension · Pattern recognition · Foliar diseases · Durian leaf

M. T.-D. Cruz
Systems Technology Institute, Baguio City, Philippines
e-mail: [email protected]

V. Murugananthan
School of Computing, Asia Pacific University of Technology & Innovation (APU), Taman Teknologi Malaysia, Jalan Teknologi 5, 57000 Kuala Lumpur, Malaysia

R. Srinivasan (B) · M. Kavitha · R. Kavitha
Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R & D Institute of Science and Technology, Chennai, Tamil Nadu, India
e-mail: [email protected]

M. Kavitha
e-mail: [email protected]

R. Kavitha
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_43


1 Introduction

The durian fruit, also called the "king of fruits", is found in Southeast Asia. It is an important fruit product in Thailand, Malaysia, Vietnam, and the Philippines that brings economic advantage. Durian's unique taste and smell are extremely valued in Asia, and because of this the prices for the fruit are comparatively high [1, 2]. The fast upgrade of technology has opened new possibilities for the agricultural sector to take advantage of new methods and techniques that make agriculture management more efficient and informative. There is now strong motivation in Southeast Asia to develop new systems to detect and prevent plant diseases, particularly in durian. The use of emerging technologies has endless potential because new knowledge is being explored and applied to uncommon areas, in this case, agriculture and the production of tropical fruits. Researchers are continually looking for ways to improve agricultural systems by finding new ways to use technology. This has led to the exploration of mathematical tools such as chaos theory, specifically fractal dimension, applied to agricultural issues such as the early detection of diseases in high-value plants. This paper proposes a model using fractal dimension and the chaos game for pattern recognition of durian foliar disease patterns. The focus of the paper is the development of an algorithm for fractal recreation of durian foliar disease patterns, done to identify, map, and detect the disease at the earliest possible time. This algorithm is helpful in several ways: it may be used by farm managers to make decisions on durian diseases and how they can be controlled, and information from this study may be essential for developing other decision-support systems for agriculture experts and others in this field. The paper is organized as follows: a background on durian and durian diseases; a discussion of fractal dimension, the chaos game, and pattern recognition; the proposed algorithm development; and conclusion and recommendation.

1.1 The Durian Plant

Durian's scientific name is Durio zibethinus, a member of the family Bombacaceae and belonging to the genus Durio [2–4]. There are approximately 30 known species of Durio, and around 9 species produce edible fruits. The most popular of these edible species is zibethinus, which is the most common durian [2]. Durio zibethinus L. is the lone species that is commercially cultivated on a large scale. It is mainly open-pollinated, which is why it shows different sizes, odors, and colors of the fruit flesh; it also shows differences in seeds, trees, and leaf characteristics [3]. The durian tree grows to an average height of 38 m with a broad conical frame and straight trunk, which when mature can be between 50 and 120 cm in diameter [5]. Trees can survive for more than 150 years.


Fig. 1 Durian leaves—back and front surfaces. Source stuartexchange.com

Figure 1 shows the back and front surfaces of durian leaves. Durian leaves are oblong, between 12 and 22 cm long and 4–8 cm wide. Leaf color ranges from pale green to olive green with a tinge of bronze. The top surface is glossy, but the back surface is bronze brown with hairy scales, and the edges are smooth. The arrangement of the leaves is alternate [6]. Durian flowers are large, 4–5 cm long and 2–2½ cm wide, and are found usually on the branches and sometimes, though rarely, on the main trunk. The flower buds are in clusters of around 30 buds. They hang from mature branches with the buds, and eventually the flowers, hanging downwards. The flower has 5 light cream petals, with a smell of sour milk, which open in the late afternoon. The durian fruit is large, weighing around 2–5 kg, between 15 and 25 cm long and 15–20 cm wide, and irregular oblong to round in shape [5]. The husk ranges from yellow green to dark green going to brown, covered in strong thorns or spines. Inside the durian fruit are compartments called locules, which are composed of several fused carpels; there may be between 3 and 7 carpels but commonly 5. The fruit pulp, called the aril, contains the seed and is edible. It is light golden yellow in color [7]. Figure 2 shows the durian leaves and buds.

1.1.1 Durian Foliar Diseases

Like other tropical tree crops, durian is severely affected by diseases which can impede growth or prevent flowering, causing the plant not to bear fruit or, worse, killing the plant. The common foliar diseases of durian are: Phytophthora palmivora [8], Rhizoctonia solani [9], Colletotrichum gloeosporioides [10], Phomopsis durionis [11], Meliola durionis, Cephaleuros virescens, Corticium salmonicolor, Pythium sp., and Rigidoporus lignosus. Anim [12] has categorized four foliar diseases: (1) durian leaf blight, a disease caused by Rhizoctonia solani, is seen in Fig. 3. The spots appear as water-soaked smudges on the leaves


Fig. 2 Durian leaves and buds. Source durianrasasedap.com

which eventually expand and enlarge. When the spots dry up, they become light brown in color, later darken, and make the leaves curly and shriveled. The damaged leaf reduces the photosynthesis process, and infected leaves also fall off. Fewer leaves and less photosynthesis lead to a problem in the flowering process, producing fewer buds and thus decreasing fruit production. The solution for this disease is a fungicide, such as benomyl, thiram, fentin acetate, or copper, sprayed on the tree; (2) durian leaf spot, caused by Phomopsis durionis, is seen in Fig. 4. Leaves infected by this disease fall off prematurely. The signs to look for are tiny yellow spots that turn brown. The spots, however, are difficult to see on the bottom side of the leaf because of the presence of the peltate leaf scales which cover them. Control for the disease includes systemic fungicide sprays, i.e., carbendazim, benomyl, and thiophanate methyl; (3) durian leaf anthracnose, caused by Colletotrichum gloeosporioides, appears near the tip of the leaves as round spots and irregularly shaped patches with gray-brown centers and darker colors, like dark brown or even darker up to black, at the edges, as seen in the picture (Fig. 5). It is recommended to use foliar sprays to control the disease, with fungicides like carbendazim combined with chlorothalonil or mancozeb and maneb, benomyl, thiophanate methyl or propineb, and metiram; and (4) durian leaf green alga rust (Fig. 6) is a disease caused by a green alga, Cephaleuros virescens. Green alga rust produces small rusty spots on the leaves. It reduces the photosynthesis process, which affects flower production. The manifestation of the disease is rust-colored spots on the upper side of the leaf. This can be controlled with fungicides such as fentin acetate, copper, benomyl, or thiram, sprayed on the leaves. The study is confined to analyzing these four foliar diseases because all have very similar spot appearances. But since the diseases are caused by different pathogens, the control solutions are also different for each.


Fig. 3 Durian leaf blight. Source durianinfo.blospot.my

Fig. 4 Durian leaf spot. Source http://dost-bentre.gov.vn

Fig. 5 Durian leaf anthracnose. Source jaowoffice.weebly.com

Most of the time, identification of the disease only takes place when the outbreaks on the leaf are already severe. Durian orchard managers would prefer to identify the disease early so they can apply a preventive solution before the disease spreads.


Fig. 6 Durian leaf green alga rust. Source jaowoffice.weebly.com

2 Literature Survey

Aditya Deshpande et al. [1] showed how chatbots progressed from a rudimentary model to a complex artificial intelligence system, and also discussed how a chatbot can accurately resemble a human conversation by analyzing the user's input and creating the relevant answer using natural language processing (NLP). David Corser et al. [2] described how to build a travel bot guidance arrangement using data from social media sites such as Facebook, Twitter, and Instagram, as well as linked data. People well placed to provide local travel information, such as bus operators and transportation officials, post on Twitter or Facebook. Dahiya et al. [3] showed how to create an artificially intelligent chatbot, including how to create a dialogue box and module descriptions, where pattern matching compares the user's query to the terms in the database. The results of a survey on intelligent chatbots, which outperform rule-based chatbots, are described by Almansor et al. [4]; these chatbots use deep learning to comprehend the user's intents, emotions, and other data. Jwala et al. [5] outlined the differences between chatbots produced with NLP and with deep learning, two methods that can be used; the metrics for expanding chatbot implementation were also examined, which will help with future updates. Ramachandran et al. [6] looked into user acceptance of chatbots. In recent years, users have become far more acclimated to chatbots than ever before, owing to the chatbot's capabilities, which include customer service and interaction. The essential technique AIML was explained by Sameera et al. [7]; the goal of AIML is to make conversational modelling easier in the context of a 'stimulus-response' process. Sasa Arsovski et al. [8] investigated the source languages used in the production of AIML and ChatScript, as well as the similarities and differences in the execution of chatbots. Du Preez et al. [9] shed light on the format and summarization of XML communications. Client, server, and content acquisition are the three essential components of their chatbot, which uses a SOAP (Simple Object Access Protocol) server.


The retrieval-based conversation bot and generative chatbots were described by Zia Babar et al. [10]. A rule-based chatbot responds to queries that are already in the database, whereas generative bots employ deep learning to learn from the queries asked by users. Eleni Adamopoulou and Lefteris Moussiades explain the history, technology, and various applications of chatbots with machine learning approaches [11].

3 Proposed Algorithm

3.1 Fractal Dimension and Pattern Recognition of Durian Foliar Disease

People have long been fascinated by patterns seen in ordinary, everyday things. These natural phenomena are explained by fractals [13]. Our world and beyond are filled with fascinating patterns, some perfect and some not: the earth, moon, and sun are seemingly spherical but not perfectly so; mountains, hills, and volcanoes are not exactly cone-shaped; the stars are not star-shaped; the heart is definitely not heart-shaped [14]. All these are, in reality, approximated or generalized. Wegner and Peterson [15] say that fractals are responsible for the rich structure of the universe, which encompasses all scales from the incalculable galaxies at extraordinarily far distances to the mysterious inner electric flashes and vibrations of the subatomic realm. A fractal is described as an infinite pattern somehow compressed into a finite space [15]. The Mandelbrot set, considered an abstract fractal, is generated by Z_{n+1} = Z_n^2 + C, which can be calculated by the computer repeatedly to generate a never-ending pattern. It is a simple formula, but applying it over and over on a large scale generates the breathtaking Mandelbrot set [16]. A fractal has a dimension, called the fractal dimension, which often exceeds its topological dimension and can take fractional (non-integer) values. The self-similar idea of repetition, or what can be called a self-repeating pattern, is represented in detail at the same scale, or more or less the same at different scales. In Mandelbrot's [17] definition of fractals, they are "geometrical objects with the fractal dimension represented by D, whose fractal geometry covers objects and spaces". Their location is specific, (x, y, z), and they occupy a space of dimension greater than or equal to the object's own dimension. Jadoon and Sayab [18] analyzed different alteration zones in porphyry copper deposits in Pakistan with the use of fractal dimension. Pattern recognition, on the other hand, is primarily involved with the description and classification of measurements taken from physical or mental processes [19]. The study of pattern recognition is composed of two things: the analysis of pattern characteristics and the design of recognition systems.
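As a quick illustration of the Mandelbrot iteration Z_{n+1} = Z_n^2 + C mentioned above, the following is a minimal Python sketch of the escape-time membership test; the escape radius of 2 and the iteration cap are conventional choices, not values from this paper:

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    # Iterate z_{n+1} = z_n^2 + c starting from z_0 = 0
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:  # once |z| > 2, the orbit is guaranteed to diverge
            return False
    return True

print(in_mandelbrot(0j))      # True: the origin never escapes
print(in_mandelbrot(1 + 1j))  # False: this point escapes quickly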


Similar research has applied pattern recognition in agriculture, specifically to leaf disease detection. One such study is by Muthukannan and Latha [20], who used pattern recognition with an automated image segmentation system to detect diseases and extract the features present in a leaf from its color image. They used a new approach named Particle Swarm Optimization (PSO) for image segmentation. Another paper, by Kokane and Bhale [21], used pattern recognition to detect diseases of the cotton leaf. Diseased cotton leaf images were classified with an error back-propagation neural network. Seven invariant moments were extracted from three different types of diseased leaf images, and the reported classification precision was 85.52%. An algorithm named snake segmentation was used to provide an effective procedure to identify the unhealthy spot; however, this algorithm was found to process very slowly. Fractal dimension has also been used in agricultural applications. One such study, by Ji-xiang et al. [22], developed a recognition system for plant leaves using fractal dimension; the fractals of the leaf edge and vein were explored and applied to plant leaf classification. Eguiraun et al. [23] also used fractal dimension together with pattern recognition to study contaminated fish responses in aquaculture. That paper's objectives were to develop an image acquisition methodology and to analyze the processing and nonlinear trajectory of the collective fish response, by capturing fish movement information through images, video, or echo sounds. The information was processed by different nonlinear algorithms using fractal dimension and entropy to identify its nonlinear features. Fractal dimension is a statistical value that indicates how completely a fractal fills space as one zooms to finer and finer scales. Below is the formula that defines the fractal dimension (D) of a self-similar object:

D = log(number of self-similar pieces) / log(magnification factor)    (1)

For example, the Sierpinski triangle consists of three self-similar copies of itself at a magnification factor of two, giving D = log 3 / log 2 ≈ 1.585.

In Linden’s paper [13], he said that FD is a tool to determine self-similarity of a pattern. An object is considered self-similar if it is approximately the same shape in all scales. From this premise, this study extracts parameters to identify the fractal dimension of the different leaf samples inflicted with various diseases to get each one’s self-similarity and as such, will be able to recognize it when the pattern is recreated. The sample images were taken from the normal durian leaf and the diseased leaves at the advanced stage. Images of the normal leaf and the diseased leaves are taken and the pattern recognition process is used on the leaf samples. This is based from the principle (self-similarity idea) that the patterns that are generated in the advanced stage will have the same fractal dimension as that of the image at the early stage [14]. This means the pattern at the starting period of the leaf infection, when reproduced, it will be the same pattern when it is already in the advanced stage and the mathematical model of the fractal is the same in both. A small spot’s D is regenerated and its pattern will be matched and recognized. Therefore, from this


Fractal dimension may be measured with different methods, such as self-similar fractal calculation, varied measured lengths through Richardson's method, the Hausdorff-Besicovitch dimension, and the box-counting method. This paper used the box-counting method, the most popular method in this field. The box-counting method is also called the brute force method or, sometimes, the grid method. The reason for its popularity is the ease of its mathematical calculation and empirical estimation [17, 24]. It is used as an estimation procedure to compute the FD of complicated objects, when the object's dimension cannot be calculated using numerical formulas or when the slope dimension cannot be determined accurately. It is straightforward and adaptable to many situations, from very tiny objects to very large ones, hence its popularity. Basically, in box-counting dimension, a subject is covered with a series of boxes of the same size (δ). The boxes can be round or square, intersecting or mutually exclusive, planar grids or cubic grids. If the subject is completely covered by these boxes, then as the box size approaches zero, the ratio of the logarithm of the number of non-empty boxes to the logarithm of the reciprocal of the box size gives the box dimension. The fractal analysis was implemented by digitizing the images at a lateral resolution of 256 × 256 pixels. In Mandelbrot's [17] calculation of fractal dimension, "If a finite set in n-dimensional Euclidean space R^n is covered by boxes of size δ, the number of non-empty boxes is N(δ), and then the box dimension, D, may be identified as":

D = lim_{δ→0} ln N(δ) / ln(1/δ)    (2)

For statistically self-similar fractals, which are characteristic of the infected leaf images, this research considers the box-counting dimension the most appropriate calculation method, especially since the pattern is irregularly distributed on a planar graph. The research used the following procedure: images of both the normal leaf and the fully diseased leaf were processed, and a gray-scale graph was derived from these variables. A space decomposition technique corresponding to the gray scale was applied. The following were considered:

i. The maximum gray-scale value is z_max
ii. The minimum gray-scale value is z_min
iii. The analyzed gray-scale value of the leaves is z, which is also the pattern analysis threshold
iv. The formula for the normalized gray-scale value, z_h, is:


z_h = (z − z_min) / (z_max − z_min)    (3)
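The following is a minimal Python sketch of this box-counting estimate, combining the normalization of Eq. (3) with the dimension estimate of Eq. (2). The 0.5 binarization threshold, the halving sequence of box sizes, and the random test image are illustrative assumptions, not values from the paper:

import numpy as np

def box_counting_dimension(gray, threshold=0.5):
    # Eq. (3): normalize gray levels to [0, 1], then binarize at the threshold
    z = (gray - gray.min()) / (gray.max() - gray.min())
    binary = z > threshold

    sizes, counts = [], []
    size = binary.shape[0] // 2
    while size >= 1:
        count = 0
        for i in range(0, binary.shape[0], size):
            for j in range(0, binary.shape[1], size):
                if binary[i:i + size, j:j + size].any():  # non-empty box
                    count += 1
        sizes.append(size)
        counts.append(count)
        size //= 2

    # Eq. (2): D is the slope of ln N(delta) versus ln(1/delta)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes, dtype=float)),
                          np.log(np.array(counts, dtype=float)), 1)
    return slope

# Thresholded random noise fills the plane, so D should come out close to 2
rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((256, 256))))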

4 Implementation

4.1 Chaos Game

The chaos game is a simple algorithm that locates a point in the plane at each stage; repeating this procedure produces a set of points. The sets of points that ultimately emerge produce remarkable, intricate structures, even though there is no evident connection between the algorithm and fractal sets [25]. This is, however, the procedure used to recreate self-similarity in the fractal dimension of the durian leaf samples. Barnsley's [26] chaos game was developed in 1988, when he described an algorithm in which one randomly picks any point inside a regular polygon (n-gon); the succeeding point is then drawn a fraction r of the distance between the current point and a polygon vertex picked at random. Continuing this process yields a fractal. The same procedure is used in this paper as a mapping mechanism through which new fractals are reproduced from the diseased leaf sample images. One similar work is the paper of Jampour et al. [27], which applies the chaos game in the fractal recreation of fingerprints for identification. Jampour's paper shows that a fractal can be produced from a fractal, based on the Shannon theorem, using the random walk mechanism with the aid of a polygonal procedure. Two essential points were deliberated in that paper: (1) a new fractal can be recreated by performing the chaos game mechanism on a fractal; (2) aside from the properties of a fractal, the detection of some parameters is valuable in the identification process and, as such, in pattern recognition [28]. Figure 7 describes the vertices obtained from the leaves, with A and B values. The chaos game algorithm using the Shannon theorem random walk is the following:

1. A number of vertices must be specified: (a_1, b_1), (a_2, b_2), …, (a_N, b_N).
2. The scaling factor, r, must be identified, where r < 1.
3. The chaos game is played starting with the point (x_0, y_0).
4. One of the vertices is picked randomly, i.e., (a_i, b_i). Point (x_1, y_1) is a fraction r of the distance between (a_i, b_i) and (x_0, y_0), where:

   (x_1, y_1) = r · (x_0, y_0) + (1 − r) · (a_i, b_i)

5. Suppose there are six vertices with r = 1/2, and (a_4, b_4) is the first vertex selected at random; the next point then lies halfway between (x_0, y_0) and (a_4, b_4).


Fig. 7 Vertices obtained. Note: if r = 1, the point (x_1, y_1) is the same as the initial point (x_0, y_0); if r = 0, the point (x_1, y_1) is the same as the selected vertex (a_i, b_i)

6. Another vertex, (a_j, b_j), is picked randomly. Point (x_2, y_2) is given by the following formula:

   (x_2, y_2) = r · (x_1, y_1) + (1 − r) · (a_j, b_j)    (4)

The chaos game plot is generated in this way, with the point sequence (x_0, y_0), (x_1, y_1), …, (x_n, y_n) continued for a specified number of iterations.
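A minimal Python sketch of this random walk is shown below, assuming vertex coordinates and per-vertex selection probabilities as inputs. The example triangle, equal weights, r = 1/2, and 500-point count are illustrative choices, not the vertices extracted from the leaf samples; the triangle case is the classic configuration that reproduces the Sierpinski gasket:

import random

def chaos_game(vertices, probabilities, r=0.5, n_points=500, start=(0.0, 0.0)):
    x, y = start
    points = []
    for _ in range(n_points):
        # Pick a vertex at random according to the per-vertex probabilities
        ax, ay = random.choices(vertices, weights=probabilities, k=1)[0]
        # Blend the current point with the chosen vertex, per Eq. (4):
        # (x', y') = r * (x, y) + (1 - r) * (a_i, b_i)
        x = r * x + (1 - r) * ax
        y = r * y + (1 - r) * ay
        points.append((x, y))
    return points

# Three vertices with equal probability and r = 1/2 yield the Sierpinski triangle
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
pts = chaos_game(triangle, probabilities=[1, 1, 1])
print(pts[:3])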

5 Proposed Results

Calculating the box-counting dimension for pattern recognition only determines whether the variable falls within a box; reproduction of the fractal is needed to establish self-similarity. To do this, the chaos game is used to reproduce the disease patterns for recognition against other diseases. Because the patterns of the diseased leaves are all fractals, fractal dimension self-similarity applies, but the patterns need to be reproduced using another method, in this case the chaos game random walk. New fractals are reproduced and new parameters are obtained [14]. The proposed pattern recognition leaf detection algorithm is set out in Fig. 8. The sample leaf images were preprocessed and converted to grey-scale, then smoothed to remove noise that could affect the image extraction variables. Image extraction was done and vertices were set using durian leaf sample B. From the eight vertices, the chaos game was played following the random walk algorithm discussed in Sect. 4.1, with probabilities set for each vertex as shown in Fig. 12.


Fig. 8 Proposed pattern recognition algorithm

Fig. 9 Sample durian leaf grey-scale images: a normal leaf, b leaf blight, c leaf green alga rust, d leaf anthracnose

Figure 10 shows sample B with the eight vertices identified ((A_1, B_1) to (A_8, B_8)), and Fig. 11 shows the chaos game fractal reproduction. The generated fractal shows 500 points ((x_1, y_1) to (x_500, y_500)). The same procedure was done with all sample durian leaves, one of which is normal while the other four are images of durian leaves with advanced diseases. Tests were done with different numbers of points, from 100 to 1000. The software used is a chaos game fractal generator developed by Shodor [28], which allows input of the number of vertices, the probability of each vertex (between 0 and 1), and the number of points (dots), from 0 to 10,000.


Fig. 10 Sample B, with eight vertices identified

Fig. 11 Chaos game fractal reproduction (500 points)

Figure 12 shows the eight vertices with their set probabilities, from Vertex 1 to Vertex 8.


Fig. 12 The eight vertices with set probabilities

6 Conclusion

In this paper, an algorithm for pattern recognition for the early detection of durian foliar diseases is proposed using fractal dimension, specifically the box-counting method, and the chaos game using the random walk method. The algorithm allows the fractal of an infected leaf to be reproduced and mapped to the fractal of each disease to recognize which disease it is. The same procedure is done for all the diseases under study, identifying the fractal for each for appropriate mapping and recognition. The fractal recreated from a newly infected leaf is the same as that of an advanced infected leaf, allowing detection through pattern recognition at the early stages of infection. This proposed algorithm may be of interest to future research on other essential plants or other agricultural applications. It can be used in decision support systems concerning disease management for the durian plant and other agricultural products of the same category. Future studies should be done on the same parameters but using other methods of fractal dimension, chaos theory, game theory, and other contributors. It is further recommended that other areas of pattern recognition, such as audio and image, consider this algorithm using the same principle of self-similarity.

References

1. Lim TK (1997) Durian. In: Hyde K (ed) The new rural industries: a handbook for farmers and investors. Rural Industries Research and Development Corporation, Canberra, Australia, pp 279–285. http://www.rirdc.gov.au/pub/handbook/durian.html
2. Foo KY, Hameed BH. Transformation of durian biomass into a highly valuable end commodity: trends and opportunities
3. Brown MJ (1997) Durio-A bibliographic review. IPGRI Office for South Asia, New Delhi, India
4. Subhadrabandhu S, Ketsa S (2001) Durian king of tropical fruit. CABI Publishing, New York
5. Voon YY, Hamid NSA, Rusul G, Osman A, Quek SY (2007) Characterisation of Malaysian durian (Durio zibethinus Murr.) cultivars: relationship of physicochemical and flavor properties with sensory properties. Food Chem 103:1217e27
6. Ng F (1988) Fruit tree biology. Chapter 9, 102–125. In: Earl of Cranbrook (ed) Key environments Malaysia. Pergamon Press, Oxford
7. Kothagoda N, Rao AN (2011) Anatomy of the durian fruit—Durio zibethinus. J Trop Med Plants 12(1)
8. Fftc.agnet.org (2017) Managing phytophthora disease of durian. http://www.fftc.agnet.org/library.php?func=view&id=20110901051439&type_id=7. Accessed 25 Jun 2020
9. Thuan TTM, Tho N, Tuyen BC (2008) First report of Rhizoctonia solani subgroup AG 1-ID causing leaf blight on durian in Vietnam. APS J 92(4):648
10. Phoulivong S, Cai L, Chen H (2010) Fungal Diversity 44:33. https://doi.org/10.1007/s13225-010-0046-0
11. Tongsri V, Songkumarn P, Sangchote (2016) Leaf spot characteristics of Phomopsis durionis on durian (Durio zibethinus Murray) and latent infection of the pathogen
12. Anim M (2017) Durian—Foliage diseases. Animhosnan.blogspot.my. http://animhosnan.blogspot.my/2013/12/durian-foliar-diseases.html. Accessed 17 June 2020
13. Lindén F (2012) Fractal pattern recognition and recreation. Examensarbete 30 hp, Uppsala Universitet
14. Cruz MD, Velayutham M, Luckose V, Eagananthan U, Venu S (2013) Chaos and fractal dimension in image processing for detection of rice leaf blast disease. In: International symposium on mathematical sciences and computing research, Perak, Malaysia
15. Wagner T, Peterson M (1991) Fractal creations. Waite Group Press. ISBN 1878739050, ISBN-13 9781878739056
16. What are fractals?—Fractal Foundation (2017). Fractalfoundation.org. http://fractalfoundation.org/resources/what-are-fractals/. Accessed 17 June 2020
17. Mandelbrot BB (1983) The fractal geometry of nature. W. H. Freeman Publishing Company, San Francisco
18. Jadoon SK, Sayab M (2012) Fractal pattern of different alteration zones in porphyry copper deposits of Rekodik, Chagai belt Pakistan. National Center of Excellence in Geology, University of Peshawar, Pakistan
19. Fu K, Rosenfeld A (1976) Pattern recognition and image processing. IEEE Trans Comput C-25(12) (IEEE)
20. Muthukannan K, Latha P (2015) A PSO model for disease pattern detection on leaf surfaces. Image Anal Stereol 34:209–216. https://doi.org/10.5566/ias.1227
21. Kokane C, Bhale N (2017) To detect and identify cotton leaf diseases by using pattern recognition techniques. Int J Adv Res Innov Ideas Educ (IJARIIE) 3(4). e-ISSN 2395-4396. DUI: 16.0415/IJARIIE-4886
22. Du J, Zhai C, Wang Q (2013) Recognition of plant leaf image based on fractal dimension features. Neurocomputing 116:150–156 (Academic Search Complete, Ipswich, MA). Accessed 09 June 2020
23. Eguiraun H, López-de-Ipiña K, Martinez I (2014) Application of entropy and fractal dimension analyses to the pattern recognition of contaminated fish responses in aquaculture. Entropy 16(11):6133–6151 (Academic Search Complete, Ipswich, MA). Accessed 01 Nov 2017
24. Chapter 4: calculating fractal dimensions (2017) Wahl.org. http://www.wahl.org/fe/HTML_version/link/FE4W/c4.htm#box. Accessed 01 July 2020
25. ThatsMaths (2014) The chaos game. https://thatsmaths.com/2014/05/22/the-chaos-game/. Accessed 21 June 2020
26. Barnsley MF (2006) Superfractals. Cambridge University Press, Cambridge, New York, Melbourne
27. Jampour M, Yaghoobi M, Ashourzadeh M, Soleimani A (2010) A new fast technique for fingerprint identification with fractal and chaos game theory. Fractals 18(03):293–300
28. Shodor.org (2015) Interactivate: the chaos game. The Shodor Education Foundation, Incorporated. http://www.shodor.org/interactivate/activities/TheChaosGame. Accessed 17 June 2020

Pressure Ulcer Detection and Prevention Using Neural Networks

A. Durga Bhavani, S. Likith, Khushwinder Singh, and A. Nitya Dyuthi

Abstract Pressure ulcers (PU) or decubitus ulcers (DU) are localized injuries to the skin or underlying tissue, usually over a bony prominence, resulting from unrelieved pressure. They are deep wounds that can potentially reach the bones and put the patient under extreme pain. They affect people who do not have much ambulation and are bound to a bed all day with minimal mobility. This study observed patients at risk of bedsores and attempts to tackle the issue with predictive and preventive methods. This work intends to reduce the risk of developing pressure ulcers in patients with COVID and those who are bedridden. The dataset was collected from post-surgery and paralyzed patients at a hospital. The overall F1 score of the system is 98%, and the accuracy is 97.5%. The system is affordable, disposable, wireless, and inconspicuous, with fabric-based pressure sensors and a hygrometer that continuously monitor tissue status in at-risk areas.

Keywords Ulcer prediction · Moisture sensor · Neural network · Prevention · Bed sores

1 Introduction

Older people are prone to the risk of suffering from health problems irrespective of whether they are at home, in nursing homes, or in hospitals. The number of cases in which patients are prescribed bed rest is increasing because of the global pandemic, COVID-19. In most situations, some type of monitoring is beneficial in helping the medical staff stop the patient's health status from deteriorating. Decubitus ulcers (DU), also known as bedsores or pressure ulcers, are wounds that form when the skin is subjected to continual pressure over an extended period. Shoulder blades, heels, elbows, and the tailbone area are the most common sites for these ulcers to occur, as shown in Fig. 1. This is because these are the bonier areas of the body,

A. D. Bhavani (B) · S. Likith · K. Singh · A. Nitya Dyuthi
Department of Computer Science and Engineering, BMS Institute of Technology & Management, Bangalore, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_44


Fig. 1 Bedsores pressure points in the human body

exposing them to much more pressure than areas with muscular prominence. They are frequent injuries that primarily affect elderly and feeble individuals and are a big source of worry in healthcare facilities. The screening and prevention methods currently used involve moving patients every 1–3 h and require a lot of manual labor; they can also be highly subjective. The Centers for Disease Control (CDC) reports that DUs impact 25 lakh individuals annually, including 1.6 lakh patients in nursing homes. Each year, the United States spends about $11.6 billion on treating them. In this work, a device is used to continually monitor tissue status in at-risk areas using low-cost, disposable, wireless, unobtrusive, fabric-based pressure sensors and a hygrometer (to assess moisture levels on the skin). Multiple decubitus ulcer risk assessment tools are used worldwide, the most popular being the Braden, Norton, and Waterlow scales. The Braden scale [1] proposes scoring various parameters in six subscales concerning the physical status of the subject: sensory perception, skin moisture, activity, mobility [2], friction and shear, and nutritional status [3]. After evaluating all the parameters, a score indicates how susceptible the subject is to a decubitus ulcer (DU). While quantitative, scoring can be subjective, and studies have shown high variability in scoring among clinicians. Based on the score, the patient is monitored by the caregiver. There are various wearable devices that help detect (lack of) ambulation; if there is none for a long period, the caretakers are alerted and action is taken [4, 5]. Other publications mention using image classification [6, 7] to detect bedsores using logistic regression, KNN clustering, and other deep-learning techniques [8]. IoT techniques are also used for real-time monitoring of subjects using various sensors [9]. Another set of works uses sensors such as an FSR (force-sensing resistor), a pulse oximeter (SpO2) [10], pulse and heart rate (HR) sensors, and humidity and temperature sensors to analyze the physical conditions and ambulation and predict the formation of a DU.


Some of the works that deal with the prevention of DUs use mattresses [11–13] in various ways, for example, increasing the area of contact, decreasing pressure, or using different kinds of materials such as sheepskin [14], dialysis bags [15], and CME (combustion modified ether) [11] foam. CME foam mattresses are suitable for patients up to medium risk of developing a pressure ulcer, while viscoelastic mattresses [12] are suitable for up to high-risk or very high-risk patients. The challenge of this work is to use a suitable fabric/mattress that minimizes the shearing effect and provides good aeration and ventilation; body pressure dispersion mattresses are helpful tools for preventing pressure ulcers. Pressure-reducing mattresses redistribute a patient's weight to relieve pressure points, but critical factors such as physical conditions and moisture are usually ignored, as is the patient's comfort. This paper proposes a system using low-cost, disposable, wireless, and unobtrusive fabric-based pressure sensors and a hygrometer (to measure moisture levels on the skin) to continuously monitor the tissue status in at-risk areas, detect the pressure, and make the necessary adjustments to the bed to prevent ulcer formation.
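As a concrete illustration of the Braden-style scoring described above, the following is a minimal Python sketch. The six subscale names follow the text; the per-subscale rating values and the example risk threshold are common conventions assumed here for illustration, not values taken from this paper:

BRADEN_SUBSCALES = ["sensory_perception", "moisture", "activity",
                    "mobility", "nutrition", "friction_shear"]

def braden_score(ratings: dict) -> int:
    # Each subscale is rated by the clinician; a lower total means higher risk
    return sum(ratings[name] for name in BRADEN_SUBSCALES)

ratings = {"sensory_perception": 3, "moisture": 2, "activity": 2,
           "mobility": 2, "nutrition": 3, "friction_shear": 2}
score = braden_score(ratings)
print(score, "-> at risk" if score <= 18 else "-> lower risk")  # illustrative threshold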

2 Background

The architecture diagram is illustrated in Fig. 2.

Fig. 2 Overall architecture of the proposed system


The proposed system aims to detect and prevent the occurrence of bedsores with the following points:

i. Developing circuitry with the necessary sensors, to detect constant symptoms that point to the possibility of occurrence of bedsores
ii. Developing an interface as a front end, to examine the likelihood or risk that the patient is at of developing a bedsore
iii. A set of API endpoints for:
   a. The circuitry to upload the values to the database
   b. The front end to retrieve the values from the database
iv. A real-time analysis engine that takes the values from the circuitry and analyzes the data to infer the likelihood of developing a bedsore, using machine learning and artificial intelligence techniques

The following languages have been used:

1. JavaScript for the backend, using the NodeJS runtime environment
2. ExpressJS as a module providing server-side functionality
3. HTML as the mark-up language and Cascading Style Sheets (CSS) for styling the interface, along with Embedded JavaScript (EJS)
4. Python for Arduino programming as well as for building the model

The following algorithms have been used:

1. Representational State Transfer (REST) for creating API endpoints connecting the server to the frontend, hardware, and model
2. A neural network, implemented with the help of scikit-learn, to predict the occurrence of pressure ulcers (a minimal sketch follows the module list below)

The proposed system consists of the following modules:

1. Frontend Module
2. Server Module
3. Hardware Module
4. Prediction Module

2.1 Frontend Module

The frontend component of the system allows users to log in, as shown in Fig. 3a, b. It allows authorised users to add new patient information, view registered patient information, view the number of beds currently being monitored, view the patients currently at risk and their information, and view historical data for every patient's bed. This module helps caregivers and doctors take proactive and informed steps to prevent the formation of ulcers and helps improve the comfort of the patient. The screens are styled with the help of Bootstrap and given functionality using the EJS


Fig. 3 a Dashboard. b Login screen

module (Embedded JavaScript) provided by the NPM package repository. This web app is hosted on Heroku for better accessibility and is built to be responsive and lightweight, consuming a negligible amount of resources on the client side, thus making it accessible to all users on any kind of device.


2.2 Server Module

The backend of the system is the centralized server to which the frontend, the model, and the hardware components connect. For security reasons, no part of the system can connect to the database directly; all queries must go through the server, which provides multiple API endpoints for the components. The server is built using JavaScript as the primary language on the NodeJS runtime environment and uses ExpressJS to provide the server functionality. It also uses modules such as body-parser, dotenv, EJS, and mysql (to connect with the database). It is hosted on Heroku as well, with automatic deployments enabled, making it fault-tolerant and highly responsive.
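As a rough illustration of how the circuitry might use these endpoints, the following Python sketch posts one reading to the server; the endpoint path, host name, and payload fields are hypothetical, inferred from the modules described here rather than taken from the paper.

```python
# Minimal sketch: pushing one sensor reading to the REST API.
# The URL and JSON field names below are illustrative assumptions.
import requests

reading = {
    "patient_code": 4244,
    "pressure_mmhg": 435,
    "skin_moisture": 0.2,
    "skin_temp_c": 30,
}

resp = requests.post(
    "https://example-pud-server.herokuapp.com/api/readings",  # hypothetical endpoint
    json=reading,
    timeout=5,
)
resp.raise_for_status()  # fail loudly if the server rejects the reading
```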

2.3 Hardware Module

All sensors are connected to an Arduino Uno, which is programmed in Python with the help of the Pyfirmata module and connects to the server to send the values to the database. Pressure is monitored and measured with the Interlink Electronics FSR™ 400, a force-sensing resistor. To measure skin moisture levels, the SEN-13322 moisture sensor is used, and a DHT11 measures ambient humidity and temperature. A NodeMCU ESP8266 is the microcontroller component that transmits data to the server and controls the sensors via instructions. A 24 V DC solenoid valve is used to inflate or deflate the air pockets to redistribute pressure. The microcontroller monitors the sensor readings and controls the various components connected to it; the solenoid valve is also controlled based on real-time sensor values, altering the pressure in the air pockets. The microcontroller also sends the sensor data to the server for analysis, and the instructions are sent from the microcontroller to the valves. The Arduino Uno controls the sensors, obtains the data from them, and relays values through the ESP8266 module to the cloud. The Arduino is an easy-to-program, widely available, and affordable microcontroller, and its open-source nature makes our system portable and fairly platform-independent. The pressure sensors constantly measure body pressure distribution. Temperature and humidity are measured using ambient temperature and humidity sensors, as shown in Fig. 4a–c. The inner pressure of each air cell is adjusted according to the site-specific body pressure data, temperature, and humidity; the pressure of each air cell is maintained by controlling the valves. The outlet valve is opened for a specified period to reduce the air pressure, and the inlet valve is opened to increase it. The physical parameters that are measured are listed below (a minimal sensing sketch follows the list):

• Body pressure, measured through pressure sensors distributed across the mattress (roughly 4 sensors per mattress)


Fig. 4 a Arduino Uno. b Moisture sensor. c Four pressure sensors

• Ambient temperature and ambient humidity
• Body temperature and skin moisture.

These parameters are the features that affect the formation of bedsores. When any of these parameters is consistently high during screening, as shown in Fig. 5, the subject is considered to be at risk of developing bedsores.
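The following Python sketch illustrates how the analog sensors could be polled through Pyfirmata, as described above. It is a minimal sketch under stated assumptions: the serial port, pin wiring, and sampling interval are hypothetical, and the DHT11 is omitted because it speaks its own protocol rather than exposing a plain analog signal.

```python
# Minimal Pyfirmata polling sketch (assumes StandardFirmata on the Uno and
# hypothetical wiring: FSR 400 divider on A0, SEN-13322 output on A1).
import time
from pyfirmata import Arduino, util

board = Arduino('/dev/ttyACM0')     # serial port is installation-specific
it = util.Iterator(board)           # background thread that reads analog reports
it.start()

pressure = board.get_pin('a:0:i')   # analog pin 0, input
moisture = board.get_pin('a:1:i')   # analog pin 1, input
pressure.enable_reporting()
moisture.enable_reporting()

while True:
    # Readings are normalized floats in [0.0, 1.0]; converting them to
    # mm Hg or relative humidity requires per-sensor calibration.
    print('pressure:', pressure.read(), 'moisture:', moisture.read())
    time.sleep(1.0)
```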

2.4 Prediction Module

The prediction module consists of a neural network that predicts the formation of ulcers from the sensor values. The input layer comprises one neuron for each sensor. In the hidden layer, the correlation between the sensor values is captured; based on the weights, the contribution of each sensor is computed, and the final


Fig. 5 Parameters that affect the formation of bedsores

layer produces a binary output indicating whether or not there is a chance of ulcer formation. The prediction involves two factors: pressure and moisture. The pressure values from the FlexiForce™ pressure sensor are monitored in real time and compared against a set threshold; if the threshold (400 mm Hg) is exceeded for a sustained duration (4–6 h), the subject is flagged as at risk. Sample data collected from the sensors is shown in Table 1, and a sketch of this two-factor logic follows the table. The monitoring screen displays the current/last status of the selected patient and the patient's current risk status, as in Figs. 6 and 7. The moisture component collects values from the moisture sensor and uses this data to train a neural network model that predicts the formation of a DU. Moisture increases as the area of contact increases, and so the chances of a DU increase.

Table 1 Sample data from the sensor

| Patient code | Age | Pressure (mm Hg) | Skin moisture | Skin temperature (°C) | Ambient temperature (°C) | Ambient moisture | Pressure ulcer |
|---|---|---|---|---|---|---|---|
| 4244 | 46 | 435 | 0.2 | 30 | 25 | 0.7 | No |
| 3995 | 48 | 556 | 0.3 | 32 | 24 | 0 | Yes |
| 4155 | 55 | 371 | 0.7 | 32 | 30 | 0.7 | No |
| 3115 | 41 | 240 | 0.6 | 31 | 27 | 0.3 | No |
| 4764 | 56 | 629 | 0 | 30 | 30 | 0.2 | Yes |
| 4319 | 51 | 830 | 0.6 | 30 | 28 | 0.3 | Yes |
| 3682 | 34 | 652 | 0.9 | 32 | 23 | 0.3 | Yes |
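The Python sketch below illustrates the two factors described above: a rule-based sustained-pressure check and a small scikit-learn neural network (the library the paper names) trained on rows shaped like Table 1. The threshold and window come from the text; the network size, column order, and 15-min sampling interval are assumptions, and the seven Table 1 rows are used only to make the sketch runnable, not to reproduce the authors' results.

```python
# Minimal sketch of the two-factor prediction (assumed layout of Table 1).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

PRESSURE_THRESHOLD_MMHG = 400   # threshold from the text
SUSTAINED_HOURS = 4             # lower bound of the 4-6 h window

def sustained_overpressure(readings_mmhg, interval_min=15):
    """Rule-based factor: True if pressure stayed above the threshold for
    at least SUSTAINED_HOURS (readings assumed sampled every 15 min)."""
    needed = SUSTAINED_HOURS * 60 // interval_min
    run = 0
    for p in readings_mmhg:
        run = run + 1 if p > PRESSURE_THRESHOLD_MMHG else 0
        if run >= needed:
            return True
    return False

# Learned factor: features = (pressure, skin moisture, skin temp,
# ambient temp, ambient moisture) per Table 1; label = pressure ulcer.
X = np.array([[435, 0.2, 30, 25, 0.7],
              [556, 0.3, 32, 24, 0.0],
              [371, 0.7, 32, 30, 0.7],
              [240, 0.6, 31, 27, 0.3],
              [629, 0.0, 30, 30, 0.2],
              [830, 0.6, 30, 28, 0.3],
              [652, 0.9, 32, 23, 0.3]])
y = np.array([0, 1, 0, 0, 1, 1, 1])

# 80/20 split, mirroring the evaluation protocol in Sect. 3.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
clf = MLPClassifier(hidden_layer_sizes=(8,), activation='relu',
                    max_iter=2000, random_state=42).fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))
```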


Fig. 6 Sensor monitor showing high-risk

Fig. 7 Sensor monitor showing no risk

2.5 Prevention Module

In this work, the prevention solution involves a mattress fabricated with air pockets distributed evenly across the mattress. These air pockets can be inflated and deflated using a microcontroller and a portable air pump, based on real-time pressure sensor readings, with the aim of reducing the pressure on a vulnerable area of the body. Since the patient's weight is constant, increasing the area of contact will reduce the pressure over the vulnerable area. Deflating the air pockets of the mattress surrounding the vulnerable area effectively increases the area of contact that the at-risk area has with the mattress, thus redistributing the pressure.


The air pockets have a layer of cotton sheet over them, which is also connected to a pump to aerate humid areas and make the mattress comfortable. Decubitus ulcers are aggravated by the presence of humidity and pressure, so a replaceable cotton sheet aerated by a pump is used to reduce the amount of moisture in the vulnerable areas. The air pockets at the area of contact are deflated and the surrounding air pockets are inflated, resulting in reduced pressure and increased aeration in the vulnerable area.
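A minimal sketch of this inflate/deflate decision is shown below, reusing the Firmata setup from the hardware module; the pin assignments, target pressure, and hold time are hypothetical, and a real controller would need per-cell calibration and safety limits.

```python
# Minimal valve-control sketch (hypothetical pins: inlet on D8, outlet on D9).
import time
from pyfirmata import Arduino

board = Arduino('/dev/ttyACM0')   # same board as the sensing sketch
inlet = board.get_pin('d:8:o')    # opens to pump air into the cell
outlet = board.get_pin('d:9:o')   # opens to let air out of the cell

def rebalance(cell_pressure_mmhg, target_mmhg=300, hold_s=2.0):
    """Open one valve briefly to nudge the air cell toward the target."""
    if cell_pressure_mmhg > target_mmhg:
        valve = outlet            # too much pressure: deflate
    elif cell_pressure_mmhg < target_mmhg:
        valve = inlet             # too little pressure: inflate
    else:
        return
    valve.write(1)
    time.sleep(hold_s)
    valve.write(0)
```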

3 Experiment Results and Discussion

As the system is primarily focused on helping doctors and patients prevent the formation of pressure ulcers during hospitalization, the test environment selected was a hospital bed or a wheelchair. The model is tested using the test dataset, which is 20% of the original dataset: after training is completed and the model weights have been calculated, the model is tested with new data, i.e., the test data. The accuracy on the test dataset is an overall estimate of the accuracy of the neural network model and is shown in Table 2. The confusion matrix obtained after training and testing the neural network model is shown in Fig. 8, and the accuracy, precision, recall, and F1 scores obtained are given in Fig. 9.

Precision is defined as the fraction of relevant (true positive) instances among all retrieved instances. Recall specifies the sensitivity, which is the fraction of relevant instances that are retrieved. From the confusion matrix, the precision for the predicted values is between 0.96 and 1 and the recall is between 0.93 and 1, which shows that the predictions are very accurate. The F1 score combines the precision and recall of a classifier into a single metric by taking their harmonic mean; the F1 scores obtained were 0.98 and 0.96, and the accuracy obtained was 97.5%.

The high accuracy is owed to the nature of the dataset. The data (Table 1) was collected in real time every 15 min from post-surgery and paralyzed patients at BEL hospital, under the supervision of doctors. The tight feedback loop that came from being involved with the patients and doctors helped correct errors in the data collected.

Table 2 Accuracy of the neural network model

| Activation function | Training accuracy | Testing accuracy |
|---|---|---|
| Binary step function | 82.6 | 78.3 |
| Linear activation function | 86.7 | 88.3 |
| Sigmoid activation function | 88.1 | 91.2 |
| ReLU activation function | 94.6 | 95.2 |


Fig. 8 Confusion matrix of our model

Fig. 9 Performance analysis of the model

The data is collected iteratively, and the neural network model is retrained on it, which eventually improves the accuracy; this self-corrective nature of the data collection helped to improve the accuracy of the output significantly. The resilient design of the neural network and the appropriate choice of the loss function, as described earlier, help us achieve high accuracy and minimal loss. The accuracy and loss rates with respect to epochs are depicted in Figs. 10 and 11.
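The metrics reported here can be reproduced in a few lines with scikit-learn; the sketch below assumes the clf, X_test, and y_test from the prediction-module sketch and is illustrative only, since the authors' full dataset is not published.

```python
# Minimal sketch of the evaluation above (reuses clf, X_test, y_test from
# the prediction-module sketch; zero_division guards the tiny toy split).
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print('accuracy :', accuracy_score(y_test, y_pred))
print('precision:', precision_score(y_test, y_pred, zero_division=0))
print('recall   :', recall_score(y_test, y_pred, zero_division=0))
print('F1       :', f1_score(y_test, y_pred, zero_division=0))
```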

4 Conclusion

This work was inspired by the fact that patients with limited mobility or who are bedridden should not suffer from bedsores in addition to their original ailment. The proposed method will save much of the effort currently spent preventing bedsores, which in turn can be


Fig. 10 Accuracy

Fig. 11 Loss curve

redirected toward healing the person's original ailment. This could cut down on hospital time and, by extension, the hospital bill. The overall F1 score of the system is 98% and the accuracy is 97.5%. The assumptions made are that the hospital has sensors to monitor ambient temperature and moisture levels and that those sensors are positioned and placed according to the patient's physical features. The issue of placing sensors at specific locations can be solved by having an array of many connected sensors, which is, for now, left as a future enhancement. In conclusion, by monitoring the pressure, moisture, and temperature of bedridden patients and their surroundings over a period of time, it can be decided with a high degree of accuracy whether they are at risk of developing bedsores.


References

1. Bergstrom N, Braden BJ, Laguzza A, Holman V (1987) The Braden scale for predicting pressure sore risk. Nurs Res 36(4):205–210. PMID: 3299278
2. Perneger TV, Gaspoz J-M, Raë A-C, Borst F, Héliot C (1998) Contribution of individual items to the performance of the Norton pressure ulcer prediction scale. J Am Geriatr Soc 46(10):1282–1286
3. Dorner B, Posthauer ME, Thomas D (2019) The role of nutrition in pressure ulcer prevention and treatment: National Pressure Ulcer Advisory Panel white paper. Adv Skin Wound Care 22(5):212–221
4. Park E-B, Heo J-C, Kim C, Kim B, Yoon K, Lee J-H (2021) Development of a patch-type sensor for skin using laser irradiation based on tissue impedance for diagnosis and treatment of pressure ulcer. IEEE Access 9:6277–6285. https://doi.org/10.1109/ACCESS.2020.3048242
5. Elsharif E, Drawil N, Kanoun S (2021) Automatic posture and limb detection for pressure ulcer risk assessment. In: 2021 IEEE 1st international Maghreb meeting of the conference on sciences and techniques of automatic control and computer engineering (MI-STA), pp 142–149. https://doi.org/10.1109/MI-STA52233.2021.9464360
6. Yilmaz B, Atagün E, Demircan FO, Yücedağ İ (2021) Classification of pressure ulcer images with logistic regression. In: 2021 international conference on innovations in intelligent systems and applications (INISTA), pp 1–6. https://doi.org/10.1109/INISTA52262.2021.9548585
7. Kottner J, Cuddigan J, Carville K, Balzer K, Berlowitz D, Law S, Haesler E (2020) Pressure ulcer/injury classification today: an international perspective. J Tissue Viability. https://doi.org/10.1016/j.jtv.2020.04.003
8. Šín P, Hokynková A, Marie N, Andrea P, Krč R, Podroužek J (2022) Machine learning-based pressure ulcer prediction in modular critical care data. Diagnostics 12(4):850
9. Mansfield S, Vin E, Obraczka K (2021) An IoT system for autonomous, continuous, real-time patient monitoring and its application to pressure injury management. In: 2021 IEEE international conference on digital health (ICDH), pp 91–102. https://doi.org/10.1109/ICDH52753.2021.00021
10. Saleh ZS, Al-Neami AQ, Raad HK (2021) Smart monitoring pad for prediction of pressure ulcers with an automatically activated integrated electro-therapy system. Designs 5(3):47
11. Ehelagastenna M, Sumanasekara I, Wickramasinghe H, Nissanka ID, Nandasiri GK (2021) Design of an alternating pressure overlay for the treatment of pressure ulcers. In: 2021 Moratuwa engineering research conference (MERCon), pp 202–207. https://doi.org/10.1109/MERCon52712.2021.9525787
12. Shayan Z, Sabouri M, Shayan M, Asemani MH, Bina Sarmoori A, Zare M (2021) Pressure control of cellular electromechanical medical mattress for bedsore prevention. In: 2021 7th international conference on control, instrumentation and automation (ICCIA), pp 1–7. https://doi.org/10.1109/ICCIA52082.2021.9403585
13. Serraes B, Van Leen M, Schols J, Van Hecke A, Verhaeghe S, Beeckman D (2018) Prevention of pressure ulcers with a static air support surface: a systematic review. https://doi.org/10.1111/iwj.12870
14. Sikka MP, Garg S (2020) Functional textiles for prevention of pressure ulcers - a review. Res J Text Appar 24(3):185–198. https://doi.org/10.1108/RJTA-10-2019-0047
15. Daengsi T, Muttisan B, Wuttidittachotti P, Sirawongphatsara P (2021) Sustainable development of a prototype of air mattress from re-used materials for pressure ulcer prevention. In: 2021 international conference on green energy, computing and sustainable technology (GECOST), pp 1–5. https://doi.org/10.1109/GECOST52368.2021.9538773

Recreation of a Sub-pod for a Killed Pod with Optimized Containers in Kubernetes

Indrani Vasireddy, Rajeev Wankar, and Raghavendra Rao Chillarige

Abstract Recently, cloud computing has been adopting containerization, created by virtualizing the operating system, for several of its service offerings. These software bundles contain all the functions necessary to run in a specific environment. Kubernetes, an open-source system, is used to automate the deployment, scaling, and management of containerized applications. Kubernetes treats a pod as the smallest execution unit, and a pod may contain several related containers. One of the significant drawbacks of Kubernetes is that when a pod is killed due to a lack of hardware resources, decommissioning of a node, or a load-balancing strategy, it cannot be recreated automatically. Since a pod contains several associated containers, some mechanism is required to recreate the killed pod. When a pod is killed, several possibilities exist: (i) execution of all pod containers is completed, (ii) none are completed, and (iii) out of n containers, only m finished their execution (n > m). In this work, an efficient algorithm is proposed which opens the related .yaml file, finds the logs of all container actions, removes the executed container information, and recreates the .yaml file and, in turn, the sub-pod. This process helps to move the pod to another execution unit during load balancing, resource pooling, and rapid elasticity, which are desirable characteristics of any cloud. Keywords Kubernetes · Container · Pod · Algorithm · Load balancing

I. Vasireddy (B) · R. Wankar · R. R. Chillarige School of Computer and Information Sciences, University of Hyderabad, Hyderabad, Telangana, India e-mail: [email protected] R. Wankar e-mail: [email protected] R. R. Chillarige e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_45


1 Introduction

Cloud computing involves a high requirement of resources and demands high scalability and performance [1]. Sometimes a single cloud service provider cannot achieve availability due to running out of resources and may be unable to meet the requirements of its clients [2], resulting in poor performance, scalability, and reliability. In large-scale virtualized platforms, migration is essential functionality: migrating virtual machines is an enabler for decommissioning scheduled nodes, resource consolidation, disaster recovery, and vertical scaling [3]. Most applications are moving toward containerization for simplicity, efficient performance, and cost. One solution for containerization is Docker, an open-source containerization platform that uses OS-level [4] virtualization to deliver applications in packages called containers; it is a software platform that allows us to build, test, and deploy applications quickly. While Docker is used for creating containers, Kubernetes is an orchestration platform used to manage them. Kubernetes is multi-host container management software that uses the master-slave model to manage Docker containers across Kubernetes nodes. Kubernetes, an open-source system, is used to automate the deployment, scaling, and management of containerized applications. For a given problem, if we require c distinct containers, then the created pod may take one among the 2^c − 1 possible configurations. Here, a configuration is represented as a binary vector of c values. Containers can be in one of two states, i.e., r (running state) or c (completed state). A pod P is represented as a set of containers, P = [C1, C2, C3, …, Cn]. Kubernetes assumes the pod is the smallest execution unit; a pod may contain several related containers and associated volumes, and containers inside a pod share resources like IP addresses, volumes, and libraries. Pods in Kubernetes are mortal, i.e., once killed, they cannot be recreated; instead, a new pod must be created. In this paper, we analyze the possibilities when a pod is killed and has to be recreated on a new node; the associated state of the containers must be taken care of when a pod is killed. The present state of research proposes methods in which, if a pod is killed due to the reasons mentioned above, a new pod is created considering all the containers. In this work, we propose an algorithm for the possibility that, out of n containers, m containers have completed their execution: a sub-pod of the running pod is created with only the n − m remaining containers, thus saving time and space. This paper is organized as follows: Sect. 2 presents the related work, Sect. 3 the proposed method, Sect. 4 the design and algorithm, Sect. 5 the evaluation, and Sect. 6 the conclusion.


2 Related Research

Kubernetes is a platform used for the orchestration of containers. In Kubernetes, containers are organized in a pod, the smallest executable unit in Kubernetes. A pod [5] is a set of one or more containers with data volumes. Kubernetes ensures that the containers belonging to one pod execute on the same machine and share the same set of resources, such as a single private IP address within the Kubernetes cluster [6]. Numerous studies [7–9] have been carried out on migration in virtual computing platforms and container migration; we focus here only on pod migration. With recent trends toward elasticity, scalability, and distributed microservices, containerization has become an adoptable alternative for deployment. The most important advantages of containers are their quick creation and minimal process isolation. Containers share a set of resources like CPU, disk, and memory; hence, orchestration of containers, i.e., creating, scheduling, and detecting failed containers [10], is a big challenge. Containers within a pod share the IP address, port number space, and volumes. Migration of pods in a Kubernetes cluster is one of the main requirements for load balancing of pods across nodes; when a node is under-resourced or unreliable, migration of the pod is required. Several research works focus on container migration using CRIU-based [11, 12] platforms with checkpointing: the memory state and state of containers are saved using checkpoints, and a snapshot of the migrated container is taken. Once the container is migrated, it starts from the saved state, as it is restored from the checkpoint. The same method can be applied to pod migration. However, a pod in Kubernetes is mortal; presently, migrating a pod is only done by deleting the current pod and then recreating a new pod on a new node from scratch. One of the significant drawbacks of Kubernetes is that recreating a pod killed due to a lack of hardware resources, decommissioning of a node, or a load-balancing strategy is not automatic. Since a pod contains several associated containers, some mechanism is required to recreate the killed pod. Hence, research is ongoing on how to preserve the necessary states of the containers in a pod and recreate a new pod from that preserved information with minimal service disruption, saving the cold start time. When migration is triggered, the old pod is deleted and a new pod is created. The ability to migrate a pod from one node to another is essential for many reasons, such as cost effectiveness, load balancing, and evacuating a data center during major disruptions without affecting users. The present literature considers the whole pod to be recreated on a new node, thus consuming space and time; this research focuses on creating a sub-pod that eliminates the containers that are in the completed state. A pod is expected to be mortal and can fail at any time [10]. By default, when a pod is killed for various reasons, its attached data volumes are deleted and its IP address is recycled to be assigned to a sub-pod. If the pod belongs to a StatefulSet, persistent volumes can be used, so that a newly created sub-pod may later re-attach to them.


When a pod is killed, several possibilities exist: (i) execution of all pod containers is completed, (ii) none are completed, and (iii) out of n containers, only m finished their execution (n > m). In this work, an efficient algorithm is proposed which opens the related .yaml file, finds the logs of all container actions, removes the executed container information, and recreates the .yaml file and, in turn, the pod. Much research has been done on pod migration without considering the state of the containers in a pod; our work concentrates on taking the state of the containers into consideration, so that the sub-pod is created based on the container states.

3 Proposed Method

Migration is an essential requisite in large-scale virtualized computing platforms. In a Kubernetes cluster, pod migration is required when a node is decommissioned due to being under-resourced or overloaded. In Kubernetes, containers reside in a pod; hence, during migration, a pod should be migrated from the decommissioned node to the new node. However, pods in a Kubernetes cluster are mortal, i.e., once killed, they cannot be recreated: we have to kill the pod and recreate a new pod on the new node. Recreating a new pod on the new node can happen in three ways. In the proposed method, when the pod has to be relocated to a new node, possibility (iii) is considered, where the pod has n containers, of which m have completed their execution (n > m) and n − m containers still have to complete. In possibility (i), when a pod has all of its containers in the completed state, the pod can be killed and a new pod created with a new list of containers; here, preserving the state of the pod in a persistent volume is required. In possibility (ii), when all the containers are running, all the container states have to be preserved before migration, and a new pod created with all the containers. One can see that possibilities (i) and (ii) are relatively easy to handle. In this paper, we propose an algorithm for possibility (iii), when the pod has n containers, m of which are in a completed state while n − m are running. If c distinct containers are required in a computation environment, any computing instance requires one among the 2^c − 1 container configurations. A given task may demand a subset of distinct containers, denoted by the indicator function: δi = 1 if the ith container is required, and 0 otherwise. In Kubernetes, this configuration is required where a pod runs with related containers in the running state, represented by the indicator function. A task requires a subset of containers, represented by C0. Over time, while the pod is executing, at time t some containers might have completed their tasks, and the corresponding indicator functions turn to 0; we call this Ct (Ct < C0). When a pod is killed, the usual practice is to create a pod with C0 on the available node, which is a resource-hungry, complete rollback process. In this study, it is proposed to recreate the pod on the available node with Ct.


We then have to recreate a new pod, a sub-pod of the original pod: the containers in a completed state are removed, and only the running-state containers are part of the sub-pod. This process reduces the cold start time and the use of resources like memory and CPU time. Pods are mortal, i.e., once a pod is killed, it cannot be recreated. In the life cycle of a pod, we cannot go back from the running state to the waiting state; we can only reach the end state from the running state. If a pod is killed due to a lack of resources, or when we update the number of containers, it cannot be recreated; instead, a new pod is created. Hence, recreating a new pod that still includes containers that have already reached the end state wastes resources like memory and CPU. In our proposed method, we edit the number of containers in a pod, i.e., we create a sub-pod using the minimum number of containers: the sub-pod is created by considering the state of the containers inside the pod at the time of killing and removing the ones that are no longer required, saving memory and CPU time. For example, when a pod runs initially, it may have two containers (1. Redis, 2. Ubuntu). While the pod is running, the Redis container may reach the end state and no longer be required; if the pod then has to migrate to another node, we can recreate the pod with only one container, i.e., Ubuntu. That is, a sub-pod is created with optimized containers by considering the state of the containers inside the pod at the time it was killed. To do this, run the kubectl edit pod command, which opens the pod specification in an editor (vi). Editing the required properties directly is denied by the system, because we are attempting to edit a field of the pod that is not editable; instead, a copy of the file with our changes is saved as a .yaml file in a temporary location, and a new pod is created from the temporary file. Running a pod with an optimized number of containers balances cost, reliability, memory, and CPU time.
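The same idea can be expressed with the official Kubernetes Python client, as a minimal sketch under stated assumptions: the pod and node names are hypothetical, the filter keeps exactly the containers whose state is not terminated (flag r in this paper's notation), and concerns the paper handles elsewhere, such as preserving the IP address, are only partially reflected here.

```python
# Minimal sketch of sub-pod recreation with the kubernetes Python client.
# Assumes a reachable cluster; pod/node names below are hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def recreate_as_subpod(name, namespace, target_node):
    pod = v1.read_namespaced_pod(name=name, namespace=namespace)

    # Flag r / flag c: keep only containers that have NOT terminated.
    running = {cs.name for cs in pod.status.container_statuses
               if cs.state.terminated is None}
    kept = [c for c in pod.spec.containers if c.name in running]

    sub_pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=name + '-sub',
                                     namespace=namespace),
        spec=client.V1PodSpec(containers=kept,
                              node_name=target_node,
                              volumes=pod.spec.volumes))

    v1.delete_namespaced_pod(name=name, namespace=namespace)
    v1.create_namespaced_pod(namespace=namespace, body=sub_pod)

recreate_as_subpod('demo-pod', 'default', 'nodex')
```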

4 Design

Kubernetes assumes a pod is the smallest execution unit, and a pod may contain several related containers. A pod remains in the pending state until all of its containers have started; once all the containers have started execution, the pod enters the running state. A pod in Kubernetes reaches the succeeded state if all of its containers have terminated successfully, and moves to the failed state if the pod has been stopped with at least one container still in the running state. A pod in Kubernetes is mortal: a pod in a Kubernetes cluster will go from the running state to the succeeded or failed state, but will not come back to the pending state. Hence, when a pod is killed or has to move from one node to another, we have to delete the old pod and recreate a new pod (Fig. 1).


Fig. 1 Proposed sub-pod life cycle

Since a pod contains several associated containers, some mechanism is required to recreate the killed pod. In the present state of research, when a pod is killed, the state of the containers (SC) is not considered while recreating it. In this work, an efficient algorithm is proposed which opens the related .yaml file, considers the state of each container (SC), and finds from the logs of all container actions which containers are flagged r (running) or c (completed). It removes the containers flagged c, retains the containers flagged r, and recreates the source pod's .yaml file with only the r-flagged containers. Hence, a sub-pod is recreated for a killed pod with optimized containers, by considering the state of the containers when the pod was killed.

4.1 Proposed Algorithm

The algorithm below involves a three-stage process. In stage 1, we create a pod with n containers using the kubectl command and make the pod editable using edit pod. In stage 2, the containers are assigned a flag; if a container has completed its execution, its flag is set to 0. If the pod is then killed for some reason, in stage 3 the containers with flag 0 (m containers) are deleted from the pod by editing it, i.e., the pod is recreated using the kubectl command with the temporary file created after editing the pod, leaving n − m containers, by considering the killed state.


Algorithm 1: POD-RECREATION

1. Algorithm POD-RECREATION(P)
2. Begin
3. Step 1: Initiate a pod P with n containers, C0 = (δ0, δ1, ..., δn) // containers whose indicator function is 1 are in the running state and those with value 0 are in the completed state. Example: a pod starts (running) when all its containers are in the running state, i.e., C0 = [1, 1, 1, 1]; here we consider a pod with four containers //
4. Step 2: CREATE(P, C0, node1) // we assume a process CREATE creates the pod P on node 1 with C0 = (δ0, δ1, ..., δn) //
5. Step 3: KILL(P, C0, node1) // we assume that due to some reason the pod is killed //
6. Step 4: After checking the state of each container, edit the pod by removing the containers with flag c, i.e., edit the number of containers in the pod's .yaml file and save the file in a temporary location
7. Step 5: DELETE(P, C0, node1) // we assume the DELETE procedure removes the pod from node1 //
8. Step 6: CREATE(P, Ct, nodex) // we assume the process CREATE creates the sub-pod on some node x; Ct < C0, so the same pod with fewer containers but the same IP address can be created on another node x //
9. Step 7: End

4.2 To Check the Status of a Container in a Pod

We can check the status of a container in a pod in Kubernetes with the kubectl logs command. The syntax to check the logs of a container in a pod is: kubectl logs pod-name -c container-name. We can check the logs of a crashed pod in Kubernetes with the --previous flag; the syntax is: kubectl logs pod-name --previous. When multiple containers run inside a single pod, it is better to check the container status before the pod crashes. We remove the containers that have successfully completed their execution and recreate a pod with optimized containers. After the new pod is created, it is scheduled to a new node and executed.
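These kubectl commands have direct equivalents in the Python client, sketched below; the pod and container names are again hypothetical.

```python
# Minimal sketch: reading container logs programmatically
# (equivalent to `kubectl logs pod -c container` and `... --previous`).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

current = v1.read_namespaced_pod_log(name='demo-pod', namespace='default',
                                     container='redis')
crashed = v1.read_namespaced_pod_log(name='demo-pod', namespace='default',
                                     container='redis', previous=True)
```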

5 Evaluation

5.1 Experimental Set Up

We evaluate this work by comparing the CPU and memory utilization of the recreation of a pod with and without optimized containers. We organize the Kubernetes cluster with four worker nodes and one control plane node, and evaluate the performance of the pod created by our proposed algorithm. This is done by comparing the CPU and memory usage of a pod with n containers when the state of the containers (SC) at kill time is considered against when it is not. We can measure the CPU and


Fig. 2 Comparison of CPU utilization in the Kubernetes cluster

memory usage by entering the pod's exec mode using kubectl exec -it pod_name -n namespace -- /bin/bash; CPU usage is then read with cat /sys/fs/cgroup/cpu/cpuacct.usage, and memory usage with cat /sys/fs/cgroup/memory/memory.usage_in_bytes. Figure 2 compares the CPU usage of a pod recreated without considering the state of the containers at kill time with that of a pod recreated by the proposed algorithm, which does consider it. The pod recreated without considering the container states has higher CPU utilization than the pod recreated by considering them. This shows that recreating a pod with optimized containers using the proposed algorithm provides an efficient recreation strategy for a killed pod in Kubernetes by minimizing CPU utilization. Figure 3 makes the same comparison for memory usage: the pod recreated without considering the container states has higher memory utilization than the pod recreated by considering them, showing that the proposed algorithm also minimizes memory utilization. This process helps to move the pod to another execution unit during load balancing, resource pooling, and rapid elasticity, which are desirable characteristics of any cloud.


Fig. 3 Comparison of memory utilization in the Kubernetes cluster

6 Conclusions and Future Research

Migration of pods in a Kubernetes cluster is one of the main requirements for load balancing of pods across nodes; when a node is under-resourced or unreliable, pod migration is required. This process helps to move the pod to another execution unit during load balancing, resource pooling, and rapid elasticity, which are desirable characteristics of any cloud. We proposed an efficient algorithm which uses a pod container state discovery process, i.e., it finds the state of all container actions, removes the executed container information, and creates a sub-pod. Conflict of Interest The authors declare that they have no conflict of interest to report regarding the present study.

References 1. Buyya R, Srirama S, Casale G, Calheiros R, Simmhan Y, Varghese B, Gelenbe E, Javadi B, Vaquero L, Netto M et al. (2018) Research directions for the next decade. A manifesto for future generation cloud computing. ACM Comput Surv (CSUR) 51:1–38 2. Masne S, Wankar R, Raghavendra Rao C, Agarwal A (2012) Seamless provision of cloud services using peer-to-peer (p2p) architecture. In: Distributed computing and internet technology: 8th international conference, ICDCIT 2012, Bhubaneswar, India, February 2–4, 2012. Proceedings 8, pp 257–258 3. Jain N, Mohan V, Singhai A, Chatterjee D, Daly D (2021) Kubernetes load-balancing and related network functions using P4. In: Proceedings of the symposium on architectures for networking and communications systems, pp 133–135 4. Docker: https://www.docker.com/


5. Nguyen Q, Phan L, Kim T (2022) Load-balancing of kubernetes-based edge computing infrastructure using resource adaptive proxy. Sensors 22:2869 6. Kubernetes: http://kubernetes.io/ 7. Chang C, Yang S, Yeh E, Lin P, Jeng J (2017) A kubernetes-based monitoring platform for dynamic cloud resource provisioning. In: GLOBECOM 2017-2017 IEEE global communications conference, pp 1–6 8. Junior P, Miorandi D, Pierre G (2020) Stateful container migration in geo-distributed environments. In: 2020 IEEE international conference on cloud computing technology and science (CloudCom), pp 49–56 9. Junior P, Miorandi D, Pierre G (2022) Good shepherds care for their cattle: seamless pod migration in geo-distributed kubernetes. In: 2022 IEEE 6th international conference on fog and edge computing (ICFEC), pp 26–33 10. Kim E, Lee K, Yoo C (2021) On the resource management of kubernetes. In: 2021 international conference on information networking (ICOIN), pp 154–158 11. Loo H, Yeo A, Yip K, Liu T (2018) Live pod migration in kubernetes. University Of British Columbia, Vancouver, Canada 12. Schrettenbrunner J Migrating pods in kubernetes

Review of Model-Based Techniques in Augmented Reality Occlusion Handling

Muhammad Anwar Ahmad, Norhaida Mohd Suaib, and Ajune Wanis Ismail

Abstract This paper reviews model-based methods for occlusion handling. The common methods of handling occlusion in Augmented Reality (AR) can generally be classified as depth-based or model-based. This paper presents a review of model-based methods in order to gain an understanding of the current state-of-the-art literature and the challenges in this field. After searching and filtering the SCOPUS database, 7 papers ranging from 2015 to 2022 were selected for this review and analyzed. The main conclusion that can be drawn from the review is that the trend evolved from researching and developing on the Kinect device to handheld devices. The main challenges are improving the lighting interaction between real and virtual objects and reducing misalignment due to reprojection errors. Keywords Occlusion problem · Augmented reality · 3D reconstruction · Kinect sensor

1 Introduction

Augmented Reality (AR) has been gaining increasing attention in recent years. From the rise of real-world applications around 2000 until recently, numerous industries have adopted AR, such as gaming, education, healthcare, and various other sectors [1]. By the definition in [2], AR involves overlaying digital objects onto a real-world scene through specific displays. Milgram's Reality-Virtuality Continuum (1994) [2] illustrates this difference, shown in Fig. 1. AR has

M. A. Ahmad (B) · A. W. Ismail ViCubeLab, Faculty of Computing, Universiti Teknologi Malaysia, 81310 Johor, Malaysia e-mail: [email protected] A. W. Ismail e-mail: [email protected] N. M. Suaib UTM Big Data Center, Ibnu Sina Institute of Scientific and Industrial Research, Universiti Teknologi Malaysia, 81310 Johor, Malaysia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_46


Fig. 1 Reality-virtuality continuum [2]

three characteristics: combining real and virtual environments, being interactive in real time, and being registered in 3D [3]. While Virtual Reality (VR) completely replaces the real-world environment with a virtual one through a head-mounted display, AR supplements it instead. In the pursuit of making AR realistic, the virtual object should exist correctly in the real world. The fundamentals of AR realism depend on tracking and registration, in which the correct relative positions of real objects and virtual objects are very important. When the user moves from the current position, the virtual objects displayed should remain aligned with the position and orientation of real objects [4]. Common tracking methods in AR can be classified as sensor-based, vision-based (comprising marker-based and markerless-based), and hybrid [5]. The next step in making the augmented image realistic is to properly handle any occlusion that occurs after the tracking process, in which virtual objects should be correctly occluded by foreground objects: when a virtual object is not properly occluded by the real object in front of it, the augmented image looks fake. An ideal AR tracking system should be independent and robust in adapting to the visual information in the real world, which includes occlusion [6]. Conventional AR tracking produces a scene that disregards the understanding of the real world, but this issue is already being tackled in the form of Mixed Reality (MR), with Microsoft HoloLens for example; the HoloLens is capable of real-time mapping of the environment it tracks as triangle meshes [7]. The challenge of AR occlusion handling is to figure out 3D depth information from 2D camera images: the AR system needs to estimate the scene mesh, commonly known as "3D reconstruction". Research on improving AR occlusion handling has been pioneered as early as 1996 by Breen et al. [8]. From their research, two common improvement methods can be categorized: depth-based and model-based. Another type of method is classified as contour-based, in which the occlusion relationship is manually identified by marking the real objects that occlude the virtual objects [9]. To elaborate, this method uses interactive segmentation to obtain the contour of the real object, which is then tracked in subsequent frames to display the correct occlusion relationship by redrawing all the pixels of the tracked object on the virtual image in real time [10]. Figure 2 illustrates the occlusion problem in AR, as mentioned by [11]. The target in Fig. 2a is the virtual object, which is supposed to be behind the red caddy. In Fig. 2b, it can be seen that without proper occlusion handling the virtual object occludes the caddy, while Fig. 2c shows the correct positioning of the target behind the caddy. This shows the importance of determining the correct relative positioning of virtual and real objects, which is achieved by proper tracking. This paper


Fig. 2 AR occlusion problem [10]

presents a review of AR occlusion handling research that implements the model-based method. It is hoped that this review provides an understanding of the gaps and opportunities that can be tackled in the future.

2 Related Works

When selecting the papers for review, several criteria were established to narrow down the selection, listed as follows:

• Papers are selected from the SCOPUS database, with the keyword search 'occlusion handling in augmented reality'
• The search is limited to a 10-year time span, from 2013 to 2022

The search returned 105 results; after applying the year-limit filter, 63 papers were left. Then, a quick assessment of each paper was performed to identify whether it is an AR occlusion handling article, leaving 20 papers; the 43 excluded papers were either review papers or did not address the occlusion handling topic. The remaining papers were screened via full-article reading to identify those implementing the model-based method. Finally, 7 papers ranging from 2015 to 2022 were selected for review. Table 1 shows a systematic summary of the selected papers. From Table 1, there is a gap in this research topic in 2016, 2019, and 2021. This gap was investigated by reviewing the list of papers returned by the SCOPUS database for each of those years again; from the reading, it was found that the research in those years is more focused on depth-based and contour-based methods.

3 The Tracking and Pipeline

Over time, with the release of AR tracking libraries and SDKs such as Vuforia, Google ARCore, and Apple ARKit, researchers gradually switched their focus to handheld devices, which have more marketability potential. Each of the libraries has


Table 1 Systematic summary of selected papers

| Year | Title (Author) | Display type | Method | Evaluation |
|---|---|---|---|---|
| 2015 (Journal) | Handling occlusions in AR based on 3D reconstruction method [10] | Desktop display through Kinect | 3D reconstruction using point clouds | Framerate (23 fps average) |
| 2017 (Proceeding) | Real-time AR with Occlusion Handling Based on RGBD Images [12] | Desktop display through Kinect | 3D reconstruction through Kinect | Framerate (25 fps average) and previous work comparison |
| 2018 (Journal) | Occlusion handling using moving volume and ray casting techniques for AR systems [13] | Desktop display through Kinect | 3D reconstruction using improved KinectFusion algorithm | Framerate (22 fps average) and previous work comparison |
| 2018 (Conference) | Addressing the Handheld Occlusion Problem in AR Environments with Phantom Hollow Objects [14] | Handheld | Inverse model-based approach (rendering virtual object on see-through real objects) | Frame time measurement (improvement up to 34.2%) |
| 2020 (Conference) | Interactive AR for the Handheld Depth of An Object Using the Model-Based Occlusion Method [15] | Handheld | Vuforia Model Target to track real object and occlude virtual object | Tracking performance (lighting, distance, angle) |
| 2022 (Journal) | Real Time Handling Occlusion in AR Based on Photogrammetry [16] | Handheld | 3D reconstruction using photogrammetry technique | Previous work comparison |
| 2022 (Journal) | Future landscape visualization using a city digital twin: integration of AR and drones with implementation of 3D model-based occlusion handling [17] | Handheld | Client-PC server system (PC performs occlusion process and sends video feed to end user) | Alignment accuracy using IoU (0.820), latency, and framerate (average 30 fps) |

its own strengths and drawbacks, which have been evaluated comparatively by Hanafi et al. (2019) [18]. Vuforia's Model Target feature, used by [15, 16], is a powerful tool to quickly apply depth masks and perform occlusion handling, especially with the photogrammetry reconstruction technique; it allows accurate 3D reconstruction without spending on high-performance capture cameras. The city digital twin project by Kikuchi et al. [17] is ambitious, but it may prove useful for city planning and


Table 2 Comparison of Vuforia, ARCore, and ARKit

| Library | Tracking features | OS supported |
|---|---|---|
| Vuforia | Image target, orientation-based ground and plane detection | Android, iOS |
| ARCore | Motion tracking, feature points-based ground and plane detection | Android, iOS |
| ARKit | Motion tracking, feature points-based ground and plane detection | iOS |

landscaping, as it allows the stakeholders to view the planned structures in real time with the correct occlusion. However, since they opted to display the scene through a live video feed, it opens up an entirely different set of issues, such as latency between the devices. Table 2 summarizes this comparison based on their licensing and tracking features. ARCore and ARKit both provide environment understanding; since they do not rely on an external camera, there are fewer chances of occlusion errors. ARCore detects planes and feature points so that virtual objects can be properly placed onto real, flat surfaces. When a virtual object is rendered, the tracking features enable the user to move the display while the object retains the position at which it was rendered relative to the real-world objects; moving the virtual object behind or in front of the real objects still retains the occlusion as intended. Figure 3 shows the effect without occlusion on the left and with occlusion on the right. Issues arise because moving the camera around a scene may cause the occlusion to fail due to changes in the background and foreground, so the algorithm sometimes only works from certain angles within the scene. Vuforia does not occlude objects behind the real world like ARKit and ARCore; instead, Vuforia relies on depth masking, where the depth mask prevents any objects behind the masked region from being rendered. The review also identifies the principles and pipeline behind the 3D reconstruction process. Bekiri and Chouki's photogrammetry 3D reconstruction system overview explains this and summarizes it in a pipeline [16], shown in Fig. 4 and explained further below.

Feature extraction. The objective is to extract feature sets of pixels that are unaffected by viewpoint changes of the cameras throughout image capturing. This is done using the Scale-Invariant Feature Transform (SIFT) algorithm.

Fig. 3 ARKit occlusion handling example


Fig. 4 3D reconstruction pipeline [16]

Structure from motion. This step introduces the relationships between the different observations from the input images and extracts the scene structure: the 3D points, the position and orientation of every camera, and each camera's internal parameters.

Prepare dense scene. This step obtains undistorted images by eliminating reprojection errors and computes a depth-from-distortion function.

Depth map estimation. This step retrieves the depth value of each pixel. It is based on the Semi-Global Matching (SGM) algorithm, in which the similarity of each pixel in the neighbouring images is compared. Fronto-parallel planes are selected based on the intersection of the optical axis with the pixels of the selected neighbouring cameras, which creates a volume of W, H, and Z. Then, the similarity is computed using the Zero Mean Normalized Cross-Correlation (ZNCC) of a small patch in the main image that is reprojected into each of the neighbouring images and accumulated in the volume. A filter is then applied to reduce the noise in the volume, and the local minima are selected and stored in the depth map.

Meshing. A dense geometric surface of the image is constructed in the scene. The mesh is then simplified by filtering the terrible cells on the surface to decrease the redundant vertices.

This process is based on the Alicevision Meshroom software's photogrammetry 3D reconstruction system. The pipeline provides a general view of how 3D reconstruction works for the occlusion handling process. However, it must be noted that this process has to be completed before running the AR application. While this makes the occlusion handling itself run in real time, any changes to the scene will not be updated and may affect the tracking of the objects, thus also disrupting the realism of the composed image. Therefore, future directions should work on making photogrammetry 3D reconstruction able to be performed in real time.
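As a rough illustration of the pipeline's first stage, the short OpenCV sketch below extracts SIFT keypoints and descriptors from one input view; Meshroom's actual implementation differs internally, so this is illustrative only, and the image file name is a placeholder.

```python
# Minimal SIFT feature-extraction sketch with OpenCV (>= 4.4, where SIFT
# is included in the main module). 'view_01.jpg' is a placeholder input.
import cv2

img = cv2.imread('view_01.jpg', cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries a scale and orientation, which is what makes the
# features robust to the viewpoint changes described above.
print(len(keypoints), 'keypoints;', descriptors.shape, 'descriptor matrix')
```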


4 Model-Based AR Occlusion Handling

This section reviews the selected papers on model-based AR occlusion handling methods. The review is structured as a timeline to show the progress of this topic.

In 2015, Tian et al. [10] proposed using the Kinect to obtain the depth map and RGB information of the scene and then using point clouds to perform the 3D reconstruction. This phase is done offline without any augmented objects. Then, in the augmented scene, the distance between the virtual object and the reconstructed scene is calculated to perform the occlusion. They performed two experiments to verify the proposed method, and both handled occlusion properly. To evaluate, they compared with previous methods and measured the framerate, which averaged 23 fps. The limitation of their method is that the scene reconstruction is done offline, which means that when any new real object is added to the scene, the system will not recognize it and will not handle occlusion properly. Figure 5 shows one of the results of their proposed method.

Later, in 2017, Guo et al. [12] proposed a method that uses a two-pass scheme: the first pass performs parallel camera tracking using the Kinect, allowing the 3D reconstruction of the scene to be updated and visualized, while the second pass handles the occlusion and fusion between the virtual object and the real environment. Their evaluation is by framerate measurement, with a 25 fps average, and comparison with previous work that used only raw depth data from the Kinect [19]. The authors did not mention any limitations of their method, but they proposed future work on calculating shadows for the virtual models and adding interactions. Figure 6 shows the result of their proposed method.

In 2018, Tian et al. [13] implemented an improved moving volume KinectFusion algorithm for the reconstruction process. KinectFusion is a framework proposed by Newcombe et al. [20] that allows 3D reconstruction of an object or a small environment in real time using the Kinect. The improved algorithm can avoid reconstructing a previously built scene when the camera moves back and forth. They then use a ray

Fig. 5 Result of Tian et al.’s method [10]


Fig. 6 Result of Guo et al.’s method [12]

cast method to compare the depth values of the real and virtual objects based on the reconstructed scene to perform the occlusion. Their evaluation method is comparison with previous works and framerate measurement, with an average of 22 fps. The limitations of their system are that it requires a large amount of video memory and that reprojection errors can cause white holes at the border of virtual and real objects. Figure 7 shows one of the experiments of their proposed method.

Gimeno et al. [14] tackled an inverse approach to the model-based occlusion method for rendering virtual objects inside a real object, using car interiors as their case study. Instead of targeting the whole car as the occlusion model, they targeted the windows instead. This method reduces the geometry that needs to be loaded for handling the occlusion, which in turn also reduces the time to render the scene, as proven by their performance measurement showing a frame time improvement of up to 34.2%. The authors did not mention any limitations of their method, but their proposed future work is to handle scenes with multiple overlapping windows and to optimize the rendering by drawing only the geometry of the virtual object that is visible within the boundary of the windows.

Fig. 7 Tian et al.’s proposed moving volume KinectFusion method [13]


In 2020, Hidayat and Astuti [15] proposed using Vuforia's Model Target as the occlusion model. Model Target is a model-based tracking system for markerless AR in which the real object is digitalized and used as the tracker for projecting the virtual object. It works by generating an outline around the object called a Guide View; in the augmented scene, the Model Target is invisible. To perform the occlusion, a depth mask shader is applied to the generated Model Target, and any virtual object behind the Model Target is occluded properly. Figure 8 shows the result of the proposed method.

Recently, in 2022, Bekiri and Chouki [16] proposed a 3D reconstruction method using photogrammetry. Static objects were captured using the close-range photogrammetry method and reconstructed via the Alicevision Meshroom software. Then, they used the AR SDK Vuforia's Model Target Generator (MTG) to register the reconstructed model as the model target and applied a depth mask for handling the occlusion. Their method of evaluation is comparison with previous methods. The strength of their method is that they used open-source software to implement the photogrammetry reconstruction, thereby reducing the overall cost of the research and development. They did not mention any limitations, but they proposed future work to increase realism by adding shaders, improving lighting, and extending to Mixed Reality. Figure 9 shows the result of their work.

Fig. 8 Vuforia’s model target as the occlusion model [15]

Fig. 9 Bekiri and Chaouki's AR occlusion handling via photogrammetry [16]


Fig. 10 Outdoor AR occlusion handling system by [17]

Kikuchi et al. [17] proposed an outdoor system that uses a fully reconstructed 3D city model stored in a client–PC server system. A drone controlled by a smartphone, with an attached camera, provides a live feed of the city to the server. Whenever the augmented model is placed in the scene among the city model's buildings, the server masks the city model and sends the feed with the occlusion handled to the end user via live streaming. Figure 10 shows the conceptual diagram of their proposed method. In principle, this system works by capturing a live video feed through the drone following a predefined movement, with each frame sent to the PC server. The PC server then compares the existing buildings in the frame with the 3D reconstructed buildings (the occlusion model). The virtual object (design target) is also present between the buildings that need to be occluded. The PC server then masks the reconstructed buildings in the current frame, and the mask image is used to perform the occlusion handling of the virtual object. The resulting frame is sent to the end-user devices with the correct occlusion. They evaluated the system's accuracy via IoU measurement when matching the city building model to the real-world buildings, for which the final value is 0.8. They also measured the system's communication speed and latency between the devices (controller–server, server–end-user device), and mentioned that the system runs at around 30 fps. The strength of their system is that it can visualize the augmented object outdoors with high accuracy, but the limitation is that the accuracy decreases over time, which causes misalignment between the virtual object and the real object because the drone position is not fully updated in real time.
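The IoU metric used in this evaluation measures the overlap between the projected building-model mask and the real buildings in the frame. A minimal sketch with NumPy boolean masks is given below; the function name and mask encoding are illustrative, not taken from the paper:

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """Intersection over Union of two boolean masks of equal shape (H, W)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

# Example: two overlapping rectangular masks
a = np.zeros((100, 100), bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), bool); b[30:70, 30:70] = True
print(round(mask_iou(a, b), 3))  # 0.391
```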

5 Conclusion

In this paper, various model-based methods for handling Augmented Reality occlusions were reviewed. The papers selected ranged from 2015 to 2022. From the review, a clear progression towards optimizing AR occlusion handling for handheld


devices is evident. The main challenges identified concern improving the tracking of objects in the scene in order to determine the relative positions between the virtual and real objects. Reprojection when moving the display can also cause misalignment of the occluded objects; therefore, this is also one of the directions for improving AR occlusion handling. Machine learning can also be explored to improve the efficiency of the proposed methods. For the photogrammetry method, future work should focus on enabling real-time reconstruction, as the current method needs to be done before running the AR application. Two common improvement methods can be categorized: depth-based and model-based. Depth-based methods focus on generating 3D depth information from 2D images, ideally in the form of a dense point cloud or depth image, while model-based methods integrate the 3D information over several frames to generate geometry, in the form of a triangular mesh, in real time. From the review in Sect. 4, the RGB-D camera, specifically the Microsoft Kinect, was the device of choice for most researchers from 2015 to 2018. The Kinect is a robust depth-sensing camera which also has a low entry cost due to its main focus market being the Xbox gaming console. It also comes with official SDK support from Microsoft, introduced when they launched the Kinect for Windows version. From the proposed methods, it can be seen that the device is capable of performing AR occlusion handling with high accuracy and acceptable performance. The occlusion handling systems eventually became able to generate the 3D reconstruction in real time; however, this comes with the issue of high memory usage, which is less ideal for implementation on smaller devices. Based on one of the papers reviewed, it is identified that the processing time for real-time augmented reality occlusion handling is the sum of the scene reconstruction time and the occlusion handling time. In conclusion, we have described a model-based pipeline followed by a 3D reconstruction pipeline which involved five phases: (1) feature extraction, (2) structure from motion, (3) preparing the dense scene, (4) depth-map estimation, and (5) meshing. Each of these phases has been discussed throughout this paper. An AR tracking system should be independent and robust in adapting to the visual information of the real world, which includes occlusion. Conventional AR tracking produces a scene that disregards the understanding of the real world. This paper has also discussed recent AR tracking libraries and SDKs such as Vuforia, Google ARCore, and Apple ARKit.

Acknowledgements We would like to express our gratitude to Universiti Teknologi Malaysia (UTM) for funding under UTM Encouragement Research (UTMER) grant number Q.J130000.3851.19J10 for support towards this research.


References
1. Minaee S, Liang X, Yan S (2022) Modern augmented reality: applications, trends, and future directions. arXiv. https://doi.org/10.48550/ARXIV.2202.09450
2. Milgram P, Takemura H, Utsumi A, Kishino F (1994) Augmented reality: a class of displays on the reality-virtuality continuum. Telemanipulator Telepresence Technol 2351:282–292. https://doi.org/10.1117/12.197321
3. Azuma RT (1997) A survey of augmented reality. Presence: Teleoperators Virtual Environ 6(4):355–385. https://doi.org/10.1162/pres.1997.6.4.355
4. Rabbi I, Ullah S (2013) A survey of augmented reality challenges and tracking. Acta Graphica 24:29–46
5. Zhou F, Duh HB-L, Billinghurst M (2008) Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR. In: 2008 7th IEEE/ACM international symposium on mixed and augmented reality, pp 193–202. https://doi.org/10.1109/ISMAR.2008.4637362
6. Ismail A, Billinghurst M, Sunar MS (2015) An s-pi vision-based tracking system for object manipulation in augmented reality. J Teknol 75. https://doi.org/10.11113/jt.v75.5060
7. Hübner P, Clintworth K, Liu Q, Weinmann M, Wursthorn S (2020) Evaluation of HoloLens tracking and depth sensing for indoor mapping applications. Sensors 20(4). https://doi.org/10.3390/s20041021
8. Breen DE, Whitaker RT, Rose E, Tuceryan M (1996) Interactive occlusion and automatic object placement for augmented reality. Computer Graphics Forum 15(3):11–22. https://doi.org/10.1111/1467-8659.1530011
9. Tian Y, Zhou X, Wang X, Wang Z, Yao H (2021) Registration and occlusion handling based on the FAST ICP-ORB method for augmented reality systems. Multimed Tools Appl 80(14):21041–21058. https://doi.org/10.1007/s11042-020-10342-5
10. Tian Y, Long Y, Xia D, Yao H, Zhang J (2015) Handling occlusions in augmented reality based on 3D reconstruction method. Neurocomputing 156:96–104. https://doi.org/10.1016/j.neucom.2014.12.081
11. Tian Y, Guan T, Wang C (2010) Real-time occlusion handling in augmented reality based on an object tracking approach. Sensors 10(4):2885–2900. https://doi.org/10.3390/s100402885
12. Guo X, Wang C, Qi Y (2017) Real-time augmented reality with occlusion handling based on RGBD images. In: 2017 International conference on virtual reality and visualization (ICVRV), pp 298–302. https://doi.org/10.1109/ICVRV.2017.00069
13. Tian Y, Wang X, Yao H, Chen J, Wang Z, Yi L (2018) Occlusion handling using moving volume and ray casting techniques for augmented reality systems. Multimed Tools Appl 77(13):16561–16578. https://doi.org/10.1007/s11042-017-5228-2
14. Gimeno J, Casas S, Portalés C, Fernández M (2018) Addressing the occlusion problem in augmented reality environments with phantom hollow objects. In: 2018 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct), pp 21–24. https://doi.org/10.1109/ISMAR-Adjunct.2018.00024
15. Hidayat T, Astuti IA (2020) Interactive augmented reality for the depth of an object using the model-based occlusion method. In: 2020 3rd International conference on computer and informatics engineering (IC2IE), pp 382–387. https://doi.org/10.1109/IC2IE50715.2020.9274565
16. Bekiri R, Chaouki BM (2022) Real time handling occlusion in augmented reality based on photogrammetry. In: Pattern recognition and artificial intelligence, pp 47–62
17. Kikuchi N, Fukuda T, Yabuki N (2022) Future landscape visualization using a city digital twin: integration of augmented reality and drones with implementation of 3D model-based occlusion handling. J Comput Des Eng 9(2):837–856. https://doi.org/10.1093/jcde/qwac032
18. Hanafi A, Elaachak L, Bouhorma M (2019) A comparative study of augmented reality SDKs to develop an educational application in chemical field. In: Proceedings of the 2nd international conference on networking, information systems & security. https://doi.org/10.1145/3320326.3320386


19. Hauck J, Mendonça M, Silva R (2014) Occlusion of virtual objects for augmented reality systems using Kinect. Workshop de Realidade Virtual e Aumentada. https://doi.org/10.13140/2.1.5008.6080
20. Newcombe RA et al (2011) KinectFusion: real-time dense surface mapping and tracking. In: 2011 10th IEEE international symposium on mixed and augmented reality, pp 127–136. https://doi.org/10.1109/ISMAR.2011.6092378

Segmentation and Area Calculation of Brain Tumor Images Using K-Means Clustering and Fuzzy C-Means Clustering P. Likhitha Saveri, Sandeep Kumar , and Manisha Bharti

Abstract Tumors of the brain are abnormal growths of cells that are precursors to cancer and may lead to low survival rates, so early detection of tumors can be a life-saving measure. A brain tumor can be seen in an MRI image, but the problem is to differentiate the tumor from normal tissue. This can be overcome by image segmentation. To determine the exact location and size of the tumor, K-means clustering, Fuzzy C-means (FCM) clustering, and a combined Fuzzy C-means plus thresholding segmentation technique are used in this work, and the results of these methods are compared. First, the MRI images are processed using image preprocessing techniques like median filtering, skull stripping, etc., and then the preprocessed images are subjected to segmentation and feature extraction by the proposed methods. The performance of the segmentation methods is evaluated using PSNR and IMMSE. The performance results show that Fuzzy C-means with thresholding gives better results compared to the other segmentation methods. Finally, the area of the tumor is calculated by counting the number of pixels in the tumor region after some morphological operations.

Keywords Image segmentation · Area calculation · K-means clustering

P. L. Saveri · S. Kumar (B) · M. Bharti
Department of Electronics and Communication Engineering, National Institute of Technology, Delhi, India
e-mail: [email protected]
P. L. Saveri
e-mail: [email protected]
M. Bharti
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_47

1 Introduction

Segmentation of brain MRI images has become an emerging research area in the medical image system field, since brain tumor is one of the deadliest health conditions


of the entire world. Thus, early brain tumor detection plays a key role in the treatment of brain tumors. Manual identification of a brain tumor is difficult work with a high risk of errors, so exact detection of the location and size of the brain tumor is important for proper treatment. Even after diagnosis and the start of treatment, the spread of the tumor needs to be analyzed to assess the progress of the treatment. To address this, image segmentation can be used to get the exact location of the tumor. Thus, we have implemented a system which can identify the exact tumor location using segmentation techniques like K-means clustering, FCM clustering, and Fuzzy C-means plus thresholding, and find the area of the tumor based on the pixel count. The structure of the paper is as follows. The literature review related to the work is presented in Sect. 2. The proposed technique is presented in Sect. 3, followed by the results and discussion in Sect. 4. The last section is the conclusion.

2 Related Work

A tumor is a condition developed by unusual multiplication of cells. In many cases, cancer is detected in its last stages. Image segmentation is a technique used to identify or differentiate tumors from normal cells. Computer-aided diagnosis has helped the medical field in identifying different types of cancers and in tracking the progress of treatment by calculating the area of the tumor. Several image segmentation techniques are used to identify the tumor location. In this work we use K-means clustering, FCM, and fuzzy clustering segmentation techniques. Siddiqui and Mat Isa [1] proposed enhanced moving K-means techniques for image segmentation, compared the efficiency of the newly proposed technique with conventional segmentation techniques, and found that the proposed algorithm outperforms other conventional clustering techniques. In [2], detection of heart atherosclerosis by segmentation of heart images using novel Fuzzy C-means and K-means clustering methods was proposed, and the proposed technique was found to have better accuracy than other techniques. In [3], Yogesh et al. compared the performance of color-based segmentation of different techniques on fruit images. Lu and Peng [4] proposed an improved Fuzzy C-means clustering method by incorporating weighted spatial information into the FCM algorithm. In [5], it was observed that adaptive FCM-based image segmentation is more robust than other conventional techniques at reducing intensity irregularities. In [6], brain tumor detection using MRI images and area calculation using SCILAB were presented. Automatic segmentation and area calculation of the optic disc in ophthalmic images was presented in [7]. Calculation of the percentage of tumor in MRI images after brain tumor segmentation was reported in [8]. In [9], the changes of the left ventricular area were obtained and analyzed. Segmented images can be further used to classify the images using different techniques like transfer learning for the classification of lung cancer [10]. De Queiroz et al. proposed a multi-layer approach for image segmentation using a thresholding technique [11]. An intuitionistic improvised segmentation


technique of infrared images using Fuzzy C-means was presented in [12]. Rong et al. implemented an improved K-means algorithm to extract objects from images [13]. It has been observed that K-means clustering is widely used and is an efficient technique for image segmentation. So, in this work, K-means clustering, FCM, and Fuzzy C-means plus thresholding are used for the segmentation and area calculation of the brain tumor. Further, in this work we combined the thresholding and FCM algorithms for better performance.

3 Methodology

3.1 Proposed Techniques

This work emphasizes locating the exact tumor in MRI images using K-means clustering, FCM, and Fuzzy C-means plus thresholding methods. In clustering, data points are grouped for further processing, and these groups are based on the similarity between data points. The groups of similar data points are known as clusters.

K-means Clustering. In K-means clustering, initial centroids of groups are used to divide data points into clusters. The K-means algorithm initially chooses k data points as cluster centers, finds the distance between the data points and the cluster centers, assigns each point to the nearest cluster, and then updates the average of each cluster until the cluster centers no longer change [14]. Although segmentation can be done using pixel intensity values alone, K-means allows an extra dimension to be added. For example, in medical images, two regions of similar intensity levels can be segmented as one region, but with distance introduced as one of the dimensions, K-means can classify them into two clusters. So, adding distance as a dimension in image and color space makes a more robust segmentation method than manual thresholds. The output of this process divides the image into k segments, each with a different gray level. In this paper, a cluster size of 4 is taken.

Fuzzy C-means Clustering. In the FCM algorithm, data points are replaced by a gray scale to reduce the computation of the algorithm. FCM is an efficient segmentation technique and is easy to implement in a controlled environment, since the method is sensitive to differences in lighting. This method allows a data point to belong to more than one cluster.

Fuzzy C-means plus thresholding. In this method, a thresholding technique is combined with the Fuzzy C-means clustering method. In medical image segmentation, thresholding helps to segment the image precisely, so here we combine thresholding with multi-view Fuzzy C-means to achieve better performance. In this work, 3 clusters are used. The threshold levels for the histogram are 0.1815 and 0.5.
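A minimal sketch of the K-means step is given below in Python with OpenCV (the paper's own implementation uses MATLAB). The cluster count k = 4 follows the text; the feature scaling and the function name are illustrative assumptions:

```python
import cv2
import numpy as np

def kmeans_segment(gray, k=4):
    """Cluster a grayscale MRI slice into k groups, adding the pixel
    coordinates as extra dimensions as described in the text."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Feature vector per pixel: [intensity, row, col], all scaled to [0, 1]
    feats = np.stack([gray / 255.0, ys / h, xs / w], axis=-1)
    data = feats.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # Paint each pixel with the mean intensity of its cluster
    seg = (centers[labels.flatten(), 0] * 255).astype(np.uint8)
    return seg.reshape(h, w)
```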

Fig. 1 Block diagram of steps involved in the process: Input MRI image → Image preprocessing → Segmentation techniques → Morphological operations → Extraction of tumor → Area calculation

3.2 Description of Block Diagram

The proposed work follows the steps shown in Fig. 1: acquisition of the image, image preprocessing, segmentation using K-means, FCM, and Fuzzy C-means plus thresholding, morphological image operations, exact detection of the tumor location, and tumor area calculation. In this work, the area of the tumor is calculated from the image segmented by the K-means, Fuzzy C-means, and Fuzzy C-means plus thresholding methods.

Input MRI image. Image acquisition consists of acquiring brain MRI images. These MRI images form the system inputs on which the various operations are carried out.

Image pre-processing. Before subjecting the MRI image to segmentation, image preprocessing steps are performed. In preprocessing, the image is converted from RGB to gray scale. Then, before segmentation, noise removal is done to eliminate unwanted noise. Since brain MRI is done using scanning machines, the possibility of noise is high, so it should be removed; in this paper we use a median filter for noise removal.

Segmentation techniques. After noise removal, morphological operations are applied to the image, including stripping the skull from the brain image. After that, image segmentation is done, which divides the image into various segments and helps in locating the various objects in the image. In this work, K-means, FCM, and combined thresholding with Fuzzy C-means are used as segmentation techniques for segmenting the tumor from brain MRI images.

Morphological Operations. After image segmentation, morphological operations like skull stripping and ROI and NROI segmentation are done. First, NROI segmentation is done by calculating the maximum pixel count and dividing it by 255; this fraction is used to scale the image, and thus the NROI is obtained. Then the ROI is extracted by subtracting the NROI from the original image.

Tumor extraction. From the ROI image and subjective analysis of the image, the exact location of the tumor is detected and extracted from the image.

Area calculation. Finally, the tumor extracted in the previous step is used to calculate the area based on the pixel count of the extracted region.
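A short sketch of the preprocessing and ROI/NROI split is given below in Python with OpenCV, under one reading of the description above; the scaling rule and the median-filter kernel size are assumptions, since the text does not fully specify them:

```python
import cv2
import numpy as np

def preprocess_and_split(img_bgr):
    """Preprocessing and ROI/NROI split, following one reading of the
    description above (exact scaling details are not fully specified)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)   # RGB -> gray
    denoised = cv2.medianBlur(gray, 5)                 # median filter for MRI noise
    # Fraction from the maximum pixel value divided by 255
    frac = denoised.max() / 255.0
    nroi = (denoised * frac).astype(np.uint8)          # scaled image = NROI
    roi = cv2.subtract(denoised, nroi)                 # ROI = original - NROI
    return denoised, roi, nroi
```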


4 Results and Discussion

This work designs a system model to detect the exact tumor location using the MATLAB platform. The brain MRI images used are taken from the Kaggle dataset [15]. After segmentation of the image, morphological operations like erosion and dilation are done. Morphological operations are done to separate the tumor from the segmented image. Finally, after extraction of the tumor from the segmented image, the tumor area is calculated by counting the number of pixels in the region. Effective algorithms like K-means and FCM are implemented in this work, and image segmentation and morphological operations are done using MATLAB. Firstly, the MRI scanned image is subjected to image preprocessing. The preprocessed output image is then subjected to the K-means and Fuzzy C-means clustering segmentation techniques. Finally, the tumor is extracted from the segmented image using morphological operations. Figures 2 and 5 describe the different processes applied to the image: the input image, the noise-removed median-filtered image, the skull-stripped image, and the segmented images of K-means and FCM. Similarly, Figs. 3 and 6 describe the region of interest (ROI) and non-region of interest (NROI) of the segmented images; the ROI is the sub-region of the image where the object of interest lies. In medical image processing, the ROI needs to be segmented from the background image for in-depth analysis. Accurate segmentation of ROI and NROI helps doctors with in-depth classification of pathological signs and with accurate diagnosis. Here, ROI and NROI are formed by thresholding, i.e., converting the gray-scale image to a binary image where the ROI will be either black or white depending on the type of thresholding (binary or inverse binary). After the ROI is found, segmentation of the object from the background can be done easily. In order to find the area of the segmented tumor, it is important to analyze the segmented image and look into the boundaries and contours separating the tumor part from the normal part. For this, subjective analysis of the image is done, as seen in Figs. 4 and 7. After that, the area of the tumor is calculated by counting the number of pixels and converting pixels to mm². First, the number of pixels in the tumor area is calculated, and then the square root of the number of pixels is multiplied by 0.264, since 1 pixel is approximately equal to 0.264 mm (Fig. 8).

$$\text{Area of tumor} = \sqrt{\text{Number of pixels}} \times 0.264 \tag{1}$$

The number of pixels here is the number of non-zero pixels in the ROI area:

$$\text{Number of pixels} = \sum_{i=0}^{w}\sum_{j=0}^{h} (\text{non-zero pixels}) \tag{2}$$

where w and h are the width and height of the image. With this, we found that by K-means the area of the tumor is 10.02 mm², by Fuzzy C-means the area is 10.07 mm², and by Fuzzy C-means plus thresholding the area is 10.32 mm².
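Equations (1)–(2) amount to a pixel count over the extracted ROI. A minimal equivalent NumPy sketch is shown below (the paper's implementation uses MATLAB; the function name is illustrative):

```python
import numpy as np

def tumor_area_mm(roi_mask):
    """Tumor area per Eqs. (1)-(2): count the non-zero ROI pixels, then
    take sqrt(count) * 0.264, with 1 pixel ~ 0.264 mm."""
    num_pixels = np.count_nonzero(roi_mask)
    return np.sqrt(num_pixels) * 0.264
```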


Fig. 2 Input image, preprocessed image and skull stripped image

Fig. 3 ROI and NROI of the segmented MRI image using K means

Although there is no reference data available against which to compare the results by measuring the relative area, we can see that K-means performed better compared to Fuzzy C-means clustering. For comparison, performance parameters like PSNR and IMMSE are considered.

Fig. 4 Subjective analysis of MRI image using k means

Fig. 5 ROI and NROI of the segmented MRI image using FCM


Fig. 6 Subjective analysis of MRI image using FCM

Fig. 7 ROI and NROI of the segmented MRI image using fuzzy C means plus thresholding



Fig. 8 Subjective analysis of MRI image using fuzzy C means plus thresholding

PSNR. PSNR calculates the peak signal-to-noise ratio for an image A with respect to a reference image. PSNR is used as a quality measure between an original and a compressed image; a greater PSNR value indicates better image quality. PSNR can be viewed as a measure of the quality of reconstruction of the image. Since we are performing image operations like filtering and segmentation, PSNR measures the image quality of the reconstructed image: the better the quality of the image, the more precise the extracted tumor and thus the more accurate the area calculation.

$$\text{PSNR} = 10\log_{10}\left(\frac{(L-1)^2}{\text{MSE}}\right) = 20\log_{10}\left(\frac{L-1}{\sqrt{\text{MSE}}}\right) \tag{3}$$

where MSE is the mean square error and L is the maximum intensity level in the image:

$$\text{MSE} = \frac{1}{xy}\sum_{i=0}^{x-1}\sum_{j=0}^{y-1}\big(O(i,j) - D(i,j)\big)^2 \tag{4}$$

Here, i and j are the row and column indices of the image, x and y are the numbers of rows and columns of pixels, and O and D represent the original and degraded images in matrix form.

IMMSE. IMMSE computes the mean square error between the original image and the segmented image. The performance evaluation results are given in Table 1.
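Equations (3)–(4) can be computed directly; below is a small NumPy sketch (in the paper's MATLAB setting, the built-in psnr and immse functions of the Image Processing Toolbox play the same role):

```python
import numpy as np

def mse(original, degraded):
    """Eq. (4): mean squared error between two equally sized images."""
    o = original.astype(np.float64)
    d = degraded.astype(np.float64)
    return np.mean((o - d) ** 2)

def psnr(original, degraded, levels=256):
    """Eq. (3): PSNR in dB, with L the number of intensity levels."""
    m = mse(original, degraded)
    if m == 0:
        return float("inf")   # identical images
    return 10 * np.log10((levels - 1) ** 2 / m)
```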


Table 1 Performance evaluation results

Technique                          PSNR    IMMSE      Area of tumor (mm²)
K-means                            3.43    29,517.6   10.02
Fuzzy C means                      4.85    21,247.8   10.07
Fuzzy C means plus thresholding    14.16   2494.9     10.32

The results show that the Fuzzy C-means plus thresholding segmentation technique has the best PSNR and IMMSE, at 14.16 and 2494.9. The higher the PSNR value, the better the quality of the image, and Fuzzy C-means plus thresholding has a higher PSNR of 14.16 compared to K-means and Fuzzy C-means, with values of 3.43 and 4.85. The lower the IMMSE, the lower the mean square error; Fuzzy C-means plus thresholding has an IMMSE value of 2494.9, which is far better than the K-means and Fuzzy C-means methods, with values of 29,517.6 and 21,247.8. Based on this performance comparison, Fuzzy C-means plus thresholding has the best results, with a tumor area of 10.32 mm².

5 Conclusion

In this work, the tumor is extracted from a brain MRI image using Fuzzy C-means and K-means clustering segmentation techniques. Any noise present in the MRI image is filtered out using a median filter. Before segmentation, the RGB image is converted to gray scale and further divided into a number of clusters. Using these clusters, the tumor is extracted by applying morphological operations, and then the area of the tumor is calculated. The performance of the methods is compared using the PSNR and IMMSE parameters. The results show that Fuzzy C-means combined with thresholding has better PSNR and IMMSE than the other two conventional segmentation methods. Finally, the area of the tumor was found to be 10.32 mm² using the Fuzzy C-means plus thresholding technique. This work can be utilized to track the progress of treatment by comparing the relative area.

References
1. Siddiqui FU, Mat Isa NA (2011) Enhanced moving K-means (EMKM) algorithm for image segmentation. IEEE Trans Consum Electron 57(2):833–841
2. Elangovan VR, Joe AJR, Akila D, Shankari KH, Suseendran G (2021) Heart atherosclerosis detection using FCM+kMeans algorithm. In: 2nd International conference on computation, automation and knowledge management (ICCAKM), pp 102–106
3. Yogesh, Ali I, Ahmed A (2018) Segmentation of different fruits using image processing based on fuzzy C-means method. In: 7th International conference on reliability, infocom technologies and optimization (trends and future directions) (ICRITO), pp 441–447. Noida, India
4. Lu S, Peng L (2010) Color image segmentation based on improved FCM algorithm incorporating spatial information. In: 3rd International congress on image and signal processing, pp 1115–1118


5. Pham DL, Prince JL (1999) Adaptive fuzzy segmentation of magnetic resonance images. IEEE Trans Med Imaging 18(9):737–752
6. Kumar M, Sinha A, Bansode NV (2018) Detection of brain tumor in MRI images by applying segmentation and area calculation method using SCILAB. In: Fourth international conference on computing communication control and automation (ICCUBEA), pp 1–5
7. Sachdeva P, Singh KJ (2015) Automatic segmentation and area calculation of optic disc in ophthalmic images. In: 2nd International conference on recent advances in engineering & computational sciences (RAECS), pp 1–5
8. Wulandari A, Sigit R, Bachtiar MM (2018) Brain tumor segmentation to calculate percentage tumor using MRI. In: International electronics symposium on knowledge creation and intelligent computing (IES-KCIC), pp 292–296
9. HajiRassouliha A, Ayatollahi A (2009) Automatic obtaining of left ventricular area and analyzing the area changes. In: International conference on digital image processing, pp 95–99
10. Saveri PL, Kumar S (2022) Classification of cancerous lung images by using transfer learning. In: 8th IEEE International conference on signal processing and communication (ICSC), pp 298–303. Noida, India
11. De Queiroz RL, Fan Z, Tran TD (2000) Optimizing block-thresholding segmentation for multilayer compression of compound images. IEEE Trans Image Process 9(9):1461–1471
12. Yang F, Liu Z, Bai X, Zhang Y (2022) An improved intuitionistic fuzzy C-means for ship segmentation in infrared images. IEEE Trans Fuzzy Syst 30(2):332–344
13. Rong H, Ramirez-Serrano A, Guan L, Gao Y (2020) Image object extraction based on semantic detection and improved K-means algorithm. IEEE Access 8:171129–171139
14. Sulaiman SN, Mat Isa NA (2010) Adaptive fuzzy-K-means clustering algorithm for image segmentation. IEEE Trans Consum Electron 56(4):2661–2668
15. Kaggle.com (2022) Kaggle: your machine learning and data science community. Accessed 10 Dec 2022 [online]. Available: https://www.kaggle.com/

Smart Agricultural Field Monitoring System N. Sabiyath Fatima, N. Noor Alleema, V. Muthupriya, S. Revathi, Geriga Akanksha, and K. Balaji

Abstract The global population is increasing rapidly; as a result, there will be a high demand for the supply of food and water. To meet this requirement, advancing the agricultural process is one of the most important needs. Therefore, this paper proposes a smart agricultural field monitoring system which consists of four sensors: a DHT11 temperature sensor, a soil moisture sensor, a rain detection sensor, and a water level monitoring (ultrasonic) sensor. If the moisture level decreases, then a notification is sent stating that the soil moisture level has decreased, and the pump motor turns on automatically to supply water to the field. A rain detection sensor is used to predict rainfall in the field. This model focuses on a fully automated system that helps farmers get a high yield with less energy and fewer resources in the field. The experimental results reveal the efficient usage of water resources in the agricultural field. Moreover, the data collected from the field are directly stored in the cloud system.

Keywords Internet of things · Soil moisture sensors · Rain detection sensor · Ultrasonic sensor

N. S. Fatima (B) · V. Muthupriya · S. Revathi · G. Akanksha · K. Balaji
Department of Computer Science and Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Chennai, India
e-mail: [email protected]
N. Noor Alleema
Department of Information Technology, Veltech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India

1 Introduction

The proposed agricultural field monitoring system is built using IoT and different types of sensors. The system controls the water flow to the crops and stores the information about the crops and the field in a cloud-based service system. It collects the

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_48


real-time data from the agricultural field and analyzes it. With IoT and sensors, it solves problems like maintaining moisture level, decreased or increased temperature levels, and rainfall detection. Sensors are connected to the microcontroller system (Arduino).

2 Related Works

Hemdan et al. [1] developed an IoT-based farming system with a machine-learning technique to increase crop productivity. Smart farming with support systems achieved an increase in crop productivity and profit for farmers. Mukherji et al. [2] developed a smart agriculture technique using IoT. It is a wireless network that monitors the agricultural environment from any station, and a rise in yield and food products from agricultural fields was obtained. Abraham et al. [3] developed remote monitoring with an IoT-based prototype system and machine learning, in which support vector machines and convolutional neural networks were used for analysis. The system suggested sprinkler watering and a camera monitoring the water system. Sahu et al. [4] developed a hydroponics-based system that is used for planting plants at home and for visualizing and analyzing data from cloud services. The limitation is that no superior tool was used to find the temperature and humidity. Osupile et al. [5] developed an IoT field monitoring system to improve productivity in agricultural fields by automating crop monitoring, weed detection, and pesticide detection. The model's efficiency is its limitation. Vadivel et al. [6] developed a system to monitor fields with data visualization for temperature in agricultural fields, to increase agricultural production without manpower. The limitation is that it does not support people who grow their vegetation by themselves. Araby et al. [7] developed a machine learning approach with IoT technology for optimizing the quality of crops. The diseases in potatoes and tomatoes can be detected with the machine learning algorithm, which reduces the cost of the product at the farmers' gates. Hari Pranav et al. [8] proposed that farmers sell the product at a fixed price and that pesticides and chemicals be prevented in the agricultural field, in favor of natural forms of growing crops. The blockchain helps develop and maintain unique records of farmers' data in a distributed database. Di Martini et al. [9] proposed a system for imagery in precision agriculture that can cover larger areas of the field. It is through the use of precision agriculture techniques integrated with the pest management system that farmers can get high yields from this system. Donzia et al. [10] developed a smart farming system with big data and a machine learning algorithm. The information collected from the field supports decision making, which is helpful for the usage of pesticides in the agricultural field.


Rubia Gandhi et al. [11] proposed that farming can be supported by machine learning algorithms and artificial intelligence. The concept of smart agriculture field monitoring brings good connectivity between fields and farmers. Vidya et al. [12] developed an IoT-based agriculture monitoring system with sensors that classifies images of crops using image processing. The virtual model represents the field monitoring system without manpower, and the YOLO framework's virtual fencing and agricultural field monitoring system are simple to use. Danaraj et al. [13] described a smart irrigation system that, depending on the soil moisture, provides better crop yield cost-effectively. Jacob et al. [14] proposed CNN networks to find diseases in crops. The application helps farmers get details of agricultural fields and crops; the soil nutrient levels of an agricultural field are checked and the crops are then planted accordingly. Savvidis et al. [15] developed remote crop growing with the concepts of edge computing and machine learning techniques. The connected devices must have internet service to be feasible. Rohith et al. [16] developed IoT-enabled smart farming with an irrigation system that grows plants with limited amounts of water and electricity, using an automated process for growing tomato crops without a continuous power supply. However, one component failure damages the entire functioning of the system, and repeated monitoring is required.

3 Proposed Methodology

3.1 System Architecture

The different phases and internal flows of the proposed system are shown in Fig. 1. The operation begins with the solar panel connected to the battery, which is used for converting solar energy to electrical energy with the help of the Arduino board and storing it in the battery. A DHT11 sensor, which is connected to the Arduino board, is used to measure the temperature and humidity present in the agricultural field. Similarly, the soil moisture sensor is used to check the moisture content, and rain detection sensors are used to predict rainfall in the agricultural field. The ultrasonic sensor helps to determine the water level in the water bodies by passing ultrasonic waves into them. An LCD connected to the Arduino board is used to display the output readings from the agricultural field. An IoT module is connected to the Arduino board and is used for sending and receiving data in the agricultural field. A relay is fixed to a pump motor to pump water when the water level is low, and a servo motor, which is a small type of motor that provides precise angular control, is fixed to the Arduino board. Furthermore, the data collected from the agricultural field are stored in the cloud.


Fig. 1 System architecture

3.2 Module Description

The smart agricultural field monitoring system modules are developed with sensors and motors, and operate the field monitoring system with the values displayed on the LCD panel. The modules are divided into three stages: checking moisture levels, rain detection, and performing a temperature check. To begin with, if rain is detected, the motor is turned off and a message is sent to the farmer using the cloud system; otherwise, the motor is turned on and a message is sent to the farmer. Figure 3 depicts the working of the rain detection sensor. For checking soil moisture levels, rain detection is first performed and updated on the server. The default temperature (t) is set to 31.5. After that, the system checks whether the recorded temperatures X1, X2, and X3 are less than the default temperature (t). If yes, the motor is turned on and a message is sent to the farmer; otherwise, the motor is turned off automatically, a message is sent to the farmer, and the operation is stopped. Figure 4 depicts the working of the soil moisture sensor. If the temperature crosses 45 degrees, the motor condition and moisture levels are checked. If the moisture level values are less than the default value of 16, then the motor is turned on and a message is sent to the farmer. If the temperature does not cross 45 degrees and the moisture level is above 16, then the motor is turned off and a message is sent to the farmer. Figure 2 depicts the working of the temperature sensor.
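A minimal sketch of this three-stage decision logic is shown below in Python, as it might run on a gateway or in simulation (the actual system runs on an Arduino); the notify() helper and the function name are illustrative, while the thresholds 31.5, 45, and 16 are taken from the text:

```python
def notify(msg):
    # Stand-in for the cloud notification described in the text
    print("notify:", msg)

def control_step(rain_detected, temps, moisture):
    """One pass of the module logic; returns the pump state ('on'/'off').
    temps is the list of recorded temperatures [X1, X2, X3]."""
    if rain_detected:
        notify("rain detected: motor off")
        return "off"
    if all(t < 31.5 for t in temps):          # all readings below default t
        notify("temperatures below t: motor on")
        return "on"
    if max(temps) > 45 and moisture < 16:     # hot and dry field
        notify("temperature above 45 and moisture below 16: motor on")
        return "on"
    notify("moisture adequate: motor off")
    return "off"

# Example: readings X1, X2, X3 = 30, 31, 29 with moist soil and no rain
print(control_step(False, [30, 31, 29], moisture=20))
```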


Fig. 2 Working of the temperature sensor

4 Implementation

4.1 Detection of Rainfall

This smart agricultural field monitoring system is used to detect rainfall and drain the stagnated rainwater with the help of servo motors.


Fig. 3 Working of the rain detection sensor

Fig. 4 Working of the soil moisture sensor

Algorithm:
Step 1: Start the monitoring system and check the rainfall level in the field with the rainfall detection sensor.
Step 2: If there is heavy rainfall in the agricultural field, check the water level.


Fig. 5 Rainfall detection to enable servo motor

Step 3: If the water level has risen above the crops, the servo motor automatically starts to remove the water from the agricultural field.
Step 4: A notification is sent to the cloud system, and the servo motor automatically turns off when the rain stops.
Step 5: If rain is not detected, no action is needed; the system turns off automatically and stops.
The system detects rainfall and activates the servo motor as in Fig. 5.
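The drainage logic above can be sketched as follows; this is a hedged Python illustration, and the centimeter units and function name are assumptions, since the text does not give the measurement scale:

```python
def drain_control(raining, water_level_cm, crop_height_cm):
    """Rainwater drainage per Sect. 4.1: run the servo-driven drain only
    while it rains and the water has risen above the crops."""
    if raining and water_level_cm > crop_height_cm:
        return "servo on"    # drain stagnated water, notify the cloud
    return "servo off"       # rain stopped or level is safe

print(drain_control(True, water_level_cm=12, crop_height_cm=10))   # servo on
print(drain_control(False, water_level_cm=12, crop_height_cm=10))  # servo off
```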

4.2 Detection of Soil Moisture

The agricultural field monitoring system is also used to detect the soil moisture content in the agricultural fields.
Algorithm:
Step 1: Start the monitoring system and check the soil moisture level in the field with the soil moisture sensor.
Step 2: Check the readings of the moisture content level in the cloud system at regular intervals.
Step 3: Set the standard moisture level for the crops and set the values in the agricultural field.


Fig. 6 Detection of soil moisture level

Step 4: If the soil moisture level is reduced, then a notification is received from the agricultural field and the relay to the pump motor is turned on automatically.
Step 5: After water flows into the agricultural fields, the pump motor turns off automatically.
Step 6: If the soil moisture is at the correct level, no action is needed, and the monitoring system stops.
The function of the soil moisture sensor is shown in Fig. 6.

4.3 Rain Detection

This smart agricultural field monitoring system can be used to detect rain. If rain is detected, the motor is turned off automatically; if not, the motor remains on.
Algorithm:
Step 1: Start the monitoring system and check the rainfall level in the field with the rainfall detection sensor.
Step 2: Check the readings of the rain sensor in the cloud system at regular intervals.
Step 3: If rain is detected in the agricultural field, the pump motor automatically stops the water flow, and the data is saved in the cloud.
Step 4: If no rain is detected by the rain sensor, then the water flow does not stop.
The working of the rainfall sensor is shown in Fig. 7.


Fig. 7 Rain detection

5 Results and Analysis

Table 1 gives clarity to the data stored in the cloud. Different types of sensors, motors, and panels are used in the smart agriculture field monitoring system:
(1) Temperature sensor (DHT11): This sensor is used to determine the temperature level in the agricultural field. From set 1, 40 degrees, and from set 2, 25 degrees are recorded.
(2) Soil moisture sensor: This sensor is used to determine the moisture content. Set 1 shows that the soil is dry and water is required for the field, whereas set 2 shows that the soil is moisturized and does not require water.
(3) Rain sensor: This sensor is used to detect the rainfall level in the agricultural field. Set 1 shows that rain is detected in the field, and set 2 shows that rain is not detected.
(4) Ultrasonic sensor: This sensor detects the water level in the water bodies by passing ultrasonic waves through them. Because the water content in the water bodies is low in both sets, arrangements for water supply to the agricultural field should be made.
(5) Pump motor: It supplies water to the agricultural field by collecting water from the water body. In both set 1 and set 2, water is pumped due to the low water content.
(6) Solar panel: It converts solar energy to electrical energy, which is stored in batteries. In set 1, sunlight is detected and electric power is generated from the sunlight; in set 2, sunlight is not detected, hence electric power is not generated.
Figure 8 shows the system's collected data for the soil moisture levels and the numerical values generated for each operation, divided into five stages.


Table 1 Dataset

S. no | Sensors | Features | Set-1 | Set-2
1 | Temperature | It is used to check the temperature | 40°, Time: 13:26, Date: 09/03/22 | 25°, Time: 18:45, Date: 09/03/22
2 | Soil moisture sensor | It is used to determine the moisture content | Soil is dry and water is needed, Time: 8:12, Date: 11/03/22 | Soil is moisturized, hence no need for water, Time: 10:00, Date: 11/03/22
3 | Rain sensor | It is used to detect the rainfall with a sensor and camera | Rain is detected, Time: 5:25, Date: 16/03/22 | Rain is not detected, Time: 6:00, Date: 16/03/22
4 | Ultrasonic sensor | It is used to check the water level in water bodies with ultrasonic waves | Water is very low, Time: 12:00, Date: 25/03/22 | Water is very low, Time: 14:20, Date: 26/03/22
5 | Pump motor | It is used to pump water from the water body to the agricultural field | Water is pumped, Time: 4:15, Date: 12/03/22 | Water is pumped, Time: 11:30, Date: 12/03/22
6 | Solar panel | It is the external power source connected with the battery | Sunlight is detected, Time: 12:00, Date: 20/03/22 | Sunlight is not detected, Time: 19:00, Date: 20/03/22

Fig. 8 Soil moisture level


Fig. 9 Graphical representation of soil moisture levels

Stage 1: When the soil is dry, water is required; water should flow to all crops in the agricultural field. There is too much sunlight falling on the crops, hence shade should be provided to protect the crops from sunlight.
Stage 2: The soil is moisturized, hence the pump should be turned off and the shade removed.
Stage 3: When the soil has dried, water is required; water should flow to all crops in the agricultural field, and after the field is filled with water, the shade is removed.
Stage 4: The soil has dried and water is required; water should flow to all crops in the agricultural field, and after the field is filled with water, the shade is removed.
Stage 5: After water flows to the agricultural field, the soil is soggy and moisturized.
Figure 9 represents the soil moisture levels in the agricultural field; the X-axis refers to the soil moisture level, and the Y-axis refers to time. In the graph, at 11:00 a.m., as the soil moisture level is below 10, water needs to be supplied to the field in order to increase the soil moisture level; around 13:00 p.m., due to the increase in sunlight, the soil moisture level decreases, therefore water supply is required for the agricultural field until evening. Figure 10 represents the temperature levels in the agricultural field; the X-axis refers to temperature, and the Y-axis refers to moisture. The moisture level decreases as the temperature rises and rises as the temperature falls; the moisture level has decreased from 40 to 10 because the temperature has increased in the field. Figure 11 represents the rainfall levels in the agricultural field; the X-axis refers to rainfall in centimeters, and the Y-axis refers to the moisture level in percentage. The moisture level increases when rainfall increases and decreases when no rainfall is detected; the moisture level in the soil increases when rainfall exceeds 10 cm. Figure 12 represents the working of the pump motor, which is turned on when water flow is required and turned off when water is not required. When there is slight rainfall ranging from 0 to 5.9, the motor is on, and the motor is off when rainfall is from 6 to 10. Value 0 refers to the motor's off condition, and value 1 refers to the motor's on condition.
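The on/off mapping shown in Fig. 12 reduces to a simple threshold rule; a one-function Python sketch (the function name is illustrative):

```python
def pump_state(rainfall_cm):
    """Fig. 12 mapping: motor on (1) for slight rainfall in [0, 5.9],
    off (0) for rainfall in [6, 10]."""
    return 1 if 0 <= rainfall_cm <= 5.9 else 0

print(pump_state(3.0))  # 1 -> motor on
print(pump_state(8.0))  # 0 -> motor off
```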


Fig. 10 Graphical representation of temperature levels

Fig. 11 Graphical representation of rain detection

Fig. 12 Pump motor’s ON and OFF condition

Figure 13 gives a better understanding of the experimental setup of the smart agricultural field monitoring system. The solar panel is connected to the battery and the battery is connected to the Arduino board. Then the sensors like DHT11 temperature sensor, soil moisture sensor, rain detection sensor and ultrasonic sensors are connected to the Arduino board. Moreover, LCD and IoT modules are connected


Fig. 13 Experimental setup of smart agriculture field monitoring system

to the Arduino board. Pump motor is connected to the relay and the relay is fixed to the Arduino board. Servo motor is also fixed to the Arduino board.

6 Conclusion and Future Work

In this agricultural field monitoring system, IoT and sensors play a major role. This system is user-friendly and very helpful for farmers to maintain crops without the need for manpower. The results reveal that the IoT-based smart agricultural field monitoring method achieves 86% efficiency. Farmers can operate this system from anywhere and check the real-time occurrences in the field. It is easy to track the records of the field because they are stored in the cloud. Further study is necessary to achieve greater accuracy. Future work would consider using the same technical indices to increase the efficiency to 99%.

References
1. Rezk NG, Hemdan EED, Attia AF, El-Sayed A (2021) An efficient IoT based smart farming system using machine learning algorithms. Multimed Tools Appl 80:773–797
2. Mukherji SV, Sinha R, Basak S, Kar SP (2019) Smart agriculture using internet of things and MQTT protocol. IEEE Xplore, pp 14–16
3. Abraham G, Raksha R, Nithya M (2021) Smart agriculture based on IoT and machine learning. IEEE Xplore, pp 414–419
4. Sahu CDR, Mukadam AI, Das SD, Das S (2021) Integration of machine learning and IoT system for monitoring different parameters and optimizing farming. IEEE Xplore, pp 1–5
5. Osupile K, Yahya A, Samikannu R (2022) A review on agriculture monitoring systems using internet of things (IoT). IEEE Xplore, pp 1565–1572
6. Vadivelu R, Parthasarathi RV, Navaneethraj A, Sridhar P, Muhammad Nafi KA, Karan S (2019) Hydroponics—monitoring and controlling using internet of things and machine learning. In: 1st International conference on innovations in information and communication technology (ICIICT), pp 1–6


7. Araby AA, Abd Elhameed MM, Magdy NM, Abdelaal N, Abd Allah YT, Darweesh MS, Fahim MA, Mostafa H (2019) Smart IoT monitoring system for agriculture with predictive analysis. IEEE Xplore, pp 1–4
8. Hari Pranav A, Senthilmurugan M, Pradyumna Rahul K, Chinnaiyan R (2021) IoT and machine learning based peer to peer platform for crop growth and disease monitoring system using blockchain. IEEE Xplore, pp 1–5
9. Di Martini DR, Tetila EC, Junior JM, Matsubara ET, Siqueira H, de Castro Junior AA, Araujo MS, Monteiro CH, Pistori H, Liesenberg V (2019) Machine learning applied to UAV imagery in precision agriculture and forest monitoring. In: IGARSS, IEEE Xplore, pp 9364–9367
10. Donzia SK, Kim HK (2020) Architecture design of a smart farm system based on big data appliance machine learning. IEEE Xplore, pp 45–52
11. Rubia Gandhi RR, Angel Ida Chellam J, Prabhu TN, Kathirvel C, Sivaramkrishnan M, Siva Ramkumar M (2022) Machine learning approaches for smart agriculture. IEEE Xplore, pp 1054–1058
12. Vidya NL, Meghana M, Ravi P, Kumar N (2021) Virtual fencing using YOLO framework in agriculture field. IEEE Xplore, pp 441–446
13. Danaraj DR, Haris MK, Fatima NS (2022) Smart moisture sensor-based irrigation system. IEEE Xplore, pp 424–429
14. Jacob PM, Suresh S, John JM, Nath P, Nandakumar P, Simon S (2020) An intelligent agricultural field monitoring and management system using internet of things and machine learning. IEEE Xplore, pp 1–5
15. Savvidis P, Papakostas GA (2021) Remote crop sensing with IoT and AI on the edge. In: World AI IoT Congress, pp 0048–0054
16. Rohith M, Sainivedhana R, Sabiyath Fatima N (2020) IoT enabled smart farming and irrigation system. IEEE Xplore, pp 434–439

Smart Vehicle Tracking in Harsh Condition Rakhi Bharadwaj, Pritam Shinde, Prasad Shelke, Nikhil Shinde, and Aditya Shirsath

Abstract In this project, we have tried to develop a system that detects and tracks moving cars on the road using cameras. Detected vehicles are given a specific id, and information is stored by the system, which is further used for the re-identification of the vehicles. The quality of the camera is also important, because bad weather and vibrations can affect its capacity for capturing images and performing operations on them; for this purpose, high-quality cameras with vibration-tolerant lenses can be used. Different datasets are fed to this model so that rain, snow, wind, and night will not affect its detection performance. If this project is done on a large scale, then RGB sensors for collecting information and LIDAR can be used in addition, to further improve efficiency. R-CNN with YOLO makes a good combination for this system and improves accuracy. Our system works in two parts. CNN is used in this model so that it can develop its own neural network, and with more and more practice on different types of inputs it can become better, and its accuracy will also increase with time. Such systems are helpful in today's generation and are still developing and spreading at an enormous rate.

Keywords Open computer vision · Python · Re-identification · RGB sensors · CNN

R. Bharadwaj (B) · P. Shinde · P. Shelke · N. Shinde · A. Shirsath Department of Computer Science and Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, India e-mail: [email protected] P. Shinde e-mail: [email protected] P. Shelke e-mail: [email protected] N. Shinde e-mail: [email protected] A. Shirsath e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_49


1 Introduction

Vehicle tracking and detection is an important field of study in today's generation because of its uses in various tasks. Different concepts and algorithms in image processing are helpful for this task. It can be used to manipulate traffic signals according to the condition of traffic on the road. Other uses include tracking government vehicles, ambulances, or vehicles carrying important documents or bank money. It can also be used for tracking a car used by criminals to run away after committing a crime. When a criminal tries to escape from a crime scene, it becomes difficult to catch him because he may try to escape through a main road full of vehicles, so it becomes hard to detect his vehicle and know his location. To tackle this problem, car detection comes into consideration, and by adding some important components like sensors, it becomes an effective system whose use we can adapt as per requirement. Sometimes modern systems fail because of bad weather conditions, which affect visibility and in turn the overall functioning. The process will continue to work when we increase the number of cameras and make some changes to the program. There are some existing systems that work in the same way, but their accuracy is affected by bad weather, and due to this the whole system fails at its task, so different types of datasets help for this purpose. Such trained models do not need much attention, and they keep increasing their accuracy with more and more inputs. In this way, the overall process works and contributes to the accuracy of the model. This type of system can also be used for other purposes, like car detection and tracking in self-driving cars, tracking their position, and storing that information in a dataset so that it can be used for future purposes. This technology is growing fast and will also contribute to the expansion of artificial intelligence all over the world. As car companies grow, a load comes on the whole system of tolls, number plates, and many other things. The scope of this system is vast, and, in the future, it will be an inevitable part of our day-to-day life. In the future, the scope will increase and the price of this system will decrease, so installing these systems at all places can be helpful.

2 Literature Review

Liu et al. [1] propose the use of a tracklet filter strategy for missed vehicles. Vehicles share similar appearances, so their re-identification becomes a difficult task, and to solve this problem DBTM technology is used. It is based on direction and matches the tracklets across cameras in order to perform the detection. By dividing the camera view into zones, they can determine the direction of a running vehicle. The same vehicle may appear different at different angles, so the size of the dataset used can be increased and the model efficiently trained to increase the accuracy; for this purpose, SCAC technology is used. They merge the local tracklets and use them for expansion. Here a YOLOv5 model is used, trained on the MS COCO dataset. Non-maximum


suppression is used to avoid detecting the same vehicle again and again. The tracker can be improved by using BBox and vehicle re-identification features efficiently. MTMC merges the processes of SCT, vehicle re-identification, and various filters. This model uses the CityFlow dataset. The SCT process uses a JDE filter for vehicle tracking, and the MCT process uses SCAC technology for the same purpose. Herzog et al. [3] mainly focus on providing as flexible and detailed information as possible to the model to train it more effectively. Vehicle detection systems that we are using currently rely on computer vision, so Fabian Herzog's research is mainly focused on training the model more efficiently so it can detect vehicles with more accuracy. This research work [3] additionally adds 3D data to the dataset and uses RGB sensors to obtain semantic segmentation as well. Along with vehicle detection, re-identification is also important, especially in MTMCT (multi-camera) systems, and since using 3D localization and semantic segmentation alone is not very effective, [3] uses the Synthehicle dataset as a replacement for the above terminologies. Using a massive, detailed dataset so that the model becomes accurate is the main approach of [3] in this research work. Li et al. [2] note that detection and tracking of certain things are important nowadays, whether for security or other reasons, and play a crucial role in today's world. Similarly, tracking cars using multiple cameras can be helpful for keeping an eye on them and for other security reasons. Multi-target multi-camera tracking aims to determine, at all times, the position of every vehicle from video streams taken in a multi-camera network. Deep learning is an important tool for this purpose. R-CNN and YOLO are used in this context; each method has its own special character, but the task is similar. First, frames are separated from the video, and the object is detected using object detection algorithms; here, the SSD and MobileNets algorithms are used for this purpose. If a detected object is one among the object classifiers during prediction, then a score is generated; MobileNets uses convolutions based on depth, and SSD gives outputs and confidence levels. Object detection involves detecting the region of interest of an object from a given class of image. Together, these steps play an important role in building this deep learning model. Different datasets for different weather conditions can also be fed to it, so that its accuracy and efficiency can be improved. Chandan et al. [4] describe a system with a significant role in multi-target multi-camera tracking, which aims to determine, at all times, the position of every vehicle from video streams taken in a multi-camera network. Vehicle re-identification (which is a multi-network system) and bounding boxes are used for single-camera tracking, which in turn helps in MTMCT. Algorithms like Mask R-CNN, YOLOv5, Scaled-YOLOv4, and YOLOR are used for object detection in this project, along with the COCO dataset. The bounding box filter removes unnecessary boxes, and re-identification is used to compare cars captured by one camera with another; these are used in the calculation formulas. A hierarchical algorithm is used to get the global identity of cars, and the roads are divided into zones for this purpose. All these algorithms, models, and database management form the backbone of this project.


Tran et al. [5] describe a smart vehicle tracking project built for the AI City Challenge 2022. The work proposes a methodology for vehicle detection, multiple-object tracking (single-camera single-vehicle as well as single-camera multiple-vehicle), and multi-camera vehicle tracking. The thesis presents a step-by-step approach to vehicle tracking: vehicle track prediction, a multi-level detection handler, and lastly multi-target multi-level association. The evaluation opportunities provided by the system were used to verify the working of the final algorithm and to revise the results according to the IDF1, IDR, and IDP metrics. The experimental dataset used 46 cameras overall to record a total of 880 vehicles across 6 different scenarios, with 215.03 min of video in total: 58.43 min of training video, 136.60 min of validation video, and 20.00 min of testing video.

Wu et al. [6] elaborate on city-level multi-camera tracking of vehicles based on space-time appearance features. This MCMVT (multi-camera multi-vehicle tracking) project is basically divided into object detection (OD) and re-identification (ReID). Driving channels at a traffic-signal-level view were analysed, and the allowed zones were monitored digitally. The paper introduces a concept of time variability for adjusting and maintaining the similarity probability among various vehicles; this is part of a time-decay strategy used to calculate the matching-probability decay. Several mathematical components were considered, such as matrix formation for cameras, a zone-gate mechanism, the time-decay strategy, and trajectory post-processing. The experiments used a 3.58-h (215.03-min) video sample collected from 46 cameras spanning 16 intersections in U.S. cities, together with evaluation metrics. Ultimately, the constraints collapse into the MCMVT tracking scheme, including detection, ReID, single-camera tracking, and multi-camera matching [7].

Yao et al. [8] likewise address city-scale multi-camera vehicle tracking based on space-time-appearance features, following a similar pipeline of detection, ReID, single-camera tracking, and multi-camera matching [9–17].


3 Methodology

The proposed system is built using various Python libraries, one of which is OpenCV (Open Source Computer Vision). The Smart Vehicle Tracking System is built from three main components: a detector, a tracker, and a counter. The detector identifies vehicles in the current video frame and returns a list of coloured bounding boxes around them. The tracker uses the bounding boxes to track the vehicles across frames. The detector is re-run periodically to ensure the tracker is still following each vehicle, and the count is incremented by one after a vehicle leaves the frame. Computer vision allows computers to understand and detect specific or unique types of elements in photos and videos. The aim of the system is to plan and manage traffic, control traffic crowding, and help in parking management. For easier and faster evaluation and improvement of the system, videos are taken as input rather than counting vehicles manually or by using pneumatic tubes and piezoelectric sensors. Detection is not the only goal; automatic number-plate recognition, speed detection, and more can also be implemented. The proposed system can track and count multiple vehicles moving in different directions or across different crossed-zone areas. Internally, the project is divided according to weather and time conditions as the situation demands: a day segment, a night segment, and snowy, foggy, and dusty conditions. The day algorithm is explained below.

In this system (Fig. 1) the following libraries are used:

1. math: Python provides this module to make various mathematical computations easier and quicker to code. Along with basic arithmetic (multiplication, addition, division, and subtraction), it supports power functions, squares and square roots, trigonometric ratios, etc. It also contains the values of some mathematical constants, such as pi and Euler's number.
2. NumPy: a Python library that helps perform various calculations on arrays, matrices, etc. It helps manipulate a wide range of data in simple blocks of code; values can be stored, changed, or generated with high-level operations. It is conventionally imported as import numpy as np, which simply aliases the package name for brevity.

Fig. 1 Flowchart of background subtraction


3. cv2: OpenCV-Python is a library of Python bindings designed to solve computer vision problems. The cv2.imread() method loads an image from the specified file; if the image cannot be read (because of a missing file, improper permissions, or an unsupported or invalid format), it returns an empty matrix.
4. datetime: this module combines various aspects such as date, time, year, minute, second, microsecond, and millisecond. We use datetime, now(), and time(). The now() result carries attributes such as year, month, day, and hour. Using now(), the time for day and night is set, and as the time changes, the program shifts from one code path to another: when the hour is between 11 and 24, the code shifts to the night program; otherwise, it shifts to the day program.

The necessary packages are imported and the network is initialized. Frames are then read from a video file for detection. A detected vehicle is enclosed in a green rectangle drawn with the cv2.rectangle function; the centre of the box is marked with the cv2.circle function, and the counting line is drawn with the cv2.line function. The algorithm used here is BackgroundSubtractorMOG(), which subtracts or erases the background other than the vehicles. When the centre of a rectangular box crosses or passes the horizontal line, the crossing is detected and the vehicle count increases by one.

The flow of the architecture (Fig. 2) goes like this (a minimal code sketch follows this list):

1. Load the video and run the program.
2. Read the values from the controller panel.
3. Process the frame with the values read.
4. From the current frame, a [Blob] (blob.py) object is created for each bright spot in the frame.
5. Eliminate uninteresting blobs.
6. Track the movement of each blob from entering until leaving the frame.
7. Match pairs of blobs together as a car.
8. Output the car detected.
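As referenced above, the day segment condenses into a short OpenCV sketch. This is a hedged illustration of the described flow rather than the authors' exact program: the video file name, counting-line position, and minimum blob area are placeholder assumptions, and BackgroundSubtractorMOG() here comes from the opencv-contrib package.

```python
import cv2

VIDEO = "traffic_day.mp4"   # illustrative file name
LINE_Y = 400                # y-coordinate of the horizontal counting line
MIN_AREA = 1500             # ignore blobs smaller than this (noise)

cap = cv2.VideoCapture(VIDEO)
# BackgroundSubtractorMOG lives in opencv-contrib-python (cv2.bgsegm);
# cv2.createBackgroundSubtractorMOG2() is a core-OpenCV alternative.
subtractor = cv2.bgsegm.createBackgroundSubtractorMOG()
count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    mask = subtractor.apply(blur)                # foreground = moving vehicles
    mask = cv2.dilate(mask, None, iterations=2)  # merge fragmented blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cv2.line(frame, (0, LINE_Y), (frame.shape[1], LINE_Y), (255, 0, 0), 2)
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
        # A full tracker remembers each blob across frames; here a centroid
        # falling inside a narrow band around the line approximates the
        # "crossing" event that increments the counter.
        if abs(cy - LINE_Y) < 3:
            count += 1
    cv2.imshow("vehicles", frame)
    if cv2.waitKey(30) & 0xFF == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
print("vehicles counted:", count)
```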

4 Result

At a specific angle, every vehicle is tracked by first detecting its presence inside the green rectangular box using the model trained in the system. When the midpoint of that rectangle passes the horizontal blue line shown in Fig. 3, the counter variable in the program increases by one unit. The model tracks all coming and going vehicles on both the right and left lanes of the road, which makes it useful and time-efficient.

Vehicles tracked in the daytime are displayed on two output screens: one coloured regular screen and one grayscale screen. In the grayscale console output in Fig. 4, the background appears black and the vehicles in motion are detected in white.


Fig. 2 Architectural representation of flow of project

Fig. 3 Day time detection & counting of vehicles


Fig. 4 Day time detection in gray scale image

Grayscaling in Python is used for dimension reduction, model complexity reduction, and as a prerequisite for many algorithms. In short, grayscaling is the process in which colour images (RGB, CMYK, HSV, etc.) are converted to various shades of gray according to the concentration and dispersion of colour. First, to detect a car, each video frame is read using the read function and stored in a frame variable. Using COLOR_BGR2GRAY, the whole image is converted to grayscale, and then a Gaussian blur is applied. Using this filter the object is detected; for further refinement, the whole frame is dilated using the dilate function and the moving objects are separated from the frame. In this way the objects in the grayscale image become more distinct, and after a noise-cancellation step the object can be detected, frame by frame.

In Fig. 5 the headlights are being detected by the program. These headlights are not yellow; they have a sparkling bright colour, and normal functions do not detect them, so blob detection is imported into the program. It detects the headlights, and through them the car is detected. While driving at night, cars use headlights, and the distance between them becomes a unique feature for detection; the distance is set as a parameter for detecting cars. The system works the same way for the backlights of a car: the backlights are red, so their red colour becomes a unique feature for detection, and the distance between the lights again serves as a criterion. In this picture, since it is night, the system switches from day mode to night mode and the headlights are detected by the system. The cars themselves are not visible in this case, but headlights become an effective means of detection. A separate mechanism sets the criterion for the distance between headlights so that the system does not confuse the headlights of one car with those of an adjacent car.
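The headlight-pairing logic described above can be sketched with OpenCV's SimpleBlobDetector. This is an illustrative reconstruction, not the paper's code; the brightness threshold and the distance limits stand in for the values set on the controller panel.

```python
import cv2

# Blob detector tuned for bright spots (headlights) in a dark frame.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255            # look for bright blobs
params.filterByArea = True
params.minArea = 30               # cf. "Headlight Min Area in Pixels"
detector = cv2.SimpleBlobDetector_create(params)

MAX_DX = 120   # cf. "Headlight max horizontal distance" (assumed value)
MAX_DY = 15    # cf. "Headlight max vertical distance" (assumed value)

def count_cars(frame_bgr):
    """Detect headlight blobs in one frame and pair them into cars."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Keep only very bright pixels before blob detection.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    pts = [kp.pt for kp in detector.detect(bright)]
    cars, used = 0, set()
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if i in used or j in used:
                continue
            dx = abs(pts[i][0] - pts[j][0])
            dy = abs(pts[i][1] - pts[j][1])
            # Two bright spots roughly level with each other, a plausible
            # headlight-width apart, are treated as one car.
            if dy < MAX_DY and 0 < dx < MAX_DX:
                used.update((i, j))
                cars += 1
    return cars
```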


Fig. 5 Night time vehicle detection & counting of vehicles

This plays an effective role in tackling that error. The detection of vehicles at night is controlled through the controller screen provided along with the program output (Fig. 6). This control screen is used to set the size of the output screen, the BGR-to-HSV pixel switching, and so on.

Fig. 6 User-control interface for night tracking o/p


The variables of the controller are as follows (a minimal sketch of such a panel follows this list):

1. BGR-HSV: shifts between the BGR and HSV modes; dragging the slider toggles between the two. HSV is more effective than BGR, as it uses the intensity of the light instead of raw pixel values (HSV stands for hue, saturation, value).
2. BGR thresholds: detects the headlights with more accuracy based on the intensity of light. The slider toggles between 0 and a maximum of 10; its main application is to categorize the light according to brightness.
3. HSV thresholds: the same as the BGR thresholds. If the BGR-HSV slider is set to HSV, this threshold comes into action; otherwise it does not affect the algorithm in any way.
4. Stabilize Display Count: makes the display stable so that the programmer can see the count of cars; toggles between 0 and 1.
5. Car Count on Average: holds the display for a while when Stabilize Display Count is active.
6. Headlight Min Area in Pixels: chooses the threshold area so that the software considers a suitable area around the headlight.
7. Show Blob Detected: toggles between 0 and 1 like a switch, providing the option to display the blobs or not; in this project it is off by default.
8. Car Direction Vertical-Horizontal: changes the orientation along which the camera detects the vehicles, i.e., vertical or horizontal.
9. Headlight max horizontal distance: adjusts the allowed horizontal distance between a vehicle's headlights; this variable helps detect the car with more accuracy.
10. Headlight max vertical distance: adjusts the allowed vertical distance between a vehicle's headlights, likewise improving accuracy.
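As noted above, such a controller panel maps naturally onto OpenCV trackbars. The sketch below is illustrative only; the window name, ranges, and default values are assumptions rather than the project's actual settings.

```python
import cv2

def nothing(_):
    pass  # trackbar callback; values are polled in the processing loop

cv2.namedWindow("controls")
cv2.createTrackbar("BGR-HSV", "controls", 0, 1, nothing)          # 0=BGR, 1=HSV
cv2.createTrackbar("BGR threshold", "controls", 5, 10, nothing)
cv2.createTrackbar("HSV threshold", "controls", 5, 10, nothing)
cv2.createTrackbar("Stabilize Display Count", "controls", 0, 1, nothing)
cv2.createTrackbar("Show Blob Detected", "controls", 0, 1, nothing)
cv2.createTrackbar("Headlight Min Area (px)", "controls", 30, 500, nothing)
cv2.createTrackbar("Headlight max horiz dist", "controls", 120, 400, nothing)
cv2.createTrackbar("Headlight max vert dist", "controls", 15, 100, nothing)

# Inside the frame-processing loop, the current values are read back:
use_hsv = cv2.getTrackbarPos("BGR-HSV", "controls") == 1
min_area = cv2.getTrackbarPos("Headlight Min Area (px)", "controls")
```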


After the programming implementation, the camera system used in the project needs to be specified in terms of resolution, camera angle, and camera position. As this model must be deployed for counting and detecting cars, main roads and traffic signals are good spots to install the cameras, but here the problem of angles and distance comes into consideration. The position of the camera is important because the model is trained for a specific angle, and a slight change can lead to problems; so, based on the images captured, the angle and distance of the cameras are set. Take D as the distance, lp as the image's length in pixels, and lr as the real length of the object in cm; H is the vertical scene, Z is the zoom, and W is the horizontal scene. With di the distance of the image from the camera lens, do the distance of the object from the lens, and lh and lw the height and width of the image:

$$Z = \frac{d_i}{d_o}, \qquad H = \frac{l_p \times l_h}{l_r}, \qquad W = \frac{l_p \times l_w}{l_r}$$

From these, the vertical and horizontal viewing angles are

$$\text{Vertical angle} = 2\tan^{-1}\!\left(\frac{H}{2D}\right), \qquad \text{Horizontal angle} = 2\tan^{-1}\!\left(\frac{W}{2D}\right)$$

First, the value of the zoom Z is obtained by dividing the image distance by the object distance. The values of H and W are obtained by dividing lp by lr and multiplying by lh and lw, respectively. These values of H and W then give the appropriate angles needed by the camera for better optimization and detection performance (Fig. 7). In this way the whole requirement for better detection and tracking of cars on the road is met; this technique plays a significant role. Figure 8 defines the camera vision.
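The placement formulas translate directly into a small helper function. This is a straightforward transcription of the equations above; the function name and argument order are chosen here for illustration.

```python
import math

def camera_angles(D, lp, lr, lh, lw, di, do):
    """Return (zoom, H, W, vertical_angle, horizontal_angle), angles in degrees.

    D: camera distance; lp: image length in pixels; lr: real object length;
    lh, lw: image height and width; di, do: image and object distances
    from the lens (all lengths in consistent units).
    """
    Z = di / do                    # zoom: image distance over object distance
    H = (lp * lh) / lr             # vertical scene
    W = (lp * lw) / lr             # horizontal scene
    vert = 2 * math.degrees(math.atan(H / (2 * D)))
    horiz = 2 * math.degrees(math.atan(W / (2 * D)))
    return Z, H, W, vert, horiz
```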


Fig. 7 Camera position describing camera angle

Fig. 8 Individually defining camera vision

5 Limitations

The proposed project has few limitations, but some remain. If two cars have the same appearance and colour, the model may confuse them during detection; if there are too many cars in the frame, some may not be detected, e.g., a small car hidden behind a big one; and a car may never enter, or may exit, the camera's field of view. Detection can also be hindered by certain atmospheric conditions, such as heavy fog or high levels of air pollution, which can make some cars appear dim or blurry to the camera.


6 Future Scope

By tracking specific vehicles at a given location, we can tell the people waiting at a stop which vehicle is coming and at what time it will arrive. The accuracy of vehicle detection can be further enhanced by a technique for detecting small objects in highway scenes: the highway road surface area is extracted and divided into a remote region and a proximate region for the purpose of detecting vehicles.

7 Result and Discussion

The aim of this project is to detect, track, and count vehicles under varying day, night, and weather conditions; this goal was achieved, and the desired output was obtained for both the day and night segments.

Acknowledgements We would like to thank our Computer Department and Vishwakarma Institute of Technology for providing us with this opportunity to build and explore new domains. Further, we would also like to thank our HOD and our project guide for helping and guiding us throughout our project and giving us the opportunity to publish this paper.

References

1. Liu C, Zhang Y, Luo H, Tang J, Chen W, Xu X, Wang F, Li H, Shen YD (2022) City-scale multi-camera vehicle tracking guided by crossroad zones
2. Li F, Wang Z, Nie D, Zhang S, Jiang X, Zhao X, Hu P (2022) Multi-camera vehicle tracking system for AI city challenge
3. Herzog F, Chen J, Teepe T, Gilg J, Hörmann S, Rigoll G (2022) Synthehicle: multi-vehicle multi-camera tracking in virtual cities
4. Chandan G, Jain A, Jain H (2022) Real time object detection and tracking using deep learning and OpenCV
5. Tran DN, Pham LH, Jeon HJ, Nguyen HH, Jeon HM, Tran TP, Jeon JW (2022) A robust traffic-aware city-scale multi-camera vehicle tracking of vehicles
6. Wu M, Qian Y, Wang C, Yang MA (2021) A multi-camera tracking system based on city-scale vehicle Re-ID and spatial-temporal information
7. Dixon M, Jacobs N, Pless R (2018) An efficient system for vehicle tracking in multi-camera networks. Washington University, St. Louis, MO, USA
8. Yao H, Duan Z, Xie Z, Chen J, Wu X, Xu D, Gao Y City-scale multi-camera vehicle tracking based on space-time-appearance features


9. Abass HK, Al-Saleh AH, Al-Zuky AA (2015) Estimate mathematical model to calculate the view angle depending on the camera zoom
10. Addala S (2020) Vehicle detection and recognition. Dept of CSE, Lovely Professional University, Punjab, India
11. Wang Z, Zhan J, Li Y, Zhong Z, Cao Z (2022) A new scheme of vehicle detection for severe weather based on multi-sensor fusion
12. Yaghoobi Ershadi N, Menéndez JM, Jiménez D (2018) Vehicle detection in different weather conditions: using MIPM
13. Chen X-Z, Chang C-M, Yu C-W, Chen Y-L (2020) A real-time vehicle detection system under various bad weather conditions based on a deep learning model without retraining. National Taipei University of Technology, Department of Computer Science and Information Engineering
14. El-Khoreby MA, Abu-Bakar SA (2017) Vehicle detection and counting for complex weather conditions
15. Hassaballah M, Kenk MA, Muhammad K, Minaee S (2021) Vehicle detection and tracking in adverse weather using a deep learning framework
16. Abdullah MN, Ali YH (2020) Vehicles detection system at different weather conditions. Department of Computer Science, University of Technology, Baghdad, Iraq
17. Ristani E, Tomasi C (2018) Features for multi-target multi-camera tracking and re-identification. Duke University, Durham, NC, USA

Storage Automation Using the Interplanetary File System and RFID for Authentication Paul John, Anirudh Manoj, Prathik Arun, Shanoo Raghav, and K. Saritha

Abstract Systems that were built previously to handle medical information used local storage, disregarding any means of backing up data. Systems like these are unreliable and can be responsible for data corruption as well as the loss of crucial information. Along with the previous concern, doctors cannot access this crucial information in times of emergency if the patient has been admitted to another hospital because the patient information has been stored locally on a different server. Decentralized storage is an approach that has not been successfully implemented in the market yet. This work involves implementing an Interplanetary File System (IPFS) network to store and retrieve healthcare documents when required. By using an interactive and convenient user interface, a quick and responsive application to securely store these documents has been created. Since the convenience of patients is of the utmost priority as it is a factor of time and effort, the patients are connected to the internet via Radio Frequency Identification (RFID) technology, which allows for quick authentication as well as fully automates the protocol of manual registration every time a patient visits a medical establishment. The nodes would further require an RFID scanner on which the user can scan his or her card that contains a Unique Identification Number, for authentication. IPFS uses strong cryptographic hashing to encrypt the documents. The entire process of storing medical records and documents has been automated, and using such technologies makes it feasible and secure at a high level, which also complies with the healthcare storage standards. Keywords RFID · Interplanetary file system · Cryptographic hashing · SHA-256 · Blockchain · Merkle DAG · Peer-to-Peer · Kademlia

P. John (B) · A. Manoj · P. Arun · S. Raghav · K. Saritha Department of Computer Science and Engineering, PES University, Bangalore, India e-mail: [email protected] K. Saritha e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_50


1 Introduction

The use of Radio Frequency Identification (RFID) is becoming more common these days. It has proven successful in various areas, including retail, manufacturing, and logistics. While not technically a new technology, it has progressed significantly. RFID is a powerful technology for businesses today for data collection, asset tracking, and equipment movement; it is also used in supply chain management, inventory management, and e-passports. RFID is based on a tag system: an item is marked with a particular tag, which is then followed via radio waves. The reader, the antenna, and the transponder are the three components of RFID. The antenna transmits a radio-frequency signal that allows the RFID tag to communicate with it. When the RFID tag passes across the scanning antenna's frequency field, it detects the activation signal and records the data received by the antenna.

The Interplanetary File System (IPFS) is a distributed, peer-to-peer system for storing and accessing files, websites, applications, and data, often used alongside blockchain systems. It is a distributed file-system protocol that allows computers all around the world to store data and behave as nodes of a giant peer-to-peer system. Every file added to IPFS is given a unique address derived from a hash of the file's content. This unique address is also known as a Content Identifier (CID); it combines the hash of the file and a unique identifier for the hash algorithm used into a single string. IPFS currently uses SHA-256 by default [14], which produces a 256-bit output that is encoded with Base58. Each new piece of data can contribute to an up-to-date, clear overview of a patient's health, allowing physicians, pharmacies, and other healthcare professionals to better advise patients.
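To make the content-addressing idea concrete, the sketch below derives an IPFS-style CIDv0 from raw bytes: the SHA-256 digest is wrapped in a multihash header and base58btc-encoded, yielding the familiar "Qm..." identifiers. This illustrates the encoding only; real IPFS first chunks files into a Merkle DAG, so the CID of an added file is generally not the bare hash of its bytes.

```python
import hashlib

# base58btc alphabet (no 0, O, I, l, to avoid visual ambiguity)
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # Each leading zero byte is encoded as a leading '1'.
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def cid_v0(content: bytes) -> str:
    digest = hashlib.sha256(content).digest()
    # Multihash header: 0x12 = sha2-256, 0x20 = 32-byte digest length.
    return base58(b"\x12\x20" + digest)

print(cid_v0(b"hello healthcare"))   # identical content -> identical address
```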

2 Literature Survey/Related Works

In this section, various operational systems for storing healthcare documents are analyzed, along with the feasibility of implementing a new structure with greater convenience and improved security standards.

2.1 RFID

RFID technology consists of a combination of tags and readers. The tags store and transmit data to readers using radio waves. The readers gather data from the different tags and relay it back to the server for further analysis and processing [1, 2]. The system serves the purposes of identification, monitoring, authentication, and alerting through near physical contact, which provides a safe and


secure method of authentication. In other words, RFID facilitates automatic identification. By utilizing this concept, the process of authenticating patients upon visits or during unforeseen circumstances can be automated. RFID technology can be used to retrieve medical data quickly and accurately [3]. It offers enhanced security, resource utilization, and improved medical processes while reducing cost and time consumption. The capacity to validate using RFID technology can decrease medical mistakes, increase efficiency, and aid in the creation of essential documents for administrative and audit purposes. Validation is an efficient means of assuring quality in a healthcare context; the most crucial validating role is to ensure that the patient being treated is, in fact, the correct patient and that the therapy about to be administered is suitable [4]. Studies exploring the association between RFID usage and patient safety find that patient safety is enhanced while expenses and medication mistakes are significantly reduced [5], and that RFID technology can assist nurses in promptly identifying patients and their associated medications [6].

2.2 Cloud

Cloud storage is a paradigm of cloud computing that allows data and files to be stored on the internet via a cloud computing provider, which may offer access over the public internet or a dedicated private network connection. It has been widely adopted as online storage virtualization by industries all around the globe and holds a tremendous amount of data, both public and sensitive [7]. However, since the data is stored digitally on the cloud, consumers may lose control over it, raising privacy and security concerns: users may be unsure what will happen to their data or whether cloud storage companies will keep it safe or use it for their own gain. Cloud computing combined with blockchain technology can be used to develop a protected file storage system [8] that applies file manipulation and Advanced Encryption Standard methods, through which a single file is split and stored in blocks. However, accessing files is difficult due to block-size restrictions and the time it takes to construct a block, so scalability is limited. The immutability of files is both a benefit and a limitation [9], and the increased time consumption is unsuitable for the medical field, where every second is crucial.

2.3 Blockchain

Blockchains are tamper-evident and tamper-resistant digital ledgers implemented in a distributed fashion, usually without a central authority. At a basic level, they enable a community of users to record transactions in a shared ledger within the community such that, under normal operation of the blockchain network, no transaction can be


changed once published [10]. The integration of blockchain with other developing technologies in health care is examined, in addition to the basic structure of blockchain and its application possibilities. Paper [11] discussed how healthcare services can be improved using blockchain technology in a decentralized, tamper-proof, transparent, and secure manner. To accomplish this, detailed technical studies were conducted using a theoretical approach, and a framework was described. The application of blockchain in medical scenarios was then reviewed and divided into three segments based on the practical properties of the blockchain. However, even blockchain-based systems come with multiple limitations. Once a provider has uploaded patient data, they may possess it permanently, even if the patient no longer wants them to. Cost and complexity can have negative consequences for healthcare stakeholders. Another technical disadvantage of blockchain is that it is not ideal for data with high temporal resolution, and it has issues handling multi-dimensional data such as complex text, images, and graphs [12]. As previously stated, the time consumed creating blocks, as well as accessing medical records by traversing each block, becomes a major impediment [9]. Its high demand for storage space and bandwidth to synchronize data with the network prevents many nodes from joining the network [13].

2.4 IPFS

IPFS is a distributed system, a peer-to-peer network for storing and accessing files, websites, applications, and data. Content stored in an IPFS network is accessible through peers located anywhere in the world that might relay information, store it, or do both. IPFS finds what is asked for by tracking the content address of the stored data rather than its actual location, and provides a high-throughput content-addressed block storage model with content-addressed hyperlinks [14]. The same data has the same hash in IPFS, which removes redundant data and maintains its exclusivity; therefore, the IPFS hash of the data stored by each node in the blocks is the same, which maintains the consistency of the data across nodes. At the same time, this scheme removes the reliance on the number of full nodes in the network, places no restrictions on the type of transaction, and retains all transaction data, allowing for the traceability of the blockchain's history [13]. Data can also be stored permanently in the system. This shows how refined the technology is in terms of security and throughput, and it supports high-capacity storage. The implementation of a Merkle DAG has been essential to IPFS, because content addressing and tamper resistance are integral to its success. It adopts a Git-like data structure, fitting the Git object model on top of the Merkle DAG. IPFS also defines several objects to model a versioned file system above the Merkle DAG; since splitting large files into independent blocks is hard, several customizable alternatives are provided to users [15]. Because using a bare hash of the content as the address of an object can be hazardous, mechanisms are also provided to deal with this problem. Most of


IPFS's networking functionality is handled by the reusable networking library libp2p (originally part of the IPFS project); IPFS is still the main use case for libp2p. IPFS uses a Kademlia-based distributed hash table (DHT), which provides a mechanism for efficiently locating the files stored in a peer-to-peer network [16]. Each node in the IPFS network is uniquely identified by its node ID, which is the SHA-2 hash of its own public key. Records of data are indexed and located using keys; records in the distributed hash table of IPFS are primarily lists of data providers for the data stored in IPFS, and the keys to these records are essentially the addresses of the data items [17]. IPFS is considered a powerful tool on which multiple blockchains are built; it provides a truly distributed peer-to-peer system that is highly scalable and secure.
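The XOR metric underlying the Kademlia DHT mentioned above is simple enough to show directly. This toy sketch, with SHA-256 stand-ins for the hash-of-public-key node IDs, illustrates how "closeness" between a key and peers is computed.

```python
import hashlib

def node_id(pubkey: bytes) -> int:
    """Derive an integer node ID from a (stand-in) public key."""
    return int.from_bytes(hashlib.sha256(pubkey).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia distance: bitwise XOR of the two IDs, compared as integers."""
    return a ^ b

key = node_id(b"some content key")
peers = [node_id(bytes([i])) for i in range(5)]
# The peer whose ID XORs to the smallest value is "closest" to the key
# and is asked first when locating providers of that content.
closest = min(peers, key=lambda p: xor_distance(p, key))
```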

3 Proposed Methodology

3.1 Introduction

A system is built that reads a patient's unique ID stored on an RFID card. Once the ID is read, the doctor should be able to view existing records as well as add new records to the database network. The methodology is divided into:

• Web Application Development
• Storage and Network Setup
• Authentication Action

Web Application Development: The web application can be developed using ReactJS, allowing a doctor to sign up and log in. Once the doctor logs into his account, he should be able to access a patient's folder, provided the patient has scanned his RFID card. To that end, a page is needed to read the patient's Unique Identification (UID) value. Furthermore, when access to the patient's folder is granted, the page should display all the details of the patient, alongside an option to add another file as well as options to open the pre-existing files.

Storage and Network Setup: Files are to be stored on a distributed network, for which IPFS is a viable candidate. Fleek offers a service that enables uploading and pinning files to IPFS; Fleek can also be used to host websites on IPFS and provides a public gateway, allowing web browsers to access content via IPFS. On uploading files to Fleek, the files are divided, hashed, and then distributed among the peers in the Fleek network. Firebase storage is used to hold the general information of the user along with the hashes of their files stored on IPFS; Firebase is chosen for its high security and ease of use.

Authentication Action: RFID acts as a form of authentication, as it contains a unique ID pertaining to a given patient. This ID would be encoded in an RFID card using


a written code. Once the ID has been written into an RFID card, it is also stored in a database. An RFID reader should be connected to a node to read the value from the card, and a comparison is made with the value in the database to grant access to the patient's folder. The RFID card and reader are programmed to write and read values. A connection is made as per Fig. 1 using jumper cables and a breadboard.

Fig. 1 Circuit diagram for RFID

4 Implementation

As mentioned in the proposed methodology, the implementation is split into an authentication sector built around RFID; a web application for doctors to sign up, log in, read the RFID, and insert and view patient documents; and a storage sector that uses an IPFS network for storing and retrieving files together with a Firestore database for storing client details.

4.1 Authentication Sector

The authentication application of the product uses an MFRC522 sensor module, which acts as the RFID scanner; a NodeMCU, a micro-controller unit that can connect devices and transfer data over Wi-Fi; and a MIFARE


Classic 1K RFID tag, which operates at a frequency of 13.56 MHz and contains a mechanism to segregate its memory into different segments, which is ideal for high-volume transactions; the tag also includes read and write capabilities. Figure 2 shows the fully implemented RFID scanner.

Fig. 2 Operable RFID scanner

This module can read and write data within a range of 10 cm and connects to the Firestore real-time database. Three external libraries are imported: MFRC522 for the sensor module, Firebase Arduino Master for Firebase functions, and ArduinoJson, a dependency of the Firebase library. Two programs covering the RFID functionalities have been written in the Arduino IDE and uploaded to the NodeMCU:

• Writing into RFID: the patient's UID is written into a memory segment of the MIFARE 1K card (block 2).
• Reading from RFID: Wi-Fi and Firebase authentication, followed by reading the UID value from the memory segment (block 2) and updating the Firestore real-time database.

RFID Algorithm

1. Writing into RFID
• RFID card/tag scanned.
• Card/tag UID (original and immutable) and type detected.
• Patient UID written into a memory segment (block 2).
• Card/tag authenticated.

2. Reading from RFID
• Wi-Fi authentication (connection to a Wi-Fi network).
• Connection to the Firestore real-time database.
• RFID card scanned.
• Card/tag authenticated.
• Value read from card/tag memory segment (block 2).
• Value updated on the real-time Firestore database.
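The paper implements the algorithm above on a NodeMCU with the Arduino MFRC522 library. As an analogous illustration only, the sketch below runs the same write-then-read flow with the common Raspberry Pi mfrc522 Python package (SimpleMFRC522); the UID string is the example shown in the Results section.

```python
from mfrc522 import SimpleMFRC522  # Raspberry Pi MFRC522 helper library
import RPi.GPIO as GPIO

reader = SimpleMFRC522()
try:
    # Writing into RFID: store the hospital-assigned patient UID on the tag.
    reader.write("PES2UG19CS043")        # example UID from the paper's Fig. 3

    # Reading from RFID: returns the immutable card id and the stored text;
    # in the real system the text is then pushed to the Firebase Realtime
    # Database for the web application to pick up.
    card_id, patient_uid = reader.read()
    print(card_id, patient_uid.strip())
finally:
    GPIO.cleanup()
```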

4.2 Storage Sector

Fleek IPFS and Firestore are utilized to store files and patient details, with accounts created on both Fleek and Google Firebase. Fleek makes it possible to manage an IPFS network and add files to it; these files are stored as hashed values on the network, and the hashes are also added to the Firestore database. Google Firebase gives access to hosted services such as Firestore and the Realtime Database. The patient information is added to the database manually, and entries for files uploaded to the IPFS network are added upon insertion. Only the hash values of the uploaded files are kept in the database for security purposes, so they cannot be tampered with. The purposes of each service utilized in this sector are as follows (a code sketch of the storage flow follows this list):

• Cloud Firestore stores all the patient information. It is a NoSQL document database that lets applications easily sync, store, and query data for mobile and web apps on a global scale.
• Firebase Authentication handles the authentication process, building secure sign-in while improving the onboarding experience for end users.
• The Realtime Database syncs the RFID card value, using real-time processing to handle workloads whose state is constantly changing.
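As referenced above, the storage flow can be sketched under stated assumptions: the paper pins files through Fleek's hosted service and uses Firebase from the web application, whereas this illustration substitutes the standard /api/v0/add endpoint of a local IPFS (Kubo) daemon and the Firebase Admin SDK for Python. The collection name, field names, and service-account key file are hypothetical.

```python
import requests
import firebase_admin
from firebase_admin import credentials, firestore

def ipfs_add(path: str, api: str = "http://127.0.0.1:5001") -> str:
    """Add a file to IPFS via a local Kubo daemon and return its hash."""
    with open(path, "rb") as f:
        resp = requests.post(f"{api}/api/v0/add", files={"file": f})
    resp.raise_for_status()
    return resp.json()["Hash"]               # e.g. "Qm..." (CIDv0)

# Hypothetical service-account key; Firestore keeps patient info + file hashes.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

file_hash = ipfs_add("gg.pdf")               # file name from the Results section
db.collection("patients").document("PES2UG19CS043").set(
    {"files": {"Anirudh-Xray": file_hash}},  # file key -> IPFS hash
    merge=True,                              # keep previously stored fields
)
```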

4.3 Web Application Sector

ReactJS is used for the front-end development of the application. The UI consists of:

• a signup and login page;
• a page to read the RFID card value from the Realtime Database;
• a dashboard with the patient details, links to each file already present in the Firestore database (stored as hash values), and an option to add patient files to the IPFS network.

The components used to create the working web application are as follows:

• The Fleek Storage API works as a public gateway for IPFS storage, as Fleek provides secure storage for files on IPFS and distributes them for web applications.
• Bootstrap is used to create the UI; it comes with a CSS framework that also improves responsiveness.
• React-pdf provides a React component API for opening PDF files and rendering them using PDF.js, an open-source JavaScript library for rendering PDF files.


5 Results

The application functions as expected according to the requirements set beforehand. The patient UID is successfully written into a memory segment of the RFID card, and upon scanning, the RFID scanner correctly authenticates the card and reads the UID from the memory segment into the Firebase real-time database. The web application is stable: after the doctor has logged into his account, the entry page reads the value from the Firebase real-time database (the value changes whenever a new RFID card is scanned). If the UID exists in the Firestore database, the patient information is retrieved and links are formed to view the pre-existing files. There is an "add file" option for every patient, through which doctors can add PDF files to the patient's folder on the IPFS network. Fleek stores each file as a hashed value, and this hash is stored in the Firestore database for retrieval. Opening a file leads to a new browser tab with the opened PDF file (retrieved from the IPFS network). The following figures illustrate the functioning of the application.

Figure 3 illustrates the writing of a patient's UID into the RFID card/tag. The initial line is the UID chosen by the hospital to assign to the patient (PES2UG19CS043). The card is then scanned and detected by the system; after authentication, the data is written into block 2 of the card successfully. Once the RFID card is scanned during general check-ups or emergencies, the data from the card's memory segment is read, authenticated, and sent to the Firebase Realtime Database, as shown in Fig. 4. Figure 5 is the first page visible in the React application; it is meant for doctors to log in, with a signup option for creating a new account. Only verified doctors can obtain access to patient details. In Fig. 6, it can be observed that the UID of the patient is updated in the Realtime Database (uid-00000002). After scanning, when the doctor clicks the read-RFID button as in Fig. 7, the patient's UID is filled into the patient ID textbox, from which the patient details can be accessed by the doctor.

Fig. 3 Writing into RFID card


Fig. 4 Reading from an RFID card

Fig. 5 Login page for doctors

Fig. 6 UID value detected post scanning in realtime database

Fig. 7 UID value read from realtime database post scanning


Figure 8 shows the records of patient information stored in the Firestore database: the patient's general information along with the files' hash values in their respective folders. Figure 9 shows the patient details displayed in the web application. Clicking the upload file button shown in Fig. 9 directs to the upload page shown in Fig. 10, where a file from local storage (in this case gg.pdf) and a file key identifying the PDF document (Anirudh-Xray) are selected. Upon clicking the Upload File button shown in Fig. 10, the file is uploaded to the IPFS network. It can be observed in Fig. 11 that Anirudh-Xray has been added to the Fleek IPFS storage. After uploading the file to the IPFS network, on returning to the dashboard screen, a file link labelled with the file's key can be found.

Fig. 8 Patient details stored in firestore database

Fig. 9 Patient details displayed in the web application


Fig. 10 File being uploaded onto the IPFS network

Fig. 11 Fleek IPFS storage

Fig. 12 Web application post uploading a file

Clicking the link shown in Fig. 12 opens the PDF document in a new browser tab, as in Fig. 13 (the file is opened by accessing the IPFS network through its file hash stored in Firestore and retrieving the file).

6 Conclusion and Future Work

A system has been constructed to handle medical records, acting as a fully functional and efficient means of storing and accessing patient information. RFIDs


Fig. 13 Viewing patient file as a pdf document

act as a creative authentication system by allowing only doctors to read and edit patient information upon scans, thereby preventing anyone else from accessing this information. Emergency cases can be handled swiftly and efficiently, as the RFID card can be scanned to instantly retrieve patient information and the doctors can act accordingly, instead of waiting for an external party to fill out the critical patient's information, thereby saving crucial time. The IPFS network setup allows files to be stored in a distributed fashion, and hashing these files provides an additional layer of security. The files are immutable: each file generates a different hash value, and since these values are monitored, the files cannot be read if a value has been interfered with. The system meets both functional and non-functional requirements and can be considered a better alternative to the current centralized, local storage structure of medical records by providing greater security, effectiveness, and availability. The proposed system could be further improved by integrating a server that runs continuously, allowing for a much grander scale of records and less load on a single node. Another encryption standard should be implemented to improve client-side security, and the RFID write process can be fully automated by gaining access directly from the vendor. The future scope of the project involves applying the methodology to other industries, which would benefit both customers and the industry: convenience and security are provided to customers through RFID and IPFS, with customers carrying just one card through which all their data can be accessed, while storage automation reduces manual labor and cost for the industries. Since there are no existing decentralized storage automation systems in the medical field, a comparative analysis is not possible; the proposed methodology serves as the foundation for the future work mentioned above.


References 1. Trautwein D, Raman A, Tyson G, Castro I, Scott W, Schubotz M, Gipp B, Psaras Y (2022) Design and evaluation of IPFS: a storage layer for the decentralized web. In: Proceedings of the ACM SIG-COMM 2022 Conference (SIGCOMM ’22). Association for Computing Machinery, New York, NY, USA, pp 739–752. https://doi.org/10.1145/3544216.3544232 2. Nambiar AN (2009) RFID technology: A review of its applications. In: Proceedings of the world congress on engineering and computer science, vol 2. International Association of Engineers, Hong Kong, China, pp 20-22 3. Yao W, Chu C–H, Li Z (2010) The use of RFID in healthcare: Benefits and barri- ers. In: 2010 IEEE International conference on RFID-technology and applications. Guangzhou, China, pp 128–134. https://doi.org/10.1109/RFID-TA.2010.5529874 4. Ajami S, Rajabzadeh A (2013) Radio Frequency Identification (RFID) technology and patient safety. J Res Med Sci: Off J Isfahan Univ Med Sci 18(9):809–813. PMID: 24381626; PMCID: PMC3872592 5. Rieche M, Komensky´ T, Husar P (2011) Radio Frequency Identification (RFID) in medical environment: Gaussian Derivative Frequency Modulation (GDFM) as a novel modulation technique with minimal interference properties. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, pp 2180-2183. https:// doi.org/10.1109/IEMBS.2011.6090410 6. Chen CL, Wu CY (2012) Using RFID yoking proof protocol to enhance inpatient medication safety. J Med Syst 36:2849–2864. https://doi.org/10.1007/s10916-011-9763-5 7. Widjaja AE, Chen JV, Sukoco BM, Ha QA (2019) Understanding users’ willingness to put their personal information on the personal cloud-based storage applications: An empirical study. Comput Hum Behav 91: 167–185. ISSN 0747–5632. https://doi.org/10.1016/j.chb.2018. 09.034 8. Uthayashangar S, Dhanya T, Dharshini S, Gayathri R (2021) Decentralized blockchain based system for secure data storage in cloud. In: 2021 International conference on system, computation, automation and networking (ICSCAN), pp 1–5. https://doi.org/10.1109/ICSCAN53069. 2021.9526408 9. Murthy CVNUB, Shri ML, Kadry S, Lim S (2020) Blockchain based cloud computing: architecture and research challenges. IEEE Access 8:205190–205205. https://doi.org/10.1109/ACC ESS.2020.3036812 10. Yaga D et al. (2019) Blockchain technology overview. arXiv preprint arXiv:1906.11078. https:// doi.org/10.48550/arXiv.1906.11078 11. Xie Y, Zhang J, Wang H, Liu P, Liu S, Huo T, Duan YY, Dong Z, Lu L, Ye Z (2021) Applications of blockchain in the medical field: narrative review. J Med Internet Res 23(10):e28613. https:// doi.org/10.2196/28613.PMID:34533470;PMCID:PMC8555946 12. Alla S et al. (2018) Blockchain technology in electronic healthcare systems. In: IIE Annual Conference. Proceedings. Institute of Industrial and Systems Engineers (IISE) 13. Zheng Q, Li Y, Chen P, Dong X (2018) An Innovative IPFS-Based Storage Model for Blockchain. In: 2018 IEEE/WIC/ACM International conference on web intelligence (WI). Santiago, Chile, pp 704–708. https://doi.org/10.1109/WI.2018.000-8 14. Benet J (2014) IPFS—Content addressed, versioned, P2P file system. arXiv. https://doi.org/ 10.48550/arXiv.1407.3561 15. Chen Y, Li H, Li K, Zhang J (2017) An improved P2P file system scheme based on IPFS and Blockchain. In: 2017 IEEE International conference on big data (Big Data), pp 2652–2657. Boston, MA, USA. https://doi.org/10.1109/Big-Data.2017.8258226 16. 
Maymounkov P, Mazi‘eres D (2002) Kademlia: A Peer-to-Peer Information Sys- tem Based on the XOR Metric. In: Druschel P, Kaashoek F, Rowstron A (eds.) Peer-to-peer systems. IPTPS 2002. Lecture notes in computer science, vol 2429. Springer, Berlin, Heidelberg. https://doi. org/10.1007/3-540-45748-85 17. Henningsen S, Florian M, Rust S, Scheuermann B (2020) Mapping the Interplanetary Filesystem. In: 2020 IFIP Networking conference (Networking). Paris, France, pp 289-297

Storing and Accessing Medical Information Using Blockchain for Improved Security G. Manonmani and K. Ponmozhi

Abstract Medical information is highly sensitive and needs to be protected. Medical data should be accessible only to those who need to access it, and for reasonable periods of time. Storing medical records securely is a highly complex task. Given the increasing complexity of digital transactions, the need for security across different channels is evident and is a key factor affecting the readiness of healthcare organizations to meet the challenges of security. This paper utilizes Blockchain technology to create a patient-centric electronic health record while keeping a single true version of the user's data. The electronic health records or medical records of the patient are stored using Blockchain technology for security. Hyperledger Fabric with smart contracts is used so that only permitted users can access the records. All medical records of the user are stored in a single account and made accessible to the patient and the doctor through the hospital network. The Blockchain is deployed using Amazon Managed Blockchain through AWS (Amazon Web Services); Blockchain enables high security and a dependable interface. The performance of the Blockchain is measured in terms of latency and throughput, and the implementation results are analyzed for the secure, reliable, and fast interaction of data across the Blockchain platform.

Keywords Hyperledger fabric · Smart contracts · Blockchain · Amazon web services · Virtual private cloud · Peer node · Channel · Chaincode

G. Manonmani (B) Department of Information Technology, Hajee Karutha Rowther Howdia College, Tamil Nadu, Uthamapalayam 625533, India e-mail: [email protected] K. Ponmozhi Department of Computer Applications, Kalasalingam Academy of Research and Education, Tamil Nadu, Krishnakoil 626126, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_51


1 Introduction

With the increase in online medical consultation and centralized medical record management systems, preserving the privacy of medical records has become a concern. The extension of Blockchain technology to the healthcare field has had a profound impact due to its decentralized, tamper-proof, and transparent nature. Blockchain stores all records securely without intermediaries taking an active role in verifying the records [1]. Blockchain technology was originally created for Bitcoin, but other groups have since found new ways to utilize the technology and applied it to the healthcare space [2]. From this perspective, Blockchain in health care has not only made medical records tamper-proof but also cost-effective. A few examples of existing Blockchain-in-healthcare projects are Digix, an Ethereum-based supply chain system for pharmaceuticals; MediLedger, a blockchain-based health records management system; and Aeternity, a blockchain-based finance system in the medical sector. Blockchain technology in the healthcare sector has been gaining attention due to its highly beneficial business advantages [3].

With the correct application of Blockchain, private medical records can be kept in a decentralized database, completely safe from tampering. It is extremely difficult to breach or hack a Blockchain, because every record is encrypted and kept on a shared data network [4]. All of these properties make Blockchain a very safe system for storing medical records, but it has another major advantage: integrating Blockchain technology into the healthcare sector allows for a decentralized, standardized, secure, and affordable healthcare system. As a system of record, Blockchain can decentralize the flow of information and ensure that no central authority or intermediaries are involved [5]. This means that the pricing of a service is determined strictly by supply and demand, and the consumer need not worry about being overcharged. In the healthcare industry, there is an increasing number of counterfeit and stolen drugs, which Blockchain-based healthcare systems could help correct [6, 7]. The general public has a greater awareness of fraudulent prescriptions and corruption within the healthcare system, and the integration of Blockchain technology in the healthcare field is a way to eliminate this problem. Looking to the future, the healthcare industry has long embraced technological solutions to achieve new levels of efficiency and effectiveness, but there has been a shortage of the resources necessary to implement these changes [8]. Blockchain technology in healthcare could very well be the answer to the existing problems in the medical health records management sector [9].

In this study, we examine academic research that aids in understanding how health data or electronic medical records can be preserved with privacy on the Blockchain, with a particular emphasis on the security of medical health records on a Hyperledger-based Blockchain platform. The study's purpose is to give academics a common understanding of the security and privacy of medical data on


the Blockchain. In this research, we also suggest a conceptual framework for the security and privacy of medical and health data, followed by a technology solution for ensuring medical and health data privacy on the Blockchain (Fig. 1).

Fig. 1 Overview of the proposed methodology

2 Literature Review

Blockchain is a decentralized, distributed ledger that uses cryptographic algorithms to secure data and prevent tampering. This makes it an attractive option for storing sensitive medical data, as it provides a secure and transparent way to store, manage, and share information. Blockchain has gained high visibility among electronic medical data storage systems, as it preserves the privacy of the data and enables secure, tamper-proof data storage and sharing [9]. With the emergence of modern technologies enabling data-based systems, Blockchain is turning out to be an eminent technology [10]. A few of the many related works are reviewed in this section.

A study conducted by Wang et al. [16] evaluated the use of Blockchain for electronic medical record (EMR) management. The authors found that Blockchain can enhance the privacy, security, and interoperability of EMR systems by using cryptographic algorithms to secure data and providing a shared ledger for data sharing between authorized parties. They also noted that Blockchain can increase the efficiency of EMR systems by enabling real-time data sharing and reducing the need for manual data entry.


A Blockchain-based medical information system architecture is proposed to enable the storing and sharing of medical data [11]; decentralization and tamper resistance are features of the program. The study proposes a Blockchain-enabled medical data management system with shared management and multi-node maintenance capabilities that prevent the alteration and leakage of medical data, reducing the influence of third parties by deploying decentralized medical records storage.

A consortium-Blockchain-enabled medical data storage system is proposed by Zhang and co-authors [12]. The system mainly addresses the existing complexity of exchanging medical data and protecting its privacy. An attribute-based access control technique is used to establish access control, in which patients set up attribute-specific access restrictions for their medical data and record requesters are categorized according to a set of attributes. A hybrid storage option stores encrypted medical records off-chain and writes access policies for the medical records on the consortium Blockchain network. By utilizing Blockchain and smart contracts, access privilege control and access history tracking can be implemented.

It is challenging to guarantee data security and maintain the data of the diagnostic procedures of a healthcare system [13] due to its massive volume. To address these problems, a new HBESDM-DLD model is presented, combining deep learning (DL)-based diagnostics with secure medical data management. The system is developed on a Hyperledger Fabric-centered multichannel Blockchain; it stores patient visit data and electronic health records (EHRs) in external databases, and medical data is shared securely through the proposed system.

The proposed system in [14] recommends record sharing in a chain-like structure based on a Blockchain, in which every record is globally connected to the others. The study focuses on making medical data accessible for a set length of time after verifying the necessary credentials, particularly for patients who travel internationally. With the patient's participation in the sanction process, authorization and authentication are carried out on the Shibboleth identity management system, revealing the patient's data for the designated time. The suggested method performs better than alternative record-sharing systems; for example, it takes noticeably less time to read, write, remove, and revoke a record.

Blockchain not only addresses the privacy concerns of the data but also provides an effective sharing methodology by adding features that make the data immutable. Smart contracts in Blockchain technology steer medical data management systems toward secure data storing and sharing. The emergence of IoT and software-defined networks increases the difficulty of preserving the privacy of the data [15]. The use of Blockchain for the secure storage of medical data has the potential to enhance privacy, security, and interoperability, while also providing a secure and transparent way to manage and share information [16]. The findings of these studies highlight the potential of Blockchain to improve the management of electronic medical records, genomic data, and personal health records. However, further


However, further research is needed to explore the practical implementation of Blockchain in healthcare systems, as well as to address potential challenges and limitations. This paper utilizes Blockchain technology to overcome the existing complexities and drawbacks regarding the security and privacy of conventional data storage systems.

3 Proposed Methodology

This section discusses the introduction of the managed, permissioned, and private hyperledger fabric and its implementation using Amazon Web Services (AWS). Hyperledger is an open-source Blockchain technology created and released by the Linux Foundation. In contrast to the public Blockchain technology Ethereum, hyperledger is a highly secure, limited-access Blockchain. The Blockchain can be implemented using Amazon Managed Blockchain on AWS, and AWS also provides a developer guide to ease the deployment of a managed Blockchain.

The step-by-step process of creating a managed, private, hyperledger-fabric-enabled Blockchain is depicted in Fig. 2. The managed hyperledger Blockchain is created in the AWS console. Several steps are involved in building a Blockchain network, starting from the creation of a network with a single member through inviting or adding multiple members to a channel. The Amazon Managed Blockchain service enables the creation of both hyperledger fabric and Ethereum networks; hyperledger is a private Blockchain, whereas Ethereum is a public Blockchain. The proposed system uses hyperledger fabric to create a permissioned, managed, private Blockchain. The process involves several technical terminologies; among these, endpoint, peer node, client, member admin, channel, chaincode, and smart contracts are essential terminologies that require description. Table 1 elucidates these terminologies.

Hyperledger fabric version 2.2 is used to create the network. Once the network name is assigned along with a description, a member needs to be created to manage the Blockchain. The certificate authority has to be configured with the administrator username and password. The first member created in the network is the administrator of the managed Blockchain. Once the network is created, the managed Blockchain becomes accessible in approximately 30 min. Once the status of the Blockchain becomes available, it can be further configured based on the requirements of the application. Figure 3 depicts the status of the network after becoming available.

Once the network is configured, the users are created and the records are shared by the admin with the patient and the doctor. The patient has the option to allow the doctor to view his records: the doctor requests record access from the admin of the network, and the patient approves the request with the concurrence of the admin. The electronic health records are stored in the Blockchain ledger, which makes the documents or records tamper-proof. Multi-member channels are deployed so that the data can be accessed by multiple members of the Blockchain. The experimental setup of the realized network and the performance of the deployed Blockchain network are discussed in the following sections.
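As an illustration only (not the authors' published deployment script), the network-creation step described above can also be driven programmatically through the Amazon Managed Blockchain API. The following minimal boto3 sketch assumes placeholder values for the region, member name, and administrator credentials:

```python
# Minimal sketch: create the managed Hyperledger Fabric network described above.
# Region, member name, and admin credentials are illustrative placeholders.
import boto3

client = boto3.client("managedblockchain", region_name="us-east-1")

response = client.create_network(
    Name="Health_records_BC",                    # network name used in Sect. 4
    Framework="HYPERLEDGER_FABRIC",
    FrameworkVersion="2.2",                      # Fabric version used in the paper
    FrameworkConfiguration={"Fabric": {"Edition": "STARTER"}},
    VotingPolicy={
        "ApprovalThresholdPolicy": {
            "ThresholdPercentage": 50,
            "ProposalDurationInHours": 24,
            "ThresholdComparator": "GREATER_THAN",
        }
    },
    # The first member becomes the administrator of the managed Blockchain;
    # its certificate authority is configured with an admin username/password.
    MemberConfiguration={
        "Name": "HospitalAdmin",                 # hypothetical member name
        "FrameworkConfiguration": {
            "Fabric": {"AdminUsername": "admin", "AdminPassword": "REPLACE_ME"}
        },
    },
)
print(response["NetworkId"], response["MemberId"])
```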


Fig. 2 Steps for creating blockchain using AWS

4 Experimental Setup and Configurations

The Blockchain network is set up with Hyperledger Fabric 2.2 under the name Health_records_BC. The instance is created in Amazon EC2 with an Amazon Linux 2 kernel and an SSD volume type. The virtual server type is t2.micro, and the architecture is 64-bit (x86) with 1 GB of memory and 1 virtual CPU. The account is a free-tier account with on-demand pricing based on usage. The security group is created along with the network. The peer nodes of the Blockchain network are created with the Blockchain instance type bc.t3.small, with 2 virtual CPUs and 2 GB of RAM. The state database is configured as CouchDB, and the logs for chaincode and peer nodes are enabled. The chaincode is developed using Golang for the creation of the healthcare record management service. The chaincode is deployed and instantiated on the multi-member channel. The performance parameters are recorded by the AWS Managed Blockchain network and the EC2 instances. Throughput and latency are the key parameters for performance evaluation.
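The peer-node configuration described above (bc.t3.small instances, CouchDB state database, chaincode and peer logging enabled) can likewise be expressed through the Managed Blockchain API. A hedged sketch, with the network and member IDs as placeholders:

```python
# Minimal sketch: add a bc.t3.small peer node with CouchDB as the state
# database and chaincode/peer logging enabled, matching the setup above.
import boto3

client = boto3.client("managedblockchain", region_name="us-east-1")

client.create_node(
    NetworkId="n-XXXXXXXXXXXX",          # placeholder network ID
    MemberId="m-XXXXXXXXXXXX",           # placeholder member ID
    NodeConfiguration={
        "InstanceType": "bc.t3.small",   # 2 vCPUs, 2 GB RAM, as in the paper
        "AvailabilityZone": "us-east-1a",
        "StateDB": "CouchDB",
        "LogPublishingConfiguration": {
            "Fabric": {
                "ChaincodeLogs": {"Cloudwatch": {"Enabled": True}},
                "PeerLogs": {"Cloudwatch": {"Enabled": True}},
            }
        },
    },
)
```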


Table 1 Important terminologies in Amazon-managed blockchain

1. VPC endpoint: A private link that enables interaction between the VPC and the services in AWS. The EC2 instances of AWS services are linked to the VPC to overcome risks and network constraints.
2. Peer node: At least one peer node should be available for a member of the Blockchain. These nodes interact with other members' peer nodes to perform transactions. A copy of the shared ledger is maintained by every peer node.
3. Client: A hyperledger fabric client is an instance of EC2 in AWS, responsible for running Docker. The client, presented as a CLI, is used to interact with the member peer nodes.
4. Member admin: An enrolled member of the Blockchain with administrative privileges.
5. Channel: The channel defines the scope of the ledger and enables the ledger to be shared or private. The channel is the communication medium for the members of the network.
6. Smart contracts: Smart contracts are the logic on which every transaction is performed. Every member can have their own smart contracts defined for transactional logic, and multiple smart contracts can be used in a Blockchain network.
7. Chaincode: Multiple smart contracts are bundled into a chaincode. Chaincodes are of different types, namely lifecycle, configuration, query, endorsement, and validation.

Fig. 3 Screenshot of the blockchain network status as available in AWS



5 Results and Discussions

Several utilization graphs, such as CPU utilization metrics and memory utilization metrics, are shown in the managed Blockchain network console of AWS under the Amazon CloudWatch metrics option. It also shows the logs of the peer nodes and the chaincode. The CPU utilization is low for the number of users and peer nodes created; the utilization graph for an hour is depicted in Fig. 4. The memory utilization is also not high for the hour: as Fig. 5 clearly depicts, memory usage has not exceeded 25% of its capacity. The EC2 instance provides several monitoring options, such as counts of network packets in and out, as well as the CPU credit usage and CPU credit balance; Fig. 6 shows the monitoring features available in the EC2 instance. Although EC2 and AWS Managed Blockchain monitor the performance of the Blockchain in terms of CPU utilization, memory utilization, network packets, and CPU credit usage, most researchers evaluate the network with latency and throughput.

Fig. 4 CPU utilization graph as in AWS managed blockchain

Fig. 5 Memory utilization graph as in AWS managed blockchain


Fig. 6 Network packets and CPU usage of the EC2 instance of the blockchain

These two metrics are considered important performance metrics. Latency is defined as the time taken by the network to complete a transaction; it can also be defined as the time difference between a request being raised and its response. Throughput is defined as the utilization or load of the system per unit of time. Generally, throughput decreases with an increase in the number of users and the number of transactions. The throughput is calculated using the AWS CLI services and EBS, and the latency is recorded using the Route 53 console in AWS. The analysis clearly shows that throughput does not decrease with an increase in the number of transactions, as the AWS systems deliver high performance with the help of services such as AWS Global Accelerator. The throughput is calculated with the formula:

Throughput = ((Volume read bytes) + (Volume write bytes)) / (PERIOD(volume idle time) − (volume idle time))

The values of the volume read and write bytes are fetched from the Elastic Block Store (EBS) per-volume metrics in the CloudWatch console. The formula is applied using the following steps:

1. Select the Graphed metrics tab of the EBS volume.
2. Apply the formula using the Sum statistic in the statistic drop-down list.
3. Select the period for which the throughput needs to be calculated in the period drop-down list.
4. Choose the Empty expression option and type the formula in the formula box: (m3 + m1)/(PERIOD(m2) − m2).
5. Apply the formula.
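The same calculation can be performed programmatically with CloudWatch metric math instead of the console steps above. The sketch below mirrors the m1/m2/m3 assignments used in the formula; the volume ID is a placeholder:

```python
# Minimal sketch: EBS throughput via CloudWatch metric math (boto3).
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")
dim = [{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}]  # placeholder

def q(qid, metric):
    """Build a Sum-statistic query for one AWS/EBS per-volume metric."""
    return {
        "Id": qid,
        "MetricStat": {
            "Metric": {"Namespace": "AWS/EBS", "MetricName": metric, "Dimensions": dim},
            "Period": 300,
            "Stat": "Sum",
        },
        "ReturnData": False,
    }

resp = cw.get_metric_data(
    MetricDataQueries=[
        q("m1", "VolumeWriteBytes"),
        q("m2", "VolumeIdleTime"),
        q("m3", "VolumeReadBytes"),
        # Throughput in bytes/second: (m3 + m1) / (PERIOD(m2) - m2)
        {"Id": "throughput", "Expression": "(m3 + m1) / (PERIOD(m2) - m2)"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
)
print(resp["MetricDataResults"])
```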


Fig. 7 Throughput of the blockchain: throughput (MBPS) versus number of transactions

The throughput is calculated for various periods with different sets of transactions and recorded. The throughput graph, measured against the number of transactions, is depicted in Fig. 7. The latency of the AWS Managed Blockchain is depicted in Fig. 8. Latency is the time delay between sending a request and receiving a response; using latency, we can measure the network performance. One can measure these metrics using tools such as network performance testing software, network monitors, and packet analyzers. The results clarify that latency is low in AWS Managed Blockchain networks; AWS Local Zones, AWS Direct Connect, and other services help in reducing the latency in AWS Blockchain networks. Latency is analyzed alongside the throughput and is measured in milliseconds. Figure 8 shows the latency graph.

Fig. 8 Latency of the blockchain


6 Conclusion

Preserving highly sensitive medical information is a complex task. This paper proposes the utilization of Blockchain to resolve the existing complexity, and the Blockchain network deployment is realized using AWS Managed Blockchain and its enabled services. Hyperledger fabric is used as the framework for the Blockchain. Multiple smart contracts are deployed using the chaincode, and the chaincode is rightly instantiated on the channels. Data transfers were secure, and a multi-member channel was created on which the chaincode is deployed. A Blockchain network was set up with little complexity in little time, and the performance measures were recorded and verified using the inbuilt monitoring tools and external tools. The performance of the Blockchain network is impressive considering the time and effort taken to deploy it. In the future, a public network should be deployed rather than a private Blockchain network. The AWS services are on-demand services, where the cost of deploying the Blockchain network also increases with time and utilization. In future work, a cost-effective Blockchain network needs to be created that not only reduces the implementation cost but also controls cost growth as the data and the number of users increase. The use of the Ethereum framework for deployment also needs to be tested in comparison with hyperledger fabric.

References

1. Xia Q, Sifah EB, Smahi A, Amofa S, Zhang X (2017) BBDS: blockchain-based data sharing for electronic medical records in cloud environments. Information 8(2):44
2. Uddin M, Memon MS, Memon I, Ali I, Memon J, Abdelhaq M, Alsaqour R (2021) Hyperledger fabric blockchain: secure and efficient solution for electronic health records. CMC Comput Mater Continua 68:2377–2397
3. Liu J, Li X, Ye L, Zhang H, Du X, Guizani M (2018) BPDS: a blockchain based privacy-preserving data sharing for electronic medical records. In: 2018 IEEE global communications conference (GLOBECOM), pp 1–6. IEEE
4. Nicolai B, Tallarico S, Pellegrini L, Gastaldi L, Vella G, Lazzini S (2022) Blockchain for electronic medical record: assessing stakeholders' readiness for successful blockchain adoption in healthcare. Measuring Business Excellence (ahead-of-print)
5. Mahajan HB, Rashid AS, Junnarkar AA, Uke N, Deshpande SD, Futane PR, Alhayani B (2022) Integration of healthcare 4.0 and blockchain into secure cloud-based electronic health records systems. Appl Nanosci 1–14
6. Chelladurai U, Pandian S (2022) A novel blockchain based electronic health record automation system for healthcare. J Ambient Intell Humaniz Comput 13(1):693–703
7. Chen CL, Yang J, Tsaur WJ, Weng W, Wu CM, Wei X (2022) Enterprise data sharing with privacy-preserved based on hyperledger fabric blockchain in IIOT's application. Sensors 22(3):1146
8. Guo H, Li W, Nejad M, Shen CC (2022) A hybrid blockchain-edge architecture for electronic health record management with attribute-based cryptographic mechanisms. IEEE Trans Netw Serv Manag
9. Pinto RP, Silva BM, Inacio PR (2022) A system for the promotion of traceability and ownership of health data using blockchain. IEEE Access 10:92760–92773


10. Long A, Choi D, Coffman J (2022) Using Amazon Managed Blockchain for ePHI: an analysis of hyperledger fabric and ethereum. In: 2022 IEEE World AI IoT Congress (AIIoT), pp 276–282. IEEE
11. Qu J (2022) Blockchain in medical informatics. J Ind Inf Integr 25:100258
12. Zhang D, Wang S, Zhang Y, Zhang Q, Zhang Y (2022) A secure and privacy-preserving medical data sharing via consortium blockchain. Secur Commun Netw
13. Sammeta N, Parthiban L (2022) Hyperledger blockchain enabled secure medical record management with deep learning-based diagnosis model. Complex Intell Syst 8(1):625–640
14. Butt GQ, Sayed TA, Riaz R, Rizvi SS, Paul A (2022) Secure healthcare record sharing mechanism with blockchain. Appl Sci 12(5):2307
15. Kavin BP, Srividhya SR, Lai WC (2022) Performance evaluation of stateful firewall-enabled SDN with flow-based scheduling for distributed controllers. Electronics 11(19):3000
16. Rajput AR, Li Q, Ahvanooey MT (2021) A blockchain-based secret-data sharing framework for personal health records in emergency condition. Healthcare 9(2):206. MDPI

Systematic Literature Review on Object Detection Methods at Construction Sites

M. N. Shrigandhi and S. R. Gengaje

Abstract Object detection is one of the most important and extensively studied challenges in computer vision. In the current environment, object detection is important in many fields. In order to accomplish machine-vision understanding, object detection methods seek to recognize all target objects in an image and to ascertain their categories and position data. Construction is a profession with significant safety risks due to the concentration of numerous workers, vehicles, and construction tools in a small area for a brief period of time. The goal of computer vision-based research in the construction sector has been to increase safety and productivity by detecting workers and equipment on various construction sites. The present paper presents a brief review of object detection methods implemented at different construction sites.

Keywords Construction · Hazards · Object detection · Traditional · Deep learning · RCNN

M. N. Shrigandhi (B) · S. R. Gengaje
Walchand Institute of Technology, Solapur, India
e-mail: [email protected]
S. R. Gengaje
e-mail: [email protected]

1 Introduction

Detecting occurrences of particular classes of visual objects in digital images is among the most essential computer vision tasks. Numerous practical applications exist for object detection, including robotic vision, surveillance, and safety monitoring at construction sites. Its primary task is to predict the position and category of objects in images [1]. Object detection can be implemented using traditional methods and deep learning methods. Despite effective object detection accuracy by conventional and deep learning methods over publicly available research datasets with a wide variety of object classes, object detection at construction sites is still very difficult, particularly when detecting objects in the presence of occlusions and with the varying sizes of objects that occur on construction sites.


Construction is a profession that provides job security even to unskilled people. According to the Indian government, the second largest employer in India is the construction industry, which employs 49 million people and provides 9% of the country's GDP. Construction is a labor-intensive industry, and over 60,000 fatal accidents are reported on projects around the world every year [2]. Consequently, the safety of construction workers is the main priority for construction managers. Construction personnel and vehicles are the most dynamic elements on job sites and the main cause of safety problems, as opposed to stationary construction buildings and massive equipment. By identifying workers and equipment on construction sites, computer vision-based research has sought to improve safety and productivity in the construction industry.

There are different types of construction projects, such as residential buildings, public buildings, and infrastructural projects. Accidents that happen at building sites can be fatal, and there are numerous risky situations on a construction site. Some of the major hazardous situations are mentioned below [3].

1. Falls: Working at heights, both inside and outside of buildings, is common during construction. Working from roofs, scaffolds, ladders, or stairways is always dangerous. The likelihood of falls is increased by elements like inclement weather, defective equipment, negligence, improper scaffolding or ladder use, and limited mobility. Approximately 9% of all construction project deaths occur because of falls from height.
2. Falling materials and objects: Materials or objects that fall have the potential to seriously injure the head. Both fatal and non-fatal cases are caused by falling materials and objects.
3. Electrical shock/fire hazards: The most frequent electrical risks on a construction site include shocks to workers, gas leaks, and arc flashes or explosions that start fires. When working methods are disorganised and rushed, such as when electrical work is done in rainy conditions, electricity poses additional risks.
4. Struck by objects (construction equipment): Construction equipment like cranes, earth-moving equipment, hauling equipment, and the many tools used for constructing various structural components can be deadly if not operated properly.
5. Harmful materials: Numerous tools and materials that could be dangerous to workers are also used in construction.

There are many kinds of construction equipment, such as cranes, power shovels, dumpers, bulldozers, earth-moving equipment, backhoe loaders, front hoe loaders, trucks, and forklifts. Construction sites frequently have a criss-cross pattern of moving construction equipment transporting building materials, which may cause a number of accidents. Object detection at construction sites mainly aims to locate the presence of object instances, such as workers and moving objects related to construction, and to determine the types of the located objects in an image, which can further be used for various applications like safety monitoring and productivity analysis.


2 Methods Used for Object Detection at Construction Sites

Object recognition in computer vision is one of the most fundamental and extensively studied problems. Object detection can be done using a traditional approach or modern approaches, where the modern approaches are usually implemented using deep learning methods (Fig. 1). In this section, a brief review based on the traditional approach and the modern approach is presented.

2.1 Object Detection Using Traditional Approach

Traditionally, there are three steps in the object detection process: choosing an informative region, extracting the object's features, and classifying it. The traditional methods usually use HOG, SIFT, and VJ (Viola-Jones) detectors. This section presents a literature review of object detection at construction sites using traditional methods.

In order to find and discriminate dump trucks and earthmoving machinery in construction videos, Rezazadeh Azar and McCabe [4] suggested two possible cascade approaches using existing image and video processing techniques: Haar-HOG and Blob-HOG. Due to its reliable performance, the HOG detector served as the primary classifier in both techniques; nevertheless, this approach is too sluggish for real-time applications. Consequently, two quick detection filters were applied before the HOG classifier to restrict the search areas. The Haar detection algorithm is used in the Haar-HOG method, while in Blob-HOG a foreground object detector passes moving objects to the final classifier. With more accurate detection rates and fewer false positives, the Haar-HOG method outperformed the other approaches. Additionally, it is able to find objects in videos despite camera movement.
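For concreteness, the following minimal sketch runs OpenCV's stock HOG-plus-linear-SVM people detector, in the spirit of the HOG-based pipelines reviewed here; the reviewed papers trained their own classifiers, and the image path is a placeholder:

```python
# Minimal sketch: HOG-based person detection with OpenCV's default detector.
import cv2

img = cv2.imread("construction_frame.jpg")          # placeholder frame

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Sliding-window detection over an image pyramid; the stride/scale trade off
# speed, which is the bottleneck the Haar/Blob pre-filters were meant to relieve.
boxes, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```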

Fig. 1 Object detection techniques: traditional object detection versus deep-learning-based object detection (two-stage and one-stage detectors)


To extract the machine's location at any time and to build the aforementioned applications, the next stage is to incorporate a tracking engine with these detection frameworks. The detection of articulated machinery like hydraulic excavators and loaders did not yield promising results, despite the fact that these algorithms proved suitable for dump trucks. The study's conclusions can aid practitioners in making an appropriate decision about how to identify such machinery in real-time video applications, including proactive work-zone safety, performance control, and productivity measurement.

A method for automatic object recognition utilizing common video cameras on building sites is presented by Chi et al. [5]. The suggested technique facilitates identifying and classifying personnel and mobile heavy equipment in real time. From a stream of images, the background subtraction technique extracts motion pixels, groups them into regions to represent moving objects, and then uses classifiers to determine which object each region represents. The foreground identification and segmentation algorithm is used to recover the moving objects' dynamic foreground regions from the video stream after the still background sections have been removed. The connected component approach is used to aggregate the foreground pixels into regions after morphological image processing is used to reconnect disconnected and missing foreground regions. This makes it possible to extract the specific target region. The corresponding regions within an image series can be determined to represent the connected regions, which now depict moving objects. Once the classifiers have been fed with object information, such as object shape and appearance, objects in the image are finally identified. This study used two widely used classifiers: neural network classifiers and normal Bayes classifiers. The computer-aided procedure was put into practice on actual construction sites for the method's evaluation, and encouraging results were obtained.

Rubaiyat et al. [6] analyse construction surveillance images with the goal of automatically detecting the usage of construction helmets, i.e., whether a worker is wearing a helmet or not. The authors first identify the object of interest (a construction worker) in the gathered images, and then, using computer vision and machine learning techniques, determine whether the worker is wearing a helmet. Construction worker detection is performed in two steps: first, the frequency-domain information from the image is combined with the well-known human detection algorithm, the histogram of oriented gradients; second, the construction worker's helmet usage is determined using a mix of color-based and Circle Hough Transform feature extraction approaches. After this algorithm is applied, the system can identify helmets of certain colours, like yellow, blue, red, and white.

Park et al. [7] propose an innovative technique for automatically identifying construction workers in video frames. The technique sequentially makes use of background subtraction, HOG shape characteristics, and colour histograms. Moving objects' foreground blobs are identified by background subtraction, and HOG characteristics are applied to the identified blobs to identify people. Based on their colour histograms, the discovered people are divided into workers and non-workers. Its performance is assessed using two chosen measures: precision first, and initialization time delay second.


The suggested method is initially tested on identifying five individuals wearing various combinations of safety equipment. Preliminary testing indicates that anyone wearing a safety vest is treated as a "construction worker". Ten videos, each with a different background, served as the subject of experiments. The experiments produced results with a 0.67 s time lapse and 99.0% accuracy, which corresponds to this definition of "construction worker". These findings show that the suggested system can successfully initiate tracking of construction workers (Table 1).

Table 1 Performance of algorithms implemented using traditional methods

- Ref. [4] — Algorithms: Haar-HOG, Blob-HOG. Purpose: detection of dump trucks. Findings: the Haar-HOG method performed better than the Blob-HOG method, with higher detection rates. Limitations: the method did not yield promising outcomes for detecting hydraulic excavators and loaders.
- Ref. [5] — Algorithms: background subtraction, Bayes and neural network classifiers. Purpose: detection of loader, backhoe, and worker. Findings: (1) classification accuracy for workers is more precise than for a loader and a backhoe; (2) correctly classified multiple sets of objects, such as worker and loader, worker and backhoe, and worker, backhoe, and loader. Limitations: the algorithm does not separate two different objects when they are parallel to each other's line of sight.
- Ref. [6] — Algorithms: HOG, Circle Hough Transform. Purpose: detection of workers and helmets. Findings: the system is able to identify helmets of specific hues, such as white, red, yellow, and blue. Limitations: without the color information, the system fails to differentiate between a circular helmet and a circular human head.
- Ref. [7] — Algorithms: background subtraction, HOG, HSV color histogram. Purpose: detection of construction workers and pedestrians. Findings: (1) 99.0% accuracy was achieved through experiments; (2) workers were discovered within 0.67 s of their initial appearance. Limitations: it is challenging to determine the shape of a human body when the background and the person are the same color.
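A hedged sketch of the generic background-subtraction pipeline summarized in Table 1, assuming OpenCV's MOG2 subtractor as a stand-in for the papers' own foreground detectors (the video path and area threshold are illustrative):

```python
# Minimal sketch: motion-pixel extraction, morphological repair, and
# connected-component grouping into candidate moving-object regions.
import cv2
import numpy as np

cap = cv2.VideoCapture("site_camera.mp4")              # placeholder video file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = np.ones((5, 5), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                        # motion (foreground) mask
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # reconnect broken blobs
    # Connected components group foreground pixels into candidate moving objects,
    # which a trained classifier (Bayes / neural network) would then label.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    for i in range(1, n):                               # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > 500:            # ignore tiny noise blobs
            x, y, w, h = stats[i, :4]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cap.release()
```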


2.2 Object Detection Using Modern Approach

The modern approaches are usually implemented using deep learning methods. Generally, there are two types of deep-learning object detection algorithms: single-stage models and multi-stage models. In this section, a brief review based on these models is presented.

2.2.1 Object Detection Using Multi Stage Models

Multi-stage models, which include models like R-CNN, Fast R-CNN, and Faster R-CNN, are mostly region-proposal-based. In this section, we present a brief review of region-proposal methods used for object detection at various construction sites.

Computer vision-based research has played a significant role in improving safety and productivity on construction sites by identifying the presence of equipment, people, and plants. To automatically detect the existence of objects, Fang et al. [8] proposed the IFaster R-CNN (Improved Faster Regions with Convolutional Neural Network Features) method, which can also give construction managers information to help them make better decisions about safety and planning. This research is broadly divided into two parts. In the first part, the authors created a collection of objects from construction sites (personnel and large equipment) on which the CNN model is trained to detect their presence. In the second part, the authors used a deep learning model to extract feature maps from images; the feature maps are then used to extract region proposals, after which object recognition is performed. A special dataset was created to train the IFaster R-CNN models to recognise workers and equipment, for example excavators, in order to verify the model's capacity for real-time object detection. The findings show that the IFaster R-CNN can accurately and reliably identify the presence of workers and excavators.

Angah and Chen [9] offered a more effective three-stage framework for multiple human tracking. The authors employed 2D Mask R-CNN and 3D Mask R-CNN to detect the bounding boxes and pose locations of humans in images during the first stage, detection, and contrasted how well the two pipelines perform in detection and tracking. The second stage involves matching: the authors implemented several cost-calculation metrics, such as feature matching, Intersection over Union, and pose similarity, and this stage derives the tracked objects' trajectories. However, missed detections or false matching could cause the trajectories of tracked objects to be interrupted. Re-matching takes place during the third stage. The authors referred to the most recent fresh detections without pairs as unmatches, and to the broken-trajectory detections that are left over as orphans. On the basis of the same picture attributes that were used for matching, orphans and unmatches are re-matched one at a time, by comparing the unmatched detections in the current frame with the orphans from prior frames. The Multi-Object Tracking Accuracy (MOTA), which reflects the re-matching stage, is used to aim for better tracking outcomes. Two performance enhancements for human tracking were suggested by the authors: a gradient-based technique for location prediction and re-matching. The MOTA, which takes the mismatches into account, was used, and the tracking performance was thereby portrayed more accurately.


The issue of falls from heights (FFH), a leading source of accidents and fatalities on construction sites, has been addressed by Fang et al. [10]. Despite being informed of the risks associated with working at heights without a safety harness, many workers still forget or actively choose to forgo wearing one. In this paper, the authors describe a computer vision-based automated system for checking whether workers are wearing their harnesses when operating at heights. The method combines two convolutional neural network (CNN) models in a two-step algorithm. In the first phase, Faster R-CNN is used to identify workers in a construction site image, and in the second step a deep CNN model identifies the presence of a safety harness. Using the Zeiler and Fergus network, which has five convolutional layers, the Faster R-CNN in the first phase starts by acquiring the feature maps extracted from the CNN. To gather and generalise the attributes of the image, the image is first fed into the CNN, which then shares these features with the RPN and Fast R-CNN as their respective inputs. The region proposals are then extracted from the convolutional feature map by the RPN module, yielding their target scores and regressed boundaries. The CNN model's precision and recall rates were 80% and 98%, whereas the Faster R-CNN's were 99% and 95%, respectively. The findings of this study can help construction safety managers create a system to proactively identify unsafe activity, such as not wearing a safety harness, and to take prompt remedial action to reduce the likelihood of FFH.

The faster region-based convolutional neural network (Faster R-CNN) and the single-shot multi-box detector (SSD), two deep learning techniques for object detection, are thoroughly evaluated by Liu et al. [11]. In the first step, the authors created their own image dataset from four different construction sites, containing 990 images of three different objects: site fences, safety barricades, and modular panels. In the second stage, object detection models were developed using the TensorFlow platform; the three key tasks in this step were setting up the computer development environment, TensorFlow, and the object detection API. Third, to increase the object detection algorithms' generalizability to new data, the training dataset was used to train models based on the selected object detection techniques, which were then adjusted using validation datasets. Two new models, based on Faster R-CNN and SSD, were trained on the labeled images. Accurate detection was influenced by the number of images, the selected technique, and the training procedures. As the final stage, the performance of the trained models was analysed using the chosen metrics: precision, recall, IoU, mAP, and AP. As an application, the object detection techniques developed in this study can be applied to construction sites to increase safety and to monitor object installation progress.

Chen et al. [12] have addressed the problem of automatic identification of equipment activities using construction site surveillance videos.
The authors in this research have introduced a novel framework for the automatic analysis of the activity and productivity of numerous excavators at a construction site.


To achieve this, three convolutional neural networks are used for detection, tracking, and recognition of the work done by excavators. Excavator detection, excavator tracking, idling-state identification, activity recognition, and productivity analysis are the framework's five core modules. For the first module, the authors used a Faster R-CNN model to find the excavators in the frames of the video. In the second module, a multi-object deep Simple Online and Realtime Tracking (SORT) tracker was applied to all the detected excavators across all the frames in the video; to follow the excavators, the deep SORT tracker uses appearance as well as trajectory information from the detected frames. The third module is employed to determine whether an excavator is in an idle state or not. For this, the pixel coordinates of the bounding boxes are extracted and used to calculate the centroids and areas of the boxes; from these, the distance change, the area change, and the standard deviations of the distance change and area change are calculated. In the activity recognition task, a 3D ResNet was employed to identify the work done by the excavators in the video clips, including excavating, loading, and swinging. In the last module, based on the outcomes of activity recognition, a method is developed to automatically determine the productivity of the excavator.
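The idle-state heuristic of the third module can be illustrated in a few lines: from a tracked excavator's bounding boxes, compute the centroid-distance change and relative area change between consecutive frames, and flag the track as idle when both stay small. The thresholds and boxes below are illustrative, not the authors' parameters:

```python
# Minimal sketch: idle-state check from per-frame bounding boxes of one track.
import math

def centroid_and_area(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2), (x2 - x1) * (y2 - y1)

def is_idle(boxes, dist_thresh=3.0, area_thresh=0.02):
    """boxes: consecutive (x1, y1, x2, y2) boxes for one tracked excavator."""
    dists, area_changes = [], []
    for prev, curr in zip(boxes, boxes[1:]):
        (cx0, cy0), a0 = centroid_and_area(prev)
        (cx1, cy1), a1 = centroid_and_area(curr)
        dists.append(math.hypot(cx1 - cx0, cy1 - cy0))   # centroid distance change
        area_changes.append(abs(a1 - a0) / a0)           # relative area change
    mean = lambda xs: sum(xs) / len(xs)
    return mean(dists) < dist_thresh and mean(area_changes) < area_thresh

track = [(100, 120, 260, 240), (101, 120, 259, 241), (100, 121, 260, 240)]
print(is_idle(track))  # True: centroids and areas barely move
```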

2.2.2 Object Detection Using Single Stage Models

The single-stage models primarily consist of regression-based approaches and include models like YOLO, SSD, and RetinaNet. In this section, we present a brief review of regression-based methods used for object detection at various construction sites.

One of the most crucial factors that should be taken into account when executing tasks at construction sites is the safety of the personnel. For instance, falls from height are thought to be the main cause of accidents that result in injuries and fatalities on building sites. A real-time object detection algorithm based on a CNN model is established by Shanti et al. [13] in order to recognise the two primary PFAS components, a safety harness and a lifeline, as well as the common safety precaution of wearing a safety helmet. Because falls from heights often result in life-threatening accidents for construction personnel, a worker operating at heights without a suitable personal fall arrest device is in breach of the EHS requirements on protection against falls. The YOLOv3 technique is utilised to train the model using a deep learning network. Different tests were performed to evaluate the model's performance, and the results are encouraging.

Nath et al. [14] have described three deep learning models built on the YOLO architecture to check workers' PPE compliance in real time, such as whether they are wearing hard hats, vests, or both. The first approach uses an algorithm to identify workers, hard hats, and vests, after which a machine learning model such as a neural network or decision tree checks whether each identified worker is correctly wearing a hard hat or vest. In the second strategy, the system uses a single CNN framework to simultaneously identify individual workers and confirm PPE compliance (Table 2).


Table 2 Performance of algorithms implemented using multi-stage methods

- Ref. [8] — Algorithm: IFaster R-CNN. Purpose: detection of workers and excavators. Findings: detection accuracy is high, 91% for worker detection and 95% for excavator detection. Limitations: (1) small-scale objects are neglected; (2) performance is influenced by on-site occlusions.
- Ref. [9] — Algorithms: 2D Mask R-CNN and 3D Mask R-CNN. Purpose: detection of workers. Findings: multiple-worker tracking is implemented by overcoming occlusion. Limitations: none reported.
- Ref. [10] — Algorithm: Faster R-CNN. Purpose: detection of safety harness. Findings: good precision and recall rates, 99% and 95%, respectively. Limitations: (1) limited to a selected number of activities when working at heights; (2) unable to detect some workers and harnesses due to sample size and harness color.
- Ref. [11] — Algorithm: Faster R-CNN. Purpose: detection of barricades, construction site fences, and ceiling panels. Findings: detection performance for barricades and fences is good. Limitations: unable to detect some small objects, since the training dataset contains very few images of small objects.
- Ref. [12] — Algorithm: Faster R-CNN. Purpose: detection of excavators. Findings: (1) able to measure multiple excavators' operations; (2) able to automatically calculate productivity based on the information about the excavators' actions. Limitations: (1) activity recognition is affected by the light conditions of the video; (2) the activity of one excavator is affected by another when the bounding boxes of the two excavators fully overlap during detection and tracking.
method starts by identifying only the workers in the given image, which is then cropped and categorised using CNN based classifiers such as VGG-16, ResNet-50, and Xception based on the presence of PPE apparel. It is discovered that the second strategy yields the best results. An internal image dataset that was produced utilizing crowdsourcing and web mining is used to train all algorithms.

718

M. N. Shrigandhi and S. R. Gengaje

Arabi et al. [15] proposes a practical deep learning based remedy to detect construction equipment, from the very beginning of the development process until the deployment of the solution. In the paper, the two stages of real-world deep learning solutions are covered. Data collection and preparation, model choice, model training, and model assessment are all addressed in the first step, which is the development phase. The study’s second stage involves model optimization, application specific hardware selection, and solution evaluation. Mobilenet is used as feature extractor. Depthwise separable convolutions serve as the primary building component in this type of networks. The standard convolution is factored into two separate operations by depthwise separable convolution. Separate convolution kernels, sometimes referred to as depthwise convolution, are applied to each input channel in the initial step. The information from the first operation is then combined using pointwise (1 × 1) convolution. In this study, the detector used is a Single Shot Detector. The base network, often referred to as an auxiliary network, is used by this model to extract features. To execute classification and localisation regression, SSD employs a variety of feature maps, including some that are produced by the base network. The model was carefully chosen to match the embedded systems’ hardware limitations. Following that, two primary distinct embedded systems were suggested to handle the requirements of various scenarios. Nvidia Jetson TX2 with TensorRT optimization was released for applications that require real-time but exact performance, such as safety and construction equipment tracking. A method for detecting helmets based on improved YOLOv4 is suggested by Benyang et al. [16] in order to lessen safety hazards brought on by helmet use. By assembling an independent data collection of onsite building construction video, utilising the K-means method to cluster the dataset in order to gain more precise edge information and correctly a priori frame dimensional centre. The network training procedure then employs a multi scale training strategy to enhance the models adaptability to various detection scales. The experimental outcomes demonstrate that, the model’s detection accuracy and speed were enhanced compared to YOLOv4 in the helmet wearing detection task, which satisfies with the real-time requirements of the task, and the model’s mAP value reached 92.89% and the detection speed reached 15 frames per second. Nimmo et al. [17] suggested a pedestrian detection method that uses an Intel RealSense R200 camera as input to a neural network that is built on a single shot multibox detector for detection of pedestrians at construction sites. The suggested method makes use of a neural network to detect before determining the separation between a pedestrian and the camera employing stereoscopy in real-time employing low cost hardware. The Single Shot Multibox Detector 512 and the InceptionV4 network, are two networks that are presented for pedestrian detection. A dataset built from video recorded at a Fulton Hogan site was used to train both networks. On the PASCAL VOC 2007 test dataset, the SSD InceptionV4 network obtains 1% mAP, and on the Fulton Hogan dataset, 51% mAP. On the Fulton Hogan dataset, the SSD 512 network scores 93% mAP, and on the PASCAL VOC 2007 test dataset, 66% mAP. In manual inspection, the SSD 512 network performed two times better than

Systematic Literature Review on Object Detection Methods …

719

the SSD InceptionV4 network, reaching a detection rate of 100% for pedestrians inside five meters of the camera. Fangbo Zhou et al. [18] details the YOLOv5 based safety helmet recognition method’s network setup, classifier parameters, and data set processing. YOLO series algorithms, which provide incredibly high speed and precision, is used to a variety of scene detection applications as a result of the ongoing development of object identification technologies. YOLOv5 models (s, m, l, and x) with various parameters were used in the experiment. The authors suggested a safety helmet detection method based on YOLOv5 and annotated the 6045 data sets collected in order to establish a digitally safety helmet monitoring system. Finally, for training and testing, they employed the YOLOv5 model with various parameter settings. There is a comparison and evaluation of the four models. In light of experimental results, YOLOv5 average detection rate is 110 frames per second, which fully satisfy all real time detection requirements. The efficiency of the helmet detection based YOLOv5 is demonstrated by the fact that the mAP of YOLOv5x achieves 94.7% utilizing the pretraining weight of the trainable target detector. Nipun D. Nath et al. [19] proposes YOLO-based CNN models for quick construction object detection. A big image dataset called Pictor-v2 is first constructed. The Pictor-v2 dataset, which includes 1105 crowdsourced and 1402 webmined annotated images of buildings, equipment, and workers. These three common objects in construction images were detected in real time using DL-based YOLO algorithms that were trained, validated, and tested using this dataset. Two versions of this model, YOLOv2 and YOLOv3, are trained using transfer learning, and tested them on various combinations of data in order to evaluate the object detection’s agility for crowdsourced, webmined, or both. According to the results, the model performs better if it has been trained on both webmined and crowdsourced images. Furthermore, YOLOv3 performs better than YOLOv2 by emphasising smaller, more difficult to detect objects. On crowdsourced data, the top performing YOLOv3 model had a 78.2% mAP. The most effective model was also put to the test when it came to seeing objects of various sizes, crowding, and illumination. Overall, it was discovered that the model had better accuracy in detecting larger, less crowded, and well-lit objects. The model’s ability to detect large sized buildings and worker instances with exceptionally high precision is one of its significant capabilities. However, in low-light situations, the model frequently has trouble detecting these objects. In order to safeguard construction workers and those operating construction machines from potentially harmful scenarios, such as collisions, Son et al. [20] presented a system for real time worksite monitoring that integrates the detection and tracking of construction workers. The integrated tracking network allowed for the detection of construction personnel as well as the verification of detection errors. The suggested methodology was used to analyse image sequences taken from the CMOS image sensors mounted on the front, back, right, and left of the construction equipment while it carried out earthmoving tasks. The integrated architecture that is being proposed comprises of phases for both detection and tracking, which interact at the same time. 
The fourth and most recent version of YOLO is used in the task of detection to search all the images globally to identify and locate construction personnel on


In the tracking phase, the Siamese network searches locally across succeeding frames for the personnel detected in the preceding frame. The tracking and detection components periodically communicate to locate workers and confirm their identities in succeeding frames. The experimental outcomes demonstrated better results in terms of six metrics when compared to the current method for detecting construction workers; this approach was put forward to improve the safety of construction site machinery operation. Reliability was increased without a discernible slowdown in speed. In the many construction applications where detecting construction personnel in images is a first stage, as well as for the safe operation of construction machinery, this improvement of construction personnel identification performance based on merging detection and tracking is worthwhile to adopt.
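The depthwise separable convolution underlying the MobileNet feature extractor discussed above can be sketched in a few lines of PyTorch: a per-channel 3 × 3 (depthwise) convolution followed by a 1 × 1 (pointwise) convolution that mixes channel information.

```python
# Minimal sketch: depthwise separable convolution (MobileNet building block).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # groups=in_ch applies one 3x3 kernel per input channel (depthwise step)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        # 1x1 convolution combines the per-channel outputs (pointwise step)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 64, 64)                  # one 32-channel feature map
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```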

3 Discussion and Future Scope

This paper presents a brief review of object detection at construction sites using traditional and deep learning methods. Although the traditional methods have good detection accuracy, they still have a few limitations: they are unable to differentiate between two different objects without color information, and they are unable to detect two different objects placed parallel to each other's line of sight. So there is still scope to overcome these limitations using traditional methods. The modern methods are implemented using R-CNN, Faster R-CNN, IFaster R-CNN and other variants, the YOLO versions, and SSD. Compared to the traditional methods, both the detection accuracy and the processing speed are much higher using the modern approaches. Also, using the deep learning algorithms, it is possible to detect target objects of different colors and under changing weather conditions with greater accuracy. But there are still a few limitations: these methods are unable to detect objects of very small size and unable to detect objects in the presence of occlusions and blurriness. So further research is needed for detecting very small objects at construction sites and for detection in the presence of occlusions and blurriness.

4 Conclusion

A fundamental and significant topic in computer vision is object detection. Despite effective object detection accuracy by traditional and deep learning methods over openly accessible research datasets with a wide variety of object classes, object detection at construction sites is still very challenging, especially when detecting objects in the presence of occlusions and with the various objects of different sizes that occur on construction sites. Due to the temporary concentration of a large number of people, vehicles, and construction equipment in a limited space, construction sites may pose serious safety risks. Construction personnel and equipment are the main and most active participants on construction sites and the main cause of safety concerns, as compared to stationary construction structures and large equipment. The goal of computer vision-based research in the construction sector has been to increase safety and productivity by detecting people, equipment, and materials on construction sites. The current paper provides an overview of object detection methods at construction sites for identifying various types of objects at the construction workplace. This paper offers a thorough assessment of related studies in this area using bibliometrics and content-based literature analysis methods. In this paper, we have presented object detection at construction sites using traditional methods, like the HOG and Haar algorithms, and deep learning models, like the Convolutional Neural Network (CNN), Faster R-CNN, Improved Faster Regions with Convolutional Neural Network Features (IFaster R-CNN), Single Shot Multi-Box Detector (SSD), You-Only-Look-Once (YOLO) with its different versions, and 2D Mask R-CNN and 3D Mask R-CNN. These models can be regarded as the fundamental deep learning architectures at the moment. The present review covers detection of different construction objects, like workers, pedestrians, safety helmets, safety harnesses, vests, dump trucks, excavators, barricades, etc. Although much research has been done on object detection for safety helmets, further research is needed for detection of different types of construction vehicles with greater accuracy. This can further be used for applications like productivity analysis of any construction vehicle.


Table 3 Performance of algorithms implemented using single-stage methods

- Ref. [13] — Algorithm: YOLOv3. Purpose: detection of Personal Fall Arrest System (PFAS) components: safety harness, lifeline, and safety helmet. Findings: (1) accuracy of 91.26% and precision of 99%; (2) capable of finding the target objects in a variety of light, color, and weather circumstances. Limitations: some errors and misdetections of the model in detecting the desired objects.
- Ref. [14] — Algorithm: YOLOv3. Purpose: detection of workers' personal protective equipment: hard hat, vest, or both. Findings: out of the three approaches using the YOLOv3 model, the second approach (one model used for localization and classification) performs better than the other two techniques in terms of PPE detection precision. Limitations: (1) susceptible to occlusion, poor illumination, and blurriness; (2) unable to detect the PPE components' color.
- Ref. [15] — Algorithm: SSD. Purpose: detection of dump truck, excavator, grader, loader, mixer truck, and roller. Findings: (1) 91.36% mAP and 47 FPS for the Nvidia Jetson TX2 with TensorRT optimization; (2) 91.22% mAP and 8 FPS for the Raspberry Pi 3B+ with Intel NCS embedded system. Limitations: none reported.
- Ref. [16] — Algorithm: improved YOLOv4. Purpose: detection of safety helmets. Findings: detection speed and detection accuracy are improved compared to YOLOv4. Limitations: the detection accuracy is somewhat lower than Faster R-CNN.
- Ref. [17] — Algorithms: SSD 512 network and SSD InceptionV4 network. Purpose: detection of pedestrians. Findings: 100% detection accuracy for pedestrians within five meters of the cameras using the SSD 512 network. Limitations: (1) very small dataset for training and evaluation; (2) in activities like logging or clearing scrub, a pedestrian may be in close proximity to a machine yet hidden by foliage, and this method is inadequate for detecting such pedestrians.
- Ref. [18] — Algorithm: YOLOv5. Purpose: detection of safety helmets. Findings: accuracy of YOLOv5x is 94.5%, and detection speed of YOLOv5s is 110 FPS. Limitations: none reported.
- Ref. [19] — Algorithms: YOLOv2 and YOLOv3. Purpose: detection of buildings, equipment, and workers. Findings: able to detect massive buildings and worker instances with the highest accuracy. Limitations: in low light, the model frequently has trouble detecting the objects.
- Ref. [20] — Algorithms: YOLOv3 and YOLOv4. Purpose: detection of workers. Findings: (1) YOLOv3 demonstrated very fast processing speed compared to the two-stage approach; (2) YOLOv4 showed improved speed and reliability in comparison to YOLOv3; (3) detection and tracking, when integrated, achieve a speed of 22 fps. Limitations: none reported.


References

1. Zou Z, Chen K, Shi Z, Guo Y, Ye J (2019) Object detection in 20 years: a survey. arXiv, Computer Vision and Pattern Recognition
2. Lingard H (2013) Occupational health and safety in the construction industry. Constr Manag Econ 31:505–514. https://doi.org/10.1080/01446193.2013.816435
3. Purohit DP, Siddiqui NA, Nandan A, Yadav BP (2018) Hazard identification and risk assessment in construction industry. Int J Appl Eng Res 13(1). ISSN 0973-4562
4. Rezazadeh Azar E, McCabe B (2012) Automated visual recognition of dump trucks in construction videos. J Comput Civ Eng, ASCE
5. Chi S, Caldas CH (2010) Automated object identification using optical video cameras on construction sites. Comput-Aided Civ Infrastruct Eng. Wiley Online Library
6. Rubaiyat AHM et al (2016) Automatic detection of helmet uses for construction safety. In: 2016 IEEE/WIC/ACM international conference on web intelligence workshops (WIW). https://doi.org/10.1109/WIW.2016.045
7. Park MW, Brilakis I (2012) Construction worker detection in video frames for initializing vision trackers. Autom Constr 28. https://doi.org/10.1016/j.autcon.2012.06.001
8. Fang W, Ding L, Zhong B, Love PE, Luo H (2018) Automated detection of workers and heavy equipment on construction sites: a convolutional neural network approach. Adv Eng Inform
9. Angah O, Chen AY (2020) Tracking multiple construction workers through deep learning and the gradient based method with re-matching based on multi-object tracking accuracy. J Autom Constr. Elsevier Publications
10. Fang W, Ding L, Luo H, Love PE (2018) Falls from heights: a computer vision-based approach for safety harness detection. J Autom Constr. Elsevier Publications
11. Liu C, Sepasgozar SME, Shirowzhan S, Mohammadi G (2021) Applications of object detection in modular construction based on a comparative evaluation of deep learning algorithms. Construction Innovation, Emerald Publishing Limited. https://doi.org/10.1108/CI-02-2020-0017


12. Chen C, Zhu Z, Hammad A (2020) Automated excavators activity recognition and productivity analysis from construction site surveillance videos. J Autom Constr. Elsevier Publications
13. Shanti MZ, Cho CS, Byon YJ, Yeun CY, Kim TY, Kim SK, Altunaiji A (2021) A novel implementation of an AI-based smart construction safety inspection protocol in the UAE. IEEE Access
14. Nath ND, Behzadan AH, Paal SG (2020) Deep learning for site safety: real-time detection of personal protective equipment. J Autom Constr. Elsevier Publications
15. Arabi S, Haghighat A, Sharma A (2019) A deep learning based solution for construction equipment detection: from development to deployment. Comput Vis Pattern Recognit. arXiv:1904.09021
16. Benyang D, Xiaochun L, Miao Y (2020) Safety helmet detection method based on YOLOv4. In: 2020 16th international conference on computational intelligence and security (CIS), pp 155–158. https://doi.org/10.1109/CIS52066.2020.00041
17. Nimmo J, Green R (2017) Pedestrian avoidance in construction sites. In: 2017 international conference on image and vision computing New Zealand (IVCNZ), pp 1–6. https://doi.org/10.1109/IVCNZ.2017.8402499
18. Zhou F, Zhao H, Nie Z (2021) Safety helmet detection based on YOLOv5. In: 2021 IEEE international conference on power electronics, computer applications (ICPECA), pp 6–11. https://doi.org/10.1109/ICPECA51329.2021.9362711
19. Nath ND, Behzadan AH (2020) Deep convolutional networks for construction object detection under different visual conditions. Front Built Environ 6. https://doi.org/10.3389/fbuil.2020.00097
20. Son H, Kim C (2021) Integrated worker detection and tracking for the safe operation of construction machinery. Autom Constr 126. https://doi.org/10.1016/j.autcon.2021.103670
21. Jeelani I, Asadi K, Ramshankar H, Han K, Albert A (2021) Real-time vision-based worker localization & hazard detection for construction. J Autom Constr. Elsevier Publications

Unique Web-Based Assessment with a Secure Environment
S. K. Sharmila, M. Nikhila, J. Lakshmi Prasanna, M. Lavanya, and S. Bhavana

Abstract An online test server was created as part of our ongoing efforts to enhance and expand the utilization of computers in students' learning. Its goals are to make the testing process easier for the students and to offer a clear, impartial judgment. The web application is written in PHP and JavaScript. It is hosted on a Linux machine and utilizes Apache as the web server and MySQL as the database management system. The website may be browsed using a web browser that can process AJAX (Asynchronous JavaScript and XML) requests. One may add or remove questions and question categories, create tests, and modify their settings (total number of questions, time allowed for each question, IP addresses that are allowed to access the test) using the administration module. The test administrator may also keep an eye on the students. Multiple-choice questions with one or more valid responses are permitted. The result is displayed automatically once a student completes a test, and there is also the option to examine the questions and the right answers. Having been in service for two years, the system has proven to be remarkably reliable. We intend to create a second iteration of the server, with enhanced features and greater capabilities, in the near future. Keywords Objective assessment · Admin and login user · Client–server architecture

1 Introduction
Students or learners can use computers to perform activities or assessments with the help of online testing programs. This fixes the issues of the manual system caused by the workload of the examiners. The method confirms the correct response and saves time by completing the exam quickly and within the time the examiners allot, as opposed to the current practice, which loses time by having examiners review the answer sheets after a test.

The main goal of a Web-based online assessment system is to evaluate the student comprehensively in a fully automated manner that not only saves a great deal of time but also produces rapid and correct results. Students may take tests whenever they choose via the Internet, eliminating the need for extra equipment such as pens and paper. Online test systems give students a quick and easy way to register for examinations, and the results are presented immediately and precisely after the examination. Only students with a legitimate login and password are permitted to take the exam. Each multiple-choice question in the online exam carries an appropriate set of answers. There is no limit on the total number of possible answers, and the questions can be distributed at random, so no two students ever receive an identical set of questions (a sketch of such randomized assignment follows the list below). The user may select one of the correct answers within the time limit, even though there may be more than one valid answer [8, 9]. The user can examine their results after completing the test. In addition to other useful features, this program allows students to take tests remotely while also assuring their comfort and safety. The main purpose of this project is to manage online MCQ tests in a secure environment; here, a secure environment means that the student stays in the same exam window throughout the exam.
• PHP: The popular, all-purpose scripting language known as Hypertext Preprocessor was initially created for web development and remains a common choice for modern web development. As a general-purpose language, PHP is particularly well suited to server-side web development, since PHP is usually run on a web server. Its well-organized modules and well-maintained integration with diverse technologies account for its current prominence on the internet. The fact that well-known institutions such as Harvard University and the social media platform Facebook are built on PHP indicates its popularity and reliability; PHP websites can be readily upgraded, updated, and maintained. Further advantages of PHP are its platform independence and open-source license.
• JavaScript: JavaScript is an object-oriented scripting language that lets the client application and other programs access objects programmatically. It is generally used as client-side JavaScript, implemented as an integral part of the web browser, and enables the creation of dynamic websites and improved user interfaces. JavaScript conforms to the ECMAScript standard and is described as a dynamic, prototype-based language with weak typing and first-class functions. Brendan Eich of Netscape first created JavaScript under the name Mocha, which was then renamed LiveScript, and eventually JavaScript. JavaScript was influenced by numerous languages and was intended to look like Java while remaining easier for non-programmers to work with.

• Apache: The name “Apache” was selected out of admiration for the Apache Native American Indian tribe, which is well-known for its superb military tactics and unflappable tenacity. Additionally, it creates a lovely pun like “a patchy web server,” a server constructed up of several patches, but this wasn’t its original meaning. The group of programmers that created this novel software quickly adopted the moniker “Apache Group” for themselves. Apache is responsible for accepting directory requests from Internet users and sending the desired information in the form of files and Web pages.
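The randomized distribution of questions described above can be illustrated with a short sketch. This is a minimal, hypothetical example (the paper does not publish its PHP implementation, so Python and all names here are illustrative): each student's paper is drawn reproducibly from the bank, and the answer choices are shuffled per paper.

```python
import random

def build_paper(question_bank, num_questions, seed):
    """Draw a reproducible random paper: distinct questions, shuffled choices."""
    rng = random.Random(seed)  # seed e.g. derived from (student id, test id)
    picked = rng.sample(question_bank, num_questions)  # no repeats in one paper
    # copy each question and independently shuffle its answer choices
    return [dict(q, choices=rng.sample(q["choices"], k=len(q["choices"])))
            for q in picked]

bank = [{"id": i, "text": f"Q{i}", "choices": ["A", "B", "C", "D"]}
        for i in range(50)]
print([q["id"] for q in build_paper(bank, 10, seed=4217)])
```

Seeding per student keeps the selection verifiable after the fact while still making it very unlikely that two students receive the same paper.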

2 Literature Survey
2.1 SIETTE Model
The System of Intelligent Evaluation using Tests for Tele-education (SIETTE), an online examination platform, was proposed by Guzman and Cornejo (2005). SIETTE is a web-based tool for generating and delivering adaptive tests. It may be used to meet educational objectives by including adaptive student self-assessment test questions with suggestions and feedback. SIETTE has functionality for portable login and access [7]. On the other hand, the opportunity to retake tests, support for multiple teachers, random question creation, random question allocation, and random choice creation are not present.

2.2 CBTS Model
A computer-based testing system (CBTS) was developed in 2013. CBTS is a web-based system built to address issues such as the inability of automation to account for timing flexibility, candidates being logged out after the allocated amount of time, guaranteeing the integrity of the results, stand-alone deployment, the need for flexibility and robustness, support for examination processes, and the formation of examination result repositories [3].

2.3 EMS Model
Rashad et al. (2010) proposed the Exam Management System (EMS), an online examination system. EMS helps with exam administration, gathers responses, automatically scores submissions, and produces test-related reports. Additionally, it controls test administration and automatic grading for student examinations. EMS provides a secure login, mobility, and assistance features such as administration and grading for multiple teachers. Other capabilities, such as the ability to resume a test, the distribution of random questions, the selection of random questions, and the distribution of random choices, are not present [5].

2.4 OEES Model
The Online Examination and Evaluation System (OEES), which is based on the B/S (browser/server) structure, is logically separated by function into four independent divisions: the content-display layer, the software layer, the data-operating layer, and the database (Ling 2002). Communication between two neighboring layers is accomplished through a shared interface, and the content-display layer serves as the user interface. The online examinations of the Math, Physics, and Information Engineering College are mainstays of its online learning: the system is fast and efficient and lowers the consumption of material resources, and a full web-based testing system has been created around it. The article describes the design's fundamental concepts, illustrates the system's major goal, discusses the system's security, and describes how the test questions were made. Nor Shahida Mohd Jamail and Abu Bakar Md Sultan of the Faculty of Computer Science and Technology in Selangor, Malaysia, observe that an institution's evaluation procedure includes reviewing student work regularly [13]. As a result, the quality of the students that schools produce is determined by the quality of the test questions. Additionally, creating test questions is complex, time-consuming, and difficult for teachers. Modern technology lets the teacher upload the question bank to a database, and questions remained about how current technology could allow the instructor to randomly produce different sets of questions, without redundancy or overlap, from an ever-expanding test bank. Schramm investigated a web-based electronic learning system that could quickly display and assess arithmetic problems with unlimited patience; it has to be able to input and output mathematical formulae, create charts on the fly, and create randomized expressions and values [1]. Al-Bayati and Hussein presented and used generic software for a range of e-exam packages oriented toward the hearing-impaired (HI); the test materials in this package are consequently converted into finger-spelling figures and sign language. The goal of the generic software is to give the instructor a blank canvas on which to compose the desired collection of exam formats, such as multiple-choice, word matching, and fill-in-the-blank, for the required subject matter, such as mathematics, languages, or science.

A web-based test system is a useful tool for evaluating mass education. A ground-breaking online examination system based on a browser/server architecture and DCOM technology was created by Zhenming et al. to handle objective questions as well as functional questions such as programming, operating Microsoft Windows, and using Microsoft Word, Excel, and PowerPoint. It has been used to assess students remotely on basic computer science competencies in college courses and in national examinations for high-school graduates of Zhejiang Province in China. However, the technology is not sufficiently reliable, and it uses closed-source rather than open-source components. The system's intended audience consists of computer science students; general students are not the target audience [4], and no other languages are supported. Lei He presented an internet educational evaluation system that employs Bloom's taxonomy to assess both instructors' teaching practices and students' learning results in real time. The performance of the system, which incorporates experimental data from science and mathematics courses, is highly encouraging for nearby high schools. As a result, HTML is used for server–client communication.

3 Proposed System
This system will have the capacity to handle a wide range of services and efficiently process all student records. Exams can be made more structured and tidy with this method, which also saves time and paper. Once an objective examination is finished in this manner, students are given their exam results right away [2]. The program stores student information reliably; administrators can view and modify information on students, and the system also keeps track of subject and grade information. As can be seen in Fig. 1, the system adopts a four-layer architecture. These layers comprise the presentation layer, the server, the storage service, and the core modules (which consist of three modules each). Each layer is discussed below.

3.1 Presentation Layer
This layer represents the many Internet-capable devices that can be used to access the online testing process. Desktop and laptop computers, as well as portable devices such as tablets and smartphones, fall under this category.

Fig. 1 Online multiple-choice exam system

3.2 Core Module
This layer represents the wide range of Internet-capable devices that may be used to conduct online testing. This group may include portable devices such as smartphones and tablets as well as desktop and laptop PCs.

3.3 Server
This is the level from which the online exam system runs as a web application. The system uses the Apache web server, which is regarded as the most widely used server.

3.4 Storage Service
This layer deals with rapid information retrieval and archiving using a relational database management system. The MySQL database, which works effectively with the Apache web server, was employed in this work; the advantages of SQL are its portability and its interactive query language. The login name serves as the user ID, and the password is chosen by the user for security reasons. A session is created when a user opens the website and wants to perform any activity: the user clicks the login button and, after entering his or her user ID and password, proceeds to the Home page (a sketch of such a credential check is given below). Users must sign up with the system to obtain a valid username and password [7]. Using this model, the browser can be managed very securely: if any student attempts malpractice, he or she is removed from the exam. Thereby, talented students move forward with fair grading, which is very useful for teachers assessing bright students.
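As a rough illustration of the login check just described, the sketch below verifies a user ID/password pair against a stored salted hash. The paper does not specify its password-storage scheme, so this Python sketch with PBKDF2 is purely an assumed, minimal approach:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple:
    """Return (salt, digest); a fresh random salt is drawn when none is given."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    # constant-time comparison avoids leaking how many bytes matched
    return hmac.compare_digest(hash_password(password, salt)[1], stored)

salt, stored = hash_password("s3cret")   # computed once, at registration
print(verify("s3cret", salt, stored))    # True  -> proceed to the Home page
print(verify("wrong", salt, stored))     # False -> reject the login attempt
```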

4 Results
4.1 Admin Input
The project must undergo thorough testing before it can be merged into an internal, already-running system; both its accuracy and the information that it analyzes are crucial. Since the system relies on a network, it underwent extensive internal testing before being introduced gradually to the other components. This includes modules for creating tests using Test-IDs and secret codes as well as for notifying the administrator of the maximum time limit. Data security is achieved by the admin creating a secret code that protects the test website from unethical users and distinguishes them from the ethical users who actually need to take the exam [10] (a sketch of generating such codes follows). Admin test details are shown in Fig. 2.
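The Test-ID and secret code mentioned above must be unguessable for the scheme to keep unethical users out. A minimal sketch using Python's CSPRNG follows; the exact formats are assumptions, since the paper does not specify them:

```python
import secrets

test_id = f"TEST-{secrets.randbelow(10**6):06d}"  # short id the admin can share
secret_code = secrets.token_urlsafe(8)            # high-entropy access code
print(test_id, secret_code)
```

Codes produced this way come from a cryptographically secure source, unlike timestamps or sequential counters, which an attacker could guess.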

4.2 User Login
Registration on the user side requires giving a Roll Number, Name, Mail-ID, Phone Number, Gender, Department, and Year as the necessary credentials. At sign-in, the user supplies one of the credentials used during registration, gives the Test-Number, and enters the secret code set by the admin, whereby the user is logged into the test [11]. Figure 3 shows the user login.

Fig. 2 Admin test details

Fig. 3 User login

The graph in Fig. 4 plots user registrations against time: the time axis is scaled in steps of 5 units and the number of registrations in steps of 2 units. Likewise, the graph in Fig. 5 plots user authentication against time: attempts are scaled in steps of 5 units and the time taken to access the test in steps of 2.5 units.

Fig. 4 User registration accuracy time level

Fig. 5 User Authentication time level

The admin login requires the user ID and the password for logging into the test, as depicted in Fig. 6. A test is created using a Test-ID, which the admin must share with the students in a secure way. Results are displayed by supplying the Test-ID, whereupon the marks of the particular user login are shown. Finally, in Fig. 7, dataset mapping is evaluated against the dataset size and concluded with the accuracy level.

Fig. 6 Admin login

High accuracy can be observed as the dataset grows in size, with the method completing the examination process in less time [12].

Fig. 7 Dataset mapping accuracy level

Fig. 8 Dataset mapping time level

Dataset mapping is evaluated against both the dataset size and the accuracy level in graphical form: the dataset size is scaled in steps of 0.5 units and the accuracy level in steps of 10 units. The graph in Fig. 8 depicts the dataset size against the time level, with the dataset size scaled in steps of 0.5 units and the time level in steps of 2 units.

5 Conclusion
The examination system, a web program, was created with the primary goals of reducing paper consumption and moving all types of documents to digital formats. The computerized approach makes it easy and precise to get the required information, and a person with minimal computer experience can easily use the system. Additionally, the technology produced the concise findings that the management required. The online testing environment has been developed, and the PHP and SQL components created for it fully accomplish the system's objectives. The system worked quite well, and all the teachers and users involved are aware of its advantages; it resolves the problems of the manual approach.

References
1. Thomas (2008) E-assessments and e-exams for geomatics studies. Department of Geomatics, HafenCity University Hamburg, 22297 Hamburg, Germany
2. Al-Bayati MA, Hussein KQ (2008) Generic software of e-exam package for hearing impaired persons (mathematics as case study). In: 2nd conference on planning and development of education and scientific research in the Arab States
3. Zhenming Y, Liang Z, Guohua Z (2003) A novel web-based online examination system for computer science education. In: 33rd ASEE/IEEE frontiers in education conference
4. Lei H (2006) A novel web-based educational assessment system with Bloom's taxonomy. Current Development in Technology-Assisted Education
5. Weaver D et al (2005) Evaluation: WebCT and Tate L (2002) Using the interactive whiteboard to increase student retention, attention, participation, interest, and success in a required general education college course. Retrieved 30 January 2007
6. Downing D et al (2000) Dictionary of computer and Internet terms. Barron's Educational Series
7. Ainscough TL (1996) The Internet for the rest of us: marketing on the World Wide Web. J Consumer Market
8. Tallent-Runnels MK (2006) Teaching courses online: a review of the research. Rev Educat Res
9. Booch G: Object-oriented analysis and design with applications
10. Jacobson I: Object-oriented software engineering. Pearson Education
11. www.c-sharpcorner.com
12. www.slideshare.com
13. www.w3schools.com

A Draft Architecture for the Development of a Blockchain-Based Survey System
Turgut Yıldız and Ahmet Sayar

Abstract Respondents, amid concern that their responses may become known to others, often withhold honest answers; data security, anonymity, traceability, and transparency are therefore central requirements. Because of this, survey results can be wrong, and the people, companies, and other bodies that prepare the survey may make the wrong decisions. The problem of survey records being altered for various purposes not only creates inaccurate outcomes but can also result in serious problems. In this study, the anonymity feature of blockchain technology and its benefits and drawbacks for surveys are assessed. First, a draft structure was formed in which services were defined and data and process flows were planned. The draft structure is also recommended as a distributed system on the open-source Hyperledger Fabric network. Every confirmed transaction is recorded in all distributed ledgers; all relevant survey responses are thus traceable, auditable, and verifiable. Keywords Blockchain · Survey · Hyperledger fabric · Distributed system · Anonymous

1 Introduction
These days, people appreciate anonymity and the confidentiality of personal information; we attach great importance to secluding ourselves from others in daily life. Issues may arise when these values are violated by third parties. Such issues affect the person negatively, causing stress, anxiety, and danger to their life, and may drive them to withdraw from public life. For instance, large corporations prefer to keep a close watch on the people they employ and even focus on aspects such as internal feedback and employee satisfaction by conducting surveys about the company.

During the surveys, general and specific information is obtained from the respondent. Unfortunately, the desired survey performance cannot be achieved because of human psychology and the instinct of self-defense [1]. It would not be surprising to see unfavorable outcomes, such as a manager tampering with undesirable survey results, or sanctions imposed on employees who believed they filled out the survey anonymously. Surveys consist of questions prepared according to a certain plan in order to understand the opinions, ideas, thoughts, and experiences of certain people or communities on a subject. In internal or external assessments, surveyors can identify the person filling out the questionnaire from information such as the unit in which they work, their age, and their gender. These are among the most important factors for the surveyor in interpreting the final outcome. However, undesirable results of internal performance surveys may not be liked by management; this knowledge can affect the result, which is why management may be tempted to intervene. For corporations and organizations, extracting as much benefit as possible from the surveys is sometimes more important than the privacy of the surveyed people, and those aware of this reality can manipulate the surveys. The proposed draft blockchain-based survey system provides a transparent, reliable, and sustainable solution by hiding information such as the name, address, phone number, department, and even the login periods of the computers used during the survey in the blockchain, thanks to the use of cryptography in blockchain technology. In keeping with that, blockchain technology also solves the problem of changing the survey results or the submitted survey. By storing the questionnaires in distributed ledgers provided by Hyperledger Fabric, replication is automatic, and the path the data takes from its source to the last user can be monitored. Data, or more precisely the surveys, can be reliably protected until the last node on the network collapses. Distributed ledgers, because of their structure, resemble blockchains and do not allow any changes to the data. Furthermore, in the system, all organizations validate the survey: surveys that have been changed or distorted in any organization are noticed by the other structures in the system, and the distorted records are restored to their old, correct form. In this way, secure and consistent data storage and control are provided in every organization. The rest of the article is organized as follows: Sect. 2 briefly introduces blockchain and Hyperledger; Sect. 3 explains related studies; Sect. 4 gives the architecture and details of our system; and Sect. 5 discusses the results of the method.

2 Blockchain and Hyperledger The recommended system may basically be summarized as a study on how the survey system can be organized as a blockchain network. In the following paragraphs, the

open-source Hyperledger Fabric network and the Hyperledger infrastructure will be briefly introduced.

2.1 Blockchain Technology
In 1991, two cryptography researchers, Haber and Stornetta, came up with the idea of the blockchain. They worked on a system design that keeps data in successive blocks while not allowing data changes within a block. When the concept is realized, no one is able to interfere, and thus data security is ensured. With the Bitcoin paper [2], published in 2008, blockchain grew in popularity and developed rapidly. Bitcoin's decentralized and secure technology keeps data records by maintaining ledgers at each node, and blockchain is a kind of database as well. This new data-recording technology does not allow the processed data to be changed, hacked, or manipulated. The aim of blockchain technology is to record, read, and distribute data, not to edit it. Blockchain technology ensures singularity in the ledger, where the transactions of all participants in the network are kept; moreover, it allows data to be written to all nodes at the same time. Therefore, a decentralized system emerges, because the same information is kept in the ledgers of all nodes. One of the biggest advantages of a decentralized system is that it maintains itself even if nodes are removed. In blockchain technology, as in Fig. 1, each block keeps the hash code of the previous block, so the blocks form a chain over the data [3].
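The chaining property described above can be shown in a few lines. The following is a minimal, generic hash-chain sketch (not Hyperledger code): each block stores the hash of its predecessor, so altering any block breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # hash a canonical serialization of the block
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]
for i, data in enumerate(["record-a", "record-b"], start=1):
    chain.append({"index": i, "prev": block_hash(chain[-1]), "data": data})

# tampering with block 1 invalidates the link stored in block 2
chain[1]["data"] = "record-a-forged"
print(block_hash(chain[1]) == chain[2]["prev"])  # False -> tampering detected
```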

2.2 Hyperledger
Hyperledger is a non-profit, open-source blockchain project that was announced by the Linux Foundation in December 2015 and released in 2016. Hyperledger keeps evolving in the blockchain field thanks to communities from industry, academia, and start-ups. By hosting many frameworks, it produces faster blockchain software day by day; Hyperledger has already released six frameworks, and ten more are still under development [3, 4]. Hyperledger Fabric was the framework chosen for this study's approach. Using distributed ledger technology, Hyperledger Fabric brings modularity to the blockchain and ensures both communication and synchronization on the network by keeping a smart contract, a ledger, and peers in each distributed unit. That is how synchronization remains intact when nodes are added to or removed from the system.

Fig. 1 Blockchain network overview

3 Related Works
Since the development of Bitcoin, the first application of blockchain technology, by an anonymous person or team named Satoshi Nakamoto in 2008, many developments and attempts have been made in the field of blockchain in order to ease daily life. Dikilitaş and his team [5] have been studying the current research areas in the field of blockchain; their examination concludes that it is quite a new field. However, no blockchain-based survey system has been developed so far. Blockchain-based voting systems can be cited as examples of this technology in the literature. It has always been difficult to create a secure electronic voting system that offers the fairness and confidentiality of current voting schemes while providing the transparency and flexibility offered by electronic systems. Electronic voting systems have been the subject of many studies for decades, aiming to minimize the cost of an election while ensuring electoral integrity by meeting security, privacy, and compliance requirements [6]. Replacing the traditional pen-and-paper layout with a new electoral system has the potential to limit fraud and also make the voting process more traceable and verifiable [7]. Hjálmarsson and his colleagues [8] initiated research into electronic voting systems on the blockchain. After that study, blockchain structures became highly preferred in new applications due to their immutability, verifiability, and decentralization properties [9]. Voting systems and surveys, which are structurally quite similar, are used as examples below. Open-source and end-to-end verifiable online voting software has been created in the "Follow My Vote" application [10]. A blockchain network has been adopted for this application, and by regarding each vote as a transaction on this network, it is possible to follow the vote count. Thus, everyone can agree on the result because

votes can be counted and checked by each of them. Thanks to the blockchain audit, the election is fair and transparent. Another application is called "Voatz" [11]; it is a platform where voting and counting processes are run on a blockchain. Ballots are cast by authorized people whose biometrics are checked and who then pass an examination. Like our system, it uses Hyperledger Fabric as its blockchain base. It was first deployed at a US military base in Virginia, then in Denver and Utah. While there is not much operational difference between the Voatz application and our proposed system, data is stored according to the distributed-system structure. The third example, the "We Vote" application [12], uses a token system: each token represents a ballot, and the system prohibits voting more than once. Similarly, Ballotchain [13] provides an open voting service using the blockchain structure; the purpose of this system is to operate over a network where everyone can access and follow the transactions. Voting and survey systems generate data on different topics: while voting systems provide short-term benefits, survey systems can run for longer. While blockchain-based electronic voting systems and blockchain-based survey systems are basically similar in terms of usage, they differ from one another in terms of data diversity and statistical inference.

4 Proposed Approach
In this part of the article, we discuss how the suggested architecture is implemented to connect survey systems to the blockchain that is part of the Hyperledger Fabric network. Our experiment began with the establishment of a network for distributed systems using virtual machines hosted on the Google Cloud service provider [14]. This network is connected through a Docker swarm, and the architecture of the blockchain network is developed on top of the Hyperledger Fabric network [15]. There are further studies developed on the open-source Hyperledger Fabric network, comparable to the ones listed in the related-works section: in their investigation, Toka and his team [16] incorporated Internet of Things devices into the Hyperledger Fabric network. Because it is more applicable than other blockchain technologies available today and offers such precedents, the Hyperledger Fabric network became our top choice. As can be seen in Fig. 2, the network structure of Hyperledger Fabric is made up of orderers and organization units. The architectural strategy that we present uses a total of four virtual machines. These virtual machines share the same specification, outlined below.
• Ubuntu 18.04
• Hard disk capacity 30 GB
• 8 GB RAM/e2-standard-2 vCPU

Fig. 2 Blockchain architecture built with Hyperledger network

In the Hyperledger network, the orderer service ensures that incoming transactions are distributed to peers, so that incoming data is transmitted from peer to peer to each organization over a channel without confusion. The architecture enables the virtual machines to communicate with each other and work together over the Docker Swarm network [17]. A Hyperledger Fabric network consists of orderers and organizations; each organization contains a ledger and peers, and the peers host smart contracts. Smart contracts are pieces of code triggered by peer requests and data; when these codes run, access is provided to the ledger, and the smart contracts write the data resulting from the transactions to the distributed ledgers. Data that needs to be processed in an organization is first transmitted to a peer; the data arriving at the peer is added to the blockchain ledger by running the relevant transaction in the smart contract. If we want to record or read data by accessing any ledger, the client must be notified and triggered. Then, through the client and orderer, a connection to the peer of the relevant organization is made over the channel on which the Hyperledger Fabric network communicates, ensuring the operation of the smart contract. By accessing the ledger, the running smart contract makes sure that the right transactions are made (a toy simulation of this flow is sketched below). Figure 3 shows the survey system hosting the Hyperledger Fabric network. As the survey submitted to the user by the survey organizer is answered, data comes
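The transaction path just described can be mimicked with a toy simulation. This is not the Hyperledger Fabric API; it only models the idea that the orderer delivers each transaction to every organization's peer, whose identical contract logic keeps all ledger copies in agreement:

```python
class Peer:
    def __init__(self, org: str):
        self.org, self.ledger = org, []

    def invoke_contract(self, tx: dict) -> None:
        # stand-in "smart contract": validate, then append to this ledger copy
        if tx.get("survey_id"):
            self.ledger.append(tx)

class Orderer:
    def __init__(self, peers):
        self.peers = peers

    def submit(self, tx: dict) -> None:
        for peer in self.peers:  # same transaction, same order, every peer
            peer.invoke_contract(tx)

peers = [Peer("Org1"), Peer("Org2"), Peer("Org3")]
Orderer(peers).submit({"survey_id": "S1", "answers": [3, 1, 4]})
print(all(p.ledger == peers[0].ledger for p in peers))  # True
```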

Fig. 3 Connection of the survey system with the Hyperledger network

to the Hyperledger Fabric network and is written to the blockchain ledgers, where it is recorded as coming from anonymous people and cannot be changed. Then, the Hyperledger Fabric network can be accessed, and the data can be read from the blockchain network to make the necessary analyses. The blockchain network transparently displays the incoming completed surveys; however, there is no information about which answer came from whom, since the network hides this data (a sketch of such pseudonymous recording follows). In fact, the system uses structures and algorithms that are impossible to change thanks to distributed ledgers, that are read synchronously with each other, and that provide stability and minimize data loss thanks to distributed-system technology. The consensus algorithm we use in the Hyperledger Fabric network is the Raft consensus algorithm. This algorithm consists of a distributed architecture with a crash-fault-tolerance mechanism organized as leading and following nodes; for example, a Raft cluster of three nodes tolerates the crash of ⌊(3 − 1)/2⌋ = 1 node. In summary, even if we cannot access one of the three organizations in our system, the system continues to work without data loss. As a result of this study, the data entered by the user is sent to the Hyperledger Fabric network. By doing this, data is stored securely, and unauthorized users are prevented from accessing or altering it. Confidentiality is fully ensured by the blockchain technology at the core of the Hyperledger Fabric network [18]. In this way, authorized persons are allowed to add and read data on peers connected to each other via a channel through the distributed system of blockchain technology, while unauthorized persons are prevented from transacting on this network.
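The anonymity property can be sketched as follows. The pseudonym construction below (a salted hash of the respondent's identity) is our illustration of the idea, not the paper's exact mechanism; with the salt withheld, ledger entries cannot be linked back to individuals:

```python
import hashlib
import os

CAMPAIGN_SALT = os.urandom(16)  # in practice, held by no single party

def pseudonym(respondent_id: str) -> str:
    # irreversible without the salt, yet stable within one survey campaign
    return hashlib.sha256(CAMPAIGN_SALT + respondent_id.encode()).hexdigest()

ledger_entry = {
    "who": pseudonym("employee-4217"),
    "survey": "S1",
    "answers": [2, 5, 1],
}
print(ledger_entry["who"][:16], ledger_entry["answers"])
```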

Ensuring data integrity: the distributed system established with Hyperledger Fabric relies on distributed ledger technology. Since every ledger is effectively a backup, a change attempted on one copy of the blockchain is recognized as inconsistent with the other distributed ledgers and is not allowed (a majority-check sketch is given below). It is expected that users will find such a survey system more reliable in terms of its possibilities and capabilities, and thus that more trustworthy surveys will solve problems effectively. The more copies of the distributed ledger that store the data, the less likely the system is to crash or lose data.
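The self-healing behavior described above can be illustrated by a simple majority check across replicated ledgers; this is an idealized sketch, not Fabric's actual validation logic:

```python
from collections import Counter

def repair(ledgers):
    """Return the majority ledger snapshot as the agreed, correct state."""
    majority, _ = Counter(tuple(l) for l in ledgers).most_common(1)[0]
    return list(majority)

org1 = ["response-1", "response-2"]
org2 = ["response-1", "response-2"]
org3 = ["response-1", "FORGED"]        # a copy distorted in one organization
print(repair([org1, org2, org3]))      # ['response-1', 'response-2']
```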

5 Conclusions
With widely used techniques, surveys cannot fully fulfill their main function, owing to concerns such as loss of anonymity, being tracked, being dismissed, or results being distorted by management. Institutions and organizations using this technology will reach more reliable survey results, as a survey system established with the blockchain architecture cannot be manipulated, and data entry is provided in a secure and anonymous manner. At the same time, confidence in the surveys they conduct will increase, and prestige will be gained. With the results they obtain from surveys on various problems, the solutions will be more realistic, the analyses will yield better results, and positive developments will be observed in terms of time and cost. Thanks to the blockchain network established with Hyperledger Fabric, the survey system we propose provides solutions to important problems, as it brings confidentiality, data integrity, and further possibilities and capabilities to our draft architecture. With the proposed draft architecture, the aim is to provide additional features to traditional survey systems, such as making survey answers accessible, making data unalterable, and maintaining user data confidentiality.

References
1. Haber S, Stornetta WS (1991) How to time-stamp a digital document. J Cryptol
2. Nakamoto S (2008) Bitcoin: a peer-to-peer electronic cash system
3. Rottenstreich O (2021) Sketches for blockchains. In: International conference on communication systems & networks
4. Hyperledger. https://www.hyperledger.org/. Accessed 05 Dec 2022
5. Dikilitaş Y, Toka KO, Sayar A (2021) Current research areas in blockchain. Avrupa Bilim ve Teknoloji Dergisi, sy 26. https://doi.org/10.31590/ejosat.977320
6. Runyan N, Tobias J (2007) Top-to-bottom review. California Secretary of State
7. Weaver N (2016) Secure the vote today
8. Hjálmarsson FP, Hreiðarsson GK, Hamdaqa M, Hjálmtýsson G (2018) Blockchain-based e-voting system. In: 2018 IEEE 11th international conference on cloud computing (CLOUD), pp 983–986. https://doi.org/10.1109/CLOUD.2018.00151

9. Jamie (2018) Liquid democracy uses blockchain to fix politics, and now you can vote for it
10. Follow My Vote: Blockchain voting: the end-to-end process. https://followmyvote.com/. Accessed 30 Nov 2022
11. Voatz: secure and convenient voting anywhere. https://voatz.com/. Accessed 30 Nov 2022
12. We Vote: Online electronic voting, internet voting, assemblies, elections. https://www.wevote.eu/en/. Accessed 02 Dec 2022
13. Ballotchain. https://www.reply.com/. Accessed 09 Dec 2022
14. Google Cloud Services: Get started with Google Cloud. https://cloud.google.com/docs/get-started/. Accessed 02 Dec 2022
15. Hyperledger: Hyperledger – open source blockchain technologies. https://www.hyperledger.org/. Accessed 02 Dec 2022
16. Toka KO, Dikilitaş Y, Oktay T, Sayar A (2021) Securing IoT with blockchain. Int Arch Photogramm Remote Sens Spatial Inf Sci 46:4/W5, 529–532
17. Ioini NE, Pahl C (2018) A review of distributed ledger technologies. In: On the move to meaningful internet systems, OTM 2018 conferences, vol 11230, Lecture Notes in Computer Science. Springer, Cham, pp 277–288. https://doi.org/10.1007/978-3-030-02671-4_16
18. The Linux Foundation's Hyperledger Fabric enables confidentiality in blockchain for business. Blockchain Pulse: IBM Blockchain Blog, 17 Apr 2018. https://www.ibm.com/blogs/blockchain/2018/04/hyperledgerfabric-enables-confidentiality-in-blockchain-for-business/

Additive Congruential Kupyna Koorde Cryptographic Hash for Secured Data Storage in Cloud
P. V. Shakira and Laxmi Raja

Abstract Cloud computing is the process of storing data without direct active management by the user. Cloud computing security is an essential concern for protecting the cloud environment, data, information, and applications against unauthorized access, and storage security is concerned with securing data storage systems. Many works have been developed for protecting data communication; however, the data integrity rate and the data confidentiality rate were not improved. In order to address these problems, the Additive Congruential Ephemeral Cryptographic Kupyna Koorde Hash Storage (ACECKKHS) method is introduced for efficient secured data storage with a better data confidentiality and integrity rate. The ACECKKHS method comprises key generation, encryption, and decryption to enhance privacy performance in a cloud environment. In the ACECKKHS method, the user registers his/her information with the cloud server (CS) to achieve protected data storage. After registration, the CS provides an ephemeral public key and an ephemeral private key to each registered user using Ephemeral Castagnos–Laguillaumie Cryptography by means of an additive congruential generator. After key generation, the cloud user encrypts the information with the ephemeral public key and transmits it to the server for secure data storage. A Davies–Meyer Kupyna Koorde distributed hash is employed by the CS for storing the encrypted cloud user data by means of its hash value, to minimize the storage complexity. After that, the stored information is retrieved by the cloud user, who performs the decryption with the private key. In this way, the data is stored in a secure manner using the ACECKKHS method. Experimental evaluation is performed with different parameters, namely confidentiality rate, data integrity rate, computational cost, and storage cost, using a number of cloud users' data. ACECKKHS increases data confidentiality and integrity and minimizes the computational cost as well as the storage cost compared with conventional approaches. Keywords Cloud computing · Secured data storage · Additive congruential Ephemeral Castagnos–Laguillaumie Cryptography · Davies–Meyer Kupyna Koorde distributed hash

1 Introduction
The amount of digital data grows rapidly in the big data era, and as data volumes increase, users are exposed to serious risks. With the rapid growth of the cloud, many users want to accumulate data on a server for simple access. In cloud data storage, the most fundamental issue is that unauthorized users may access or alter confidential data, so the service provider needs to offer security for the data stored on the server. Several cryptographic techniques have been developed for secured data storage in a cloud environment. C-SCS was developed in [1] using the Goldreich–Goldwasser–Halevi (GGH) cryptosystem; the method minimizes the computation and storage cost, but better data confidentiality and integrity levels were not achieved. SPADE was developed in [2] for minimizing communication and computation costs, but it failed to provide a stronger privacy guarantee for cloud storage. The contributions of ACECKKHS are as follows:
• To enhance the privacy of data storage in the cloud, ACECKKHS is developed with key generation, encryption, and decryption.
• To increase the data confidentiality rate, ACECKKHS uses the additive congruential Ephemeral Castagnos–Laguillaumie cryptographic technique to protect sensitive data from unauthorized access.
• To improve the data integrity rate, the Davies–Meyer Kupyna Koorde distributed hash function is employed in ACECKKHS with lesser storage space.
• Finally, experimentation is conducted to compare ACECKKHS with existing techniques on different metrics.
The article is organized as follows: Sect. 2 reviews the related work; Sect. 3 introduces the proposed ACECKKHS with a brief description; Sect. 4 presents the experimental settings and dataset description; Sect. 5 discusses the results; and Sect. 6 provides the conclusion.

2 Related Works
The lightweight privacy-preserving Delegatable Proofs of Storage (DPOS) method was introduced in [3] to reduce the computational cost, but it failed to implement a cryptographic technique for enhancing data confidentiality. A Ciphertext-Policy Attribute-Based Encryption method was developed in [4] with white-box traceability and auditing; however, CryptCloud+ was not efficient at providing full public traceability. The PP-CSA technique was developed in [5] for data sharing. Identity-based RDIC was developed in [6] for cloud data storage to decrease the system complexity; however, higher data integrity for secure storage was not achieved. A Remote Data

Integrity Check with Secured Key Generation was presented in [7] for securing cloud storage. The novel ReliableBox method was developed in [8] for secure and verifiable cloud storage with better data integrity, but the computational cost of cloud storage was not minimized. The A-BAC strategy employing the Ethereum blockchain was implemented in [9]; however, the storage cost was not minimized. A Dynamic Searchable Symmetric Encryption (DSSE) framework was developed in [10] to achieve a high level of privacy for secure storage; however, an efficient hashing technique was not implemented to minimize the storage cost and improve the data integrity. A shuffle standard one-time padding encryption technique was introduced in [11] with highly secure data storage. A Modified Ramp Secret Sharing (MRSS) method was developed in [12] with lesser storage overhead. A secure cloud storage access mechanism was developed in [13] for protecting data. A personalized searchable encryption scheme (PSED) was introduced in [14] to reduce communication overhead. A distributed data protection system was developed in [15] for secured cloud data storage. A hybrid blockchain architecture was introduced in [16] to manage data integrity. A fuzzy keyword searchable encryption scheme was developed in [17] to tolerate clerical errors with higher accuracy. Dual encryption and data fragmentation methods were developed in [18] for secure data storage. A cloud-backed storage system was developed in [19] to store and share huge data. A QoS-oriented replica approach called MDupl was introduced in [20] for minimizing access time; however, a higher confidentiality rate was not achieved.

3 Proposed Methodology
Cloud computing is used for accumulating information on a server. With the rapid growth of the cloud, cloud storage services help users reduce their local storage burden and access their data at any time and anywhere. In cloud data storage, the major issue is that user data is secret; therefore, it is essential to develop cloud storage that protects data against unauthorized users. Secure cloud storage is an emerging cloud service that protects the confidentiality of user data while providing flexible data access for cloud users. Several methods have been developed for cloud storage, but they still face many issues in achieving high data integrity and low storage complexity. Therefore, the novel ACECKKHS method is introduced for efficient secured data storage with a higher data confidentiality and integrity rate in the cloud environment. Figure 1 exhibits the ACECKKHS technique for securing the cloud. The ACECKKHS technique includes two kinds of entities, namely the cloud users U1, U2, U3, ..., Un, who have the data D1, D2, ..., Dm to be stored on the cloud server (CS). The ACECKKHS technique includes three major processes, namely registration, data encryption and storage, and decryption. First, the cloud users log in to the server and register their details. After a user successfully registers his/her details, the cloud server generates a pair of keys, a private key and a public key.

Fig. 1 Architecture diagram of the ACECKKHS technique

In cryptography, a key is a piece of information, usually numbers or letters, that is used to encrypt or decrypt data. In the ACECKKHS technique, asymmetric cryptography called additive congruential Ephemeral Castagnos–Laguillaumie cryptography is employed to attain data storage security. Here, the additive congruential generator is used for generating the ephemeral keys for each user. Once the key generation process is completed, the user stores the information on the server. Before data storage, the user first encrypts the original data with the receiver's public key. The cloud server generates a hash for every item of information with the Davies–Meyer Kupyna Koorde distributed hash function to minimize the memory consumption of storing the user data. Whenever a cloud user accesses data from the cloud server, they first perform the decryption and obtain the original data, with higher data confidentiality and integrity in cloud data storage.

4 Registration and Key Generation
Key generation is a cryptographic process, and the generated keys are employed to carry out encryption and decryption. Registration is a fundamental step before users can use a cloud storage service: cloud service providers usually require the creation of a user account before the cloud services can be accessed. In order to create the account, the user needs to enter their personal information. After the user details are submitted, the server returns a registration confirmation message to the user.

The cloud server uses the additive congruential Ephemeral Castagnos–Laguillaumie cryptographic technique to generate the pair of ephemeral keys for each cloud user. The Ephemeral Castagnos–Laguillaumie cryptographic technique is homomorphic encryption, which allows users to perform computations on encrypted data without decrypting it; the technique is also employed for secure computation. In homomorphic cryptography, an ephemeral key is generated for each execution of the key-establishment process and is unique to each session. Once the session is finished, the generated keys are disabled, and the algorithm creates a new ephemeral key for the next session. This helps to avoid unauthorized access to the cloud server and improves the confidentiality level. Figure 2 demonstrates the registration phase for storing user information on the server to access various services. After the registration, the cloud server returns a successfully-registered message to the cloud user. Upon successful registration, the server creates private and public keys for every registered user. Let D denote a group of quadratic residues modulo with order n, produced using a generator k. The additive congruential method is applied to generate the random number used as the private key for a particular session:
R = (x_i + f) mod M (1)
where R indicates the generated random number, x_i indicates an initial (start) value (i < x < M), M indicates the modulus (M > 0), and f denotes an increment (0 < f < M). Therefore, the additive congruential generator is applied for generating pseudorandom numbers in a specific range.

Fig. 2 Cloud user registration phase

G = k^R (2)
K_Epb = (D, n, k, G) (3)
In the above equations, D is the group of quadratic residues modulo with order n, k indicates the generator, and G is derived from the output R of the additive congruential generator. Similarly, the ephemeral private key K_Epr is generated:
K_Epr = R (4)

After the key generation, the cloud server distributes the keys to registered users for accessing services.
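Equation (1) and the derived key pair can be written out as a short sketch. The group parameters below are toy values, and the exponentiation G = k^R follows our reading of Eq. (2); note that a purely additive congruential generator is predictable, so a real deployment would draw R from a cryptographically secure source (e.g., Python's secrets module):

```python
def additive_congruential(x0: int, f: int, M: int):
    """Yield the sequence R = (x_i + f) mod M of Eq. (1)."""
    x = x0
    while True:
        x = (x + f) % M
        yield x

gen = additive_congruential(x0=7, f=13, M=2**31 - 1)
R = next(gen)                 # ephemeral private key K_Epr = R, Eq. (4)
k, n = 5, 2**31 - 1           # toy generator and modulus for the group D
G = pow(k, R, n)              # public component G = k^R, Eq. (2)
print("private:", R, "public K_Epb = (D, n, k, G):", (n, k, G))
```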

5 Data Encryption
After ephemeral key generation, the cloud user performs encryption, the process of converting the original data into cipher text (encrypted text) to prevent unauthorized access. It helps to protect sensitive information and enhances the security of communication between the user and the server. The cloud user performs encryption with the help of the receiver's public key.

Fig. 3 Block diagram of the encryption

Figure 3 represents the data encryption process that converts the input data into cipher text. Consider the items of user data CD_1, CD_2, CD_3, ..., CD_n that need to be stored on the server. Before data storage, the cloud user first performs data encryption with the receiver's public key, as given below:
C_i = CD · PK_r^{r_i} (5)
where C_i denotes the cipher text, CD indicates the original user data, PK_r is the public key of the receiver, and r_i indicates a random number selected from R. As a result, the encrypted data is obtained, and the user sends the cipher text to the server, which stores the data with the help of the Koorde hash function.

6 Davies–Meyer Kupyna Koorde Distributed Hash-Based Data Storage
After data encryption, the cloud user transfers the encrypted data to the server for storage. The server uses the Davies–Meyer Kupyna Koorde distributed hash system for data storage to minimize the space complexity, avoid unauthorized data access, and preserve data integrity. Koorde is a distributed hash table system that stores the encrypted data on a server. A De Bruijn graph is a mathematical construction used to form relations between vertices and edges; the vertices are nodes, connected by edges called links. The main advantage of a De Bruijn graph is that nodes can be inserted into or removed from the graph: the server inserts new nodes into the graph for storing data and removes nodes to eliminate data. The De Bruijn graph is used to retrieve data quickly from the server based on a query provided by the cloud user (a routing sketch is given below). In contrast to existing distributed hash table systems, the proposed system uses the Davies–Meyer Kupyna hash function for accurate hash generation, minimizing the memory consumption of the cloud server. This allows the distributed hash table system to handle the continual arrival of large amounts of data at the server. Figure 4 shows a De Bruijn-directed graph consisting of vertices and edges; each vertex has exactly m incoming and m outgoing edges. Each node has a unique identifier (Id) and maintains the hash table that stores the data; the key is the unique identifier of the node for its association with the data value, and the hash table accumulates key-value pairs, assigning keys to the hash values generated for particular data. Figure 5 illustrates the Davies–Meyer Kupyna hash function Koorde distributed hash table system: the distributed hash table system takes as input the encrypted data, i.e., the cipher text C_i, and the Davies–Meyer Kupyna hash function is applied to generate the hash value.
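The quick retrieval noted above comes from Koorde's De Bruijn routing: one hop shifts a single digit of the target key into the node identifier, so each node keeps only O(1) neighbors and a lookup takes O(log N) hops. A toy base-2 walk follows (all parameters illustrative):

```python
def de_bruijn_hop(node: int, digit: int, b: int) -> int:
    # append one bit of the key and drop the oldest bit (identifiers are b bits)
    return ((node << 1) | digit) % (1 << b)

def route(start: int, key: int, b: int) -> list:
    path, node = [start], start
    for i in reversed(range(b)):            # shift the key in, high bit first
        node = de_bruijn_hop(node, (key >> i) & 1, b)
        path.append(node)
    return path                             # last hop is the node owning `key`

print(route(start=0b0110, key=0b1011, b=4))  # [6, 13, 10, 5, 11]
```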

Fig. 4 De Bruijn-directed graph

Fig. 5 Davies–Meyer Kupyna hash function Koorde distributed hash table system

Next, each input message block is given to the Davies–Meyer compression function. It splits the input encrypted data into a number of blocks b_i and takes the previous hash h_{i−1}. Figure 6 depicts the Davies–Meyer single-block-length compression, which takes the input data block b_i and the previous hash value p_{i−1}, initially preset to 0. In Fig. 6, Q represents a block cipher: the data block b_i keys the block cipher, and the result is XORed with the previous hash value. Hash functions are used for data integrity, often in combination with digital signatures. The final hash value is generated as given below:
P = Q_{b_i}(p_{i−1}) ⊕ p_{i−1} (6)

From (6), P denotes the final hash value. Similarly, the outputs of the other blocks are generated, and the last hash of the Kupyna cryptographic hash function is produced as the output of the final compression function. The final output hash is stored on the server. This ensures that the information cannot be altered by an unauthorized user, resulting in an increase in the data integrity rate and the data security level.
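Equation (6) is the classic Davies–Meyer construction, and it can be sketched directly. Here AES-256 merely stands in for the block cipher Q (the paper builds on Kupyna's cipher), the 16-byte state is preset to zeros as in the text, and the naive zero-padding of key blocks is a simplification; this requires the third-party cryptography package:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def davies_meyer(message: bytes) -> bytes:
    state = bytes(16)                               # p_0 preset to '0'
    for i in range(0, len(message), 32):
        key = message[i:i + 32].ljust(32, b"\x00")  # block b_i keys the cipher
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        out = enc.update(state) + enc.finalize()    # Q_{b_i}(p_{i-1})
        state = bytes(a ^ c for a, c in zip(out, state))  # XOR feed-forward
    return state

print(davies_meyer(b"encrypted cloud user data").hex())
```

The XOR feed-forward is what makes the compression function one-way even though the underlying block cipher is invertible.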


Fig. 6 Davies–Meyer single-block-length compression (block cipher Q with XOR feed-forward producing the output hash)

7 Decryption

The last procedure of the proposed ACECKKHS technique is decryption: the process of converting the encrypted data back into its original form, essentially the reverse of encryption. When a cloud user wants to access information from the server, they decrypt the data with their private key, so that only an authorized user can decrypt it. The decryption process is performed using the Additive Congruential Ephemeral Castagnos–Laguillaumie Cryptographic technique. Figure 7 demonstrates the decryption by which the cloud user obtains the original data. Decryption is achieved with the sender's private key:

$$CD = C_i^{R} \quad (7)$$

From (7), the original cloud user data 'CD' is obtained, where the cipher text is denoted as 'C_i' and 'R' indicates the private key of the user. Therefore, the original data is obtained with higher security, which allows the authorized user to access the information and improves data confidentiality. The algorithmic process of the proposed Additive Congruential Ephemeral Cryptographic Kupyna Koorde Hash Storage Method is given below in Algorithm 1.


// Algorithm 1: Additive Congruential Ephemeral Cryptographic Kupyna Koorde Hash Storage
Input: Number of cloud users U1, U2, U3, ..., Un; data CD1, CD2, CD3, ..., CDn; cloud server CS
Output: Secure data storage
Begin
// Registration and key pair generation
Step 1:  For each cloud user 'Ui'
Step 2:    Submit their details to the cloud server
Step 3:    CS sends a successfully-registered message to the cloud user
Step 4:    For each registered cloud user 'RUi'
Step 5:      CS generates the ephemeral public key (KEpb) and ephemeral private key (KEpr)
Step 6:    End for
Step 7:  End for
// Encryption
Step 8:  For each cloud user data 'CDi'
Step 9:    Encrypt the data using the receiver's public key
Step 10:   Obtain the cipher text 'Ci'
Step 11:   The cloud user sends the encrypted data to the server
Step 12: End for
// Data storage
Step 13: For each encrypted data item
Step 14:   Construct the De Bruijn graph based on vertices and edges
Step 15:   Apply the Davies–Meyer Kupyna hash function
Step 16:   Divide the data into a number of blocks
Step 17:   For each block 'bi'
Step 18:     Generate the hash value 'P'
Step 19:   End for
Step 20:   Store the hash value on the cloud server
Step 21: End for
// Decryption
Step 22: The cloud user receives the cipher text
Step 23: Decryption is performed with the sender's private key (R)
Step 24: Obtain the original data 'CD'
End

Algorithm 1 describes secure data storage on the cloud server using the Additive Congruential Ephemeral Cryptographic Kupyna Koorde Hash method. Initially, the user registration and key generation processes are performed: for each registered user, the server produces a pair of keys for encryption and decryption. The cloud user then encrypts the data using the receiver's public key and transmits it to the cloud server. The cloud server receives the data and stores it with the help of the Davies–Meyer Kupyna Koorde hash function to minimize the storage complexity. When the user accesses their data from the cloud, the server allows them to decrypt it; decryption succeeds only when the user's private key is correctly matched. This increases the protection of data storage and access within the cloud environment.


Fig. 7 Block diagram of the decryption (the cloud user applies the sender's private key to the cipher text received from the cloud server to obtain the original data)

8 Experimental Evaluation

Simulation of the ACECKKHS technique as well as C-SCS [1] and SPADE [2] is carried out in the Java language with the CloudSim network simulator. In the simulation, the Amazon Simple Storage Service (S3) dataset is employed for secure data storage. S3 provides scalable storage infrastructure that stores any type of object and supports Internet applications, backups, data archives, data lakes, and so on; users can accumulate and secure large amounts of data on the server for future use. The performance of the ACECKKHS algorithm is analyzed and the results are evaluated in terms of data confidentiality, integrity, computation cost, and storage cost.

9 Performance Results and Discussion

The simulation results of the proposed ACECKKHS and the existing methods [1] and [2] are discussed using four different parameters.

10 Impact of Data Confidentiality Rate

The data confidentiality rate is the ability to protect the data from unauthorized access on the cloud server. It is computed as follows:


$$Rate_{DC} = \frac{\text{Number of data protected}}{m} \times 100 \quad (8)$$

where the data confidentiality rate is represented as 'Rate_DC' and m denotes the number of data items; it is measured as a percentage (%).

Table 1 Data confidentiality rate

Number of cloud user data   Data confidentiality rate (%)
                            ACECKKHS   C-SCS   SPADE
100                         94         90      87
200                         93.5       89      86.5
300                         94.66      88.33   86.33
400                         94.25      90      87.75
500                         95         91      88.6
600                         94.5       90.16   88.66
700                         95.14      88.57   86.85
800                         94.37      90.12   88
900                         93.88      90      87.66
1000                        93.2       89      87.5

Table 1 illustrates the data confidentiality rate against the number of cloud user data items. The improvement comes from applying Additive Congruential Ephemeral Castagnos–Laguillaumie cryptography: the proposed technique generates a different ephemeral key pair for each registered user in each communication, which helps avoid unauthorized access during communication between the user and the server. In addition, an authorized user accesses information from the server through the decryption process. For each method, ten different data confidentiality rate results are observed, and the performance of the proposed ACECKKHS technique is compared with the results of the existing methods. Averaging the ten comparison results indicates that the data confidentiality rate of the proposed ACECKKHS is increased by 5% when compared to [1] and 8% when compared to [2].

11 Impact of Data Integrity Rate

The data integrity rate refers to the number of data items stored on the server that are not altered or modified by unauthorized users. It comprises preserving the consistency, accuracy, and trustworthiness of information throughout the complete process:

$$Rate_{DI} = \frac{\text{Number of data not altered}}{m} \times 100 \quad (9)$$

where the data integrity rate is denoted as 'Rate_DI' and 'm' denotes the number of data items; it is measured as a percentage (%).

Fig. 8 Graphical results of data integrity rate (data integrity rate (%) versus number of cloud user data)

Figure 8 illustrates the data integrity rate results for the three methods, namely the proposed ACECKKHS and the existing techniques C-SCS [1] and SPADE [2]. As shown in the graph, the data integrity rate of the ACECKKHS technique is increased compared to the existing methods. The improvement is due to the application of the Davies–Meyer Kupyna Koorde distributed hash function: the cloud server generates the hash for the encrypted cloud user data by first dividing the input data into a number of message blocks and then applying the Davies–Meyer compression function to each block. The output hash value of one block is never the same as that of another block. Finally, the output of the last compression function is taken as the output hash and stored on the cloud server. This helps prevent unauthorized users from accessing the data, resulting in an increased data integrity rate. The data integrity rate of the ACECKKHS technique is enhanced by 5% and 8% compared to [1, 2], respectively. A short worked illustration of the rate formulas in Eqs. (8) and (9) is given below.
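As a worked illustration of Eqs. (8) and (9), the small sketch below computes both rates from raw counts; the counts used are made-up illustrative values, not results from the paper.

# Confidentiality and integrity rates per Eqs. (8) and (9):
# rate = (number of items protected / not altered) / m * 100
def rate_percent(favourable: int, m: int) -> float:
    return favourable / m * 100

if __name__ == "__main__":
    # Hypothetical example: 94 of 100 items protected, 96 of 100 unaltered.
    print(rate_percent(94, 100))   # data confidentiality rate -> 94.0
    print(rate_percent(96, 100))   # data integrity rate -> 96.0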

11.1 Impact of Computational Cost

Computational cost is defined as the amount of time consumed to achieve protected data storage on the server. It is calculated in milliseconds (ms) as given below:

$$CC = \sum_{j=1}^{m} D_j \times T[SDS] \quad (10)$$

where the computational cost is indicated as 'CC', 'm' denotes the number of data items 'D_j', and T[SDS] indicates the time for secure data storage.

Table 2 Comparison of computational cost

Number of cloud user data   Computational cost (ms)
                            ACECKKHS   C-SCS   SPADE
100                         12         14      15
200                         14         16      18
300                         16.5       18      21
400                         18.4       20      24
500                         20         22.5    25
600                         23.4       25.2    27.6
700                         25.2       26.6    30.1
800                         26.4       28      31.2
900                         27.9       29.7    32.4
1000                        29         31      34

Table 2 provides the computational cost results, in terms of time consumption, for distinct amounts of data. The cloud server performs data storage through three processes, namely user registration with key generation, data encryption, and hash generation. For each registered user, a pair of ephemeral keys is first generated using an additive congruential generator with minimal time. Then the data encryption is performed using Castagnos–Laguillaumie cryptography. Finally, the Davies–Meyer compression function is employed in Kupyna hash generation for every piece of encrypted information accumulated on the server, and the server uses the distributed hash table system to store multiple users' data with minimal time. The overall computational cost is minimized using the ACECKKHS technique by 8% and 18% when compared to [1, 2], respectively.

11.2 Impact of Storage Cost

Storage cost is defined as the amount of memory space taken by the server to store cloud user data. It is measured in megabytes (MB) and mathematically formulated as given below:

$$SC = \sum_{j=1}^{m} D_j \times MC(\text{data storage}) \quad (11)$$

where the storage cost is indicated as 'SC', 'm' denotes the number of data items 'D_j', and MC denotes the memory space consumed by the server for storing the data.

Fig. 9 Graphical results of storage cost (storage cost (MB) versus number of cloud user data)

Figure 9 depicts the comparative analysis of storage cost for the three methods: the proposed ACECKKHS and the existing techniques C-SCS [1] and SPADE [2]. The reduction is owing to the cloud server using the Koorde distributed system, which stores the encrypted data on the server in the form of hash values using the De Bruijn graph. The server inserts new nodes into the graph to store data and removes nodes to eliminate user data. The hash value of the data takes minimal storage space on the cloud server compared to the original data. The overall storage cost is minimized using ACECKKHS by 11% and 18% compared to [1] and [2], respectively.

12 Conclusion

Protected cloud storage is a promising service for securing the confidentiality of data while giving users access to it through the Internet. In this paper, an efficient technique, ACECKKHS, is developed for secured cloud storage with higher data integrity and confidentiality. In ACECKKHS, the cloud user registers their information to enable secured data storage. Next, the server produces a key pair for every registered user by means of Ephemeral Castagnos–Laguillaumie cryptography. The data encryption process is performed using the receiver's public key and the result is sent to the server, which stores the user's encrypted information using the Davies–Meyer Kupyna Koorde distributed hash table. Finally, the stored information is retrieved by the authorized user through the data decryption process, which helps enhance data confidentiality. The proposed ACECKKHS is evaluated on the Amazon Simple Storage Service dataset. The observed quantitative outcomes confirm that the ACECKKHS technique gives enhanced performance, with a higher data confidentiality rate and data integrity rate and lower computational and storage costs, when compared to existing works.


References

1. Yang Y, Chen Y, Chen F (2021) A compressive integrity auditing protocol for secure cloud storage. IEEE/ACM Trans Netw 29(3):1197–1209
2. Zhang Y, Xu C, Cheng N, Shen X (2021) Secure password-protected encryption key for deduplicated cloud storage systems. IEEE Trans Dependable Secure Comput:1–18
3. Yang A, Xu J, Weng J, Zhou J, Wong DS (2021) Lightweight and privacy-preserving delegatable proofs of storage with data dynamics in cloud storage. IEEE Trans Cloud Comput 9(1):212–225
4. Jianting N, Zhenfu C, Xiaolei D, Kaitai L, Lifei W, Kim-Kwang Raymond C (2021) CryptCloud+: secure and expressive data access control for cloud storage. IEEE Trans Serv Comput 14(1):111–124
5. Xu Y, Ding L, Cui J, Zhong H, Yu J (2021) PP-CSA: a privacy-preserving cloud storage auditing scheme for data sharing. IEEE Syst J 15(3):3730–3739. https://doi.org/10.1109/JSYST.2020.3018692
6. Li J, Yan H, Zhang Y (2021) Identity-based privacy preserving remote data integrity checking for cloud storage. IEEE Syst J 15(1):577–585. https://doi.org/10.1109/JSYST.2020.2978146
7. Rehman A, Liu J, Yasin MQ, Li K (2021) Securing cloud storage by remote data integrity check with secured key generation. Chin J Electron 30(3):489–499. https://doi.org/10.1049/cje.2021.04.002
8. Jiang T, Meng W, Yuan LW, Ge J, Ma J (2021) ReliableBox: secure and verifiable cloud storage with location-aware backup. IEEE Trans Parallel Distrib Syst 32(12):2996–3010. https://doi.org/10.1109/TPDS.2021.3080594
9. Ullah Z, Raza B, Shah H, Khan S, Waheed A (2022) Towards blockchain-based secure storage and trusted data sharing scheme for IoT environment. IEEE Access 10:36978–36994. https://doi.org/10.1109/ACCESS.2022.3164081
10. Hoang T, Yavuz AA, Guajardo J (2021) A secure searchable encryption framework for privacy-critical cloud storage services. IEEE Trans Serv Comput 14(6):1675–1689. https://doi.org/10.1109/TSC.2019.2897096
11. Ganga Devi K, Renuga Devi R (2020) S2OPE security: shuffle standard one-time padding encryption for improving secured data storage in decentralized cloud environment. Mater Today Proc:1–9. https://doi.org/10.1016/j.matpr.2021.01.254
12. Lakshmi VS, Deepthi S, Deepthi PP (2021) Collusion resistant secret sharing scheme for secure data storage and processing over cloud. J Inf Secur Appl 60:1–16. https://doi.org/10.1016/j.jisa.2021.102869
13. Pavani V, Krishna PS, Gopi AP, Narayana VL (2020) Secure data storage and accessing in cloud computing using enhanced group based cryptography mechanism. Mater Today Proc:1–5. https://doi.org/10.1016/j.matpr.2020.10.262
14. Zhang Q, Wang G, Tang W, Alinani K, Liu Q, Li X (2021) Efficient personalized search over encrypted data for mobile edge-assisted cloud storage. Comput Commun 176:81–90. https://doi.org/10.1016/j.comcom.2021.05.009
15. Rafique A, Van Landuyt D, Heydari Beni E, Lagaisse B, Joosen W (2021) CryptDICE: distributed data protection system for secure cloud data storage and computation. Inf Syst 96:1–23. https://doi.org/10.1016/j.is.2020.101671
16. Hasan M, Ogan K, Starly B (2021) Hybrid blockchain architecture for cloud manufacturing-as-a-service (CMaaS) platforms with improved data storage and transaction efficiency. Procedia Manuf 53:594–605. https://doi.org/10.1016/j.promfg.2021.06.060
17. Li M, Wang G, Liu S, Yu J (2021) Multi-keyword fuzzy search over encrypted cloud storage data. Procedia Comput Sci 187:365–370. https://doi.org/10.1016/j.procs.2021.04.075
18. Seth B, Dalal S, Jaglan V, Le DN, Mohan S, Srivastava G (2022) Integrating encryption techniques for secure data storage in the cloud. Trans Emerg Telecommun Technol 33(4):1–24. https://doi.org/10.1002/ett.4108


19. Mendes R, Oliveira T, Cogo V, Neves N, Bessani A (2021) CHARON: a secure cloud-of-clouds system for storing and sharing big data. IEEE Trans Cloud Comput 9(4):1349–1361. https://doi.org/10.1109/TCC.2019.2916856
20. Hongtao Y, Liu S, Fan Z (2021) MDupl: a replica strategy of cloud storage system. Procedia Comput Sci 188:4–17. https://doi.org/10.1016/j.procs.2021.05.047

AMQP Protocol-Based Multilevel Security for M-commerce Transactions Ramana Solleti, N. Bhaskar, and M. V. Ramana Murthy

Abstract Security is the main concern for E-commerce applications. Nowadays, most E-commerce transactions are carried out on palmtops, laptops, and mobile phones through applications, which is why the name M-commerce is used. The term "E-commerce" refers to purchasing goods and services online; if those purchases are made with mobile phones, the term M-commerce is used. Additionally, M-commerce is described as "the ability to purchase products from any location via a wireless Internet-connected device." Web-enabled wireless phones are the most popular means of mobile communication, and the phrase "e-commerce in a mobile environment" might also be used to describe it. M-commerce's payments module is a critical issue that requires more study before practical solutions can be introduced. A number of security issues are discussed in this article. Cryptography can be accomplished using any number of conventional and public key/private key algorithms; each algorithm has its own advantages and disadvantages based on several factors, such as key size and data size. After testing all parameters, the suggested model will be adjusted for certain contexts, which is the next step in the development of the generalized model (Three Level Gateway Protocol for Secure M-Commerce Transactions). Keywords Data security · M-commerce · Cryptography · Private key interface · Multilevel cyber security · AMQP

R. Solleti (B) Department of Computer Science, Bhavan’s Vivekananda College, Sainikpuri, Secunderabad, Telangana, India e-mail: [email protected] N. Bhaskar Department of CSE, CBIT, Hyderabad, Telangana, India M. V. Ramana Murthy Department of Statistics, Osmania University, Osmania, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_56


1 Introduction

Web-enabled wireless phones are the most popular means of mobile communication. The phrase "e-commerce in a mobile environment" might also be used to describe M-commerce [1–3]: using mobile technology to do business, access corporate information, make a purchase, or manage the supply or demand chain. When it comes to doing business deals, M-commerce refers to using a mobile device such as a smartphone or tablet, assisted by WAP technology [4] and e-commerce expertise. Selling products, providing services, making payments and other financial activities, exchanging data, and similar tasks are all possible through mobile commerce and wireless technology (WAP) [5]. M-commerce is, in fact, a very fast-growing part of e-commerce: about 70% of online transactions in India happen on mobile phones, and worldwide [6] it is about a 700-billion-dollar market. E-commerce enables a lot of new capabilities, collectively called "m-commerce": everything from the latest technology and services to new business models and marketing tactics. It differs from e-commerce in many ways, because mobile phones offer a wide range of capabilities that desktop PCs [7] do not have, and it gives businesses many opportunities to make money. M-commerce applications are made up of standard infrastructures and electronic technologies suited to wireless mobile data and knowledge [8]; information can be sent through videos, images, voice, and text. The phrase "mobile commerce" refers to the usage of wireless electronic commerce via a convenient device such as a mobile phone or a Personal Digital Assistant (PDA) to conduct business or commerce [9]. Additionally, M-commerce is described as the next generation of wireless e-commerce that eliminates the need for wires and plug-in equipment [10–12].

The different types of M-commerce:
1. Travel and ticketing, where convenience is crucial [16]; mobile phones are and will continue to be time savers.
2. Merchant/retail transactions: due to slow Internet speeds, mobile and Internet transactions are still challenging [14].
3. Ticketing for movies.
4. Bill payments: making utility and service company payments.
5. Money transfer: the transfer of funds from one person to another via a financial intermediary [15].

E-commerce is distinct from M-commerce in that it can only be accessed by PC users with a networked computer, whereas M-commerce is now accessible to practically the entire mobile population, thanks to its move to an SMS platform, among other factors. There are many proposed mobile payment protocols currently available; however, the majority of these protocols are based on PKI, which is inefficient over wireless networks, and some other protocols store the credit card credentials of the engaging parties on users' mobile devices or use them in the transaction without


any protection, making them easily susceptible to attacks. Furthermore, the safety of consumer data isn't a priority for certain mobile payment protocol designers: customers' private credentials are exposed to merchants, payment gateways, and banks alike. The goal of this research is to design a safe, lightweight, three-level mobile payment mechanism that solves these concerns, using symmetric and asymmetric key operations to protect consumer privacy, ensure end-to-end protection, provide accountability, and meet the security requirements of the engaging parties.

2 Related Work

E-commerce conducted over the Internet, in most cases through wireless connections, is referred to as mobile commerce. Using the Internet, private communication lines, smart cards, and other technologies, it is possible to provide new services to existing consumers while also attracting new ones. The block diagram of its characteristics is shown in Fig. 1.

2.1 Mobility and Fast Processing

Mobile commerce's ability to speed up the transaction process is a critical characteristic. When a product or service is delivered electronically, such as via download, e-mail, or some other method, the customer receives it almost instantly, and the business owner receives payment much more quickly than when utilizing more traditional methods. Payment must be made in advance, either by credit card or by agreeing to pay through another account, before the item may be downloaded. Internet and network service reliability is obviously a factor in the speed of delivery.

Fig. 1 Characteristics of M-commerce


2.2 Reachability

The vendor can also save money by using mobile commerce. There is rarely a need to pay for an office, overhead, or employees; a mobile commerce business may not even necessitate a physical office for a small-business owner. A vendor can keep track of sales online or through statements obtained from a processing organization. For a small-business owner, advertising to get the word out about a product or service is the most important expense. The lower costs mean business owners can make more money on each sale, and they can also sell items for less than they would otherwise.

2.3 Low Cost of Maintenance

Another advantage of mobile commerce is that it needs little to no effort from the seller. As soon as the goods are set up for mobile delivery, the owner is paid automatically for any sales. While the seller may occasionally be required to perform maintenance duties, such as fixing a technical problem or upgrading the product, this selling format necessitates fewer administrative resources than other methods.

2.4 Characteristics of Wireless and Wired M-commerce

Ubiquity: When a person utilizes a wireless device, he or she has the ability to receive information and conduct transactions from almost any location at any time.
Accessibility: As long as you have a mobile phone, you may be reached practically anywhere and at any time. It is also possible to limit access to certain persons or specified times.
Convenience: Data storage and access to information or other people are only some of the features offered by a wireless device's mobility and functionality.
Localization: Users will be able to act on important information as location-specific applications emerge.
Instant Connectivity (2.5G): Instant connection, or "always on," is becoming increasingly widespread with the introduction of 2.5G networks such as GPRS or EDGE. Internet access will be more convenient and faster for 2.5G subscribers.


2.5 Issues in Cloud Computing

2.5.1 Privacy

Because of the simplicity with which cloud service providers can manage, and hence monitor, the communication and data kept between users and cloud service providers, privacy advocates have taken issue with the cloud model. Many privacy advocates are concerned that telecommunications companies will be given greater authority to monitor the actions of their customers because of cases like the NSA programme, which worked with AT&T and Verizon to record more than 10 million phone calls between Americans. Despite efforts to "harmonise" the legal framework (such as the US–EU Safe Harbor), companies like Amazon continue to serve large markets (often the United States and the European Union) by building local infrastructure and allowing customers to select "availability zones." Because the service provider has unlimited access to the data saved in the cloud, cloud computing creates privacy problems: providers could purposely or accidentally alter or delete data.

2.5.2 Compliance

A customer may be obliged to use community or hybrid deployment alternatives, which are frequently more expensive and may offer fewer advantages, in order to comply with regulations such as FISMA and HIPAA in the United States, the EU Data Protection Directive, and the PCI DSS in the credit card sector. Just as Google "manages and complies with other government policy requirements beyond FISMA," Rackspace Cloud or QubeSpace may claim PCI compliance in this manner. Many service providers have also received SAS 70 Type II certification, which has been criticized on the grounds that the hand-picked set of goals and requirements may differ significantly between the auditee and the auditor. A non-disclosure agreement is typically required to obtain this information from service providers. Customers in the EU must comply with EU data export limitations when working with cloud providers from outside the EU/EEA.

2.5.3 Security

As cloud computing gains popularity, concerns have been raised regarding the security risks associated with its usage. The efficacy and efficiency of existing protection methods are being re-evaluated in light of the significant differences between this revolutionary deployment approach and classic designs. Cloud computing's relative security is a sensitive subject that may be slowing its adoption. The issues impeding cloud computing adoption are mostly due to the corporate and governmental sectors' apprehension about external administration


of security-related services. Cloud computing-based services, whether private or public, are designed to facilitate the external administration of offered services. This provides a significant incentive for cloud computing service providers to emphasize the development and maintenance of robust security management systems.

3 Security Algorithms

There are two forms of cryptography: (a) symmetric key or secret key cryptography, often known as traditional cryptography, and (b) asymmetric key or public key cryptography (PKC). In traditional cryptosystems, in order to interact securely, both participating devices must share their secret key. As a result, there are two issues: first, how to securely exchange the secret key/session key; and second, if n handheld wireless devices (HWDs) must interact with each other, a total of O(n²) secret keys must be exchanged, and keeping track of so many secret keys is a difficult task. Both of these difficulties are resolved with PKC, since the two communicating parties do not need to exchange any secret keys; accordingly, PKC appears to be the ideal method for implementing secrecy. Cryptographic techniques are already in use to ensure data transmission security over the Internet. RSA PKC is the de facto cryptographic technique for digital signatures and secret key encryption, and it is a very secure method for electronic transactions. RSA is very secure and widely used, although there are some potential issues with its application in M-commerce: (1) because the key size in RSA is large, the memory required to hold the key is correspondingly high; (2) the time it takes to decipher encrypted text increases dramatically as the key size grows; and (3) the process of creating a key is difficult and time-consuming. For devices with limited memory, RSA keys may have to be generated by another system. Furthermore, RSA's encryption and decryption algorithms require a massive amount of computation compared to secret key cryptography.

3.1 RSA Algorithm

One of the most widely used cryptographic algorithms is the RSA algorithm (Rivest–Shamir–Adleman) [7]. It is a public key encryption system. Here is how it works in the case of RSA: Alice reveals two numbers, N and e, which she carefully chose. Then Bob may encrypt a message and send it to Alice using these integers. Oscar has unrestricted access to N, e, and the encoded message. Oscar should be unable to decode the message, but Alice is able to do it without difficulty since she knows a secret.


RSA modulus: N = pq. The encoding exponent e is chosen such that

$$\gcd(e, (p-1)(q-1)) = 1 \quad (1)$$

In most cases, e is chosen first, followed by p and q, such that Eq. (1) holds. The majority of the cryptosystems used in the ASU Crypto Rally employ a common approach for transforming the initial message to numbers, after which the real encoding takes place on numbers. The same may be said of RSA, although the transfer from text to numbers is more involved. Bob finds the encoding of the message by taking the remainder when m^e is divided by N:

$$\text{Encoding:} \quad M = m^e \bmod N \quad (2)$$

Alice uses p and q for decoding the message M. After picking the values of e and N, Alice finds d by

$$\text{Decoding exponent:} \quad d = e^{-1} \bmod (p-1)(q-1) \quad (3)$$

Just as the Hill and Affine ciphers use an inverse to decode, here the inverse is used for the decoding process and is computed efficiently by the extended Euclidean algorithm. The message decoded by Alice is given by computing

$$\text{Decoding:} \quad m = M^d \bmod N \quad (4)$$
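A minimal, hedged sketch of Eqs. (1)–(4) follows, using Python's built-in modular arithmetic; the tiny primes are illustrative only and far too small for real security, and the variable names are assumptions, not part of the paper.

# Toy RSA following Eqs. (1)-(4): key generation, encoding, decoding.
from math import gcd

p, q = 61, 53                 # small illustrative primes (insecure in practice)
N = p * q                     # RSA modulus
phi = (p - 1) * (q - 1)

e = 17                        # encoding exponent with gcd(e, phi) = 1, Eq. (1)
assert gcd(e, phi) == 1

d = pow(e, -1, phi)           # d = e^{-1} mod (p-1)(q-1), Eq. (3); Python 3.8+

m = 42                        # the message, already mapped to a number < N
M = pow(m, e, N)              # M = m^e mod N, Eq. (2)
assert pow(M, d, N) == m      # m = M^d mod N, Eq. (4)
print(N, e, d, M)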

3.2 Shor's Algorithm

Shor's method is a quantum integer factorization algorithm: simply put, it determines the prime factors of an odd number N. The algorithm is divided into two parts:
1. The classical portion reduces factorization to the problem of determining a function's period. This is done on a conventional computer.
2. A quantum computer is used to calculate the period using the Quantum Fourier Transform.

Steps of Shor's algorithm:
1. Choose A randomly, subject to the condition that A < N.
2. Find gcd(A, N).
3. If the gcd is not equal to 1, a factor of N has been found.
4. Otherwise, use the Quantum Fourier Transform, implemented as a quantum circuit, to find the period of A^x mod N.


5. If the period is odd, go back to step 1.
6. Otherwise, use the period to find the factors of N.
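For intuition, the classical reduction can be demonstrated end to end on tiny numbers by brute-forcing the period in place of the quantum step; this sketch and its names are illustrative assumptions, not the paper's implementation.

# Classical skeleton of Shor's reduction: period finding is brute-forced
# here (the step a real quantum computer would perform with the QFT).
import random
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n); feasible only for tiny n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int) -> tuple[int, int]:
    while True:
        a = random.randrange(2, n)
        g = gcd(a, n)
        if g != 1:
            return g, n // g             # lucky: a shares a factor with n
        r = find_period(a, n)
        if r % 2:                        # odd period: retry with a new a
            continue
        y = pow(a, r // 2, n)
        if y == n - 1:                   # trivial square root: retry
            continue
        p = gcd(y - 1, n)
        if 1 < p < n:
            return p, n // p

if __name__ == "__main__":
    print(shor_classical(15))  # e.g. (3, 5)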

3.3 McEliece Algorithm

McEliece consists of three algorithms: a probabilistic key generation method that produces a private and a public key, a probabilistic encryption algorithm, and a deterministic decryption algorithm. A set of security parameters, n, k, t, is shared by all users in a McEliece deployment.

Key generation:
1. Alice chooses a binary (n, k)-linear code C capable of correcting t errors; the code must have an efficient decoding algorithm and gives rise to a k × n generator matrix G.
2. Alice chooses a k × k binary non-singular matrix S at random.
3. Alice chooses an n × n permutation matrix P at random.
4. Alice computes the matrix G' = SGP.
5. Alice's public key is (G', t); her private key is (S, G, P).

Message encryption: Suppose Bob wants to send Alice a message m, and Alice's public key is (G', t):
1. Bob encodes the message m as a binary string of length k.
2. Bob computes the vector c' = mG'.
3. Bob constructs a random n-bit vector z containing exactly t ones (a vector of length n and weight t).
4. Bob computes the ciphertext as c = c' + z.

Message decryption: When Alice receives c, she performs the following steps to decode the message:
1. Alice computes the inverse permutation P⁻¹.
2. Alice computes c' = cP⁻¹.
3. Alice uses the decoding algorithm for the code C to decode c' to m'.
4. Alice computes m = m'S⁻¹.
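A small, hedged sketch of the scheme follows, instantiated with the Hamming(7, 4) code (so n = 7, k = 4, t = 1); the specific code, the GF(2) helpers, and all names are illustrative assumptions rather than a production parameter choice, which would use a much larger Goppa code.

# Toy McEliece over the Hamming(7,4) code: n=7, k=4, t=1 (illustration only).
import numpy as np

A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])    # 4x7 systematic generator [I | A]
H = np.hstack([A.T, np.eye(3, dtype=int)])  # 3x7 parity-check [A^T | I]

def gf2_inv(M):
    """Invert a square binary matrix over GF(2) by Gaussian elimination."""
    n = M.shape[0]
    aug = np.hstack([M % 2, np.eye(n, dtype=int)])
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r, col])  # StopIteration if singular
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] = (aug[r] + aug[col]) % 2
    return aug[:, n:]

rng = np.random.default_rng(1)
while True:                                  # draw S until invertible over GF(2)
    S = rng.integers(0, 2, (4, 4))
    try:
        S_inv = gf2_inv(S)
        break
    except StopIteration:
        continue
P = np.eye(7, dtype=int)[rng.permutation(7)] # random 7x7 permutation matrix
G_pub = (S @ G @ P) % 2                      # public key is (G_pub, t=1)

def encrypt(m, t=1):
    z = np.zeros(7, dtype=int)
    z[rng.choice(7, size=t, replace=False)] = 1   # random weight-t error vector
    return ((m @ G_pub) + z) % 2

def decrypt(c):
    c1 = (c @ P.T) % 2                # steps 1-2: undo permutation (P^{-1} = P^T)
    s = (H @ c1) % 2                  # step 3: syndrome-decode Hamming(7,4)
    if s.any():
        err = next(j for j in range(7) if np.array_equal(H[:, j], s))
        c1[err] ^= 1                  # correct the single error
    return (c1[:4] @ S_inv) % 2       # step 4: m = m' S^{-1} (m' = first k bits)

m = np.array([1, 0, 1, 1])
assert np.array_equal(decrypt(encrypt(m)), m)
print(decrypt(encrypt(m)))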

4 Methodology

4.1 AMQ Model Architecture

It is necessary to standardize server semantics to ensure that AMQP implementations are compatible. Figure 2 depicts the AMQ model in its entirety.


Fig. 2 Overall AMQ model

There are two basic duties performed by this data server: it routes messages to appropriate consumers based on arbitrary criteria, and it buffers them in memory or on disk if they cannot be accepted quickly enough by the intended recipients. In a pre-AMQP server, these tasks are done by monolithic engines that execute various types of routing and buffering. With the AMQ approach, the products are instead made up of smaller, modular parts that can be linked together in a variety of unconventional ways. The first step is to divide these duties into two distinct roles:
• It is the exchange's job to accept messages from producers and route them to the appropriate message queues.
• Delivering messages to consumer applications is the responsibility of the message queue.

AMQP enables runtime programming of semantics in two primary ways:
1. The ability to build new exchanges and message queues at runtime using the protocol (other features can be added to the server as extensions to the standard).
2. The ability to create message queues and connect them to exchanges at runtime via the protocol, in order to design any desired message-processing system.

A minimal sketch of declaring and wiring these entities at runtime is shown below.
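The following is a minimal sketch of runtime exchange/queue declaration and binding using the pika client against a RabbitMQ-style AMQP broker; the broker address and the exchange, queue, and routing-key names are assumptions for illustration.

# Declare an exchange and a queue at runtime, bind them, and publish.
import pika  # pip install pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()

# The exchange accepts messages from producers and routes them to queues.
ch.exchange_declare(exchange="orders", exchange_type="direct")

# The message queue buffers messages and delivers them to consumers.
ch.queue_declare(queue="order-processing")
ch.queue_bind(queue="order-processing", exchange="orders", routing_key="new")

# Producer side: routing information is carried in the routing key.
ch.basic_publish(exchange="orders", routing_key="new", body=b"order #1")

conn.close()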

4.2 Message Flow

AMQP messages are composed (as in Fig. 3) of a collection of properties and opaque content.


Fig. 3 Message flow diagram of AMQ model

With the use of the AMQP client API, a producer application can construct a new "message." The message's "content" and "properties" can both be set by the producer. "Routing information," which on the surface appears to be an address, is used to direct the message, but practically any scheme can be created. The message is subsequently sent to a server-side "exchange" by the producer.

After the message reaches the server, the exchange typically directs it to a set of message "queues" that the server likewise maintains. If the message cannot be routed, the exchange may discreetly discard it or send it back to the producer; the producer controls the treatment of unroutable messages.

There may be multiple queues for a single message. The server may duplicate the message or use reference counting to deal with this; this choice does not affect interoperability. In the case of many message queues, the message sent to each of them is exactly the same, and the copies carry no identifier that distinguishes them.

An AMQP-enabled consumer application receives a message from a message queue as soon as it arrives. If this isn't possible, the message is stored in the message queue (in memory or on disk, as the producer specifies) and waits for a consumer. If there are no consumers for the message, the message queue may return it to the producer via AMQP (again, if the producer asked for this). A message queue deletes messages that can't be sent to their intended recipients.

A message is removed after the consumer acknowledges that it was successfully handled. Whether or not a message is "acknowledged" is entirely up to the consumer, and the consumer may also reject a message (a negative acknowledgement). Producer and consumer acknowledgement messages are grouped into "transactions." It is common for an application to perform both duties, so it sends messages and acknowledgements and then commits or rolls


back the transaction. When a consumer acknowledges a message, there is no need for a transaction between the service provider and the end user. Several important properties can be configured when a client application creates a message queue:

Name: If no name is supplied, the server will generate a unique name and provide it to the client. This is useful in situations where an application needs a queue for its own purposes and doesn't want to share one.
Exclusive: The queue is tied to its connection and is removed every time that connection closes.
Durable: If this option is enabled, the message queue will remain active even if the server restarts; temporary messages may still be lost if the server is restarted.

4.3 Queue Life Cycles

Message queues have two distinct lifecycles:
• "Durable" queues, which are shared by many consumers and exist independently, continuing to collect messages even if no one is consuming them.
• Temporary queues, which are private to and associated with a single consumer; when the consumer's connection is terminated, the message queue is removed.

There are several variants on this theme, such as shared message queues that are deleted when the final consumer disconnects. Figure 4 illustrates the process of creating and deleting temporary message queues, and a short sketch of both declarations follows the figure.

Fig. 4 Flow diagram of message queue
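This short sketch contrasts the two lifecycles with the pika client; the queue names and broker address are illustrative assumptions.

# Durable shared queue vs. temporary per-consumer queue (pika client).
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()

# Durable: survives broker restart, shared, keeps collecting messages
# even when no consumer is attached.
ch.queue_declare(queue="billing", durable=True)

# Temporary: server-named, private to this connection, auto-removed
# when the connection closes.
result = ch.queue_declare(queue="", exclusive=True)
print("server-generated name:", result.method.queue)

conn.close()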


4.4 AMQP Client Architecture

Reading and writing AMQP frames directly from an application is not encouraged; application developers should not need to know about binary framing formats in order to send a message to a message queue, even for the simplest AMQP conversation. The proposed AMQP client architecture therefore has several layers of abstraction.

A framing layer. This layer serializes AMQP protocol methods, held in a language-specific form (structures, classes, etc.), as wire-level frames. The framing layer can be generated automatically from the AMQP specifications (which are written in a protocol modelling language, implemented in XML, and optimised for AMQP).

A connection manager layer. This layer handles connection and session logic and processes AMQP frames for the whole connection. It can contain the entire logic for establishing a connection and session, managing errors, and transmitting and receiving content. A large portion of this layer may also be generated from the AMQP specifications; for example, the specifications state which methods carry data, allowing the logic "send method and then optionally send data" to be generated automatically.

An API layer. This layer offers a dedicated API for use by applications. It may be based on an existing standard or may offer high-level AMQP functions, as discussed earlier in this section. AMQP methods are designed to make this mapping as simple and straightforward as possible; a higher-level API may, for example, be created on top of the AMQP method API.

An I/O layer. This can be as simple as synchronous socket reads and writes or as complex as fully asynchronous multi-threaded I/O, but it is almost always present. Figure 5 shows the proposed structure in its entirety.

5 Experimental Results

According to the results of the testing shown in Tables 1 and 2, the average MQTT delay for each file that the publisher delivers to the subscriber is 42.0727 s. In comparison, with the AMQP protocol the average time for a producer to transfer a file to a consumer is only 0.3105 s. This demonstrates that AMQP provides significantly better direct and continuous data availability than MQTT: AMQP can handle every delivery made by the producer, whereas MQTT cannot keep up with queues that the publisher fills continually at sufficiently large data sizes. Throughput, which is typically expressed as an average and measured in bits per second (bps), is a measurement of how many units of information a system can handle in a given length of time. The testing


Fig. 5 AMQP client architecture

is carried out 10 times with a total of 100 file transfers, with results being obtained from the publisher to the subscriber after each test. The results of each test are as follows:

Table 1 Average delay per packet, MQTT

Packet     Delay    Average delay
654,334    37.378   0.0000402717516359130
665,554    41.727   0.0000585038101744130
655,786    37.579   0.00004655594223117210
665,882    39.316   0.0000532164588490380
675,664    41.271   0.0000502596318814240
675,754    33.01    0.0000429408684779490
684,456    32.121   0.0000416804484860880
687,798    43.078   0.0000546981874016730
694,454    80.113   0.0000991547044627010
698,886    35.597   0.00004610640078128440

Table 2 Average delay per packet, AMQP

Packet     Delay    Average delay
654,334    0.23     0.00000026609129245168
665,554    0.278    0.00000034976604677555
655,786    0.284    0.00000034478823446571
665,882    0.299    0.00000034546305336191
675,664    0.338    0.00000042080992215016
675,754    0.314    0.00000037231891842540
684,456    0.347    0.00000043217310837083
687,798    0.31     0.00000036802153994458
694,454    0.277    0.00000033121253681302
698,886    0.428    0.00000054573807887445

6 Conclusion

Using the proposed Three Level Gateway Protocol for Secure M-Commerce Transactions together with the AMQP protocol, mobile payments gain stronger security. Machine learning and deep learning algorithms can be added to these security protocols to detect different attacks and to avoid delays. Future work will include simulations of the protocol and a comparison with other payment methods currently in use, such as conventional encryption algorithms and public key/private key cryptosystems.

References

1. Maffeis S (2000) M-commerce needs middleware! http://www.softwired-inc.com/people/maffeis/articles/softwired/mcommerce.pdf
2. Yorozu T, Hirano M, Oka K, Tagawa Y (1982) Electron spectroscopy studies on magneto-optical media and plastic substrate interface. IEEE Transl J Magn Jpn 2:740–741
3. Young M (1989) The technical writer's handbook. University Science, Mill Valley, CA
4. Pothuganti K, Sridevi B, Seshabattar P (2021) IoT and deep learning based smart greenhouse disease prediction. In: 2021 international conference on recent trends on electronics, information, communication & technology (RTEICT), pp 793–799. https://doi.org/10.1109/RTEICT52294.2021.9573794
5. Ramana S, Ramu SC, Bhaskar N, Murthy MVR, Reddy CRK (2022) A three-level gateway protocol for secure M-commerce transactions using encrypted OTP. In: International conference on applied artificial intelligence and computing (ICAAIC) 2022, pp 1408–1416. https://doi.org/10.1109/ICAAIC53929.2022.979290
6. Osborne M (2000) WAP, m-commerce and security. http://www.kpmg.co.uk/kpmg/uk/image/mcom5.pdf
7. Pietro RD, Mancini LV (2003) Security and privacy issues of handheld and wearable devices. Commun ACM 46(9):75–79
8. "PKI moves forward across the globe", Wireless Developer Network. http://www.wirelessdevnet.com/channels/wap/features/mcommerce3.html


9. Vanstone SA (2003) Next generation security for wireless: elliptic curve cryptography, pp 412–415. http://www.compseconline.com/hottopics/hottopic20_8/; Visa Mobile 3D Secure Specification for M-commerce Security. http://www.cellular.co.za/technologies/mobile3d/visa_mobile_3d.htm
10. Weimerskirch A, Paar C, Shantz SC (2001) In: Proc of the 6th Australian conf on information security and privacy, July 11–13, Sydney
11. Woodbury AD, Bailey DV, Paar C (2000) Elliptic curve cryptography on smart cards without coprocessors. In: Proc of the 4th smart card research and advanced applications conf, September 20–22, pp 1–20
12. Xydis TG (2002) Security comparison: Bluetooth communications vs. 802.11. http://www.ccss.isi.edu/papers/xydis_bluetooth.pdf
13. Yeun CY, Farnham T (2001) Secure M-commerce with WPKI. http://www.iris.re.kr/iwap01/program/download/g07_paper.pdf
14. J. L., Asokan N, Steiner M, Waidner M (1998) Designing a generic payment service. IBM Syst Res J 37(1):72–88
15. Bellare M, Garay JA, Hauser R, Herzberg A, Krawczyk H, Steiner M, Tsudik G, Van Herreweghen E, Waidner M (2000) Design, implementation, and deployment of the iKP secure electronic payment system. IEEE J Sel Areas Commun:611–627
16. Wang C, Leung HF (2005) A private and efficient mobile payment protocol. Springer, London, LNAI, pp 1030–1035
17. http://www.setco.org/set_specifications.html

Decentralised Blockchain-Based Framework for Securing eVoting System Devarsh Patel, Krushang Patel, Pranay Patel, Mrugendrasinh Rahevar, Martin Parmar, and Ritesh Patel

Abstract Building a safe voting system that delivers the fairness and privacy of the present voting systems is quite difficult. We assess a Blockchain as a service application to construct distributed electronic voting systems in this implementation study. Our goal is to offer a decentralised infrastructure that can support and administer an open, equitable, and independently verifiable voting system. The experimental finding demonstrates the value of our proposed solution for both the current and upcoming voting systems. Our proposed solution implements the protocol that achieves fundamental e-voting properties, as well as offers a degree of decentralisation and allows for the voter to change/update their vote. Keywords Blockchain · Electronic voting systems · Decentralisation · Fairness · Privacy · Infrastructure · Protocol · Voter update

Supported by organisation x.

D. Patel · K. Patel · P. Patel · M. Rahevar (B) · M. Parmar · R. Patel
Faculty of Technology and Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology, Changa, Anand, Gujarat, India
e-mail: [email protected]
M. Parmar
e-mail: [email protected]
R. Patel
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_57

1 Introduction

Blockchain technology has grown into today's hottest software topic in the industry as a result of the widely accepted use of Bitcoin, the original cryptocurrency, in people's day-to-day lives [2]. Since studies have shown that there is a high level of transparency in this system, it has been suggested that Blockchain can be utilised, over time, in more regions than it was first employed for, such as commerce

and financial transactions. In Bitcoin, for instance, the total number of coins and the current transaction volume worldwide can be observed immediately and plainly due to the distributed structure of the wallets. In this P2P-based system, there is no requirement for a central authority to authorise or complete operations [1]. As a result, not only can all types of structural information be retained in this distributed chain, along with money transfers, but the system may also be kept secure with the aid of specific cryptographic techniques. Many different types of information, such as people's assets, marriage licences, bank account books, medical records, etc., can be recorded using this system with the appropriate changes. A few years after Bitcoin, another cryptocurrency with many development environments, Ethereum (Ether), distinguished the Blockchain in a meaningful way by showing that this technology is capable of producing software that stores data with the above-described structure. The Blockchain contains immutable code that is used to enforce smart contracts, which will be covered in more detail later [13]. Once they are written, they can neither be (illegally) erased nor changed; they can therefore continue to function properly, independently, and transparently without any outside influence.

As was already mentioned, Blockchain technology may handle numerous difficulties beyond digital trade thanks to its distinctive distributed and secure concept, and it might be a great option for projects involving electronic voting. E-voting is the subject of in-depth research, and numerous systems have been tried out and even employed for a while. However, only a few implementations are trustworthy enough to be used today. There are certainly many successful examples of online surveys and polls, but we cannot say the same about online elections for businesses and governments. This is mostly because democratic administrations, the most popular form of government in the modern world, depend heavily on free and fair elections, and democratic societies value a strong election system that offers privacy and transparency. People make a lot of decisions these days (as do members of organisations), and such voting methods are employed in a variety of contexts, from TV shows to referendums on laws and acts.

Blockchain-based electronic voting systems function similarly to digital wallets. After confirming each member's identity, the system or authority grants them access to a digital wallet [12]. The wallet issued along with the user credentials contains a single coin that represents one vote. The coin from the voter's wallet is sent to the candidate's wallet or account when the user casts a vote, so the total quantity of coins in each candidate's wallet is the unmistakable count of how many votes were cast for that contender. Compared to EVMs, Internet and electronic voting methods can offer more honesty and safety, and they preserve user privacy because qualified voters can use computers or even cellphones to cast anonymous ballots. Because the system is online and totally visible, users have more trust in utilising it, so participation can increase. By doing away with the necessity for a centralised database and network server, Blockchain creates trust: it is, in other words, a completely decentralised open ledger system. The permanent and unalterable public ledger keeps a record of every vote cast.
It ensures that votes cast cannot be changed once they have been cast. Because of the consensus mechanism, it is nearly impossible to manipulate the ledger because in order to add a new block, one must first hack all of the


previous blocks. Depending on the consensus employed, a hacker must take down at least one-third of the network, and occasionally even half, in order to compromise the entire system.
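As a toy illustration of the one-coin-one-vote wallet idea described above, the sketch below tracks wallets in plain Python; it is an illustrative assumption only and omits the ledger, consensus, and identity checks that a real Blockchain system provides.

# Toy "one coin = one vote" wallet transfer (no ledger or consensus).
wallets = {"voter-1": 1, "voter-2": 1, "candidate-A": 0, "candidate-B": 0}

def cast_vote(voter: str, candidate: str) -> None:
    if wallets.get(voter, 0) != 1:
        raise ValueError(f"{voter} has no voting coin left")
    wallets[voter] -= 1          # spend the single voting coin
    wallets[candidate] += 1      # candidate's balance is their vote count

cast_vote("voter-1", "candidate-A")
cast_vote("voter-2", "candidate-A")
print(wallets["candidate-A"])   # 2 votes for candidate A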

2 Blockchain

Blockchain can be compared to a game of basketball: all movements, including the throwing of the ball from one player to another, are captured in real time and are visible to all players [7]. In the same way, the Blockchain records the series of moves (transactions, deliveries, etc.) between participants in the same game or network and secures them cryptographically. The architecture of Blockchain technology is shown in Fig. 1. The distributed ledger technology known as Blockchain keeps various transactions and processes in a sequence of blocks without the aid of a trusted third party [17]. The pair of public and private keys used in Blockchain technology has been shown to be immutable, which aids in maintaining integrity, accountability, and, to some extent, confidentiality [18].

2.1 How Does It Work?

In general, a blockchain is a chain of blocks, and a block consists of three elements: data, its own hash, and the hash of the previous block. Each block in the chain contains its own cryptographic hash and that of the last block, which keeps it connected in the chain. A

Fig. 1 Architecture of blockchain technology


block is the main unit of a blockchain: a collection of data or information. Information is added to blocks within the blockchain and connected to other blocks in chronological order, creating a chain of linked blocks. Thus, it forms a time-series database of transactions made on multiple nodes, i.e., computers or servers shared on a network. A nonce is a unique number added to a hashed or encrypted block that can only be used once; it is chosen by miners to solve the cryptographic puzzle and generate the next block in the chain. This is called proof of work. A hash is a unique alphanumeric identification code or number generated when a transaction occurs on a blockchain. When a transaction occurs on the blockchain, that transaction is recorded in a block, and that block must be verified before being added to the chain. The authenticity of a block needs to be verified by a consensus algorithm, which requires the majority of nodes (clients or servers) in the decentralised network to verify the block before it is added to the chain. After the block is verified, a unique identification code, i.e., its hash, is generated. In this way, no third-party intervention is required to validate or execute a transaction. Blocks can be recognized by block number (block height) and by the block header hash. The data within a block is processed by a computerized algorithm called a hash function; this function locks the data of blockchain participants, making the data immutable. The working of the ledger in Blockchain technology is shown in Fig. 2.

Fig. 2 Working of ledger in blockchain technology

Now suppose Ram wants to send money to Shyam. First, a money transaction is initiated when the transaction is generated by Ram; then this transaction is broadcast to all nodes or parties of the network. After broadcasting, the transaction is judged either 'valid' or 'invalid' by the blockchain system. If the transaction is confirmed as valid for that hash code, the subsequent node or block is delivered to the current chain, communicating with each relevant block to confirm the block in a computerized or digital database and continuously connecting to the currently existing blockchain of the ledger.

Fig. 3 Block structure of blockchain technology

2.2 Block Structure

A Blockchain is a chronological list of blocks, each containing a comprehensive record of transactions [10]. It adheres to the linked-list data structure, where each block references a preceding block via the hash value of that preceding block, also known as the parent block. The genesis block, the initial block in a Blockchain, has no preceding block. A block is made up of metadata (the block header) and transaction information (the block body). The header includes the block version, parent block hash, Merkle tree root hash, timestamp, and nonce. A random number called a "nonce" is employed in encrypted communication across the user network [11]. The block structure of Blockchain technology is shown in Fig. 3, and a minimal sketch of this linked structure is given below.
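The sketch below illustrates the header fields and hash chaining described above in plain Python; the difficulty target and field choices are illustrative assumptions, and real systems add Merkle trees over the transactions, network consensus, and signatures.

# Minimal linked-block sketch: each block stores its parent's hash, a
# timestamp, transactions, and a nonce found by brute-force proof of work.
import hashlib, json, time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(parent_hash: str, transactions: list, difficulty: int = 3) -> dict:
    block = {"parent": parent_hash, "time": time.time(),
             "tx": transactions, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1          # try nonces until the puzzle is solved
    return block

genesis = mine_block("0" * 64, ["genesis"])          # no preceding block
b1 = mine_block(block_hash(genesis), ["Ram -> Shyam: 5"])
assert b1["parent"] == block_hash(genesis)           # chain linkage intact
print(block_hash(b1))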

2.3 Properties of Blockchain

Distribution, transparency, independence, consistency, open source, and anonymity are the main tenets of the Blockchain.


Blockchain as a data structure: A block in a Blockchain comprises a list of transactions. The initial block is the foundation on which the structure is built, and the number of blocks increases along with the exchanges; each current block is linked to the previous block. This is the kind of data structure the Blockchain offers, and it is typically meticulously organized and clutter-free.

Decentralised: One of the important components of a good Blockchain design is a shared organisation that appears fragmented. Without any outside assistance, anyone may download the application, store data, and access it later over the web, keeping all exchanges — securities, documents, contracts, computer assets, etc. — and retrieving them later with a secret key [2].

Consistency: A Blockchain framework only permits and trusts a transaction before adding it to the chain through consensus. When a transaction fails to adhere to one of the stipulations, it is considered invalid. The chains of blocks follow a consensus scheme, which may be either permissionless or permission-based [8]. In a permissionless scheme, anyone may attempt to submit transactions and take part in the community agreement; to contribute to or facilitate exchanges in permission-based schemes, nodes must be authorised and monitored.

3 Reasons for Using Blockchain Technology

A database is typically under the complete control of one organisation or centralised authority, which is also in charge of maintaining it and has the capacity to modify the database and update the information. Typically, the governing body that created the database also maintains and uses it; in such a situation, the organisation has no motivation to fabricate or tamper with its own records. However, in circumstances involving money or sensitive information, such as voting, it is not advisable to grant a single authority or organisation complete control over a database [15]. Even if the organisation is confident that no fraudulent changes will be made to a central database, hackers can still exploit it more easily. Blockchain opens databases to the general public by enabling everyone to store a replica of the database, keep it comparable with every other replica, and look for changes. The separate copies must be updated consistently in order to maintain consistency, and Blockchain uses consensus technology to maintain a reliable decentralised database [22].

4 Motivation and Related Work

This project's main goal is to provide a secure voting environment while demonstrating the possibility of a reliable e-voting system utilising Blockchain [20]. Every administrative decision will be made by the people and members, since everyone


with a computer or mobile phone will be able to utilise e-voting; at the absolute least, the public's viewpoint will be more visible and available to managers and lawmakers. This will eventually result in universal direct democracy. It is crucial because elections, particularly in small villages and even in larger cities located in corrupt nations, can be readily influenced or corrupted. Additionally, large-scale traditional elections are quite expensive in the long run, particularly when there are millions of voters and hundreds of geographically dispersed polling places. Moreover, voters (mostly members of organisations) may be out of town on vacation, on a business trip, or for any other reason, making it hard for a specific voter to participate in the election and potentially lowering turnout. If done properly, electronic voting can address these issues.

In Russia, the Moscow Active Citizen initiative was established in 2014. Since then, many polls have been carried out on a variety of topics, including, among others, what colour the seats in a brand-new sports stadium should be. In 2017, South Korea implemented a Blockchain-based smart contract voting system; the votes and results, along with all other significant data, were kept on a Blockchain, and the procedure lacked any centralised administration or power. Estonia was the first nation to permit the use of an online voting system, in 2007; 30% of the votes cast in the 2015 parliamentary elections were cast electronically. The national ID cards of Estonian citizens, which carry encrypted identity data, are used to confirm their identities. With the help of this system, Estonian citizens can engage in a variety of online activities, such as electronic voting, online banking, and accessing government portal information. A voting system built on the Blockchain was developed in 2018 by Agora, a Swiss Blockchain start-up, and the 2018 general elections in Sierra Leone served as a test of sorts for it. Agora provides a comprehensive, end-to-end verifiable Blockchain and is made to give organisations, governments, and institutions access to an online voting system; with this Blockchain-based electronic voting system, registered voters can buy tokens from businesses or the government. A number of other organisations, such as TIVI, the Blockchain Voting Machine, the Abu Dhabi Stock Exchange, and FollowMyVote, are working on further Blockchain-based e-voting projects. In 2017, McCorry et al. proposed smart-contract-based boardroom voting with maximum voter privacy, a voting protocol that tallies itself; their decentralised Open Vote Network (OVN), a Blockchain-based voting platform, is built on Ethereum. In 2018, Jonathan and colleagues introduced Netvote, a decentralised Blockchain-based electoral process whose user interface is based on decentralised applications (dapps) built on the Ethereum network. The authors suggest three dapps: the Admin dapp, used by the management to establish rules and related settings; the Voter dapp, which enables independent voter registration and voting; and the Tally dapp, which counts and publishes the election results. However, a private Blockchain serves as the system's foundation.


5 Implementation Details

5.1 Design Considerations

When creating an electronic voting system, the following factors should be taken into account:

• The e-voting system should authenticate only eligible voters by verifying that they are known; ineligible candidates should not be permitted access to the electronic voting apparatus.
• Every voter should have exactly one opportunity to cast a ballot, and the system should prevent repeat voting.
• Voters should have complete privacy, and their votes should be impossible to trace.
• The system should not permit anyone's vote to be tampered with.
• No single authority should be able to control the system's counting.

5.2 Ethereum

Blockchains may be divided into two groups: permissioned and permissionless. Permissioned Blockchains are private Blockchain networks with entry requirements; a public Blockchain is unrestricted, with no limits on who may read from or write to the Blockchain ledger database. Ethereum is a public, decentralised Blockchain network on which developers can build decentralised applications using Blockchain technology; it is a permissionless network [14]. This section covers the two distinct types of Ethereum accounts:

1. Externally owned accounts
2. Contract accounts.

An externally owned account is controlled by a user and represents an external agent of the network, such as a consumer or a miner. These accounts are managed with public–private key cryptography, and they are mostly used for user interaction with the Ethereum Blockchain. The block structure of blockchain technology is shown in Fig. 4. A contract account is a body of code known as a smart contract that lives on the Blockchain; because contracts are stored at a specific address, they serve as evidence. Contract accounts are invoked either by other contract accounts or by externally owned accounts. These contracts are written in high-level scripting languages such as Solidity and Serpent. Both account types can hold Ether. Ether, the native cryptocurrency of Ethereum, trades under the symbol "ETH". It is used to pay for services and transaction fees on the Ethereum network.


Fig. 4 Block structure of blockchain technology

Gas is an intermediary unit that pays for the computational work performed when transactions or smart contracts are executed; gas is purchased with Ether.
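As a concrete illustration of the Ether–gas relationship, the following sketch computes the fee for a simple value transfer. The gas price here is an assumed figure, while 21,000 gas is the standard cost of a plain transfer:

```python
# Illustrative fee calculation; the gas price is an assumption, not a
# measurement from the authors' deployment.
GWEI_PER_ETH = 10**9

gas_used = 21_000          # gas consumed by a simple value transfer
gas_price_gwei = 20        # price the sender offers per unit of gas

fee_eth = gas_used * gas_price_gwei / GWEI_PER_ETH
print(f"Transaction fee: {fee_eth} ETH")   # 0.00042 ETH
```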

5.3 Smart Contracts

Blockchain contains self-executing code known as smart contracts. These encode agreements between two parties and resemble customary business contracts [4]. When the specified conditions are met, the smart contracts take effect immediately. Smart contracts allow the trusted execution of agreements and transactions between unknown or mutually distrusting parties without the need for a centralised authority [3]. Smart contracts are written in the Solidity language, an object-oriented language whose syntax is similar to Python's or JavaScript's. Smart contracts have a number of benefits over conventional contracts, including lower costs and higher productivity. Because they foster trust between parties and are easy for all users to verify, smart contracts are popular [9].
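Since the paper's actual contracts are written in Solidity (see Figs. 5, 6 and 7), the following Python sketch only simulates the self-executing behaviour conceptually; the escrow scenario and all names in it are illustrative:

```python
class EscrowAgreement:
    """Conceptual simulation of a self-executing agreement: funds are
    released automatically once the agreed condition is reported true."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.settled = False

    def confirm_delivery(self):
        self.delivered = True
        self._execute()

    def _execute(self):
        # Executes automatically when the condition is met; no
        # intermediary decides whether to release the funds.
        if self.delivered and not self.settled:
            self.settled = True
            print(f"Transferred {self.amount} to {self.seller}")

deal = EscrowAgreement("Ram", "Shyam", 100)
deal.confirm_delivery()   # condition met -> contract settles itself
```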


5.4 Working of Blockchain Voting System

Both voters and candidates must register in advance, and identity verification should be done before accounts are created. Following confirmation of their identities, an authorised party issues users a coin or token that verifies their eligibility. Each user is permitted to use this coin or token only once to cast a vote; because of the Blockchain verification process, the token can be spent in only one transaction [19]. Consequently, a user is permitted to vote only once. The electronic voting procedure built on Blockchain is decentralised [5]: there is no centralised body in charge of overseeing the elections.
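The one-token-one-vote rule can be sketched as follows. This toy Python model is ours and stands in for the on-chain verification the paper describes:

```python
class VotingTokenRegistry:
    """Toy model of the one-coin-one-vote rule: each verified voter is
    issued exactly one token, and a token can be spent only once."""

    def __init__(self):
        self.tokens = {}          # voter_id -> token-spent flag

    def issue(self, voter_id: str):
        if voter_id in self.tokens:
            raise ValueError("voter already holds a token")
        self.tokens[voter_id] = False

    def cast_vote(self, voter_id: str, candidate: str, tally: dict):
        if self.tokens.get(voter_id) is not False:
            raise ValueError("no unspent token for this voter")
        self.tokens[voter_id] = True            # the token is now spent
        tally[candidate] = tally.get(candidate, 0) + 1

registry, tally = VotingTokenRegistry(), {}
registry.issue("voter-1")
registry.cast_vote("voter-1", "candidate-A", tally)
# registry.cast_vote("voter-1", "candidate-B", tally)  # raises: double vote
```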

5.5 How Is Identity Verification Performed?

Identity verification is carried out using digital identities or pseudonyms in a Blockchain-based electronic voting system. Depending on how the voting system is implemented, a voter's identity can be confirmed in several ways:

• Government-issued identification: the voter's government-issued identification, such as a passport or driver's licence, is checked against a government database.
• Voter-created pseudonyms or digital identities: these can be confirmed through a procedure known as "identity registration" or "on-chain identity registration", carried out either by a decentralised self-sovereign identity system or by a centralised authority.
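A minimal sketch of such pseudonym registration, assuming a salted hash of the government-issued ID; the scheme and identifiers here are illustrative, not the paper's exact protocol:

```python
import hashlib
import os

def register_pseudonym(gov_id: str, registry: dict) -> str:
    """Derive a salted pseudonym from a government-issued ID so the voter
    can be recognized on-chain without revealing the ID itself."""
    salt = os.urandom(16)
    pseudonym = hashlib.sha256(salt + gov_id.encode()).hexdigest()
    registry[pseudonym] = salt.hex()   # kept by the registration authority
    return pseudonym

registry = {}
alias = register_pseudonym("PASSPORT-X123", registry)   # placeholder ID
print(alias[:16], "...")
```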

5.6 Implementation in Ethereum

The e-voting dApp, a decentralised application, was developed on the Ethereum Blockchain. Voting is handled by Solidity-based Ethereum smart contracts, and votes are cast through a client-side interface that uses Ethereum accounts. This work employs the Truffle framework to test the smart contracts before deploying them to the Blockchain. Truffle eases the development, testing, and deployment of decentralised applications and provides an environment for Blockchain network development: smart contracts can be created, compiled, linked, and deployed with it [6]. The migration, vote-library, and vote-tracker smart contracts are shown in Figs. 5, 6 and 7. The Truffle environment includes Ganache, which offers a personal Blockchain for Ethereum development; it can be thought of as an Ethereum client and used to evaluate Truffle-based decentralised applications [16]. While decentralised applications are being created, it can also be leveraged to deploy contracts.


Fig. 5 Migration smart contract

Fig. 6 Vote library smart contract


Fig. 7 Vote tracker smart contract


Additionally, it makes it easier to execute tests for smart contracts and the Blockchain. After being tested on Ganache, the application can be launched on an Ethereum client such as Geth. For evaluation, Ganache offers a personal, simulated Blockchain in two versions, a UI and a CLI; in this application, the UI version was selected for ease of use. Ganache generates ten external user accounts, each assigned a unique Ethereum address and private key and preloaded with 100 test Ether. Ganache behaves much like an Ethereum node running on the developer's machine, and wallets can be linked to it for transactions. A benefit of this implementation is the use of MetaMask, a Chrome extension that connects to Ethereum nodes over RPC and manages client wallets. The Blockchain is updated when a smart contract is deployed using migrations [21]: for each smart contract, a numbered JavaScript migration file is created, and the Truffle framework calls these files automatically in the correct order.
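A minimal sketch of interacting with Ganache's preloaded accounts from Python, assuming web3.py v6 and the Ganache UI's default RPC endpoint; the paper's own client side uses MetaMask and Truffle, so this is only an equivalent illustration:

```python
from web3 import Web3

# Ganache UI listens on http://127.0.0.1:7545 by default (an assumption
# of this sketch; the CLI uses port 8545).
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:7545"))
assert w3.is_connected()

for acct in w3.eth.accounts:            # the ten generated test accounts
    balance = w3.eth.get_balance(acct)  # each starts with 100 test ETH
    print(acct, w3.from_wei(balance, "ether"))

# Move one test Ether between two of the unlocked accounts.
tx_hash = w3.eth.send_transaction({
    "from": w3.eth.accounts[0],
    "to": w3.eth.accounts[1],
    "value": w3.to_wei(1, "ether"),
})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("mined in block", receipt.blockNumber)
```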

5.7 Model Process

The model process of the Ethereum virtual machine (election creation) is shown in Fig. 8.

Election Creation. Ballots for elections are produced by election officials using a decentralised app (dApp). The app communicates with a smart contract made for the election, in which the administrator lists the candidates and the locations of the voting districts. This smart contract creates smart ballots, which are then deployed onto the Blockchain. Each ballot's smart contract takes the voting district as one of its parameters. Once an election is successfully created, each node in the network is given permission to communicate with the other nodes via smart contracts. The flow chart for voter registration is shown in Fig. 9, and the GUI for voter registration in Fig. 10.

Voter Registration. The election administration authority creates the registration system. Once the election is declared, the administrators must inform the voters of their eligibility. All of the country's eligible voters must register to vote using a unique identification provided by documents such as an Aadhaar card, a PAN card, or a driver's licence to demonstrate their eligibility. Users' faces are photographed during registration and saved in databases for later use.

Candidate Registration. When someone requests to be registered as a candidate, they must provide their personal information. If the government approves the request, government miners create a genesis block, after which the person is able to run for office. The flow chart for candidate registration is shown in Fig. 11, and the GUI for candidate registration in Fig. 12.


Fig. 8 Model process of Ethereum virtual machine

Fig. 9 Flow chart for voter registration



Fig. 10 GUI for voter registration

Fig. 11 Flow chart for candidate registration

Authentication Phase. To cast a vote in an electronic voting system, a person must first log in using their credentials. During the authentication procedure, all supplied information is checked, and if it matches a valid voter, the user is allowed to cast a vote. The security of the system depends heavily on this authentication process: it helps ensure that no one's identity is being used fraudulently, because every voter matters when choosing the representatives of the people. The GUI for authentication is shown in Fig. 13.


Fig. 12 GUI for candidate registration

Fig. 13 GUI for authentication

Casting Vote Phase. The voter is led to the voting step once the authentication process has completed successfully. The voter's face was photographed by a web camera during registration and saved in a database; once a voter enters the voting phase, a face verification procedure is applied, and voters are permitted to cast their votes only if the captured image matches the image in the database. The GUIs for the casting vote phase are shown in Figs. 14 and 15.

Displaying Result. After the election period is over, the admin makes the results available to all voters on their dashboard (Fig. 16).
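A sketch of this face-matching step using the open-source face_recognition library; the paper does not name its face-verification library, and the image paths here are placeholders:

```python
import face_recognition

# Image stored at registration time vs. the live webcam capture.
registered = face_recognition.load_image_file("registered_voter.jpg")
live = face_recognition.load_image_file("webcam_capture.jpg")

registered_enc = face_recognition.face_encodings(registered)[0]
live_encs = face_recognition.face_encodings(live)

# Allow the vote only if the live capture matches the stored encoding.
match = bool(live_encs) and face_recognition.compare_faces(
    [registered_enc], live_encs[0]
)[0]
print("voter verified" if match else "verification failed")
```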


Fig. 14 GUI for casting vote phase

Fig. 15 GUI to display result

Every party running for office has a button or emblem on the voting website. Voters must select a candidate from the list before casting their ballot, and a voter may cast only one ballot in an election. The entire system is designed to make user involvement very easy: as soon as voters submit their ballot, their user ID automatically logs them out, so each voter can cast only one vote.


Fig. 16 GUI for admin login

Encrypting Votes. Following a successful vote, the programme produces a hash code over the voter's individual identity number, the voter data, and the preceding block's hash value. This technique makes each transaction unique.

Adding Block. After every transaction has been recorded and verified, a new block is created, and the data is recorded in that block according to the candidate chosen. Every block is connected to the one before it; therefore, changing a transaction in such a system is virtually impossible.
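A minimal sketch of the vote-sealing hash just described, assuming SHA-256 and illustrative field names:

```python
import hashlib
import json

def seal_vote(voter_id: str, vote: dict, prev_hash: str) -> dict:
    """Produce the per-transaction hash described above: it covers the
    voter's identity number, the vote data, and the previous block hash."""
    payload = json.dumps(
        {"voter_id": voter_id, "vote": vote, "prev_hash": prev_hash},
        sort_keys=True,
    )
    return {"payload": payload,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

chain = [{"hash": "0" * 64}]                      # genesis placeholder
block = seal_vote("VOTER-42", {"candidate": "A"}, chain[-1]["hash"])
chain.append(block)                               # linked to its predecessor
```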

6 Conclusion

Our exploratory research indicates that a smart-contract-based Blockchain electronic voting system safeguards voter anonymity and ensures a secure and affordable election. Blockchain technology enables secure voting by eliminating the drawbacks and obstacles of earlier electronic voting systems, and this technique makes election transparency possible: the technology used to keep the data ensures that the public can access and check it through a decentralised method. However, the system does not by itself guarantee error-free data; the data fed into a Blockchain-based voting system must be guaranteed to be error-free.


References

1. Ariyanto D et al (2020) Survey research on information technology and supply chain management address common method variance (CMV)
2. Caro MP, Ali MS, Vecchio M, Giaffreda R (2018) Blockchain-based traceability in agri-food supply chain management: a practical implementation. In: 2018 IoT vertical and topical summit on agriculture-Tuscany (IOT Tuscany). IEEE, pp 1–4
3. Casino F, Dasaklis TK, Patsakis C (2019) A systematic literature review of blockchain-based applications: current status, classification and open issues. Telemat Inf 36:55–81
4. Conti M, Kumar ES, Lal C, Ruj S (2018) A survey on security and privacy issues of bitcoin. IEEE Commun Surv Tutor 20(4):3416–3452
5. Dai HN, Zheng Z, Zhang Y (2019) Blockchain for internet of things: a survey. IEEE Internet of Things J 6(5):8076–8094
6. FAO F et al (2018) Food and agriculture organization of the united nations. Rome. http://faostat.fao.org
7. Fish LA (2011) Supply chain quality management. In: Supply chain management pathways for research and practice, vol 25(1), pp 225–234
8. Helo P, Hao Y (2019) Blockchains in operations and supply chains: a model and reference implementation. Comput Ind Eng 136:242–251
9. Kaushik A, Choudhary A, Ektare C, Thomas D, Akram S (2017) Blockchain—literature survey. In: 2017 2nd IEEE international conference on recent trends in electronics, information & communication technology (RTEICT). IEEE, pp 2145–2148
10. Lin J, Shen Z, Zhang A, Chai Y (2018) Blockchain and IoT based food traceability for smart agriculture. In: Proceedings of the 3rd international conference on crowd science and engineering, pp 1–6
11. Liu Q, Yang H (2019) Application of atomic force microscopy in food microorganisms. Trends Food Sci Technol 87:73–83
12. Mann S, Potdar V, Gajavilli RS, Chandan A (2018) Blockchain technology for supply chain traceability, transparency and data provenance. In: Proceedings of the 2018 international conference on blockchain technology and application, pp 22–26
13. Mena C, Stevens G (2010) Delivering performance in food supply chains. Elsevier
14. Monrat AA, Schelén O, Andersson K (2019) A survey of blockchain from the perspectives of applications, challenges, and opportunities. IEEE Access 7:117134–117151
15. Naik G, Suresh D (2018) Challenges of creating sustainable agri-retail supply chains. IIMB Manag Rev 30(3):270–282
16. Parmar MK, Rahevar ML (2019) Compromising cloud security and privacy by DoS, DDoS, and botnet and their countermeasures. In: International conference on IS-MAC in computational vision and bio-engineering. Springer, pp 159–169
17. Parmar M, Shah P (2020) Uplifting blockchain technology for data provenance in supply chain. Int J Adv Sci Technol 29:5922–5938
18. Treiblmaier H (2018) The impact of the blockchain on the supply chain: a theory-based research framework and a call for action. Supply Chain Manag: Int J
19. Wang X, Zha X, Ni W, Liu RP, Guo YJ, Niu X, Zheng K (2019) Survey on blockchain for internet of things. Comput Commun 136:10–29
20. Wu M, Wang K, Cai X, Guo S, Guo M, Rong C (2019) A comprehensive survey of blockchain: from theory to IoT applications and beyond. IEEE Internet Things J 6(5):8114–8154
21. Xie J, Yu FR, Huang T, Xie R, Liu J, Liu Y (2019) A survey on the scalability of blockchain systems. IEEE Netw 33(5):166–173
22. Zsidisin GA, Ritchie B (2009) Supply chain risk management: developments, issues and challenges. In: Supply chain risk. Springer, pp 1–12

Digital Twins—A Futuristic Trend in Data Science, Its Scope, Importance, and Applications M. T. Vasumathi, Aurangjeb Khan, Manju Sadasivan, and Umadevi Ramamoorthy

Abstract Digital Twins is a prospective technology that is emerging in the data science domain. It is the concept of creating a virtual equivalent of a real-world object that exists parallelly in the real world. With the application of real-world data, a computer program can create a simulated model that can predict or understand the working of a product or a process that may not even be present in the physical world. In short, digital twinning is a digital equivalent of any real-world object, phenomenon, or utility. A digital twin is a model driven by the Internet of Things application and it can be employed to assess the current status of the object in concern and also to forecast its future condition, controlling the behavior and optimizing its performance. Digital Twins Technology has already found its applications in the field of manufacturing, healthcare, automobile, retailing, restaurant management, smart cities, etc. This paper brings a brief insight into this emerging technology, its importance, where it can be employed and its scope in the future. Keywords Digital twins · Simulators · Sensors · Internet of Things

M. T. Vasumathi (B) · A. Khan · M. Sadasivan · U. Ramamoorthy
CMR University, Bengaluru, India
e-mail: [email protected]
A. Khan
e-mail: [email protected]
M. Sadasivan
e-mail: [email protected]
U. Ramamoorthy
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_58

1 Introduction

The influence of IoT is tremendous in the world, and its applications in healthcare, agriculture, manufacturing, security, and other sectors are unlimited. Digital Twinning is another technology that can exploit the cloud and other related technologies to the fullest. A Digital Twin, a big leap in the field of Artificial Intelligence, is a virtual representation that is an exact replica of its physical counterpart. Using this technology, parallel real-world entities can be controlled by passing data back and forth between them: any unpleasant effect in the real-world entity can be foreseen by its virtual counterpart and avoided. The digital twin uses several technologies, such as the Internet of Things, cloud computing, and Artificial Intelligence, to accomplish this purpose [1]. The DT is a simulated model that imitates its physical counterpart very accurately by generating data on the various features of the equivalent physical object using the sensors attached to the model. The idea of Digital Twinning was first introduced in 2003 by David Gelernter along with Michael Grieves and was later encouraged and welcomed by the National Aeronautics and Space Administration (NASA) in its 2012 paper, "The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles". That was a big breakthrough for the idea. The concept became popular in 2017 and has since been used in several manufacturing industries. The industries making use of digital twinning technology treat it as a system or component comprising a collection of virtual information that describes the physical construction from the fundamental level to the advanced level.

2 Related Work

A survey paper by Mihai et al. summarizes the crucial elements of the technology. The paper provides a thorough explanation of digital twins, covers design objectives and aims, identifies design difficulties and inadequacies across industries, and discusses advancements in both research and business. The Digital Twin (DT) makes it possible to test and analyse complex systems in ways that conventional replication and modular evaluation cannot [2]. Using drones to collect real-time data, An et al. presented a Digital-Twin-enabled strategy for reducing methane emissions that addresses the problem of global warming; the authors also validated the digital twin by splitting the methane emission mapping and prediction task into subsystems [3]. The idea of a digital twin identifier registry was proposed by Autiosalo et al. Measurements showed that, depending on the identifier registry, the median response time for retrieving a digital twin document from Twinbase ranged between 0.4 and 1.2 s. The authors described the underlying architecture of Twinbase to support the development of derived and other server implementations [4]. Li et al. investigated whether their suggested framework could be used, in a case study, for the rapid construction of an automotive body-in-white robotic welding production line. The findings suggest that using bionics and Digital Twins (DT) can speed the invention and development of new goods and contribute to effective production construction management [5]. For a lightweight digital twin system based on roadside sensing devices, a C-V2X network, and multi-sensor fusion, Liu


presented a global cooperative algorithm. It offers a way to enhance digital twin technology and significantly improve traffic conditions [6]. To enable new functions, such as a hyperconnected experience and low-latency edge computing, Lu et al. proposed a wireless digital twin edge network concept; their work addressed the edge association problem under changing network topology and dynamic network states [7]. A framework for mobility services based on artificial intelligence (AI) and data-driven cloud-edge devices, named the mobility digital twin (MDT), was created by Wang et al. The proposed MDT framework is supported by a cloud-edge architecture built with Amazon Web Services (AWS), and its digital functionalities of storing, modeling, learning, simulation, and prediction were implemented successfully [8]. Kuruvatti et al. investigated how DT technology might be used in the context of 6G communication systems, finding it a potentially useful instrument for efficient research, development, operation, and optimization of next-generation communication systems [9]. Azfar et al. investigated various tools and resources to create an effective and practical process for building a 3D digital model of a university campus that can enable digital twin applications [10].

3 Digital Twin-Meaning, Characteristics, and Attributes

The following section gives the background on the evolution and conceptualization of Digital Twins.

3.1 Digital Twin-Meaning

A digital twin is a copy of a physical object, whether it be living or nonliving, that can learn from various sources and update itself [11]. It is an integration of IoT, AI, and ML with software analytics and geographical network graphs. In simple terms, a DT is a cloud-based virtual image maintained throughout the lifecycle as a single platform, bringing all the experts together and providing powerful analysis, insight, and diagnostics. In digital twins, information on the design, operation, and servicing is combined with data gathered from the device's sensors. The data serves as the digital twin's brain, and intelligence in the form of analytics, physics, and machine learning is placed on top of it to make things productive through modeling, optimization, and early warning.


3.2 Characteristics of DT

Some of the characteristics that set digital twins apart from other technologies are described here.

Connectivity. The technology makes it possible for the physical element and its digital counterpart to be connected. This relationship is the foundation of digital twins; without it, there would be no digital twin technology. The physical product's sensors, which collect data, integrate it, and communicate it through a variety of integration technologies, are what enable this connectivity.

Homogenization. Digital twins have been made possible by the homogenization of data and the separation of information from its physical entity.

Reprogrammable and Smart. Through sensors on the physical object, artificial intelligence technology, and predictive analytics, DT makes it possible for a physical product to be reprogrammed in a specific way. The emergence of new features is a result of this re-programmability.

Digital Traces. Digital twin technologies leave digital traces. An engineer, for instance, can utilize these traces to examine a machine malfunction and determine where the issue originated. The maker of the equipment may use these diagnostics in the future to enhance its designs and reduce the frequency of such faults.

Modularity. Manufacturers can modify models and equipment by adding modularity to their manufacturing models. By using digital twin technology to create modular machines, producers may identify the subpar-performing parts of a machine and swap them out for better-fitting ones, streamlining the manufacturing process [12, 13].

3.3 Attributes of DT

The following list of characteristics helps to distinguish authentic digital twins from other forms [14].

• It is a digital representation of an actual "entity".
• DT simulates the entity's behavior as well as its physical condition.
• It is distinctive and linked to a single distinct instance of the entity.


• It can modify itself in response to recognized alterations in the status, condition, or context of the entity.
• By virtualization, it adds value.

The general framework of the Digital Twin is shown in Fig. 1.

Fig. 1 General framework of digital twin

4 Digital Twin Versus Other Technologies

4.1 Simulator Versus Digital Twin

A simulator is a model designed to imitate or replicate the working of any machine or system. Simulators are digital models that simulate products, systems, processes, and concepts, and they are typically utilized at the beginning of the design phase with Computer-Aided Design (CAD) software. Simulators, which can be built as two-dimensional or three-dimensional models, are used to emulate both computer-based parts and mathematical principles. For training purposes, simulators provide a realistic representation of the controls and operation of a car, an airplane, or any other complex system. A digital twin, on the other hand, is a virtual model that depicts an actual thing existing concurrently in the real world [15]. An object that has a digital twin is equipped with sensors that produce data on a number of performance-related factors. In other words, a physical object and its digital twin coexist in parallel, and any change to the real object triggers a matching change in the digital twin, allowing rapid control of any deviations in the physical object's operation. IoT-enabled devices collect crucial data from real-world objects that serve as the basis for digital twins, integrating the real and virtual worlds.


Though both simulation and digital twins use digital models to replicate objects and concepts, there are certain differences between the two [16]. The first and foremost difference is that a DT provides a virtual model capable of running several simulations supported by real-time data. The sensors that collect data send signals to and receive signals from the DTs, and this continuous interaction between sensors, actuators, and DTs makes the predictive outcome more accurate. This continuous two-way communication enables more effective control of physical objects. Simulators replicate what has already occurred to a product, whereas a DT keeps replicating simultaneously as changes occur to the physical object.

4.2 Digital Twin Versus Digital Shadow

Digital shadows and digital twins are often considered synonymous, but they are different concepts using similar technologies. Digital shadows are created using different techniques and can be made to appear in different forms, unlike DTs, which serve a specific purpose within Model-Based Systems Engineering: a DT maintains a digital representation of a particular entity, say a device or a system, throughout its life cycle, and the device is updated in real time through the digital twin [17].

5 Underlying Technologies in DT

The technologies that fundamentally enable digital twins are [18]:

• Internet of Things (IoT)—DTs are popular because they deal with real-time data, which makes it possible to monitor and evaluate the performance of the real entity or environment paired with the digital twin. The main source of the real-time data on which the digital twin operates is the sensors and devices that make up the IoT. Sensors continuously collect data from the physical environment and send it to an edge device and on to the cloud, where the application layer uses these data, with the help of technology like a metaverse platform, to render digital twins of the object or environment on the user's screen. Along with the DT visuals, the application can show analytical reports that help decision-making (a minimal sketch of this sensor-to-cloud path follows this list).
• Cloud Computing—Provides services such as storage, software, databases, and data analytics by processing huge amounts of sensor data and creating digital twins with the help of different algorithms and platforms for real-time data processing. A DT developer and a DT user interact through a cloud platform without investing in infrastructure such as server setup, paying only for the services used. The DT developer develops and publishes digital twins of real-world entities on the cloud platform. The entire process of monitoring,


updating, and predicting the real-world entity with the help of the digital twin is performed over the cloud.
• Application Programming Interface (API)—An API helps the programmer integrate a newly developed application with existing software or architecture. It provides tools to share data between different applications and hence helps integrate cloud analytics data into an application such as a metaverse platform to enable digital twins.
• Artificial Intelligence (AI)—AI helps a machine learn from experience and past data and execute tasks like a human being. AI makes an IoT-based device more intelligent by implementing various algorithms as required by the system, and at the same time it can help with future prediction in the system context.
• Virtual Reality (VR) and Augmented Reality (AR)—VR gives the experience of a real environment in the form of a virtual environment to see and interact with, whereas AR gives the experience of a mix of real and virtual environments, connectivity, and interaction. AR and VR technologies are key factors in digital twins.
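As referenced in the IoT item above, here is a minimal sketch of the sensor-to-cloud path using MQTT via the paho-mqtt package; the broker hostname, topic, and sensor driver are placeholders:

```python
import json
import time
from paho.mqtt import publish

def read_temperature() -> float:
    return 24.7            # stand-in for a real sensor driver

reading = {
    "device_id": "edge-01",
    "timestamp": time.time(),
    "temperature_c": read_temperature(),
}
publish.single(
    topic="plant/line1/temperature",
    payload=json.dumps(reading),
    hostname="broker.example.com",   # placeholder broker address
)
```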

6 Features of Digital Twins

Every DT will have its own unique features depending on the purpose for which it was developed, but some features may be common to all, as listed in Table 1.

Table 1 Features of the digital twins

Documents management: The electronic documents and images of the current system are monitored continuously and updated in concurrence with the real-world entity throughout its lifecycle.
Representation of model: A representation model that is equivalent to the corresponding physical object will imitate exactly the properties and traits of the object.
3D model and simulation: A 3D model representation exhibiting the properties of its physical counterpart, along with the simulated environment, is created to observe an object's behavior.
Data modeling: The decision-making process can be made efficient by combining data modeling with connectivity, visualization, and data analytics.
Data visualization: The users can view the visual representation of the data pertaining to the real-world object and its environment.
Synchronization of models: This feature enables all the parameters of the model to be in synchronization with the parameters of the real environment or the entities.
Analytics: The most important feature of the DT is the analytics of data and network, accomplished by employing several AI and machine learning algorithms.

7 The Architecture of the Digital Twin

The various phases in the architecture of the Digital Twin service are discussed below [18]:

Data and Data Collection: Any digital transformation starts by constructing an operational Digital Twin that comprises two types of data, namely real-time and time-series data. Real-time data is applied to create the digital characterization of real-world objects by constructing a graphical representation, while time-series data constitute observations of the state of a physical object at given points in time. The data can be continuous or discrete.

Data Pipeline: A single coherent model is created from the heterogeneous collection of data sources and then transferred to the element graph. This is accomplished by creating a data pipeline using a visual programming language, which specifies how to combine various data sources and turn them into a graph.

Data Integrity: The data stream is checked to identify whether there are problems with the connectivity regulations or whether any defects exist in the devices that collect the



physical data. This is a set of analyses required to be performed on single-variate or multivariate data.

Data Egress: Once the data has been gathered, grouped into a digital twin, and evaluated to assure trust in its correctness, the final stage uses the digital twin to unlock a wide range of analytics value.

The general architecture of the Digital Twin is given in Fig. 2.

Fig. 2 General architecture of DT
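A small sketch of the pipeline step described above, joining static asset data and time-series observations into one per-asset record; all field names are illustrative:

```python
# Static asset data and time-series observations from different sources
# are combined into a single coherent per-asset node.
static_assets = {
    "pump-7": {"model": "XL-200", "installed": "2019-05-01"},
}
time_series = [
    {"asset": "pump-7", "t": 1700000000, "vibration_mm_s": 2.1},
    {"asset": "pump-7", "t": 1700000060, "vibration_mm_s": 2.4},
]

twin_graph = {}
for obs in time_series:
    node = twin_graph.setdefault(obs["asset"], {
        "static": static_assets.get(obs["asset"], {}),
        "observations": [],
    })
    node["observations"].append({"t": obs["t"],
                                 "vibration_mm_s": obs["vibration_mm_s"]})

print(twin_graph["pump-7"]["static"]["model"],
      len(twin_graph["pump-7"]["observations"]))
```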


8 Steps Involved in the Creation of the Digital Twins

Digital twins are directly connected with real-world entities, for instance buildings or mechanical devices such as compressor engines, and data travels between the real-world entity and its corresponding digital twin. Because digital twins are an attachment to the environment, the following steps should be followed in creating one [19, 20]; they are shown in Fig. 3.

Step 1: Defining the purpose and scope. A clear idea of the digital twin has to be recognized, and a blueprint must be generated that defines the type of twin the organization plans to pursue. The blueprint should also state the size and scope of what is being built and address questions such as who will manage the digital twins and whether consumers will be able to view them remotely.

Step 2: Selection of AR tools. Once the purpose is identified and well defined, the components required must be identified. Some of these components include:

1. Information: Digital twins transmit data back and forth with the corresponding real-world physical entity. The DT developer decides on the type of information and the specialized hardware required.
2. Equipment: The physical components required to accomplish the functionality must be realized, including sensors, location trackers, monitors, and a local cabled or wireless network; developers must also consider the physical structure.
3. Enabling technology: Finally, the software required to materialize the digital twin must be identified. Solutions with IoT-based device management and 3D representations are essential on an architectural scale.

Step 3: Digitally capture the physical environment. The next step in designing digital twins is creating a simulation of the 3D physical environment. This may be done through a digitization process that captures images of the area from different perspectives. With the captured images

Fig. 3 Steps involved in the creation of the digital twin


the DT developers can construct a 3D digital twin model. A machine learning approach can be employed to compile the information obtained from the images.

Step 4: Adding functionality to the digital twin. After the digital twin's architectural design has been completed, the designers must add functionality to ensure that it behaves as expected. The digital twin's capabilities will depend on its intended purpose. The initial model has to be refined with good lighting effects and textures so that end users perceive a look-alike of the real-world entity. Designers can include interactive nodes, thereby giving some control to the end users. Once the digital twin is integrated with the model, DT developers must test and enhance the experience for the users.

9 General Working of Digital Twins

The working pattern of a DT may be specific to the application or area for which it is developed; still, all DTs follow a standard working methodology. Once the conceptualization is done and the necessary components are assembled, as discussed in the previous section, the interconnection between the physical system and its virtual counterpart is established within a closed loop called a digital thread [21]. The DT makes use of the "digital thread", which carries data on a product's performance and use from its design through manufacturing, sale, use, and disposal or recycling. This offers insights into how consumers use products, how well they function, where they may be improved, and what new features consumers might desire. In a nutshell, a digital thread connects the physical and digital worlds. The following actions take place repeatedly inside the closed loop, collecting data from the physical entity and its surroundings and sending it to the central store [22]:

• The data that needs to be input to the DT is pre-processed.
• The DT uses real-time data to imitate the entity's work in real time, observes the consequences of environmental changes, and identifies shortcomings. This task can be accomplished by employing AI algorithms to modify the product design or to identify unfavorable trends and control them.
• A dashboard is created to reflect the findings from the analytics.
• Actionable, data-oriented decisions are made by the stakeholders, and the parameters and processing of the physical entity are updated accordingly.

These tasks are performed repetitively as and when new data arrives. Figure 4 shows a digital thread.
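The closed-loop actions above can be sketched as follows; every function here is an illustrative stand-in, not part of any specific DT product:

```python
def collect():
    # Stand-in for sensor acquisition from the physical entity.
    return {"rpm": 1480, "temperature_c": 71.5}

def preprocess(raw):
    return {k: float(v) for k, v in raw.items()}

def analyse(data):
    # A trivial rule in place of the AI/ML analytics layer.
    return {"overheating": data["temperature_c"] > 70.0}

def update_dashboard(findings):
    print("dashboard:", findings)

def actuate(findings):
    # Feed decisions back to the physical entity, closing the loop.
    if findings["overheating"]:
        print("command physical asset: reduce load")

for _ in range(3):                     # repeats whenever new data arrives
    findings = analyse(preprocess(collect()))
    update_dashboard(findings)
    actuate(findings)
```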


Fig. 4 Digital thread

10 A Case Study—Digital Twins in Restaurant Management

Digital twins can be created to manage a restaurant: with real-time input from various sensors, the digital twins of the restaurant's areas can be viewed virtually on screen, and controlling the restaurant's real-time activities is thereby optimized [23]. A restaurant is concerned with managing the crowd, improving the quality of service, analyzing demand, saving food, and so on. A Digital Twin can be employed to visualize the quantity of food required at a given time, the number of people required to attend to the customers in the minimum time, the team size required to prepare food, and so on. The IoT-based digital twin solution can be framed around the usual questions a restaurant manager must answer almost every day in the business, such as: How do extreme changes in demand affect the business? How can the optimal wait time be achieved? How much food is needed to meet demand at a certain time and day? How many people are needed to achieve the desired service? How many people are needed to prepare the food at peak time?

These questions can be answered optimally with the help of IoT-based Digital Twins, which eases the entire process of restaurant management. So far, no real-time prediction, analysis, or management system is available for monitoring and controlling the activities of a restaurant. Smart restaurant management systems do exist that manage the communication between customers and the restaurant through sensors and cloud computing technologies, but they are limited to managing customer orders and optimizing services: they do not attempt to forecast demand, detect stock theft, issue stock requirement notifications, provide real-time inventory visibility and control, maintain accurate customer details, calculate and analyze the profitability of a dish, or support customer retention.


10.1 Components Required to Implement Digital Twins in Restaurants

The physical environment will be monitored on screen in the form of digital twins, which helps handle critical parameters such as the number of customers, peak time, successful orders, use of resources, and stock levels. Digitalization and analysis of the data taken from the sensors installed in the restaurant make it possible to show prescriptive information on screen alongside the digital twin visuals, optimizing restaurant management and enabling real-time operation [24]. The components [25] required to implement the proposed digital twin architecture for the restaurant are shown in Table 2.

Table 2 Components to implement the IoT-based solution for the digital twins in restaurant

Raspberry Pi: A credit-card-sized computer with the additional feature of GPIO pins. Cameras can be connected via a ribbon cable or USB, and the board is very handy for running Python code with loaded Python libraries. This gives the advantage of implementing any type of algorithm on the Raspberry Pi with multiple sensors, including a camera for image processing and data visualization. The Raspberry Pi comes with built-in Wi-Fi, which can be used to push the processed data to the IoT cloud for further processing and visualization [26].

USB-based camera: Used as a sensor to capture videos and images from all corners of the restaurant; the Raspberry Pi board analyses this video to estimate the number of customers at a point in time, the types of food in demand, the peak time, the size of the cooking team, and so on [27].

Infrared (IR) sensors: Used to detect customers at the tables and identify any free table. An IR sensor is a small, cheap electronic device that measures and detects infrared radiation in its surrounding environment [28]. With the help of these sensors, empty tables are detected and the customers being served at any point in time are counted.

NodeMCU ESP8266: A low-cost Wi-Fi microchip with an integrated microcontroller and TCP/IP networking software. This chip connects the IR sensors wirelessly with the Raspberry Pi, passing the sensor data to the Pi board [29].
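A sketch of the table-occupancy logic on the Raspberry Pi, assuming one active-low IR module per table wired to the GPIO pins listed below (pin numbers are placeholders); it requires the RPi.GPIO package and actual Pi hardware:

```python
import time
import RPi.GPIO as GPIO

TABLE_PINS = {1: 17, 2: 27, 3: 22}    # table number -> BCM GPIO pin

GPIO.setmode(GPIO.BCM)
for pin in TABLE_PINS.values():
    GPIO.setup(pin, GPIO.IN)

try:
    while True:
        # Typical IR modules pull the line LOW when the beam is broken,
        # i.e. when a customer is seated (sensor-dependent assumption).
        occupied = [t for t, pin in TABLE_PINS.items()
                    if GPIO.input(pin) == GPIO.LOW]
        print(f"occupied tables: {occupied}, "
              f"free: {sorted(set(TABLE_PINS) - set(occupied))}")
        time.sleep(5)
finally:
    GPIO.cleanup()
```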


10.2 Architecture of the Digital Twins for Restaurant Management

We propose a four-layered architecture for the restaurant digital twins, as presented in Fig. 5. The first layer is the physical layer, containing the IR sensors, NodeMCUs, and cameras connected to the Raspberry Pi over Wi-Fi and USB, respectively [30]; this layer is responsible only for collecting data from the physical environment. The second layer is the edge layer, consisting of edge devices such as the Raspberry Pi and Wi-Fi routers. The Raspberry Pi is a mini computer loaded with Python programs implementing algorithms for counting the number of customers at a point in time, the types of food in demand, the peak time, the chef team size, and so on; it processes the collected data locally at the edge and sends the processed information to the cloud channel. The third layer is the cloud, where the IoT cloud channel receives the sensor data points from the edge device and runs analytics services on the received data to generate recommendations according to the business logic. The fourth layer is the application layer, a customized web application that visualizes the digital twins in one panel of the screen and shows the data analytics results from the cloud in a second panel. Data analytics can be performed on any of several IoT clouds (AWS IoT Analytics, ThingSpeak, Google IoT cloud, etc.) [31]; a dashboard with the required fields can be created on any of these clouds and surfaced in the customized web application to view results in multiple panels. The flow of operations of the proposed work is shown in Fig. 6.

The Digital Twins for the restaurant can be designed to visualize the current scenario and predict upcoming demand. The implementation can be accomplished using a Raspberry Pi board as the edge device, with several cameras continuously

Fig. 5 Flow of operations of the digital twins for the restaurant


Fig. 6 Working model of restaurant management digital twin

capturing images of the restaurant from all directions. Infrared sensors can be attached under the tables to count the customers present, and a NodeMCU helps integrate the IR sensors with the Raspberry Pi board wirelessly. The working model of the proposed Restaurant Management Digital Twin is shown in Fig. 6. An IoT cloud channel is to be created on any of the listed IoT clouds (AWS IoT Analytics, ThingSpeak, Google IoT cloud) [31] for data visualization and analytics. Algorithms such as the Support Vector Machine (SVM) will be implemented in Python on the edge device (Raspberry Pi) to identify food and its type [32]. Business-logic and sensor-specific code to process the data collected from the different sensors and cameras will be written in Python, with the SVM algorithms as libraries, and can be deployed on the IT cloud. These algorithms and programs help count the customers, compute the peak time based on the restaurant size and customer presence, estimate the wait time for customers, and count the people preparing food. After processing, the data is pushed to the IoT cloud, from where the information is presented on the manager's screen as a dashboard in one division of the screen, with the Digital Twin in another division. Monitoring and analyzing the digital twins allows restaurant managers to understand performance levels and compare them with expectations, so the entire process becomes more efficient because situations can be anticipated. The integrated Digital Twin operation correlates streaming IoT data with additional inputs. Digital Twins of the restaurant can be created using any of the metaverse platforms (Cryptovoxels, Decentraland, Somnium Space) as a replica of the physical spaces of the restaurant [33].
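A sketch of the SVM food-type classifier using scikit-learn; real inputs would be feature vectors extracted from the camera frames, so random vectors stand in here to keep the sketch runnable:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))                   # 64-dim image features
y = rng.integers(0, 3, size=120)                 # 3 food categories

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)    # train the classifier
print("held-out accuracy:", clf.score(X_test, y_test))
```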


11 Conclusion

Digital twinning has been applied in various areas of the manufacturing industry and in space, and it also finds limited usage in health care. Its application is not restricted to these fields, however, and can be extended to several other domains. Despite its success in many proven instances, digital twinning has not been extensively developed because of the complexity of creation and implementation. At present, several global tech leaders have joined the Digital Twin Consortium to promote the creation, acceptance, usage, and interoperability of DTs, but adoption of DT can be achieved only if clear technical guidance and compatible frameworks are made available. Every DT is unique, created to virtually represent a particular product or phenomenon. A Digital Twin requires not only infrastructure, platforms, and models but also experts in data, machine learning, and cloud technologies, and engineers capable of integrating the different pieces of the hardware and software puzzle. In any case, there is no second thought that digital twins are a revolutionary technology in every field.

References

1. Batty M (2018) Digital twins. Environ Plann B: Urban Anal City Sci 45(5):817–820. https://doi.org/10.1177/2399808318796416
2. Mihai S et al (2022) Digital twins: a survey on enabling technologies, challenges, trends and future prospects. IEEE Commun Surv Tutor 24(4). IEEE
3. An D et al (2021) Digital twin enabled methane emission abatement using networked mobile sensing and mobile actuation. In: 2021 IEEE 1st international conference on digital twins and parallel intelligence (DTPI), 22 September 2021. IEEE
4. https://mapal-os.com/en/resources/blog/welcome-to-digital-twin-restaurant-technology-that-improves-your-restaurants-performance-and-customer-experience-in-one-go
5. Autiosalo J et al (2021) Twinbase: open-source server software for the digital twin web. IEEE Access 9:140779–140798. Electronic ISSN: 2169-3536
6. Li L et al (2021) Digital twin bionics: a biological evolution-based digital twin approach for rapid product development. IEEE Access 9:121507–121521. Electronic ISSN: 2169-3536
7. Liu Q (2022) Application of lightweight digital twin system in intelligent transportation. IEEE J Radio Freq Identif 6:729–732
8. Lu Y, Maharjan S, Zhang Y (2021) Adaptive edge association for wireless digital twin networks in 6G. IEEE Internet of Things J 8(22)
9. Wang Z, Gupta R, Han K, Wang H, Ganlath A, Ammar N, Tiwari P (2022) Mobility digital twin: concept, architecture, case study, and future challenges. IEEE Internet of Things J 9(18)
10. Kuruvatti NP, Asif Habibi M, Partani S, Han B, Fellan A, Schotten HD (2022) Empowering 6G communication systems with digital twin technology: a comprehensive survey. IEEE Access 10
11. Hartmann D, Van der Auweraer H (2021) Digital twins. In: Cruz M, Parés C, Quintela P (eds) Progress in industrial mathematics: success stories. SEMA SIMAI Springer series, vol 5. Springer, Cham. https://doi.org/10.1007/978-3-030-61844-5_1
12. https://en.wikipedia.org/wiki/Digital_twin
13. Barricelli BR, Casiraghi E, Fogli D (2019) A survey on digital twin: definitions, characteristics, applications, and design implications. IEEE Access 7:167653–167671. https://doi.org/10.1109/ACCESS.2019.2953499

14. Sturm C, Steck M, Bremer F, Revfi S, Nelius T, Gwosch T, Albers A, Matthiesen S (2021) Creation of digital twins—key characteristics of physical to virtual twinning in mechatronic product development. Proc Des Soc 1:781–790. https://doi.org/10.1017/pds.2021.78
15. Pei W, Ming L (2021) A digital twin-based big data virtual and real fusion learning reference framework supported by the industrial internet towards smart manufacturing. J Manuf Syst 58:16–32. https://doi.org/10.1016/j.jmsy.2020.11.012
16. Wright L, Davidson S (2020) How to tell the difference between a model and a digital twin. Adv Model Simul Eng Sci 7:13. https://doi.org/10.1186/s40323-020-00147-4
17. Bergs T, Gierlings S, Auerbach T, Klink A, Schraknepper D, Augspurger T (2021) The concept of digital twin and digital shadow in manufacturing. Procedia CIRP 101:81–84. ISSN 2212-8271. https://doi.org/10.1016/j.procir.2021.02.010
18. Qi Q, Tao F, Hu T, Anwer N, Liu A, Wei Y, Wang L, Nee AYC (2021) Enabling technologies and tools for digital twins. J Manuf Syst 58, Part B:3–21. ISSN 0278-6125. https://doi.org/10.1016/j.jmsy.2019.10.001
19. Marmolejo-Saucedo JA (2020) Design and development of digital twins: a case study in supply chains. Mobile Netw Appl 25:2141–2160. https://doi.org/10.1007/s11036-020-01557-9
20. Azangoo M et al (2022) A methodology for generating a digital twin for process industry: a case study of a fiber processing pilot plant. IEEE Access 10:58787–58810. https://doi.org/10.1109/ACCESS.2022.3178424
21. Abusohyon IAS, Crupi A, Bagheri F, Tonelli F (2021) How to set up the pillars of digital twins technology in our business: entities, challenges and solutions. Processes 9:1307. https://doi.org/10.3390/pr9081307
22. Jones D, Snider C, Nassehi A, Yon J, Hicks B (2020) Characterising the digital twin: a systematic literature review. CIRP J Manuf Sci Technol 29, Part A:36–52. ISSN 1755-5817. https://doi.org/10.1016/j.cirpj.2020.02.002
23. Newrzella SR, Franklin DW, Haider S (2022) Methodology for digital twin use cases: definition, prioritization, and implementation. IEEE Access 10:75444–75457. https://doi.org/10.1109/ACCESS.2022.3191427
24. Augustine P (2020) Chapter four—The industry use cases for the digital twin idea. In: Raj P, Evangeline P (eds) Advances in computers, vol 117(1). Elsevier, pp 79–105. ISSN 0065-2458. ISBN 9780128187562. https://doi.org/10.1016/bs.adcom.2019.10.008
25. El Saddik A (2018) Digital twins: the convergence of multimedia technologies. IEEE Multimedia 25(2):87–92. https://doi.org/10.1109/MMUL.2018.023121167
26. Severance C (2013) Eben Upton: Raspberry Pi. Computer 46(10):14–16. https://doi.org/10.1109/MC.2013.349
27. Puppim de Oliveira D, Pereira Neves dos Reis W, Morandin Junior O (2019) A qualitative analysis of a USB camera for AGV control. Sensors 19:4111. https://doi.org/10.3390/s19194111
28. Centeno A, Aid S, Xie F (2018) Infra-red plasmonic sensors. Chemosensors 6:4. https://doi.org/10.3390/chemosensors6010004
29. Singh Parihar Y (2019) Internet of Things and NodeMCU: a review of the use of NodeMCU ESP8266 in IoT products. Int J Emerg Technol Innov Res 6(6):1085–1088. ISSN 2349-5162. http://www.jetir.org/papers/JETIR1907U33.pdf
30. VanDerHorn E, Mahadevan S (2021) Digital twin: generalization, characterization and implementation. Decis Support Syst 145:113524. ISSN 0167-9236. https://doi.org/10.1016/j.dss.2021.113524
31. https://thedailyplaniot.com/iot-data-analytics-platforms/
32. Pouladzadeh P, Shirmohammadi S, Al-Maghrabi R (2014) Measuring calorie and nutrition from food image. IEEE Trans Instrum Meas 63(8):1947–1956. https://doi.org/10.1109/TIM.2014.2303533
33. https://www.one37pm.com/nft/a-beginners-guide-to-the-metaverse#:~:text=Popular%20examples%20of%20the%20metaverse,known%20as%20online%20gaming%20platforms

Loyalty Points Exchange System Using Blockchain Swati Jadhav, Shruti Singh, Akash Sinha, Vishal Sirvi, and Shreyansh Srivastava

Abstract The motive behind this research is to build a blockchain-based system that allows users to manage their reward/loyalty points, trade them with other users, and offers a transparent, secure network for transactions. Customers who are loyal to a brand are rewarded with points through loyalty programmes, and after they have accumulated enough points they can redeem them for goods or services. But the current system of loyalty points suffers from issues such as long collection times, low liquidity, and a fragmented landscape that results in poor user experience. The system proposed in this paper solves such issues by providing better security and a smooth user experience. Thus, with the help of technologies such as Solidity, which is used to write smart contracts, together with Ethereum and MetaMask, a loyalty system has been built.

Keywords Loyalty points · Blockchain · Ethereum · Smart contract · Metamask

S. Jadhav (B) · S. Singh · A. Sinha · V. Sirvi · S. Srivastava
Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, India
e-mail: [email protected]
S. Singh e-mail: [email protected]
A. Sinha e-mail: [email protected]
V. Sirvi e-mail: [email protected]
S. Srivastava e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_59

1 Introduction

Loyalty points are a strategy used by various large companies to reward their loyal customers with points whenever they fulfil specific criteria. When a customer accumulates a certain number of points, he/she is


eligible to claim the reward provided by the company in exchange for his/her loyalty. The reward can be a special discount or free merchandise. This encourages customers to purchase or use more of that company's services or products. The main problem with this strategy is that the customer has to accumulate a certain number of points, which is usually large. A regular customer can accumulate that many points; however, a customer is usually loyal to only a few companies, and to claim benefits from other companies he/she would need to accumulate that many loyalty points with each of those companies as well. This makes the programme less useful for the customer. In this work, a solution to this problem is discussed. A blockchain-based system is proposed which solves the problem of point accumulation. The proposed system consists of a network of loyal customers of different companies and provides a platform on which these customers can exchange their loyalty points among themselves. Hence, if customer 'A' wants to claim a reward from company 'X' but has a limited number of loyalty points for that company, he can obtain them from a customer 'B' who, in turn, wants the loyalty points of company 'Y' held by customer 'A'; the two customers exchange their loyalty points and both can claim rewards from the respective companies. The proposed system thus provides customers a platform for the exchangeability of their loyalty points, letting them take fuller advantage of loyalty programmes. The system runs on a blockchain network on which each loyalty-point transaction is recorded and monitored, and is hence secure. It also has a user-friendly GUI, which makes the system easier to use for a wide range of customers.

2 Literature Review

A blockchain-based loyalty points system was created on the Ethereum network, with tokens generated according to a smart contract. The manufacturing company that issues TECH tokens has the liberty to generate as many tokens as it wishes. The TECH tokens can be kept within the loyalty system or released into the market, and they are transferable between crypto wallets. However, because the cost of transferring and managing the smart contract is significant, the work has a constraint: the loyalty system it establishes is not available on the primary network. The system can currently only be monitored on the Rinkeby network, but it could be moved to the mainnet if supported by businesses in the future [1].

A loyalty points system can be implemented by using blockchain along with a coalition. By employing sidechain technology, the platform enables users to easily swap loyalty points from other existing blockchain-based loyalty schemes. Additionally, by using the Proof-of-Stake consensus method and allowing customers to participate in the consensus process and earn more tokens, customer involvement can be increased even further. New businesses can join the


coalition without going through the time-consuming and expensive process of integrating various blockchain networks. However, the suggested system may face difficulties, since user actions can have a major influence on the proposed blockchain-based loyalty programme's security and functionality: the block incentives, which dictate user behaviour, significantly affect the system's security and performance [2].

In this article, the author proposes a simple plug-and-play application that users can employ to purchase things using their loyalty points. A customer Stellar account is generated upon registration, and each client account is assigned a distinct Stellar account id. The Stellar wallet associated with the account id holds the loyalty assets earned by that consumer for each brand (business). Companies may register for the platform on the website and must complete KYC before using it. Each corporate token is paired with a Stellar account id as soon as the company registers. On Hyperledger Fabric, a chaincode (smart contract) is executed, culminating in the creation of a token. But because the proposed system relies on the availability and dependability of technologies such as mobile applications, blockchain and the Stellar network, it is extremely dependent on third parties; any interruptions or breakdowns in these systems might affect its functionality [3].

An autonomous loyalty programme for IoT operators, and an idea of how it may be put into practice, has been proposed. The objective is to create an autonomous link between the business and technical levels with the proposed design. The use of a blockchain-based decentralized network is suggested, which removes the role of a centralized authority so that the complete system is focused on users. As this paper targets a loyalty programme for IoT solution providers and assumes widespread usage of IoT devices, power efficiency may be a challenge, because IoT devices are designed to operate on batteries for extended periods [4].

This study looks at traditional loyalty programmes and the challenges they create, and it highlights that blockchain technology does not replace a company's existing loyalty programme but rather serves as a facilitator. Customers may redeem their rewards more quickly and effectively as a consequence of the ability to transfer and consolidate points in a single wallet, resulting in a larger number of redemption transactions and a reduced cost per transaction. One issue that a blockchain-based coupon and promotion tracking system may confront is scalability: with billions of coupons and transactions, the decentralized database may face difficulties processing enormous volumes of data in real time, potentially resulting in system bottlenecks. Furthermore, factors like network latency and node availability may influence system performance [5].

Blockchain is an emerging technology which provides solutions for various real-time problems associated with traditional centralized systems. Blockchain comes with the support of technologies including distributed ledger technology, smart contracts and cryptography, which can solve problems related to data security. Blockchain, being a distributed network, does not depend upon any central authority,


which in turn makes it more transparent and reliable. With the support of cryptography, blockchain networks are able to carry out transactions more securely. Hence blockchain can be used to provide a secure and transparent platform for both sender and receiver [6].

Traditional customer loyalty programmes have been analysed and a solution proposed based on the drawbacks of the traditional method, such as lost coupons and cashback process complications. Promotion Asset Exchange (PAX), a blockchain-based framework, has been proposed to solve the issues of traditional customer loyalty programmes. By employing the PAX token, the PAX framework digitalizes transaction processes by utilizing smart contracts (pieces of code executed when specific conditions are satisfied) of blockchain technology. The proposed PAX framework has not yet been tested in a real-world setting, and its performance in terms of transaction confirmation time and transactions per second has not been quantified [7].

A blockchain-based platform for loyalty was provided so that continuous engagement with the customer could be made possible. A customer and a business can perform transactions between themselves for different brand services; these are stored using Smart Ads. Users who make purchases via Smart Ads receive loyalty points which can be used on the various platforms registered on the same network. Hyperledger Fabric was used in the creation of the project [8].

This proposed system showcases a peer-to-peer exchange mechanism devised for clients to accomplish the aforementioned transaction based on their demands and the call auction. Because it offers several properties useful to a cross-organizational coalition loyalty programme, Hyperledger Fabric is used as the underlying blockchain technology. The research also suggests a viable multi-host deployment plan for the Hyperledger Fabric blockchain network appropriate for this application scenario. The existing peer-to-peer point exchange mechanism demands perfect matching of the kind and number of points being exchanged, which results in a low chance of orders being matched; the suggested approach uses the call auction mechanism to maximize the likelihood of orders being traded, but it necessitates translating point exchange orders into a stock-exchange order structure [9].

In order to utilize blockchain to its full potential, the real-time integration of applications such as CRM, ERP, databases, e-wallets and other loyalty programme platforms has been suggested. Loyalty providers participating in the network can share a common digital wallet to credit rewards to the user's account. This practice allows customers to exchange unused points with acquaintances registered on the same network. By investing in blockchain, companies can benefit financially [10].

The extent of blockchain technology has been defined in this study. Blockchain technology is now employed mostly in financial applications and cryptocurrencies. The key improvements that a blockchain application will bring about in marketing are the elimination of middlemen; increased trust between organizations, brands and customers; and more openness, auditability and accountability. The paper notes a


difficulty arising from uncertainty about blockchain's potential, the organizational modifications required of enterprises, and technological issues like scalability and interoperability. Furthermore, additional study is needed on marketers' perceptions of blockchain and its impact on customer behaviour and brand loyalty [11].

The main reason behind the popularity of loyalty point programmes is that they make customers actively participate in the programme. One additional feature that can be added to the current system is the ability to buy and sell idle points within a peer-to-peer network of customers on the basis of a call auction. A blockchain network can be designed by utilizing the features of Hyperledger Fabric to make the system cross-functional; the Hyperledger Fabric integration provides additional features as well, such as multi-host deployment [9].

A deeper comprehension is offered of how cryptocurrency, as a futuristic element of a creatively designed loyalty programme, might impact customer loyalty. There are three main inquiries. The first has to do with how cryptocurrencies affect the way individuals view loyalty in the context of loyalty incentive schemes. The second focuses on how customers shape their loyalty to a specific incentive programme and the influence of emotions and cognitions about the types of rewards. The final query focuses on the potential ramifications and effects of including cryptocurrency as a loyalty programme design feature [12].

This article offers a blockchain-based interpretation of conventional loyalty schemes. The installed loyalty system utilizes the Ethereum network, and rewards are delivered to clients in the form of IZTECH Tokens, created in accordance with the ERC20 specification. Once the development phase was over, the smart contract was transferred from the local blockchain, generated using Ganache, to the Rinkeby network. The manufacturer assumes ownership of the IZTECH Token contract within the system and distributes the tokens to the marketplaces they have agreements with. Markets provide clients with IZTECH Tokens together with the goods and services they have purchased. With this function, users may spend their rewards in ways that suit their interests [13].

In order to study how a distributed ledger and peer-to-peer blockchain network can offer outstanding prospects for end users and enterprises to move assets and manage user data, the authors of this article built a blockchain-based reward point exchange application. Additionally, the authors have attempted to create a framework idea and implementation for transferring assets in a secure environment with limited access. The approach enhances the member organisations' current reward-offering systems, where points may only be redeemed for gifts. The authors intend to expand an RPES to gather and assess point exchange data in order to enhance user experiences for businesses [14].

This paper discusses a system that uses the Neo chain and Promotion Asset Exchange coins. Customers may earn tokens in the installed loyalty programme by using their smartphones to read the QR codes on the items they purchase. With the help of neon.js, the mobile device connects to the Neo blockchain and requests a token. Customers can use the tokens they have accumulated to purchase goods from the merchant. The merchant, in turn, collects the tokens received in exchange for the


goods they provide to customers and receives payment from the manufacturer with whom they have a contract [15].

3 Methodology

3.1 Smart Contract

Smart contracts are programmes stored on the blockchain which are self-executable, i.e. they run automatically when certain required conditions are met. A smart contract can execute and enforce itself independently and instantly based on a set of preprogrammed parameters. The primary benefits of blockchain technology here are the strengthening of security, transparency and trust between signatories, preventing misunderstandings, falsifications or modifications, and doing away with middlemen. Smart contracts can be written in different languages, such as Rust and Solidity; Solidity is the language used here. A smart contract is created by combining different functions written following the syntax of the Solidity language. While creating the smart contract, the functionalities required for a loyalty points system are kept in mind; the implemented functions perform the different operations of the loyalty points system. Some of the implemented functions are as follows:

get_address: This function returns a user's account address. This address plays an important role, as all transactions are carried out using the user's account address. Figure 1 depicts the addUser function.

get_amazonPoint: This function returns the total number of reward points in the user's Amazon wallet. Using the account address of the user, the reward-point details are fetched and displayed. Similarly, two more functions have been implemented, namely get_flipkartPoint and get_myntraPoint, which fetch the reward points in a user's Flipkart wallet and Myntra wallet respectively, as shown in Fig. 2.

Fig. 1 addUser function


Fig. 2 get_amazonPoints, get_flipkartPoints and get_myntraPoints calls

Fig. 3 get_allPoints call

get_allPoints: This function returns the total sum of the reward points across all the wallets that a user has, fetching the details using the user's account address, as shown in Fig. 3.

convert_reward_point: This function converts reward points from one company wallet to another for the same user. It takes the source company, destination company and the number of points to convert as its three parameters. It uses the user's account address to check whether the user holds the required reward points in the source company wallet. If this condition is satisfied, the points are transferred from the source company wallet to the destination company wallet of the user. The working of the function is shown in Fig. 4.

transfer_reward_friend: This function works similarly to convert_reward_point, the only difference being that the transaction is carried out between two different users. It also takes three parameters as input: the address of the receiver, the company name and the points to be transferred. Two conditions are checked before the transfer. First, the addresses of sender and receiver cannot be the same, as a user cannot transfer points to himself/herself. Second, the sender must hold sufficient reward points in the wallet from which he/she intends to transfer. If both conditions are satisfied, the reward points are transferred from the sender's account to the receiver's account in the specified company's wallet.


Fig. 4 convert_reward_point and the change in all points after the transaction is performed

A transaction array named trans_records is created to store all the performed transactions. The smart contract is deployed on the Ethereum blockchain and self-executes whenever the required conditions are met (Fig. 5).

Fig. 5 transfer_reward_friend and the change in all points after the transaction is performed
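The full contract is not reproduced in the paper, so the following is a minimal Solidity sketch consistent with the behaviour described above. The function and array names (get_amazonPoint, get_allPoints, convert_reward_point, transfer_reward_friend, trans_records) follow the text; the storage layout, the Transaction struct and the string-keyed company wallets are illustrative assumptions rather than the authors' exact implementation.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch of the loyalty-points contract described in Sect. 3.1.
// Company wallets are keyed by name ("amazon", "flipkart", "myntra").
contract LoyaltyExchange {
    // points[user][company] => reward-point balance of that wallet
    mapping(address => mapping(string => uint256)) private points;

    struct Transaction {
        address from;
        address to;
        string company;   // destination wallet of the points
        uint256 amount;
    }
    Transaction[] public trans_records; // audit trail of all transfers

    // Reward points held in the user's Amazon wallet
    // (get_flipkartPoint / get_myntraPoint would be analogous)
    function get_amazonPoint(address user) public view returns (uint256) {
        return points[user]["amazon"];
    }

    // Total points across all three wallets of a user
    function get_allPoints(address user) public view returns (uint256) {
        return points[user]["amazon"] + points[user]["flipkart"] + points[user]["myntra"];
    }

    // Convert the caller's points from one company wallet to another
    function convert_reward_point(string memory src, string memory dst, uint256 amount) public {
        require(points[msg.sender][src] >= amount, "insufficient points in source wallet");
        points[msg.sender][src] -= amount;
        points[msg.sender][dst] += amount;
        trans_records.push(Transaction(msg.sender, msg.sender, dst, amount));
    }

    // Transfer the caller's points of one company to another user
    function transfer_reward_friend(address receiver, string memory company, uint256 amount) public {
        require(receiver != msg.sender, "cannot transfer points to yourself");
        require(points[msg.sender][company] >= amount, "insufficient points");
        points[msg.sender][company] -= amount;
        points[receiver][company] += amount;
        trans_records.push(Transaction(msg.sender, receiver, company, amount));
    }
}

Under this sketch, a call such as convert_reward_point("amazon", "flipkart", 50) moves 50 of the caller's Amazon points into his/her Flipkart wallet, while transfer_reward_friend enforces the two conditions noted above before crediting the receiver; every successful operation is appended to trans_records.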


Fig. 6 Two accounts created in the Metamask wallet for performing transactions

3.2 Metamask Wallet

MetaMask is a wallet for cryptocurrencies that allows users to make use of Web3. Users may store and exchange cryptocurrency using this free web wallet. It serves as a platform for hosting dApps (Decentralized Applications) and interacting with the Ethereum blockchain ecosystem, and users have sole custody of their keys. Two accounts were set up on the MetaMask wallet to demonstrate the transfer of currency from one user to another. Goerli Faucet, an Ethereum testnet faucet, was used in this project; it does not require any authentication and provides a smooth way to test blockchain applications before deploying them. A gas fee is deducted from the account whenever the user carries out a transaction. Figure 6 shows the MetaMask wallets used in this project.

3.3 Goerli Faucet—ETH Blockchain

In this project the Goerli testnet is used: a decentralized computer network with a ledger distinct from the Ethereum (ETH) mainnet, so transactions do not cross over. It validates blocks with a proof-of-work/proof-of-authority consensus technique rather than the Ethereum


mainnet’s proof of stake (PoS). Goerli-faucet is a popular Ethereum testnet web3 application that has been used to test our project, i.e. loyalty points exchange system using blockchain before releasing. The project has been deployed on Goerli-faucet testnet over others because it supports and uses a broader range of softwares (e.g. Geth, Parity, Nethermind, and Hyperledger) than rivals such as Rinkeby and Kovan, because of which the Goerli-faucet testnet is a totally different ledger from the Ethereum network, so whatever happens on Goerli-faucet testnet stays on Goerlifaucet testnet. Goerli testnet has been used because it provides a secure environment in which to test their app applications for ensuring all the security thefts and bugs before deploying them on the Ethereum mainnet. The workflow is depicted in Fig. 7

Fig. 7 Workflow


4 Results

The loyalty points exchange system has been developed with various features: exchangeability of loyalty points among different users as well as across different companies for the same user, security, and a smooth user experience. The smart contracts have been created using Solidity and deployed on the Ethereum blockchain. The features provided to the users are as follows.

4.1 Add User

The user has the option to add his/her MetaMask account. Once the account has been added to the blockchain network, the user can perform loyalty-point transactions with the other users within the same network. The exchangeability of loyalty points is based entirely on the mutual consensus of the users concerned.

4.2 Inter-Transfer

One user can transfer loyalty points within the same brand to another user. With the help of a transfer function implemented in the smart contract, a user can transfer his/her own loyalty points to someone else. But as this transfer stays within a single brand, the transferred points can be used only for that specific brand. This helps the user collect points faster and thus solves the issue of long collection times.

4.3 Intra-Transfer

One user can convert the loyalty points of one brand into those of another brand. With the help of a convert function implemented in the smart contract, a user can convert his/her own loyalty points into another brand's loyalty points. This helps the user collect points faster and thus solves the issue of the fragmented system.

4.4 Reliability

Being developed on a distributed blockchain network, the system promises reliability. The transactions made on this network are visible to both sender and receiver, which adds transparency to the overall system. Also, blockchain,


being a decentralized and distributed technology, is highly resistant to alteration or fabrication of any kind. Hence reliability is achieved through the tamper-resistance of the system.

5 Conclusion

Blockchain technology plays a critical role in monitoring loyalty points and the other use cases discussed in this paper. Businesses may use blockchain to validate points and loyalty codes to guarantee that consumers are utilizing them correctly. Blockchain technology thereby addresses the challenge of keeping track of millions of loyalty points by employing distributed storage/databases, covering insurance, per-user ownership and redemption, all while avoiding fraud. With the system deployed on the network, a business can manage and track the pricing of products and reduce errors and fraud, whereas the user can easily exchange loyalty points from one company to another and perform real-time loyalty-point tracking to tap the potential worth of every loyalty point of every firm by following the procedures outlined above. In addition, a user may now send his loyalty points as a gift ticket to his friends in exchange for the equivalent points from other companies. The smart-contract-based loyalty points system will protect the total value of the points, while decentralized trading platforms (Ethereum-based blockchains and tokens such as Shiba Inu) allow points to move easily from one wallet to another.

References

1. Sönmeztürk O, Ayav T, Erten YM (2020) Loyalty program using blockchain. In: 2020 IEEE international conference on blockchain (Blockchain), pp 509–516. https://doi.org/10.1109/Blockchain50366.2020.00074
2. Nguyen CT, Hoang DT, Nguyen DN, Pham H-A, Tuong NH, Dutkiewicz E (2021) Blockchain-based secure platform for coalition loyalty program management. In: 2021 IEEE wireless communications and networking conference (WCNC), pp 1–6. https://doi.org/10.1109/WCNC49053.2021.9417501
3. Agrawal M, Amin D, Dalvi H, Gala R (2019) Blockchain-based universal loyalty platform. In: 2019 international conference on advances in computing, communication and control (ICAC3), pp 1–6. https://doi.org/10.1109/ICAC347590.2019.9036772
4. Gheitanchi S (2020) An autonomous loyalty program based on blockchains for IoT solution providers. In: 2020 IEEE global conference on artificial intelligence and internet of things (GCAIoT), pp 1–6. https://doi.org/10.1109/GCAIoT51063.2020.9345892
5. Agrawal D, Jureczek N, Gopalakrishnan G, Guzman M, McDonald M, Kim H (2018) Loyalty points on the blockchain. Bus Manag Stud 4. https://doi.org/10.11114/bms.v4i3.3523
6. Swati J, Nitin P, Saurabh P, Parikshit D, Gitesh P, Rahul S (2022) Blockchain based trusted secure philanthropy platform: crypto-gocharity. In: 2022 6th international conference on computing, communication, control and automation (ICCUBEA), Pune, India, pp 1–8. https://doi.org/10.1109/ICCUBEA54992.2022.10011026
7. Bülbül Ş, İnce G (2018) Blockchain-based framework for customer loyalty program. In: 2018 3rd international conference on computer science and engineering (UBMK), pp 342–346. https://doi.org/10.1109/UBMK.2018.8566642
8. Manjunatha MS, Usha S, Chaya Bhat C, Manu R, Kavya S (2019) Blockchain based loyalty platform. Int J Recent Technol Eng (IJRTE) 7(6S5). ISSN: 2277-3878
9. Tu S-F, Hsu C-S, Wu Y-T (2022) A loyalty system incorporated with blockchain and call auction. J Theor Appl Electron Commer Res 17:1107–1123. https://doi.org/10.3390/jtaer17030056
10. Bhatnagar I (2017) Rekindle loyalty programs using blockchain (White paper). Tata Consultancy Services Limited (TCS)
11. Antoniadis I, Kontsas S, Spinthiropoulos K. Blockchain and brand loyalty programs: a short review of applications and challenges. Proc Econ Bus Adm. ISSN: 2392-8174, ISSN-L: 2392-8166
12. Gongora I, Dasanayaka V (2022) The role of cryptocurrency in shaping customer loyalty. UMEA School of Business and Statistics
13. Sönmeztürk O (2020) Blockchain application on loyalty card. İzmir
14. Pramanik BK, Shakilur Rahman AZM, Li M (2020) Blockchain-based reward point exchange systems. Multim Tools Appl 79(15–16):9785–9798. https://doi.org/10.1007/s11042-019-08341-2
15. Lim Y, Hashim H, Poo N, Poo D, Nguyen H (2019) Blockchain technologies in E-commerce: social shopping and loyalty program applications. https://doi.org/10.1007/978-3-030-21905-5_31

Performance Analysis of Various Machine Learning-Based Algorithms on Cybersecurity Approaches Boggarapu Srinivasulu and S. L. Aruna Rao

Abstract Pervasive use and development of the Internet and its mobile applications have extended cyberspace, which is prone to prolonged and automated cyberattacks. Cybersecurity methods provide advanced security measures to detect such attacks, but conventional security systems are ineffective because cybercriminals are smart enough to evade them, notably in the case of polymorphic attacks. Machine learning (ML) approaches have made significant contributions to various applications of cybersecurity. In spite of this success, certain difficulties remain in assuring the reliability of ML mechanisms, and incentivized malicious adversaries in cyberspace are ready to exploit these ML vulnerabilities. This study offers a detailed examination of various ML models to detect cyberattacks and accomplish cybersecurity. It presents a detailed discussion of recent ML models for cybersecurity, comprising intrusion detection, spam detection and malware detection, along with their aims, methodology and experimental data. The basic concepts of cybersecurity and cyberattacks are also elaborated in detail, and the study closes with an overview of cybersecurity, cyberattacks and recent cyberattack detection models.

Keywords Cybersecurity · Machine learning · Cyberattacks · Data driven models · Security · Artificial intelligence

B. Srinivasulu · S. L. A. Rao (B) Department of IT, BVRIT HYDERABAD College of Engineering for Women, Hyderabad-90, India e-mail: [email protected] B. Srinivasulu e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_60


1 Introduction

In the modern era, the Internet has become crucial and needed in everybody's life, making this interlinked network prone to various menaces [1]. There are many security threats in cyberspace, such as jail-breaking, two-faced malware intrusion and network intrusion; such menaces affect the security of networks and devices [2]. Most security companies across the world are focused on devising novel technologies for protecting software applications, computer devices and networks from malware infections and network intrusion attacks. Cyberattacks are less risky, cheaper and more convenient compared to physical attacks [3]: cybercriminals need few resources beyond an Internet connection and a computer, they are unconstrained by distance and geography, and owing to the anonymous nature of the Internet they are tough to find and prosecute [4]. Assaults against information technology therefore remain attractive, and it is anticipated that complicated cyberattacks will keep increasing. Figure 1 represents the infrastructure of cybersecurity.

A stable and secure computer mechanism should assure the integrity, confidentiality and availability of data [5]. The security and integrity of a computer system are compromised if an unauthorized program or individual enters a network or computer intending to disrupt or harm the normal flow of activity [6]. Cybersecurity can be defined as the security measures considered for protecting user assets and

Fig. 1 Architecture of cybersecurity


cyberspace against unlawful attacks and access [3]. Internal and inherent weaknesses in the implementation and configuration of networks and computers form vulnerabilities that are prone to threats and cyberattacks. Some instances of vulnerabilities in framing a network system are amateur or untrained personnel, incorrect configuration, and lack of adequate processes [7]. Such susceptibilities increase the chances of attacks and threats from outside or within a network, while people from various domains grow increasingly reliant on cyber networks. In simple terms, a threat can be defined as an agent that causes undesirable and harmful effects on the behaviour and actions of a network or computer using a specific penetration method [8]. Cybersecurity defends the integrity of programs, data and networks against such cyberthreats. It considers the problems surrounding different cyberattacks and the modelling of defense methods (countermeasures) that protect the integrity, availability and confidentiality of information and digital technology [9].

• Integrity is employed to thwart any unauthorized deletion or modification.
• Confidentiality is exploited to prevent the disclosure of data to illegal systems or personnel.
• Availability is leveraged to ensure that the systems accountable for processing, distributing and saving data are accessible when required and by those who need them.

Several cybersecurity specialists believe that malware will remain the main tool for executing malevolent plans to breach cybersecurity efforts in cyberspace. This study offers a detailed examination of various ML models to detect cyberattacks and accomplish cybersecurity, presenting a detailed discussion of recent ML models for intrusion, spam and malware detection, along with their aims, methodology and experimental data. The basic concepts of cybersecurity and cyberattacks are elaborated in detail, and the study closes with a brief overview of recent cyberattack detection models.

2 Background Information

In this section, the relevant technology of cybersecurity data science, involving different kinds of defense strategies and cybersecurity incidents, is discussed.

Cybersecurity. In recent times, information and communication technologies (ICTs) have changed dramatically; they are pervasive and intrinsically connected to the modern world. Therefore, defending ICT applications and systems from cyberattacks has been a major concern for security policymakers over the last few years. The act of


defending ICT systems from different cyberattacks or threats is called cybersecurity. Various aspects are related to cybersecurity: measures to secure ICT; the raw information and data it comprises and their transmission and processing; the related physical and virtual components of the system; the degree of security arising from the application of those measures; and, finally, the related domain of professional endeavour. Generally, cybersecurity is concerned with understanding different cyberattacks and developing security systems that preserve the following properties:

• Confidentiality is leveraged to thwart the disclosure of, and access to, data by an unauthorized entity, system or individual.
• Integrity is employed to prevent any destruction or modification of data by unauthorized means.
• Availability is utilized to guarantee reliable and prompt access to systems and data assets by authorized entities.

Cybersecurity is used in different contexts, from commercial purposes to mobile computing, and is split into different classes: network security, which focuses primarily on protecting a network from intruders or cyberattackers; application security, which keeps devices and software free from cyberthreats or risks; data privacy, which considers the privacy and security of pertinent information; and operational security, which involves the processes of protecting and handling data resources. A traditional cybersecurity system is made up of computer and network security components encompassing antivirus software, intrusion detection and firewall systems.

Cyberattacks and security risks. Typically, the risk related to any attack is considered along three factors: impact (what the attack does), threat (who is attacking) and vulnerability (the weaknesses being attacked). A security incident is an act which threatens the confidentiality, integrity and availability (CIA) of systems and data assets. Different kinds of cybersecurity incidents cause privacy risks to individuals or to the systems and networks of an organization. They include:

• Unauthorized access: the act of accessing systems, networks or information without authorization, causing a violation of privacy policy;
• Malware: malicious software, i.e. software or a program intentionally designed to damage a server, computer, client or computer network — for example, botnets. Distinct kinds of malware include Trojan horses, computer worms, ransomware, viruses, adware, malicious bots and spyware. Ransomware, or ransom malware, is a novel form of malware that prevents users from accessing their devices, personal files or systems and then demands an anonymous online payment to restore access;
• Denial-of-Service (DoS): an attack intended to shut down a network or machine, making it unreachable for its intended use by flooding the target with traffic until it crashes. Normally, a DoS assault employs one computer with an Internet connection, whereas distributed denial-of-service


(DDoS) attacks use multiple computers and Internet connections to flood the target's resources;
• Phishing: a kind of social engineering leveraged for a wide range of malevolent activity via human interaction, in which fraudulent attempts are made to obtain sensitive data — namely login credentials, personally identifiable information, credit card and banking details — by disguising oneself as a trusted entity or individual through electronic communications such as instant messages, email or text;
• Zero-day attack: the menace of unknown security vulnerabilities for which a patch has not been released or of which the application developer was not aware.

Cybersecurity defense strategies. A defense strategy is essential to secure information and data, networks and information systems from intrusions or cyberattacks. Its responsibilities are to prevent security incidents or data breaches and to monitor and react to intrusions, i.e. any unauthorized activity that deteriorates an information system. Typically, an intrusion detection system (IDS) is denoted as a 'software application or device that monitors systems or computer networks for policy violations or malicious activity'. The most common security solutions include user authentication, antivirus, firewalls, cryptography systems, access control and data encryption, but these are often ineffective given the requirements of the cyber field. An IDS addresses this problem by examining security information from numerous key points in a system or computer network, and it can be used to find both internal and external attacks. For example, network IDS (NIDS) and host-based IDS (HIDS) are the renowned kinds, covering scopes from a single computer to a larger network: a HIDS monitors essential files on a single system, whereas a NIDS monitors and analyses network connections for suspicious traffic. Likewise, anomaly-based IDS and signature-based IDS are the two commonest variants.

3 Analysis of Various ML Models for Cybersecurity

In this study, we investigate the performance of different ML models in finding cyberattacks and accomplishing cybersecurity, presenting a comprehensive discussion of recent ML models for intrusion detection, spam detection and malware detection.

Cui et al. [10] formulate a flexible ML detection algorithm for cyberattacks in distribution systems that takes spatiotemporal patterns into account. These patterns are extracted via the graph Laplacian of system-wide measurements, and a flexible Bayes classifier (BC) is trained on them; the learned patterns are violated when cyberattacks occur, so attacks can be


identified online by making use of the flexible BC. An et al. [11] suggest an unsupervised ensemble of autoencoders (AEs) linked to a Gaussian mixture model (GMM) for adaptation to many domains irrespective of their skewness. The attention-based latent representations and the reconstructed features with minimal error are used in the hidden space of the ensemble AE, and the expectation maximization (EM) approach is employed to estimate the sample density in the GMM. Almalaq et al. [12] proposed a DL-based attack detection technique for energy systems, trained on logs and information collected by phasor measurement units (PMUs); specification- or property-based feature construction is employed, and the data are forwarded to several ML approaches, of which RF is chosen as the base learner of AdaBoost. Avatefipour et al. [13] devise an innovative and effective anomaly detection (AD) method for CAN traffic based on a modified one-class SVM, using an enhanced method called the modified bat algorithm to identify the structure with the highest accuracy in offline training. The authors in [14] introduced a secure framework to find and halt data integrity assaults on WSNs in microgrids: an intelligent AD technique based on predictive intervals (PIs) is presented to differentiate malicious assaults of distinct severities during secure operation. The devised AD technique is framed on a lower-upper bound prediction approach to offer the best practicable PIs on the smart-meter readings of electric customers, and it uses the combinatorial idea of PIs to solve the instability problems arising from the NNs. Saheed and Arowolo [15] illustrate how supervised ML approaches (ridge classifier, RF, KNN and DT) and a deep RNN can be used to formulate an effective and efficient IDS in the IoMT platform for forecasting and classifying unexpected cyberthreats; the network data are normalized and preprocessed, and features are then optimized using bio-inspired PSO. In [16], the authors modelled AD-IoT, an intelligent AD system based on the RF ML method, to address IoT cybersecurity threats in smart cities; the solution effectively finds compromised IoT devices at distributed fog nodes. Kalech [17] devised cyberattack detection approaches based on temporal pattern recognition; these approaches do not only search for anomalies in the data sent by SCADA elements on the network, but also for anomalies that arise from exploiting legitimate commands, where incorrect and unauthorized time intervals between them can cripple the mechanism. In particular, two approaches based on ANNs and hidden Markov models (HMMs) are devised. Wang et al. [18] devise an ML-based attack detection method for power systems, trained on information and logs gathered by PMUs; the researchers perform feature-construction engineering and pass the data to various ML methods, with RF selected as the base learner of AdaBoost. In [19], a new method is offered for diagnosing possible false data injection attacks (FDIA) in DC microgrids (DC-MGs) to improve the cybersecurity of electrical systems.
To find cyberattacks in the DC-MG and to detect FDIA against distributed energy resource (DER) units, a novel singular value decomposition (SVD) and wavelet transform (WT) procedure based on deep ML is modelled. Furthermore, this study


presents a selective ensemble DL method utilizing the GWO algorithm to find FDIA in the DC-MG. In [20], an effective and efficient security control technique is modelled for identifying cyberattacks on smart grids; it combines feature selection and reduction methods to minimize the number of features and attain an enhanced detection rate. A correlation-based feature selection (CFS) algorithm is employed to eliminate irrelevant features and improve detection performance, and an instance-based learning (IBL) method classifies cyberattacks and normal events using the chosen optimal features. Elkhadir et al. [21] present a novel variant of PCA called QR-OMPCA: the technique first integrates mean calculation into the feature extraction operation, so that the optimal mean is acquired to improve intrusion detection accuracy, and then applies a rapid QR decomposition. Chen et al. [22] devise resilient operation methods for non-linear processes that are prone to targeted cyberattacks, along with the detection and handling of standard types of cyberattacks. Cui et al. [23] formulated an ML-based AD (MLAD) method: first, load forecasts produced by an NN are employed to reconstruct the benchmark and scaling datasets using k-means clustering; next, the cyberattack template is estimated by naive Bayes classification based on the statistical features and cumulative distribution function of the scaling data; lastly, dynamic programming is employed to compute the parameters and occurrence of a single cyberattack on the load forecasting datasets. In [24], the researchers devise physics-guided ML for intrusion detection on electric vehicles (EVs), taking changing driving scenarios into account. To reflect the transient physical features of the EV, they gather device-level signals (for example, voltage and current in the motor drive) and vehicle-level signals; new data features regarding the physical dynamics of the vehicle and critical system performance are then devised, combining the data-driven approach with high-fidelity vehicular models and power-electronics physics. Kravchik and Shabtai [25] introduce work on finding cyberattacks on industrial control systems (ICSs) utilizing CNNs: they propose an AD technique based on measuring the statistical deviation of the estimated value from the observed value, and implement the approach with various DNN architectures, including distinct variants of CNNs and RNNs. In [26], SVM is considered as an ML approach that could complement the efficiency of an IDS, offering a second line of detection to minimize the false alarm count, or serving as an alternative detection method; the authors evaluate the efficiency of the IDS with two-class and one-class SVMs, using linear and non-linear forms.

4 Results and Discussions

This section examines the cybersecurity performance of the different ML models. Table 1 shows the overall performance of the models available in the literature.
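For reference, the measures compared in Table 1 and Figs. 2–3 are the standard confusion-matrix metrics. The definitions below are the conventional ones and are assumed, rather than stated explicitly in the surveyed works, to be the ones used; TP, TN, FP and FN denote true/false positives and negatives:

\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}

F\text{-score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}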


Table 1 Comparative analysis of various approaches and measures

Methods             Accuracy   Precision   Recall   F-Score   MCC
PSO-KNN             98.90      98.89       94.56    92.33     97.77
PSO-RC              97.61      97.60       95.32    91.06     95.14
CNN                 92.00      93.54       94.00    93.65     93.88
LCNN                94.00      93.68       93.68    92.68     93.34
HaRM                92.21      91.99       92.17    92.23     92.57
CAN-ML              94.89      92.39       94.83    92.02     92.46
Flexible ML-CDSP    95.85      94.24       94.09    94.15     94.17

Figure 2 examines the comparative precision and recall of the different cybersecurity approaches; the results imply that all of these methods offer reasonable outcomes. In terms of precision, the PSO-KNN model attains the highest value of 98.89%, while the PSO-RC, CNN, LCNN, HaRM, CAN-ML and flexible ML-CDSP models obtain 97.60%, 93.54%, 93.68%, 91.99%, 92.39% and 94.24%, respectively. In terms of recall, PSO-RC records the highest value of 95.32%, followed by PSO-KNN with 94.56%, while the CNN, LCNN, HaRM, CAN-ML and flexible ML-CDSP methods achieve 94.00%, 93.68%, 92.17%, 94.83% and 94.09%, respectively.

Fig. 2 Precision and recall analysis of various methodologies

Figure 3 scrutinizes the comparative accuracy, F-score and MCC of the distinct cybersecurity approaches. With respect to accuracy, the PSO-KNN algorithm achieves the maximum value of 98.90%, while the PSO-RC, CNN, LCNN, HaRM, CAN-ML and flexible ML-CDSP models attain 97.61%, 92.00%, 94.00%, 92.21%, 94.89% and 95.85%, respectively. In terms of F-score, the flexible ML-CDSP model records the highest value of 94.15%, while the PSO-KNN, PSO-RC, CNN, LCNN, HaRM and CAN-ML systems yield 92.33%, 91.06%, 93.65%, 92.68%, 92.23% and 92.02%, respectively. Finally, based on MCC, the PSO-KNN model attains the highest value of 97.77%, while the PSO-RC, CNN, LCNN, HaRM, CAN-ML and flexible ML-CDSP methodologies obtain 95.14%, 93.88%, 93.34%, 92.57%, 92.46% and 94.17%, respectively.

Fig. 3 Accuracy, F-score and MCC analysis of various methodologies

5 Conclusion

In this study, we have evaluated the performance of different ML models in detecting cyberattacks and accomplishing cybersecurity. The study presented a comprehensive discussion of recent ML models for cybersecurity, encompassing intrusion detection, spam detection and malware detection, along with their aims, methodology and experimental data, and elaborated the basic concepts of cybersecurity and cyberattacks in detail. The study closed with a brief overview of cybersecurity, cyberattacks and recent cyberattack detection models.

References

1. Parizad A, Hatziadoniu C (2022) Cyber-attack detection using principal component analysis and noisy clustering algorithms: a collaborative machine learning-based framework. IEEE Trans Smart Grid
2. Rashid MM, Kamruzzaman J, Hassan MM, Imam T, Gordon S (2020) Cyberattacks detection in IoT-based smart city applications using machine learning techniques. Int J Environ Res Public Health 17(24):9347
3. Alsamiri J, Alsubhi K (2019) Internet of things cyber attacks detection using machine learning. Int J Adv Comput Sci Appl 10(12)
4. Zheng H, Wang Y, Han C, Le F, He R, Lu J (2018) Learning and applying ontology for machine learning in cyber attack detection. In: 2018 17th IEEE international conference on trust, security and privacy in computing and communications/12th IEEE international conference on big data science and engineering (TrustCom/BigDataSE). IEEE, pp 1309–1315
5. Alshehri A, Khan N, Alowayr A, Alghamdi MY (2023) Cyberattack detection framework using machine learning and user behavior analytics. Comput Syst Sci Eng 44(2):1679–1689
6. Delplace A, Hermoso S, Anandita K (2020) Cyber attack detection thanks to machine learning algorithms. arXiv:2001.06309
7. Miao Y, Chen C, Pan L, Han QL, Zhang J, Xiang Y (2021) Machine learning–based cyber attacks targeting on controlled information: a survey. ACM Comput Surv (CSUR) 54(7):1–36
8. Dutta V, Choraś M, Pawlicki M, Kozik R (2020) A deep learning ensemble for network anomaly and cyber-attack detection. Sensors 20(16):4583
9. Komisarek M, Pawlicki M, Kozik R, Choraś M (2021) Machine learning based approach to anomaly and cyberattack detection in streamed network traffic data. J Wirel Mob Netw Ubiquitous Comput Dependable Appl 12(1):3–19
10. Cui M, Wang J, Chen B (2020) Flexible machine learning-based cyberattack detection using spatiotemporal patterns for distribution systems. IEEE Trans Smart Grid 11(2):1805–1808
11. An P, Wang Z, Zhang C (2022) Ensemble unsupervised autoencoders and Gaussian mixture model for cyberattack detection. Inf Process Manag 59(2):102844
12. Almalaq A, Albadran S, Mohamed MA (2022) Deep machine learning model-based cyberattacks detection in smart power systems. Mathematics 10(15):2574
13. Avatefipour O, Al-Sumaiti AS, El-Sherbeeny AM, Awwad EM, Elmeligy MA, Mohamed MA, Malik H (2019) An intelligent secured framework for cyberattack detection in electric vehicles' CAN bus using machine learning. IEEE Access 7:127580–127592
14. Kavousi-Fard A, Su W, Jin T (2020) A machine-learning-based cyber attack detection model for wireless sensor networks in microgrids. IEEE Trans Industr Inf 17(1):650–658
15. Saheed YK, Arowolo MO (2021) Efficient cyber attack detection on the internet of medical things-smart environment based on deep recurrent neural network and machine learning algorithms. IEEE Access 9:161546–161554
16. Alrashdi I, Alqazzaz A, Aloufi E, Alharthi R, Zohdy M, Ming H (2019) AD-IoT: anomaly detection of IoT cyberattacks in smart city using machine learning. In: 2019 IEEE 9th annual computing and communication workshop and conference (CCWC). IEEE, pp 0305–0310
17. Kalech M (2019) Cyber-attack detection in SCADA systems using temporal pattern recognition techniques. Comput Secur 84:225–238
18. Wang D, Wang X, Zhang Y, Jin L (2019) Detection of power grid disturbances and cyber-attacks based on machine learning. J Inf Secur Appl 46:42–52
19. Dehghani M, Niknam T, Ghiasi M, Bayati N, Savaghebi M (2021) Cyber-attack detection in DC microgrids based on deep machine learning and wavelet singular values approach. Electronics 10(16):1914
20. Gumaei A, Hassan MM, Huda S, Hassan MR, Camacho D, Del Ser J, Fortino G (2020) A robust cyberattack detection approach using optimal features of SCADA power systems in smart grids. Appl Soft Comput 96:106658
21. Elkhadir Z, Chougdali K, Benattou M (2017) An effective cyber attack detection system based on an improved OMPCA. In: 2017 international conference on wireless networks and mobile communications (WINCOM). IEEE, pp 1–6
22. Chen S, Wu Z, Christofides PD (2020) Cyber-attack detection and resilient operation of nonlinear processes under economic model predictive control. Comput Chem Eng 136:106806
23. Cui M, Wang J, Yue M (2019) Machine learning-based anomaly detection for load forecasting under cyberattacks. IEEE Trans Smart Grid 10(5):5724–5734
24. Guo L, Ye J, Yang B (2020) Cyberattack detection for electric vehicles using physics-guided machine learning. IEEE Trans Transp Electrification 7(3):2010–2022
25. Kravchik M, Shabtai A (2018) Detecting cyber attacks in industrial control systems using convolutional neural networks. In: Proceedings of the 2018 workshop on cyber-physical systems security and privacy, pp 72–83
26. Ghanem K, Aparicio-Navarro FJ, Kyriakopoulos KG, Lambotharan S, Chambers JA (2017) Support vector machine for network intrusion and cyber-attack detection. In: 2017 sensor signal processing for defence conference (SSPD). IEEE, pp 1–5

Privacy-Preserving Data Publishing Models, Challenges, Applications, and Issues J. Jayapradha and M. Prakash

Abstract The distribution of Electronic Health Records is highly needed for various analysis purposes and medical studies. However, the data should be disclosed to the data recipient in such a way that the privacy of the individual is ensured. Privacy-preserving data publishing is a challenging task because the distributed data should be protected against multiple privacy threats. Several privacy models, methods and techniques have been proposed and studied in earlier works; however, work on privacy models that consider the adversary's background knowledge is very limited. In this study, various privacy models and their attack models are analyzed and depicted, with the main focus on privacy-preserving data publishing models that account for the adversary's background knowledge. Various research challenges in security and privacy are identified and summarized. Furthermore, the applications of privacy-preserving data publishing in the Cloud, E-health, Social Networks, Agriculture and Smart Cities are studied, and the need for privacy in each is briefed. Finally, the paper highlights the study's intuitions regarding present unresolved challenges as well as probable future approaches in privacy-preserving data publishing.

Keywords Electronic health record · Privacy-preserving · Security · Privacy models · Background knowledge

J. Jayapradha (B)
Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu 603203, India
e-mail: [email protected]
M. Prakash
Department of Data Science and Business Systems, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu 603203, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_61


1 Introduction

Electronic Health Records are rapidly increasing and widely adopted to store patient data. The patient's data comprises the patient's name, medications, symptoms, duration of treatment, lab reports, allergies, and diagnosis codes, much of which is sensitive. Apart from Electronic Health Records, many other applications also hold sensitive information [1]. Nowadays, health sectors and various organizations disclose their data to data recipients and third parties, so data privacy and secrecy are important considerations that data providers must handle. The data are generated from various devices connected to the internet to monitor the patient's health.

Electronic Health Records deal with two facets of privacy: "context-oriented privacy" and "content-oriented privacy". Context-oriented privacy concerns whether an adversary can discover the type of sickness a patient has, for instance by investigating the background of the physician. Content-oriented privacy refers to the probability of stakeholders in the healthcare domain revealing the patient's sensitive attributes to third parties such as insurance agencies, pharmaceutical agencies, and marketing agencies without the patient's approval [2].

Earlier, the name and unique id of the patient were removed before publishing, on the assumption that the data was then well protected. However, the removal of explicit identifiers such as name and unique id cannot preserve the patient's privacy. Sweeney [3] showed that removing explicit identifiers alone does not suffice: to re-identify a person, quasi-identifiers such as age, sex, and postcode may be combined and matched against other external sources of information. Sweeney therefore introduced the k-anonymity approach to secure individuals' privacy. Although k-anonymity protects published data from identity disclosure, it does not protect against attribute disclosure. The l-diversity [4] and t-closeness [5] models have been created as extensions of the k-anonymity model.

Figure 1 displays five types of privacy-preserving data publication strategies as well as their associated attack models. Many researchers have studied four of these models extensively: the record linkage model, the attribute linkage model, the table linkage model, and the probabilistic model. These four models try to protect data privacy but fail to take into account the adversary's prior knowledge, so we focus on modeling the adversary's background knowledge, which many studies have not concentrated on.

Privacy plays a major role in various applications, and all organizations concentrate on data privacy because data leakage might spoil a company's reputation. Though every application focuses on privacy, we discuss five major applications: 1. Cloud Computing, 2. Social Networks, 3. E-health, 4. Smart City, and 5. Agriculture. For each application, various privacy-preserving techniques, models, and their limitations are discussed. In recent years, privacy preservation has become crucial across applications as the world enters the digital era. A quick check of the basic k-anonymity property discussed above is sketched below.
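The following is a minimal sketch, not part of the original study, of how the k-anonymity property can be verified on a generalized table; the column names and data values are invented for illustration and assume pandas is available.

```python
# A minimal sketch (hypothetical data): a table is k-anonymous when every
# combination of quasi-identifier values is shared by at least k records.
import pandas as pd

def k_of(table: pd.DataFrame, quasi_identifiers: list) -> int:
    """Return the k of the table: the size of the smallest group of
    records that share the same quasi-identifier values."""
    return int(table.groupby(quasi_identifiers).size().min())

records = pd.DataFrame({
    "age_band":   ["2*", "2*", "2*", "3*", "3*", "3*"],
    "zip_prefix": ["60010*", "60010*", "60010*", "600***", "600***", "600***"],
    "disease":    ["Covid", "Diarrhea", "Covid", "Flu", "Diarrhea", "Covid"],
})
print(k_of(records, ["age_band", "zip_prefix"]))  # -> 3: the table is 3-anonymous
```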


Fig. 1 Privacy models and their attack models for PPDP

2 Identified Security and Privacy Research Challenges

Security and privacy research challenges start with data creation (from different sources and devices) and continue through data collection, data storage, transportation, data processing and analysis, data availability, and usability. Table 1 discusses a few research challenges and their solutions.

Table 1 Identified research questions and their solutions [6–8]

Q1. What are the methods for overcoming security and privacy concerns?
Solution: We must investigate all concerns and implement appropriate security and privacy algorithms.
Q2. How can the challenges in different ecosystems be addressed from an end-to-end point of view?
Solution: It aims to maintain a high-capacity distributed architecture.
Q4. Can all the challenges be grouped under a common framework?
Solution: The challenges are grouped under four features: (1) Infrastructure Security, (2) Data Privacy, (3) Data Management, and (4) Integrity and Reactive Security.
Q5. What percentage of data generated from various devices and sources has a security problem?
Solution: A recent survey by HP concludes that 70% of data contain a security problem.
Q6. Is there any solution to solve the security and privacy issues?
Solution: There is no magical solution as the data grows; encryption and tokenization may act as critical elements.

3 Modeling Adversary's Background Knowledge Privacy-Preserving Data Publishing Model

3.1 Skyline Privacy

Bee-Chung pioneered the notion of skyline privacy in 2007 [9]. Many privacy models assume that the invader's prior knowledge cannot predict the victim's sensitive data with any degree of certainty. On the other hand, adversaries have a number of options for acquiring external data, such as a social network or a voter list, so a good privacy model should take the adversary's extensive background knowledge into account. This extensive background information is referred to as "External Knowledge (EK)". Skyline privacy employs multidimensional privacy criteria while taking adversary background knowledge into account. The key difficulty with this strategy is that various attackers will have varying levels of background knowledge once the information is broadcast globally; as a result, determining the right value for an adversary's EK is difficult. There are three forms of background knowledge: (i) knowledge targeting individuals with a single sensitive value, (ii) knowledge where individual sensitive values may vary, and (iii) knowledge about duplicates with different sensitive values per individual. To address this, a multidimensional technique for measuring EK is proposed, along with a skyline tool to study adversaries' various EK and determine whether the released data is secure. The release candidate is created as a consequence of the anonymization findings: a release candidate is a collection of discrete groups of people with sensitive values. Let Td be the raw microdata and Td* denote the anonymized microdata, with Td* = {(BD1, XS1), ..., (BDn, XSn)}, where BDi is a group and XSi is the multiset comprising all occurrences of the sensitive values of group BDi. The pair (BDi, XSi) is called a quasi-identifier group. In Table 2, the bucketized data has the following form: {BD1 = {Geo, Bobby, Henria, Liu}, XS1 = {Covid, Diarrhea, Diarrhea, Covid}} and {BD2 = {Yangy, Beeng, Cargo, Jaiee}, XS2 = {Diarrhea, Flu, Diarrhea, Covid}}. All alternative reconstructions of the dataset are then considered in order to determine the highest confidence achievable by the adversary's background knowledge. Skyline calculation over encrypted multi-source data was developed for the cloud server in 2019 [10], and in the same year Mahboob [11] introduced a skyline query for multi-party privacy preservation over huge data using homomorphic encryption. The procedure of the skyline privacy model is illustrated in Table 2 using a medical dataset, and a small sketch of the underlying bucketization follows the table.

Table 2 Skyline privacy model implementation for patient data

Raw data (Name, Age, Zip code, Disease):
Geo, 21, 601101, Covid
Bobby, 23, 601102, Diarrhea
Henria, 25, 601103, Diarrhea
Liu, 28, 601105, Covid
Yangy, 36, 600403, Diarrhea
Beeng, 34, 600404, Flu
Cargo, 38, 600505, Diarrhea
Jaiee, 31, 600606, Covid

Anonymized data (Name, Age, Zip code, Disease):
Geo, 2*, 60010*, Covid
Bobby, 2*, 60010*, Diarrhea
Henria, 2*, 60010*, Diarrhea
Liu, 2*, 60010*, Covid
Yangy, 3*, 600***, Diarrhea
Beeng, 3*, 600***, Flu
Cargo, 3*, 600***, Diarrhea
Jaiee, 3*, 600***, Covid

Bucketized data (QID values kept, diseases grouped per bucket):
Bucket B1 = {Geo, Bobby, Henria, Liu} with XS1 = {Covid, Diarrhea, Diarrhea, Covid}
Bucket B2 = {Yangy, Beeng, Cargo, Jaiee} with XS2 = {Diarrhea, Flu, Diarrhea, Covid}
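As a concrete illustration of the bucketization step, here is a minimal Python sketch mirroring Table 2; the records and the group size of four are the table's toy values, not part of the model's formal definition.

```python
# A minimal sketch (hypothetical data) of skyline-privacy bucketization:
# QID members stay intact, but each group keeps only the multiset of its
# sensitive values, breaking the row-level linkage.
from collections import Counter

rows = [("Geo", "Covid"), ("Bobby", "Diarrhea"), ("Henria", "Diarrhea"),
        ("Liu", "Covid"), ("Yangy", "Diarrhea"), ("Beeng", "Flu"),
        ("Cargo", "Diarrhea"), ("Jaiee", "Covid")]

def bucketize(rows, group_size):
    """Split rows into fixed-size groups (BD_i, XS_i): the member list
    plus an unordered multiset of their sensitive values."""
    release = []
    for i in range(0, len(rows), group_size):
        chunk = rows[i:i + group_size]
        members = [name for name, _ in chunk]
        sensitive = Counter(value for _, value in chunk)  # XS_i as a multiset
        release.append((members, sensitive))
    return release

for members, sensitive in bucketize(rows, 4):
    print(members, dict(sensitive))
```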

3.2 Privacy-MaxEnt

Wenliang presented the Privacy-MaxEnt notion in 2008 [12]. It is well recognized that determining the extent of an adversary's external capabilities, the most important factor to consider, is difficult. Skyline privacy restricts the disclosure of background information, but it does not express probabilistic external knowledge. In Privacy-MaxEnt, that knowledge is represented as the probability Prb(Sd | QID); for example, Prb(breast fibroid | Male) = 0. The "maximum entropy principle" underpins Privacy-MaxEnt: the probability Prb(Sd | QID) is regarded as an unknown variable, with external information serving as constraints on the unknowns, and entropy is maximized through a balanced calculation of Prb(Sd | QID). The main issue with PPDP is that the adversary uses the sensitive attributes and quasi-identifiers to identify the target; data publishers are vulnerable to "linking attacks" as a result of this re-identification. Privacy-MaxEnt, like the notion in [9], uses data bucketization to partition the data. To break the linkage between sensitive attributes and quasi-identifiers in each bucket, all of the sensitive attributes are mixed together; because each quasi-identifier maps to several sensitive values, the attacker cannot link a sensitive value to a particular quasi-identifier. Where the judgments are unbiased, maximum entropy is reached. The bound on the invader's background knowledge is preset, and the privacy metric is computed in accordance with this bound. The privacy computation yields a bound score as well as a privacy value for a tuple, and the data publisher must decide whether or not to accept the computed bound. When the data is disclosed and the adversary's knowledge is within the calculated bound, the accepted bound provides privacy preservation. (1) "Top-K positive association rules" and (2) "Top-K negative association rules" define the bound. Only the equality background knowledge constraint is addressed in [12]. A small sketch of the maximum entropy estimation follows.
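The following is a minimal sketch, not the paper's implementation, of the maximum entropy principle behind Privacy-MaxEnt: treat p = P(S | QID) over a bucket's sensitive values as unknowns, encode a piece of background knowledge as an equality constraint, and pick the distribution with the largest entropy. The sensitive values and the constraint are invented for illustration.

```python
# Maximum entropy under constraints (assumed toy setup): with P(Pneumonia|QID)=0
# known to the adversary, the least-biased estimate spreads the remaining
# probability uniformly over the other sensitive values.
import numpy as np
from scipy.optimize import minimize

values = ["SARS", "Infection", "Pneumonia"]   # sensitive values in one bucket

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))      # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p[2]},           # background knowledge: P(Pneumonia|QID)=0
]
result = minimize(neg_entropy, x0=np.full(3, 1 / 3),
                  bounds=[(0, 1)] * 3, constraints=constraints, method="SLSQP")
print(dict(zip(values, result.x.round(3))))  # ~ {SARS: 0.5, Infection: 0.5, Pneumonia: 0.0}
```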

3.3 Skyline (B, t)-Privacy

Skyline (B, t)-privacy was suggested in 2009 [13]. Both skyline privacy and Privacy-MaxEnt are unaware of an adversary's true external knowledge. To address this, Tiancheng [13] suggested skyline (B, t)-privacy, presenting a standard paradigm for analytically modeling various types of adversary background information so that the breadth of background knowledge is reduced; the model corresponds to the raw dataset T. The adversary's prior belief about the dataset's sensitive attribute over the quasi-identifier is estimated by modeling the external information. To discover the adversary's prior belief function Pb(pri) that fits the raw data T, the "Kernel Regression Estimation" approach is utilized, where the bandwidth BW is the kernel component used to represent the adversary's level of background knowledge. To compute the invader's posterior belief in the sensitive value, an approximate inference approach is utilized. To calculate the privacy level in skyline (B, t)-privacy, both the invader's posterior belief and prior belief are required, and to maintain privacy the data discloser must choose a substantial value of BW. The anonymized released data fulfills skyline (B, t)-privacy only if the largest difference between the invader's posterior and prior beliefs, over all people in the dataset, stays within the threshold t. In Table 3, the process of Privacy-MaxEnt is explained using patient data; a sketch of the prior/posterior comparison follows the table.

Table 3 Privacy-MaxEnt process for patient data

Original data Tb (Name, Sex, Region, Disease):
Geo, M, CGR, SARS
Bobby, M, CGR, Infection
Marie, F, AP, Infection
Liu, M, CHN, SARS
Yangy, M, CGR, Infection
Beeng, M, CHN, Pneumonia
Joey, F, CO, Infection
Marie, F, CGR, SARS
Jaiee, M, CU, Flu

Bucketized Tb (QID values kept, diseases grouped per bucket):
Bucket 1: (M, CGR), (M, CGR), (F, AP), (M, CHN) with diseases {SARS, Infection, Infection, SARS}
Bucket 2: (M, CGR), (M, CHN), (F, CO) with diseases {Infection, Pneumonia, Infection}
Bucket 3: (F, CGR), (M, CU) with diseases {SARS, Flu}

Tb* in abstract form (QIDs and sensitive values relabeled):
Bucket 1: Qid1, Qid1, Qid2, Qid3 with {sa1, sa2, sa2, sa1}
Bucket 2: Qid1, Qid3, Qid4 with {sa2, sa3, sa2}
Bucket 3: Qid5, Qid6 with {sa1, sa4}
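Returning to the skyline (B, t)-privacy idea above, the following is a rough sketch, with assumed details, of the release check: a kernel-smoothed prior belief over a binary sensitive flag is compared with the posterior implied by an anonymized group, and release is acceptable only when the gap stays within t. The data, bandwidth, and threshold are all invented.

```python
# A sketch (assumptions: Gaussian kernel, binary sensitive flag) of the
# (B, t)-privacy comparison between prior and posterior adversary belief.
import numpy as np

def kernel_prior(qid, dataset, bandwidth):
    """Nadaraya-Watson style estimate of P(sensitive=1 | qid) from the raw
    data; a larger bandwidth models a vaguer, less informed adversary."""
    qids = np.array([q for q, _ in dataset], dtype=float)
    flags = np.array([sv for _, sv in dataset], dtype=float)
    weights = np.exp(-((qids - qid) ** 2) / (2 * bandwidth ** 2))
    return float((weights * flags).sum() / weights.sum())

data = [(21, 1), (23, 0), (25, 0), (28, 1), (36, 0), (34, 0), (38, 0), (31, 1)]
posterior_in_group = 2 / 4            # e.g. 2 of 4 records in the QID group are sensitive
prior = kernel_prior(25, data, bandwidth=5.0)
t = 0.25
print(abs(posterior_in_group - prior) <= t)   # release only if within t
```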

4 Applications of PPDP

The previous sections of the paper elaborated on different privacy-preserving data publishing techniques, classifications, and performance estimates. This section describes the applications of PPDP: 1. Cloud Computing, 2. E-health, 3. Social Networks, 4. Agriculture, and 5. Smart City.

4.1 Cloud PPDP

According to Microsoft Azure [14], cloud computing is the delivery of different computing services such as databases, hosts, storage, analytics, applications, connectivity, and business intelligence over the internet (the cloud). Cloud computing has driven a big move from traditional methods to cutting-edge methods.


The cloud has several advantages over traditional methods in terms of cost, global scale, speed, productivity, performance, reliability, and security. Ricardo and Joao [15] discussed the different types of clouds available for different services; a cloud deployment can be chosen according to the services a business needs. The most critical aspect is that the entities trust the cloud with their data. Because the cloud also transmits data to external parties, it must secure data privacy, and any privacy-preserving technology may be used to maintain it. As a consequence, cloud computing is one of the primary foci of privacy-preserving approaches. Balaji [16] proposed a privacy-preserving data publishing model to publish large-scale data and guarantee users' access privileges at different levels of utility; a multi-level utility data anonymization scheme is implemented on large-scale associated graphs. Through experiments, he also proved that the proposed technique is efficient, scalable, and potent in balancing privacy and utility. Hong [17] focused on k-nearest neighbor (KNN) computation as a privacy-preserving technique over databases distributed among various cloud platforms. Unfortunately, existing outsourcing protocols are restricted to a single-key setting, which is considered inefficient due to the frequent interactions between the client and the server. To address this issue, Hong proposed a semi-honest model, and his experiments proved that both the privacy preservation of the distributed database


and the KNN query are efficient. Here, owners upload encrypted data to the cloud for KNN classification; because the data owner uploads the encrypted data and keeps his own key, user-server interaction is eliminated. The semi-honest model can be applied to large-scale data and guarantees little privacy leakage. Xun [18] presented three potential cloud platform options for association rule mining, in which item, database, and transaction privacy are all protected. Data owners may keep their data in the cloud and delegate mining tasks to a cloud server, and three strategies, k-anonymity, k-privacy, and k-support, are proposed to perturb the data before uploading it to the server. To provide anonymity, the scheme builds on the "distributed ElGamal cryptosystem". Guo [19] took up face recognition (FR) privacy on the cloud platform and proposed a novel technique called "affine transformation", which achieves privacy-preserving face recognition through a combination of diffusion, permutations, and transformations. Without any interactions, both face recognition and feature extraction are executed in the encrypted domain, and the experiments showed an increase in efficiency.

4.2 E-Health PPDP

Nowadays, every hospital keeps patient history records in electronic format, and every patient's historical and current data is private and considered very sensitive. However, the growth of patient data leads to a storage problem, and cloud infrastructure is therefore used to exchange and store it. Though the cloud stores data safely, disclosure of the data causes a lot of privacy leakage; thus, privacy preservation techniques are implemented to protect e-health records from unwanted adversary attacks. Assad [20] reviewed both cryptographic and non-cryptographic e-health cloud approaches, organized as a taxonomy: the cryptographic approaches use encryption, while the non-cryptographic approaches rely on policies with limited access. Adegunwa [21] addressed privacy- and security-related issues and discussed PPDP anonymization methods for medical records in big data. Here, the data publisher anonymizes the data and acts as a middleman between the recipient and the data owner, and this middleman should ensure the privacy of the data at all times. Nowadays, patient records from different hospitals are shared for various research purposes, which raises concerns about the privacy of patients' records; preserving their privacy while sharing or disclosing them to third parties is essential. Abdul [22] proposed a novel anonymization method that differs from existing methods in that the adversary's detailed background knowledge is taken into account; the author also discussed generalization techniques and their limitations. For e-health, a fixed-interval data privacy strategy was developed. This method can be applied to other e-health systems as well, and an experimental comparison of the generalization and fixed-interval techniques is provided for both utility and privacy. The key concept behind the suggested strategy is that the quasi-identifiers in the e-health record are appropriately identified using the given fixed-interval technique, and the original e-health records are replaced with the average of the raw data; id-based anonymization is also proposed for categorical attributes. The results proved that the proposed technique efficiently handles the privacy issues arising from the adversary's background knowledge. Shekha [23] investigated various facets of cloud computing security and privacy protection for Electronic Health Records (EHR), covering EHR security and privacy, the architecture of a protected EHR cloud, cryptographic and non-cryptographic approaches, and e-health safety standards on the cloud platform. Andrew [24] developed a metric to measure synthetic e-health data quality. The HealthGAN method creates synthetic e-health data and is trained for internal as well as external environments; by using HealthGAN in the external environment, de-identification issues can be avoided. Government bodies also support the distribution of EHR records to various institutions for research, but the dissemination of EHR data may result in privacy violations, and most patients do not like sharing their data. Though the identifying variables are removed, the sensitive attributes can still be identified by linking with external records. Grigorios [25] proposed a method that transforms each record into subsets by splitting it, providing higher privacy and higher utility than existing methods. The novelty of the approach is that the method itself generates a diagnosis id rather than the data owner providing it. The main focus of the method is to prevent identity disclosure.

4.3 Social Network PPDP

In 2019, the Privacy Rights Clearinghouse [26] stated that most people are connected to social networks. There are many online social networks, such as Facebook, Twitter, Ginger, and Foursquare. The information shared by a user on a social network carries no guarantee of privacy. 1. Profile information, 2. Status, 3. Location, and 4. Shared contents are the various sorts of information that may be shared on a social network, and together they can reveal a person's details. Sharing all this information opens a pathway for attackers to obtain information about an individual: an attacker or hacker can quickly identify individuals from the above contents shared on online social media. So each individual should be aware of the information they share, and choices should be made to protect their privacy; just by using location-based services, an attacker can easily follow an individual and his/her activity. Ponnurangam [27] discussed the concept of Westin, who categorizes privacy attitudes into three indexes: 1. Fundamentalists, 2. Pragmatists, and 3. Unconcerned. According to Westin's privacy categorization, there are 25% fundamentalists, 60% pragmatists, and 15% unconcerned. Due to the usage of social networks, there is a high proliferation of data containing rich personal information that needs to be preserved. Safia [28] modeled the social network as a "bipartite graph", wherein each node represents a set of profile information, and identified a "safety partitioning condition" that guarantees avoidance of various attacks. The proposed solution, a labeled bipartite graph, meets privacy preservation requirements and remains useful for data mining tasks; the primary method adopted in the approach is clustering. Most anonymization methods for publishing data focus on producing one instance of published data, whereas the sequential release of data is not considered, and researchers have stated that subsequent data releases may lead to privacy breaches even when the data is anonymized. Safia [28] analyzed the privacy problems caused by subsequent releases and proposed a solution based on an anonymization technique that groups nodes into classes and hides the mapping between nodes and attribute values. Experiments were conducted on complex queries and resulted in better utility with less run-time overhead. Niharika [29] discussed a survey in which 42 questions about the privacy of data on online social networking sites were administered to participants. The result was astonishing: 19.30% of the people were not concerned about their privacy; 42.13% believed their data was protected as they had defined all their privacy settings correctly; 23.8% were concerned about their privacy even though they had specified the security settings correctly; 8.02% still shared their information despite being concerned about security; and 6.71% did not share their information on online social media at all. Frank [30] analyzed privacy and security for a small group of people in online social networks (OSN). The main motive of the study was to capture both the people who were not interested in revealing their personal information to a third person and those who were willing to expose their data to a stranger if there was a mutual friend between them. The observation for this case study was done on Facebook, and people's willingness was high in the second case. The case study also concluded that a person who tightened the security settings did not allow an unknown person to access his/her information. This case study did not examine the relationship between users' privacy and trust in the Facebook platform; instead, it just analyzed users' interest in OSN.

4.4 Agriculture PPDP

Agriculture is an essential domain across the world, with many activities such as 1. Monitoring, 2. Water Use, 3. Imagery, 4. Harvesting, 5. Crop Management and Optimization, 6. Food Security, and 7. Data Privacy. Like other applications, agriculture also faces security and privacy issues. As per Manlio's study [31], the overall challenges in smart farming are categorized into technical and non-technical. The non-technical difficulties concern 1. Incentives, 2. Investments, and 3. Innovative tools; the technical challenges relate to 1. Data, 2. Network, and 3. Information. The technical challenge of data is the concern of this study: data is the key topic in the agriculture domain, and the problems related to it are security and privacy, and rights to use and ownership. Agriculture is largely a private rather than a public activity, so transparency cannot be expected, and the data transferred to the cloud should be protected against adversaries. Previously, technologies were not widely employed in agriculture, but they will play an important role in its future. Abhishek [32] discussed the significant growth of the IoT and its impact on precision agriculture. The main objective of precision agriculture is to ensure that the soil and crop are given the proper supplements for production by applying information technology; it promotes profitability, sustainability, and environmental protection. However, precision agriculture uses IoT technology, and thus data transmission happens over the Internet. Emma [33] explored the factors influencing smart farming and its socio-technical factors using a "multi-level perspective": twenty-six farmers were interviewed in Australia, and various concerns were raised about data security and privacy.

4.5 Smart City PPDP

A smart city is no longer just a notion; governments are investing heavily in smart city projects. Youyang [34] discussed the importance and limitations of the smart city. Smart parking, smart information communication, smart housing, smart healthcare, and other projects have been designed and deployed as real-time applications. In the future, people will live in smart cities and smart houses equipped with a variety of sensor gadgets linked through different network systems. Every individual's day-to-day activity will be captured, collected, and analyzed for diverse purposes and even for the further development of the technologies. Though smart cities seem highly sophisticated and user-friendly, each activity of the individual is recorded at the back end, which may lead to the disclosure of the individual's complete personal information; individuals' locations can be tracked by location-based applications. Therefore, strong measures should be taken to avoid privacy breaches. Earlier, several privacy mechanisms, such as biometrics, cryptography, and encryption, had been followed in different applications. When the Internet of Things (IoT) comes into the picture, the smart city is more vulnerable because devices connect directly to the web. The following limitations arise when sensors are directly connected to the web: 1. the computational power of sensors is very constrained, and 2. the requirements of IoT are rigorous compared to traditional algorithms, which cannot resist smarter attacks. Smart meters capture power consumption data and pass fine-grained information to the power suppliers. Though smart meters are incredibly effective at managing electricity and are fault-tolerant, they represent a substantial threat to the privacy of household appliance usage: the ON/OFF states of home appliances can be inferred from smart meter data, and from this inference, 'active power'-based attacks can happen. Jingyao [35] focused on reactive power and illustrated attacks on reactive power to infer information related to the usage of home appliances; Jingyao et al. (2017) presented a unique technique named "Reactive Power Obfuscation (RPO)" to fight against such threats, which makes inferring home appliance information very difficult. Eleanor [36] discussed the data explosion in a blog post: according to US and British research, 59% of internet users in the US and 47% in the UK know that their personal information can be collected from their smart device navigation systems, and Gartner has predicted that 26 billion devices will be connected to the Internet by the year 2020. Privacy concerns are becoming an obstacle to the growth and opportunities of various industries. Yousra [37] discussed the need for privacy in their work. The smart city concept is attractive and multifaceted, but with the wider connectivity of devices across the internet, privacy issues are also increasing. The correlation between privacy-preserving data publishing and cloud computing is studied: despite increased cloud platform usage, various organizations and industries cannot adopt cloud platforms due to privacy and security requirements, which differ from consumer to consumer, and though several technologies have evolved, no precise solution has been achieved. To surmount these constraints, a privacy-preserving technique for data publishing in the cloud for smart cities is proposed: a hybrid method developed by combining k-anonymity, l-diversity, and (α, k)-anonymity, and the experimental results proved that the model provides higher privacy (a rough sketch of the properties such a hybrid must check appears below). Qinlong [38] proposed encryption techniques such as attribute-based and identity-based broadcast encryption to enhance privacy in the smart city. Oleksiy [39] introduced a two-layer architecture to mitigate privacy violations in the smart city: the inner layer handles personal information, while non-personal and general information is handled by the outer layer, and a mapping between the two layers is implemented to explore the performance overhead between them.
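The following is a rough sketch, much simplified and not the cited hybrid model itself, of two of the properties such a combination must verify: each quasi-identifier group must contain at least k records (k-anonymity) and at least l distinct sensitive values (l-diversity). The group data is invented.

```python
# A simplified check (hypothetical data) of combined k-anonymity and
# l-diversity constraints over quasi-identifier groups.
from collections import defaultdict

def satisfies(groups, k, l):
    """groups: mapping from QID tuple to the list of sensitive values
    held by the records in that group."""
    return all(len(vals) >= k and len(set(vals)) >= l
               for vals in groups.values())

groups = defaultdict(list)
for qid, disease in [(("2*", "60010*"), "Covid"), (("2*", "60010*"), "Flu"),
                     (("3*", "600***"), "Covid"), (("3*", "600***"), "Covid")]:
    groups[qid].append(disease)

print(satisfies(groups, k=2, l=2))  # False: the second group lacks diversity
```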

5 Anonymization Methods and Challenges

Researchers have offered numerous state-of-the-art publications since privacy became a prominent concern in recent years. Generalization, suppression, bucketization, slicing, anatomization, and pseudonymization are common anonymization methods. Figure 2 and Table 4 depict the various privacy-preserving anonymization methods and their limitations.


Fig. 2 Data anonymization methods

Table 4 Data anonymization methods and limitations

1. Generalization: interchanges the quasi-identifier values with a range of values. Limitation: cannot resist background knowledge attacks.
2. Suppression: the whole value or part of an attribute value is suppressed and replaced by *. Limitation: if the suppression degree is high, utility may be lost.
3. Bucketization: partitions the dataset into multiple segments, each allotted a unique id known as a group ID. Limitation: leads to membership disclosure.
4. Slicing: developed to overcome the weaknesses of generalization; slicing can perform two different operations on a dataset, 1. vertical and 2. horizontal. Limitation: when the sensitive attribute has similar values, the original value can be identified.
5. Anatomization: disrupts the linking relation among the attributes. Limitation: cannot resist background knowledge attacks.
6. Pseudonymization: replaces the original data values with pseudonyms. Limitation: leads to high utility loss.
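As a hedged illustration of the first two methods in Table 4, the following toy sketch generalizes an age into a range and suppresses the tail of a zip code; the interval width and the number of digits kept are arbitrary illustration choices.

```python
# A minimal sketch (hypothetical rules) of generalization and suppression:
# generalization widens quasi-identifier values into ranges, while
# suppression masks part of a value with '*'.
def generalize_age(age: int, width: int = 10) -> str:
    low = (age // width) * width
    return f"{low}-{low + width - 1}"          # e.g. 23 -> "20-29"

def suppress_zip(zip_code: str, keep: int = 3) -> str:
    return zip_code[:keep] + "*" * (len(zip_code) - keep)   # "601101" -> "601***"

for age, zip_code in [(23, "601101"), (36, "600403")]:
    print(generalize_age(age), suppress_zip(zip_code))
```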

6 Limitations and Future Directions in PPDP

Junqiang [40] discussed the current status and issues of PPDP in his work. People's information is released to a third-party data recipient via the data publisher, making privacy-preserving data publication difficult. The recipient may be a third party, a legitimate user, or even a privacy attacker, so the data publisher should play an important role in data anonymization by selecting acceptable techniques. Many questions remain unanswered: Can a technique provide an optimally balanced solution between utility and privacy? Can any personalized technique improve both utility and privacy? Is it possible to obtain an efficient algorithm for real-world data? Various open issues and future directions in PPDP are discussed below and depicted in Fig. 3.

Dynamic data publishing privacy model: The privacy of static data publication is addressed by many researchers, whereas the privacy of dynamic data publication is not addressed in most works. In dynamic data publishing, the data keeps changing and is published repeatedly. Adeel [41] discussed and analyzed attacks specifically targeting dynamic publishing; in such cases, privacy is guaranteed only for a single release.

Legal and social issues: NCBI [42] created a guide for health professionals covering legal and social issues. A few issues cannot be dealt with under the umbrella of technical measures alone and can be addressed only through a combination of technical and legal means. As a result, all data suppliers should be informed of the related ethical, societal, and legal considerations.

Fig. 3 Roadmap representation of the open issues in PPDP


Privacy-preserving in high-dimensional data: Padmavathi [43] investigated the complexities of dealing with high-dimensional data, which is difficult to work with. Though there are various techniques for privacy-preserving data publishing, not all methods can be applied to high-dimensional data; current approaches follow predefined strategies designed for lower dimensions. Few techniques and protocols have been proposed to deal with larger dimensions and with multi-party data providers.

Utility-preserving method: Asmaa [44] reviewed the need for a utility-preserving method. As PPDP is meant for data publishing to third parties or service providers, sensitive information should be hidden, since many individuals do not wish to expose their details to third parties. The attributes are anonymized, encrypted, or perturbed to protect sensitive information from exposure, but the anonymized data should still be meaningful. The existing privacy-preservation techniques lessen utility, and the development of a true "utility-based method" remains an open problem.

The above open issues still persist in PPDP. The privacy of healthcare data against various adversary attacks can be preserved using an appropriate anonymization technique that balances both the privacy and the utility of the data; balancing privacy and utility is a standing challenge faced by researchers. Table 5 illustrates recent methodologies, their advantages, and their research gaps.

Table 5 Literature survey of recent methodologies

1. Personalized extended (α, k)-anonymity model (single sensitive attribute). Advantage: higher privacy with personalized privacy preservation. Research gap: setting the guarding node may be a challenge.
2. (p, αisg)-sensitive k-anonymity, (p+, αisg)-sensitive k-anonymity, and (p+i, αisg)-sensitive k-anonymity models (single sensitive attribute). Advantage: protect the dataset from skewness and similarity attacks. Research gap: fixing the threshold might be a great challenge.
3. (c, k)-anonymization (multiple sensitive attributes). Advantage: the fingerprint correlation attack is prevented. Research gap: execution time is high.
4. (p, k)-angelization (multiple sensitive attributes). Advantage: less execution time. Research gap: fails to achieve high privacy.
5. 1:M MSA-(p, l)-diversity (1:M records, multiple sensitive attributes). Advantage: prevents identity- and attribute-based privacy disclosures. Research gap: the model was tested on a small dataset and needs to be tested on a larger dataset.


7 Conclusion

Publishing an individual's data without disclosing sensitive information is a great challenge. Various privacy-preserving approaches have already been developed and deployed to preserve sensitive information in released data; however, they are insufficient to prevent several privacy breaches such as the skewness attack, homogeneity attack, similarity attack, and background knowledge attack. This study has reviewed and depicted the main privacy models and their attack models for privacy-preserving data publishing. In particular, the paper focused on PPDP models that account for the adversary's background knowledge, providing insights into their processes and discussing their advantages and limitations. Besides, the research challenges of security and privacy were identified and discussed in detail. Subsequently, different applications, the need for privacy preservation in those applications, and various works carried out in privacy-preservation research were discussed. Likewise, the open issues, current status, and future directions of privacy-preserving data publishing were discussed elaborately. Even though all the major privacy models and their attack models have been analyzed and categorized here, many models proposed as advancements of traditional privacy models remain to be discussed in future work. Although many privacy models and their attack models are available, the overall protection of data also entails appropriate privacy policies. The existing PPDP models and methodologies always aim to balance privacy and utility, which is still an unsolved issue.

References

1. Gkoulalas-Divanis A, Loukides G, Sun J (2014) Publishing data from electronic health records while preserving privacy: a survey of algorithms. J Biomed Inform 50:4–19
2. Jayapradha J, Prakash M (2021) An efficient privacy-preserving data publishing in health care records with multiple sensitive attributes. In: Proceedings of the 6th international conference on inventive computation technologies. ICICT, pp 623–629
3. Sweeney L (2000) Simple demographics often identify people uniquely. Carnegie Mellon University, Data Privacy Working Paper, pp 1–34
4. Machanavajjhala A, Kifer D, Gehrke J, Venkitasubramaniam M (2007) L-diversity: privacy beyond k-anonymity. ACM Trans Knowl Discov Data (TKDD) 1–12
5. Li N, Li T, Venkatasubramanian S (2007) t-closeness: privacy beyond k-anonymity and l-diversity. In: IEEE 23rd international conference on data engineering, pp 106–115
6. Jayapradha J, Prakash M (2021) f-Slip: an efficient privacy-preserving data publishing framework for 1:M microdata with multiple sensitive attributes. Soft Comput
7. Hathaliya JJ, Tanwar S (2020) An exhaustive survey on security and privacy issues in Healthcare 4.0. Comput Commun 153:311–335
8. Moura JA, Serrão C (2019) Security and privacy issues of big data. In: Handbook of research on trends and future directions in big data and web intelligence. IGI Global
9. Chen B-C, LeFevre K, Ramakrishnan R (2007) Privacy skyline: privacy with multidimensional adversarial knowledge. In: Proceedings of the 33rd international conference on very large databases, pp 1–13
10. Zheng Y, Lu R, Li B, Shao J, Yang H, Raymond Choo K-K (2019) Efficient privacy-preserving data merging and skyline computation over multi-source encrypted data. Inf Sci 498:91–105


11. Qaosar M, Rokibul Alam KM, Zaman A, Li C, Ahmed S, Siddique MA, Morimoto Y (2019) A framework for privacy-preserving multi-party skyline query based on homomorphic encryption. IEEE Access 7:167481–167496
12. Du W, Teng Z, Zhu Z (2008) Privacy-MaxEnt: integrating background knowledge in privacy quantification. In: ACM international conference on management of data, pp 1–14
13. Li T, Li N, Zhang J (2009) Modeling and integrating background knowledge in data anonymization. In: Proceedings of the 25th IEEE international conference on data engineering, pp 1–12
14. Microsoft, Get to know Azure (2020). https://azure.microsoft.com/en-in/overview/what-is-cloud-computing/
15. Mendes R, Vilela JP (2017) Privacy-preserving data mining: methods, metrics, and applications. IEEE Access 5:10562–10582
16. Palanisamy B, Liu L (2015) Privacy-preserving data publishing in the cloud: a multi-level utility controlled approach. In: Proceedings of IEEE 8th international conference on cloud computing, pp 130–137
17. Rong H, Wang H-M, Liu J, Xian M (2017) Privacy-preserving k-nearest neighbor computation in multiple cloud environments. IEEE Access 4:9589–9603
18. Yi X, Rao F-Y, Bertino E, Bouguettaya A (2015) Privacy-preserving association rule mining in cloud computing. In: Proceedings of ACM symposium on information, computer and communications security, pp 439–450
19. Guo S, Xiang T, Li X (2019) Towards efficient privacy-preserving face recognition in the cloud. Signal Process 164:320–328
20. Abbas A, Khan SU (2014) A review on the state-of-the-art privacy-preserving approaches in the e-health clouds. IEEE J Biomed Health Inform 18(4):1431–1441
21. Akinkunmi O, Rana ME (2019) Privacy preserving data publishing anonymization methods for limiting malicious attacks in healthcare records. J Comput Theor Nanosci 16(8):3538–3543
22. Majeed A (2019) Attribute-centric anonymization scheme for improving user privacy and utility of publishing e-health data. J King Saud Univ Comput Inf Sci 31:426–435
23. Chenthara S, Ahmed K, Wang H, Whittaker F (2019) Security and privacy-preserving challenges of e-health solutions in cloud computing. IEEE Access 7:74361–74382
24. Yale A, Dash S, Dutta R, Guyon I, Pavao A, Bennett KP (2020) Generation and evaluation of privacy preserving synthetic health data. Neurocomputing 416:244–255
25. Loukides G, Liagouris J, Gkoulalas-Divanis A, Terrovitis M (2014) Disassociation for electronic health record privacy. J Biomed Inform 50:46–61
26. Privacy Rights Clearinghouse, Social networking privacy: how to be safe, secure and social (2019). https://privacyrights.org/consumer-guides/social-networking-privacy-how-be-safe-secure-and-social
27. Kumaraguru P, Cranor LF (2005) Privacy indexes: a survey of Westin's studies. In: Privacy and American business, pp 1–22
28. Bourahla S, Challal Y (2017) Social networks privacy preserving data publishing. In: Proceedings of the 13th international conference on computational intelligence and security, pp 258–262
29. Bourahla S, Challal Y (2018) Privacy preservation in social networks sequential publishing. In: Proceedings of the 32nd international conference on advanced information networking and applications, pp 732–739
30. Sachdeva N, Privacy in India: attitudes and awareness V 2.0 (2012). http://precog.iiitd.edu.in/research/privacyindia/
31. Nagle F, Singh L (2009) Can friends be trusted? Exploring privacy in online social networks. In: Proceedings of the 2009 international conference on advances in social network analysis and mining, pp 312–315
32. Bacco M, Barsocchi P, Ferro E, Gotta A, Ruggeri M (2019) The digitization of agriculture: a survey of research activities on smart farming. Array 3–4:1–11
33. Khanna A, Kaur S (2019) Evolution of Internet of Things (IoT) and its significant impact in the field of precision agriculture. Comput Electron Agric 157:218–231


34. Jakku E, Taylor B, Fleming A, Mason C, Fielke S, Sounness C, Thorburn P (2019) "If they don't tell us what they do with it, why would we trust them?" Trust, transparency and benefit-sharing in smart farming. NJAS Wagening J Life Sci 90–91:100285
35. Qu Y, Nosouhi MR, Cui L, Yu S (2019) Privacy preservation in smart cities. In: Smart cities cybersecurity and privacy, pp 75–88
36. Fan J, Li Q, Cao G (2017) Privacy disclosure through smart meters: reactive power based attack and defense. In: IEEE/IFIP international conference on dependable systems and networks (DSN), pp 13–24
37. Eleanor, Internet of things industry brings data explosion, but growth could be impacted by consumer privacy concerns (2015). https://trustarc.com/blog/2014/05/29/internet-of-things-industry-brings-data-explosion-but-growth-could-be-impacted-by-consumer-privacy-concerns/
38. Aldeen YAAS, Salleh M (2019) Techniques for privacy preserving data publication in the cloud for smart city applications. In: Smart cities cybersecurity and privacy, pp 129–145
39. Huang Q, Wang L, Yang Y (2017) Secure and privacy-preserving data sharing and collaboration in mobile healthcare social networks of smart cities. Secur Commun Netw 1–12
40. Mazhelis O, Hämäläinen A, Asp T, Tyrväinen P (2016) Towards enabling privacy preserving smart city apps. In: Proceedings of the IEEE international smart cities conference (ISC2), pp 1–7
41. Liu J (2012) Privacy preserving data publishing: current status and new directions. Inf Technol J 11(1):1–8
42. Anjum A (2014) Towards privacy-preserving publication of continuous and dynamic data: spatial indexing and bucketization approaches. Databases [cs.DB], Université de Nantes
43. NCBI (2009) Understanding genetics: a New York, Mid-Atlantic guide for patients and health professionals. Chapter 8: Ethical, legal and social issues. https://www.ncbi.nlm.nih.gov/books/NBK115574/
44. Ganapathy P, Shanmugapriya D (2019) Advances in information security, privacy and ethics. In: Handbook of research on machine and deep learning applications for cyber security. IGI Global
45. Rashid AH, Yasin NBM (2015) Privacy preserving data publishing: review. Int J Phys Sci 10(7):239–247

Recent Web Application Attacks' Impacts and Detection Techniques – A Detailed Survey

B. Hariharan and S. Sathya Priya

Abstract Web application attacks are an increasingly significant area in information security and digital forensics. It has been observed that attackers are cultivating the ability to sidestep security controls and launch an enormous number of sophisticated attacks. Several research efforts have addressed these attacks using a wide variety of technologies, and one of the most notable difficulties is responding effectively to new and unknown attacks. Web application attacks are on the rise, and reviews show that they are among the leading causes of data breaches. As these attacks become more common, organisations must understand what they are up against, how to mitigate the risks, and how to protect against them. This survey investigates various types of web application attacks, examines the most prevalent types of web attacks, and analyses them in terms of the attacks themselves and their detection methods. It is expected to contribute to this growing field by exploring more intricate and practical strategies for web application attack identification and to be helpful for future research on web attacks.

Keywords Web application attacks · Machine learning · Web application attack detection · Web attack models

B. Hariharan (B) · S. Sathya Priya
Department of Computer Science and Engineering, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
e-mail: [email protected]
S. Sathya Priya
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_62

1 Introduction

Web applications play a huge role in people's everyday routines, especially as people move their applications and their data to the cloud. Web applications are appealing attack targets because of their common traits and the large amounts of confidential client data they store. In this regard, it is critical to protect web applications from intrusion. Among many weaknesses, distributed denial of service (DDoS)-related

weaknesses, which are exploited by sending systematically crafted requests, account for the greatest part. Structured Query Language (SQL) injection and cross-site scripting (XSS) are ranked as the first and third most critical web application security risks, respectively [1]. Overall, there are two approaches to managing and recognising the attacks referred to. The first is the signature-based method, which looks for explicit attack patterns in requests; the second is the anomaly-based method, which establishes profiles of normal requests so that anomalous requests can be distinguished from normal ones. The signature-based approach is adopted more widely than the anomaly-based one because the signature-based approach typically has a lower false-alarm rate and achieves higher precision. The Web Application Firewall (WAF) contains a massive set of rules that can detect SQL injection and XSS. Even so, the rule-based strategy has weaknesses. First, it is only as good as its rule set, which means it is incapable of recognising attacks that are not in its signature dataset [2]. Furthermore, bypassing a WAF is possible by obfuscating existing malicious requests or encoding them multiple times [3] (see the sketch below). Thirdly, a very large attack-pattern set, or requests of great length, consumes substantial processing resources to complete the pattern matching [4]. The web is a critical part of a huge portion of the business processes that organisations engage in every day. It is the repository of information and the home of cloud-based automated functions; it holds the data that clients purposely provide through content management systems, shopping carts, login fields, and order and submission forms. However powerful and useful these applications may be, they are altogether vulnerable to web application attacks from cyber criminals [4]. Understanding how web applications work and studying their commonly exploited flaws can benefit security efforts, and it will limit the consequences that businesses and clients experience in the event of an information breach [5].
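The following toy sketch, which is not a real WAF rule set, illustrates the brittleness of signature matching noted above: a regular expression catches a plain SQL injection probe but misses the same payload once it is URL-encoded.

```python
# A toy illustration (invented signatures) of signature-based filtering:
# naive pattern matching fails on an encoded variant of the same payload.
import re
from urllib.parse import unquote

SIGNATURES = [re.compile(r"(?i)union\s+select"), re.compile(r"(?i)<script")]

def flagged(request: str) -> bool:
    return any(sig.search(request) for sig in SIGNATURES)

plain = "/item?id=1 UNION SELECT password FROM users"
encoded = "/item?id=1%20UNION%20SELECT%20password%20FROM%20users"
print(flagged(plain))             # True  - signature matches
print(flagged(encoded))           # False - naive filter bypassed
print(flagged(unquote(encoded)))  # True  - decoding first restores detection
```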

1.1 Web-Based Attacks

Web applications raise various security concerns arising from improper coding. Serious flaws or weaknesses allow criminals to gain direct and unrestricted access to databases and manipulate sensitive information; this is known as a web application attack. A considerable number of these databases contain valuable data (for example, personal information and financial details), making them regular targets of attacks. Although destructive incidents that deface corporate sites are still common, attackers now prefer accessing the sensitive information residing on the database server because of the far greater consequences of data breaches. In the scenario portrayed above, it is not difficult to see how an attacker can rapidly get at the information residing in the database with a bit of imagination and, given luck, negligence, or human error, exploit weaknesses in the web applications [6]. As mentioned, sites rely on databases to deliver the expected information to visitors. If web applications are not secure, for example if they are vulnerable to one or more of the many hacking methods, then the entire set of sensitive data is at risk of a web application attack. SQL injection attacks, which target databases directly, are still the most well-known and dangerous type of vulnerability [8]. Other attackers may inject malicious code, using the client input of weak web applications to deceive clients and redirect them towards phishing sites. This is called a cross-site scripting attack and may be used even when the web servers and database engines contain no weaknesses themselves; it is frequently used in conjunction with other attack vectors, such as social engineering attacks. There are numerous other common attack types, such as directory traversal, local file inclusion, and more. Recent research shows that 75% of digital attacks are carried out at the web application level [7].

1.2 How Web Applications Work

Web applications work by receiving a client request, selecting the required data, and delivering a web page according to the client's preferences. The data is readily available and open to all permitted activities, each goal is accomplished, and the delivered page is both intelligible and dynamic [8]. Web applications require essentially no installation on the client's end and can be procured by organisations ready-made or customised to meet a business's unique requirements.

1.3 Online Attacks

When attackers exploit weaknesses in coding to gain access to a server or database, such attacks are known as application-layer attacks. Clients assume that the sensitive personal data they disclose on a site will be kept private [11]. Online attacks mean that credit card, social security, or medical data could become public, leading to potentially grave consequences [9]. Web applications are especially vulnerable to hacking: since these applications must be publicly accessible, they cannot be hidden behind firewalls or secured solely with Secure Sockets Layer (SSL). A large number of these applications provide access, either directly or indirectly, to sensitive client information, and attackers make it their business to search for deficiencies so that this data can be stolen or rerouted. Measures to block web application attacks ought to be a fundamental requirement for IT security.


1.4 Types of Web Attacks

Though the techniques of cybercriminals are ceaselessly being developed, their underlying attack strategies remain fairly consistent. The following are presumably the most notable web attacks; an illustration of the SQL injection case follows the list.

• Cross-site scripting: it involves an attacker introducing malicious code onto the site that can then be utilised to steal information or perform other sorts of attacks. Although this technique is relatively unsophisticated, it remains extremely common and can do genuine damage [13].
• SQL injection (SQLI): this happens when an attacker submits malicious code into an input form. If the application fails to sanitise this data, it may be passed into the database, updating, erasing, or disclosing information to the attacker [10].
• Path traversal: through inadequate sanitisation of inputted data, these web server attacks inject patterns into the web server's underlying directory hierarchy, allowing attackers to obtain user credentials, databases, configuration files, and other data stored on hard drives [15].
• Local file inclusion: this reasonably common attack strategy involves forcing the web application to execute a file located elsewhere on the system [11].
• Distributed denial of service (DDoS) attacks: such events happen when an attacker floods the server with requests. Mostly, attackers use a network of compromised computers or bots to mount the attack. Such activity exhausts the server and prevents real visitors from accessing the service [17].

Though attackers do not always employ these exact strategies, they regularly use them to compromise systems, leaving them vulnerable to further malware and violations.
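The following hedged example, with an invented schema, contrasts string-built SQL, which is injectable, with a parameterized query, where the driver treats the input as data rather than code.

```python
# SQL injection versus a parameterized query (hypothetical table/data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"   # classic injection payload

# Vulnerable: the payload becomes part of the SQL statement itself.
unsafe = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # leaks every row

# Safe: the driver binds the value, so the payload matches nothing.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```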

1.5 Protecting Against Site Attacks

An organisation's capacity to use online assets to obtain and store client information has many advantages, yet it also opens the way for determined attackers. Luckily, there are strategies for evaluating and securing the site and its underlying servers and databases. They are as follows:

• Automated vulnerability scanning and security testing: these assist in finding, isolating, and remediating vulnerabilities, as frequently as practical, before real attacks occur. Investing in these preventive measures is a cost-effective strategy for decreasing the probability that vulnerabilities will turn into actual incidents [12].
• Web Application Firewalls (WAFs): these work at the application layer and use rules and information about known attack patterns to limit access to applications [19]. Since they can inspect all layers and surfaces, WAFs can effectively safeguard data from attack.
• Secure Development Testing (SDT): this is used by all security stakeholders, including testers, developers, architects, and managers. It provides information about the newest attack vectors and helps the team develop a model and enable a rational, flexible strategy for controlling and preventing site attacks and limiting the consequences of breaches that cannot be stopped.

The prevention, control, and mitigation of web application attacks are the typical functions of these measures [13]. Mounting a multi-pronged defence comprising process improvement, automated tooling, and human expertise will permit one to monitor, understand, and eliminate threats of different sorts rapidly and effectively. Table 1 lists incidents caused by web application attacks and their significant effects; the most recent web attack events are organised below.

Table 1 lists incidents caused by web application attacks and their significant effects; the most recent web attack events are organised there.

Table 1 Various web attack events

| S. no. | Event | Attack | Impacts |
|--------|-------|--------|---------|
| 1 | Kaseya ransomware attack | Malware attack | The attack impacted Kaseya's clients and many of their companies |
| 2 | Cisco vulnerability | SQL injection attack | The attack allowed attackers to gain access to the systems to which the authorisation manager was deployed |
| 3 | Amazon DDoS attack | DDoS attack | The organisation experienced one of the biggest DDoS attacks ever |
| 4 | British Airways attack | Cross-site scripting | Attackers stole credit-card data from 380,000 booking transactions before the breach was found |
| 5 | Twitter celebrities attack | Social engineering | Many notable leaders' and celebrities' accounts were hacked |

2 Related Study


Following the breakthrough in artificial intelligence innovation, researchers in the field of network security have widely embraced deep learning, and a great deal of research work has been done centred on web attack identification. Detection technology based on artificial intelligence appears to be rapidly becoming a critical direction. Strategies for web attack detection based on deep learning are driven by extensive data analysis [21]. Along these lines, deep learning models can investigate inputs by extracting useful features and learning patterns from those features through iterative training. Web attack detection strategies based on deep learning procedures improve detection performance considerably. At present, the contributions of existing related works mostly concern two points: first, the technique applied to analyse Uniform Resource Locator (URL) requests and transform them into vectors, and second, the deep learning model used to learn and identify web attacks [14]. Three sorts of techniques for URL analysis are summarised below:

(1) Statistical features, based on matching and counting typical words or symbols in raw traffic, are most generally used to represent URL requests, for example, the length of URL requests, unusual expressions in the requests, the kinds of odd words, and the number of parameters (see the sketch after this list).
(2) Representing URL requests through conventional semantic and syntactic analysis of raw data has become a well-known technique in the field of web attack discovery. Features extracted from semantic and syntactic analysis include the depth of the syntax tree, the number of root nodes in the syntax tree, the number of leaf nodes in the syntax tree, and so on.
(3) The strategy of analysing URL requests and transforming them into vectors automatically shows a superior capacity for representing URL requests precisely. It has become the state-of-the-art technique in the field of web attack recognition.
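A hedged sketch of method (1) follows; the suspicious-token list is invented for illustration, not a vetted signature set:

```python
# Hypothetical sketch of statistical URL features: lengths, parameter
# counts, digit ratio, and counts of (illustrative) suspicious tokens.
from urllib.parse import urlparse, parse_qs

SUSPICIOUS = ("select", "union", "script", "../", "exec", "etc/passwd")

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    text = url.lower()
    return {
        "url_length": len(url),
        "path_depth": parsed.path.count("/"),
        "num_params": len(params),
        "suspicious_tokens": sum(text.count(tok) for tok in SUSPICIOUS),
        "digit_ratio": sum(c.isdigit() for c in url) / max(len(url), 1),
    }

print(url_features("http://site.test/item?id=1 union select passwd"))
```

Feature vectors of this kind can then be fed to any conventional classifier; the deeper models discussed next learn such representations automatically.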

3 Survey of Existing Works

In existing frameworks, web application attacks may involve security misconfigurations, broken authentication, session management, or other issues. The most perilous and predominant web application attacks, nonetheless, exploit weaknesses related to improper validation or filtering of untrusted inputs, resulting in the injection of malicious content or domain-specific language code. Attackers appear to keep finding new ways of hiding malicious code in applications using different languages and strategies. Meanwhile, during the last 10 years, various mechanisms have been designed to identify more sorts of such attacks on IoT web applications [23]. Likewise, intrusion detection systems such as Snort, and WAFs, are used to safeguard against web attacks, yet they are currently vulnerable because most WAFs depend on regular-expression-based filters derived from known attack signatures, and they require a great deal of expert tuning. Deep learning has been applied in many areas with clear success [15]; for instance, deep learning can be utilised to enhance data analysis and to help systems develop resilience.


Table 2 Characterization of different works in web attacks

| S. no. | Author/year | Technique and algorithm used | Datasets |
|--------|-------------|------------------------------|----------|
| 1 | Derya Erhan et al. (2020) | K-SVD algorithm | KDD, MAWI, and DARPA |
| 2 | Zhihong Tian et al. (2020) | Natural language processing and convolutional neural networks | HTTP, FWAF, HttpParams |
| 3 | Rashidah F. Olanrewaju et al. (2021) | Secure online transaction algorithm | TBA, CBA, TPA, and data logs |
| 4 | Jothi K. R. et al. (2021) | Artificial neural network | Lib-injection |
| 5 | Wen-Bin Hsieh et al. (2022) | Raft consensus algorithm | JMS_DATA_SCI |

Meanwhile, deep learning has been applied to verification tasks owing to its capacity to extract features and self-learn. Distinguishing regular clients from attackers within a DDoS attack raises four basic issues. Table 2 characterises different works on web attack detection and prevention systems.

Erhan et al. [11], in 2020, proposed a hybrid DDoS detection framework model, which employed the dictionary built from the network traffic parameters via the K-SVD algorithm.

Tian et al. [9], in 2020, proposed a distributed learning framework model. They proposed a web attack detection framework that analyses URLs using natural language processing (NLP) and convolutional neural networks (CNN). This framework was intended to recognise web attacks and send alerts. Numerous concurrent deep models were utilised to improve the framework's security, using the Hypertext Transfer Protocol (HTTP), FWAF, and HttpParams datasets.

Olanrewaju et al. [13], in 2021, recommended a frictionless and secure client authentication framework. A protected client authorisation component for web applications with a frictionless experience, using a secure online transaction algorithm, was proposed. An automated authorisation scheme was designed around client-led login events. In the proposed structure, the uniqueness of the client identity was validated at the login interface, followed by a recommendation for a fitting client confirmation process. The confirmation procedure was carried out in four distinct login parts, whose functions were determined by the analyser and verifier.

Jothi et al. [10], in 2021, proposed a SQL injection detection framework model that detects SQL injection by detecting patterns in data. The upside of this structure is that it recognises all kinds of injection methods. All the feature extraction and selection were done by the model itself. The dataset utilised in this model was Lib-injection.
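As a rough, non-authoritative illustration of the URL-as-character-sequence CNN idea used by Tian et al. [9] above, the following PyTorch sketch classifies a URL from its raw characters; every hyperparameter here is invented for illustration:

```python
# Hypothetical sketch of a character-level CNN over URL requests.
import torch
import torch.nn as nn

VOCAB = 128  # treat URLs as ASCII character streams

class UrlCnn(nn.Module):
    def __init__(self, embed_dim=32, filters=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, embed_dim)
        self.conv = nn.Conv1d(embed_dim, filters, kernel_size=5, padding=2)
        self.fc = nn.Linear(filters, classes)

    def forward(self, x):                    # x: (batch, seq_len) char codes
        h = self.embed(x).transpose(1, 2)    # -> (batch, embed_dim, seq_len)
        h = torch.relu(self.conv(h))         # -> (batch, filters, seq_len)
        h = h.max(dim=2).values              # global max pooling over time
        return self.fc(h)                    # -> (batch, classes) logits

def encode(url: str, max_len: int = 200) -> torch.Tensor:
    codes = [min(ord(c), VOCAB - 1) for c in url[:max_len]]
    codes += [0] * (max_len - len(codes))    # pad to fixed length
    return torch.tensor(codes).unsqueeze(0)

model = UrlCnn()
logits = model(encode("/index.php?id=1' OR '1'='1"))
print(logits.shape)  # torch.Size([1, 2])
```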


Hsieh et al. [15], in 2022, proposed a framework named the blockchain-based DNS framework. A clever mechanism for verifying websites using blockchain innovation was proposed. This mechanism adds no load on clients and provides tamper-proof capabilities thanks to the attributes of the blockchain; built on the Raft consensus algorithm, it is safer than other models.

4 Research Gaps

• Various attacks show various signatures in their URLs, which makes feature determination difficult.
• The practical implementation of frictionless confirmation procedures in web applications is not the focus of certain works.
• Existing surveys are more application-specific and do not cover the whole scope of safety and security in distributed computing frameworks and web service organisations.
• The current EDL web attack detection system framework can only recognise SQLI and XSS, not other kinds of attacks [16].

5 Conclusion and Future Directions

It is extremely difficult to classify all types of web application attacks, let alone the most common ones. Numerous researchers are developing new algorithms with lower false-positive rates that identify a wide range of web attacks, yet it will require many years to design the best framework for preventing web-based attacks. In the coming years, attackers will become more careful and will prefer to produce blended traffic to attack existing security systems. Thus, the necessity is to devise a robust solution that is sufficiently generic to recognise all web application attacks and prevent them. Future investigations ought to be centred on the identification of different kinds of attacks, and feature selection can be a stronger focus of those studies.

References

1. Lin M, Chiu C, Lee Y, Pao H (2013) Malicious URL filtering—a big data application. In: Proceedings of IEEE international conference on big data, pp 589–596
2. Kar D, Panigrahi S, Sundararajan S (2015) SQLiDDS: SQL injection detection using query transformation and document similarity. In: Proceedings of international conference on distributed computing and internet technology, pp 377–390
3. Le A, Markopoulou A, Faloutsos M (2011) PhishDef: URL names say it all. In: Proceedings of IEEE Infocom, pp 191–195


4. Qiu J, Du L, Zhang D, Su S, Tian Z (2020) Nei-TTE: intelligent traffic time estimation based on fine-grained time derivation of road segments for smart city. IEEE Trans Ind Informat 16(4):2659–2666
5. Bisht P, Madhusudan P, Venkatakrishnan VN (2010) Dynamic candidate evaluations for automatic prevention of SQL injection attacks. ACM Trans Inf Syst Secur 13(2):398–404
6. Luo C, Su S, Sun Y (2019) A convolution-based system for malicious URL requests detection. Comput Mater Continua 61(3):399–411
7. Li M, Sun Y, Lu H, Maharjan S, Tian Z (2020) Deep reinforcement learning for partially observable data poisoning attack in crowdsensing systems. IEEE Internet Things J 7(7):6266–6278
8. Hwang YH (2015) IoT security and privacy: threats and challenges. In: Proceedings of 1st ACM workshop on IoT privacy, trust and security
9. Jamdagni A, Tan Z, He X (2013) RePIDS: a multi-tier real-time payload-based intrusion detection system. Comput Netw 57(3):811–824
10. Tan Z, Jamdagni A, He X, Nanda P, Liu RP (2014) A system for denial-of-service attack detection based on multivariate correlation analysis. IEEE Trans Parallel Distrib Syst 25(2):447–456
11. Erhan D, Anarim E (2020) Hybrid DDoS detection framework using matching pursuit algorithm. Comput Secur 60:206–225
12. Tian Z, Luo C, Qiu J, Du X, Guizani M (2020) A distributed deep learning system for web attack detection on edge devices. arXiv:1702.08568
13. Olanrewaju RF, Ul Islam Khan B, Morshidi MA, Anwar F, Binti Mat Kiah L (2021) A frictionless and secure user authentication in web-based premium applications. In: Proceedings of IEEE 14th international colloquium on signal processing and its applications, pp 103–106
14. Jothi KR, Pandey N, Beriwal P, Amarajan A (2021) An efficient SQL injection detection system using deep learning. In: Proceedings of VI international conference on networks, communication and computing, pp 80–85
15. Hsieh W-B, Leu J-S, Takada J-I (2020) Use chains to block DNS attacks: a trusty blockchain-based domain name system. 7(6):4682–4696
16. Ma J, Saul LK, Savage S (2009) Beyond blacklists: learning to detect malicious websites from suspicious URLs. In: Proceedings of ACM SIGKDD international conference on knowledge discovery and data mining, pp 1245–1254
17. Lee I, Jeong S, Yeo S (2012) A novel method for SQL injection attack detection based on removing SQL query attribute values. Math Comput Model 55(1–2):58–68
18. Yong F, Jiayi P, Liang L, Cheng H (2018) WOVSQLI: detection of SQL injection behaviors using word vector and LSTM. In: Proceedings of 2nd international conference on cryptography, security and privacy, pp 170–174
19. Martin B, Brown M, Paller A, Kirby D (2011) CWE/SANS top 25 most dangerous software errors. The MITRE Corporation
20. Bugliesi M, Calzavara S, Focardi R (2017) Formal methods for web security. J Log Algebr Methods Program 87:110–126
21. Gupta MK, Govil MC, Singh G (2014) Static analysis approaches to detect SQL injection and cross-site scripting vulnerabilities in web applications: a survey. In: International conference on recent advances and innovations in engineering (ICRAIE-2014), Jaipur, pp 1–5
22. Antunes N, Vieira M (2009) Detecting SQL injection vulnerabilities in web services. In: 2009 Fourth Latin-American symposium on dependable computing, Joao Pessoa, pp 17–24
23. Ghanbari Z, Rahmani Y, Ghaffarian H, Ahmadzadegan MH (2015) Comparative approach to web application firewalls. In: 2015 2nd international conference on knowledge-based engineering and innovation (KBEI), Tehran, pp 808–812
24. Alzahrani A, Alqazzaz A, Zhu Y, Fu H, Almashfi N (2017) Web application security tools analysis. In: 2017 IEEE 3rd international conference on big data security on cloud (BigDataSecurity); IEEE international conference on high performance and smart computing (HPSC); and IEEE international conference on intelligent data and security (IDS), Beijing, pp 237–242
25. Han J, Kamber M (2001) Data mining: concepts and techniques. Morgan Kaufmann

Security Concerns in Intelligent Transport Systems

Jasmeet Kour, Pooja Sharma, and Anuj Mahajan

Abstract Intelligent transport systems use advanced sensors, communication technologies, and management approaches in a unified way to offer innovative services for managing traffic and various kinds of transportation. They encourage customers to use transportation networks more intelligently, safely, and efficiently. Utilising the vehicle data gathered, ITS systems standardise the deployment of frameworks while increasing traffic safety and passenger satisfaction. VANETs are a significant part of intelligent transportation systems (ITS) and are sometimes cited as intelligent transportation networks. These networks are applied in real-time scenarios, where vehicles communicate with each other and with the central unit with the help of nearby roadside units (RSUs) and corresponding onboard units (OBUs). However, due to their infrastructure, these VANETs experience some security challenges. As a result, academics have launched numerous research works to identify and look into the security concerns surrounding ITS, connected and autonomous vehicles (CAVs), and vehicular ad hoc networks (VANETs). This survey paper includes the study of various attacks in these networks and the application of security techniques to improve ITS in terms of throughput and efficiency, and to defend them from attacks.

Keywords Intelligent transport system · VANETs · Wireless ad hoc networks · Security attacks in VANETs · Security methods

1 Introduction

To provide a safer driving environment that can help to prevent traffic congestion and accidents, the topic of smart transportation systems has drawn significant attention from academic and industrial researchers due to the growing vehicle population and the advancement of technology. Connected vehicles carry delicate and confidential material which needs to be shared with nearby vehicles in a safe environment.

J. Kour · P. Sharma (B) · A. Mahajan
School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, Jammu and Kashmir, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_63


Intelligent transport systems vary in the technologies applied, ranging from basic management systems such as car route planning, variable message signs, automated plate recognition, and traffic signs or monitoring cameras, to more sophisticated applications that combine real-time data and feedback from multiple sources, such as surveillance equipment, instantaneous incident detection, and stranded-vehicle detection systems. The term "Intelligent Transportation System" (ITS) refers to an amalgam of operational controls and solutions that covers the entire driving ecosystem to ensure better operation and safety. ITS centres specifically on improving the throughput and safety of urban roads through vigorous controls and analytics. Although ITS covers every transportation mode, it mainly focuses on enhancing transport safety and efficiency in different circumstances such as traffic management, mobility, and road transportation. ITS innovation was originally created to improve traffic flow, passenger safety, and road health; due to its heavy reliance on wireless communications, however, numerous threats could disrupt its operation and cause catastrophic accidents. Applications in ITS are largely characterised by the following primary modules:

• Comfort and infotainment services
• Traffic flow supervision, by adapting traffic signal timing and observing traffic patterns
• Road security
• Driving operations of an autonomous nature

The main goal of these modules is the betterment of the driving experience and pedestrian safety. The attack surface of a smart vehicle is shown in Fig. 1.

Fig. 1 Attack surface of a smart vehicle


1.1 ITS Architecture

The ITS architecture comprises the following primary domains:

• Vehicles
• Vehicle-to-vehicle (V2V) communication
• Infrastructure domains and their associated links, such as V2X and in-vehicle (V2I/I2V, V2V) transmissions

An OBU mounted in the in-vehicle domain of an automobile handles the communication flow within the corresponding domains. V2X produces an improvised infrastructure of RSUs and OBUs, which are deployed along ITS paths, highways, and rail networks. RSU units host applications offering various services, while the OBU makes use of the services provided by the roadside unit through the application unit (AU). Equipped with sensors, each vehicle collects data sequentially and transmits it to nearby devices over a wireless framework.

VANETs

A vehicular ad hoc network is a special kind of mobile ad hoc network. It allows communication between automobiles and surrounding fixed equipment, typically referred to as roadside equipment. VANET, or intelligent vehicular systems networking (a component of ITS), defines the intelligent use of vehicular networking. The primary purposes of VANET are to offer services for infotainment, traffic management, and safety-related information. Real-time information is necessary for safety and traffic management, and it can influence life-or-death choices. The main issue with deploying VANET in public is the lack of a simple and effective security solution. A vehicular ad hoc network without security is vulnerable to numerous attacks, such as the spread of fraudulent warning signals and the suppression of real warnings. As a result, security becomes a crucial consideration while creating such networks. The VANET is crucial because it could be among the first commercial uses of ad hoc network technology. The bulk of nodes are capable of building self-organising networks without prior knowledge of one another. As vehicles, these nodes have a very low security level, making them the parts of the network most prone to attack. When there is no infrastructure and a medical emergency arises, VANET is essential for transmitting information that could save lives. However, new difficulties and issues arise alongside these beneficial VANET applications. Every vehicle that joins the network regulates and manoeuvres the network's topology as well as its communication needs. Vehicular ad hoc networks handle the communication between moving automobiles in a particular area. V2V, or vehicle-to-vehicle communication, is when a vehicle is in direct communication with another vehicle. When a vehicle interacts with an infrastructure unit, such as a nearby roadside unit (RSU), this is called vehicle-to-infrastructure (V2I) communication.


Vulnerabilities in the communication system between vehicles cause the following security challenges:

Limited connectivity: Despite an increase in vehicular connectivity, many automobiles remain exposed to most cyberattacks due to a lack of software updates over the years.

Limited computational performance: Compared to computer systems, automobiles typically have limited computational capacity. This limitation arises because automobiles have a longer lifespan than typical computers and must withstand higher heat and vibration. This drawback makes automobiles more vulnerable to hacking than computers.

Unpredictable attack scenarios and dangers: A vehicular architecture is vulnerable to attacks from several access points, including vehicle databases, isolated communication systems, and car parts. The continual development of new threats makes it impossible for manufacturers to foresee where hackers will strike next.

Critical danger to the lives of drivers or passengers: A car may break down even if only a few sensors are tricked or only a few illegitimate messages are delivered. The lives of drivers, passengers, and pedestrians may be endangered as a result of these malfunctions.

VANET transmits safety messages, so it is critical to ensure that both the communication and its content are secure. Because of the sporadic interconnection between automobiles in a VANET, it is challenging to provide a secure environment for message transmission when potentially unreliable nodes can deploy a variety of attacks, including black hole attacks, malware injection, denial of service (DoS), and man-in-the-middle attacks. These unreliable nodes jeopardise the network by sending compromised messages, which are received by nearby nodes. The connectivity becomes more complex due to the connected vehicles' growing mobility.

Man in the middle (MITM): Automated vehicles use wireless networks to keep in touch with nearby infrastructure and other vehicles on the route. Additionally, these vehicles communicate with OBUs through wired or wireless techniques. A man-in-the-middle attack occurs when an intruder pretends to be one of the two entities (in this case, two VANET vehicles in inter-vehicular communication, or an RSU in infrastructure-to-vehicle communication) while manipulating the messages sent between them. Attackers can actively intercept, replay, and alter messages sent between two entities by seizing control of an OBU or RSU.

Denial of service (DoS): One of the riskiest attacks for automated cars is a denial-of-service attack. In certain vehicles, the denial of OBU or ECU service may result in fatal accidents or human casualties. Attackers can disable cameras, lidar, and radar to stop them from detecting roads, objects, and warning signs. When the braking system fails, the car may stop suddenly or be unable to stop where needed.

Malware attacks: These attacks use Trojan horses, worms, and viruses to affect the network of an autonomous vehicle in operation. The software components of the RSUs and the OBUs are also impacted by malware. These attacks exacerbate the precarious state of VANETs.
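As a hedged illustration of the digital-signature counter step that Table 1 below lists against several of these attacks, the following sketch signs a V2V beacon with ECDSA using the Python cryptography package; the beacon format and key handling are invented, and certificate distribution is out of scope:

```python
# Hypothetical sketch: an OBU signs each beacon so receivers can detect
# tampering by a man-in-the-middle. PKI/key distribution is not shown.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

obu_key = ec.generate_private_key(ec.SECP256R1())
beacon = b'{"vehicle":"OBU-42","speed":17.3,"pos":[47.47,19.05]}'

signature = obu_key.sign(beacon, ec.ECDSA(hashes.SHA256()))

# A receiving vehicle verifies with the sender's public key.
public_key = obu_key.public_key()
try:
    public_key.verify(signature, beacon, ec.ECDSA(hashes.SHA256()))
    print("beacon accepted")
except InvalidSignature:
    print("beacon rejected")

# A MITM altering the payload invalidates the signature.
try:
    public_key.verify(signature, beacon.replace(b"17.3", b"90.0"),
                      ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered beacon rejected")
```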

Table 1 Vulnerabilities, compromised security features, and counter steps

| Vulnerabilities | Compromised security features | Counter steps |
|-----------------|-------------------------------|---------------|
| DoS (service denial) | Accessibility | Digital signatures |
| Sybil | Accessibility, authentication | Digital signatures |
| Eavesdropping | Confidentiality | Encryption |
| Black hole | Accessibility, authentication | Digital signatures |
| Malware | Availability, authenticity | Digital signatures |

Black hole attacks: These are frequent attacks against accessibility that can happen in different types of ad hoc systems, including intelligent vehicles. In these attacks, hostile nodes refuse to forward messages intended for others within the system: a malicious node advertises active participation in the network but silently drops the traffic routed through it. For many VANET applications, especially delicate highway-security applications, these black hole attacks pose a serious security risk. To prevent the above attacks in VANETs and increase the efficiency of these systems, various security techniques can be applied. Vulnerabilities, compromised security features, and counter steps are shown in Table 1.

2 Literature Review

This section presents a thorough analysis of earlier studies and strategies used in this field.

In a survey published in the American Journal of Networks and Communication in 2015, Lakkadi et al. [1] provided an ad hoc architecture offering an accurate understanding of security objects and vulnerabilities. They also studied numerous security attacks and secure communication strategies and concluded that secure routing protocols and key management can provide resistance against these attacks.

El-Rewini et al. [2] presented a three-fold construction technique (monitoring, transmission, regulation) in their research work, released in 2019, to help identify automobile security vulnerabilities in vehicular communication. The suggested method offered a state-of-the-art analysis of attacks and threats specific to the communication layer and demonstrated counter steps. The article concluded that a new architecture is required that will use the Systems of Systems (SoS) methodology to identify and neutralise vulnerabilities.

In their research, Ahmad et al. [3] in 2020 suggested a special trust model called the MITM trust model (for man-in-the-middle attacks), which efficiently identifies dishonest nodes carrying out the attack and revokes their credentials.


The two axes of assessment were node-centric and data-centric. To formally assess the performance and correctness of the given model across three attacker models and a predicted trust model, extensive simulations utilising the VEINS simulator were carried out. A genuine city map was used to validate MARINE (the MITM-attack-resistant trust model), and the SUMO simulator was used to generate a real mobility trace of the cars on the retrieved map. The results showed that, in a network with 35% MITM attackers, MARINE outperforms with 15%, 18%, and 17% improvements in precision, recall, and F-score, respectively.

A privacy-preserving security solution and lightweight multiple-factor authentication scheme for VANETs was proposed by Alfadhli et al. [4] in their publication in 2020. They did this by combining physically unclonable functions (PUFs) and dynamic single-time pseudo-identities as authentication factors. This research presented a novel anonymous V2V and V2I interaction using a multi-factor authentication mechanism for VANETs. The work included phases for vehicle setup, mutual authentication between the vehicle and the infrastructure, beacon exchange, vehicle revocation, and regional key updates. Their proposed effort can achieve the needed characteristics while not disclosing any private information to the RSU.

In 2020, Hassan et al. [5] offered a solution to provide safe automated vehicle applications, covering transportation, safety, and the comfort of the cars and their occupants. The destination sequence number, E2E delay, hop count, and PDR were the four criteria taken into account. Thresholds were precalculated using a combination of these four parameters, and observed values were compared against the preset thresholds; when a black hole attack was determined, an alarm was raised (a rough sketch of this thresholding idea follows below). Simulations were run using NS2 (v2.34). Compared to previous solutions, the scheme performed better than AODV (Ad Hoc On-Demand Distance Vector) under a black hole attack in terms of throughput, E2E delay, ROH, PLR, and PDR.

2020 also saw the publication of a study by Chowdhury et al. [6], who examined existing attacks against self-driving cars and described in depth probable cyberattacks, their effects on those vehicles, and their vulnerabilities. They discussed potential mitigating measures taken by governments and industry for recently reported attacks. Recent studies on how a self-driving automobile can maintain reliable functioning even when subject to continual cyberattacks are included in their survey. They also proposed new research directions to help with self-driving car security concerns.

A research study offering an unlinkable authenticated key agreement with collusion resistance in VANETs was presented by Li et al. [7] in 2021. In the solution offered, a trustworthy authority creates several tickets that conceal the identity of the vehicle and distributes them via a blockchain network so that the V2R link is unlinkable. Then, by use of homomorphic encryption, the automobile creates various pseudonyms, and the RSU utilises various tickets to authenticate them. To permit vehicles to establish authentication without retrieving any vehicle-specific data and to address the unlinkability of V2V, anonymously generated authenticated key agreements based upon homomorphic features were developed.
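As forward-referenced above, here is a rough sketch of the threshold idea attributed to Hassan et al. [5]; the metric names follow the paper's four criteria, but the threshold values and the combination logic are invented for illustration:

```python
# Hypothetical sketch: flag a black hole when route metrics deviate from
# precalculated thresholds. All numbers here are invented placeholders.
from dataclasses import dataclass

@dataclass
class RouteMetrics:
    dest_seq_num: int      # destination sequence number advertised
    e2e_delay_ms: float    # end-to-end delay
    hop_count: int
    pdr: float             # packet delivery ratio, 0..1

THRESHOLDS = {"dest_seq_num": 100_000, "e2e_delay_ms": 400.0,
              "min_hop_count": 2, "min_pdr": 0.6}

def is_black_hole(m: RouteMetrics) -> bool:
    """A suspiciously fresh route (huge sequence number, one hop)
    combined with poor delivery suggests a black hole node."""
    suspicious_route = (m.dest_seq_num > THRESHOLDS["dest_seq_num"]
                        or m.hop_count < THRESHOLDS["min_hop_count"])
    poor_delivery = (m.pdr < THRESHOLDS["min_pdr"]
                     or m.e2e_delay_ms > THRESHOLDS["e2e_delay_ms"])
    return suspicious_route and poor_delivery

print(is_black_hole(RouteMetrics(999_999, 50.0, 1, 0.1)))   # True -> alarm
print(is_black_hole(RouteMetrics(1_200, 120.0, 4, 0.93)))   # False
```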


Security analysis and performance evaluation revealed that their technique offers enhanced anonymity and stringent unlinkability, and improves the efficiency of V2V and V2R by 50% and 10%, respectively.

To independently check the safety of in-vehicle ECUs, Zhang et al. [8] developed the Cyber Security Evaluation Framework (CSEF) in 2021. CSEF includes risk analysis, threat analysis, asset identification, and security testing. To meet these needs, they applied CSEF to an On-Board Unit (OBU) that was already installed. The use case showed how the suggested CSEF may address the OBU's assets, threats, risks, and vulnerabilities, all of which are crucial in guiding others in performing security surveys. CSEF might be extended further to analyse the cyber security of other crucial ECUs, including the gateway, infotainment systems, and telematics boxes.

For safe multimedia data transfer between automobiles, Kamal et al. [9] presented various efficient techniques in 2021 employing symmetric encryption. The key advantage of these improved algorithms is that they require less information to generate fingerprints. To create the fingerprints, the algorithms reduce the original 3.7 × 10^5 samples of data to about 3600 samples, and the signal's highest peak value is retrieved using the FFT (Fast Fourier Transform). Authentication was accomplished using a centralised server: the data transfer was authenticated by comparing the hash values of fingerprints and retaining a transaction log value. During experimentation, the effectiveness of the suggested methods was confirmed by obtaining fingerprints and their server-side authentication from a smaller sample size. The suggested approaches are computationally efficient because they require only basic encryption methods to secure data, and the data collected is effective for creating fingerprints. The findings demonstrated that the data size was decreased from roughly 0.1 million samples to 3600 samples. By using fewer resources and producing less data overall, the proposed work provides an effective solution. Additionally, the transaction logs saved on the server give current details about safe data transmission.

In 2022, McManus et al. [10] examined the effects of cyber threats on VANET traffic operations. Their approach included three primary stages: setting up the simulation environment, constructing a base model, and researching and assessing the attack model. Using the Veins modelling platform, a simulation of traffic and communication on a road was created; it tracked the typical travel-time delay for every simulated vehicle. The T-test and Chi-square test were used in the statistical analysis. This method included V2X communication and modelled man-in-the-middle (MITM) and denial-of-service (DoS) attacks on an urban street network.

To examine the effects of unintentional or intentional communication failure, Pethő et al. [11] presented, in 2022, a framework connecting the security of Connected, Cooperative, and Automated Mobility (CCAM) systems with cyber-security-sensitive network assessment metrics and vehicle dynamics components. In response, they gave an advanced approach for assessing the safety risk associated with particular inter-vehicle (V2V) applications. A well-established but simple application case for V2V communication was selected.


The Longitudinal Collision Risk Warning (LCRW) application demonstration scenario was given. They identified the network performance indicators that affect safety and driver assistance. Variables such as lead vehicle speed, speed difference, PDR, and E2E delay were primarily considered during the experiment, and several test cases were defined. To cut down the amount of testing required, the impact of network performance measures was modelled using an offline ex-post analysis. The ZalaZONE automotive proving ground's motorway track element was chosen after the scenarios were determined. Cohda Wireless MK5 OBU DSRC equipment was utilised for V2V communication. Each measurement was made using data-logging software installed on OBUs in test vehicles, and the Cohda OBU's ADR module was used to capture the vehicle parameters and dynamics. Throughout measurement, a standardised network packet capturer (.pcap) was used. Regression modelling (polynomial and binomial) was used to determine the relationship between the evaluated and computed values.

A brand-new system named AKAP-IoV was developed by Bojjagani et al. [12] in 2022; it offers safe communication, key management, and mutual authentication among cars, RSUs, fog, and cloud databases. To ensure the defence of the system against online threats, Scyther and Tamarin were used for testing and verification, and the Real-or-Random (ROR) paradigm was used for security analysis. The functionality, performance, efficiency, and security objectives of AKAP-IoV were compared with recently created methods in a thorough, in-depth analysis. To understand the problems and difficulties that arise in secure communication, a key agreement and new authentication protocol within the different entities in the Internet of Vehicles was proposed. Compared to competing systems, the method operates with little communication and processing overhead. The given protocol supports mutual authentication within entities and was evaluated in the NS3 simulator.

Table 2 below summarises the above literature survey in this field.

Table 2 Literature review

| S. No | Methodology | Limitations |
|-------|-------------|-------------|
| [1] | Encryption, key management, authentication | There is still the possibility of new attacks in the future |
| [2] | Three-layer framework for a better understanding of automotive threats | The need to utilise a System-of-Systems approach for resolving threats |
| [3] | Trust model for combating the man-in-the-middle attack | Integration with social networks is required |
| [4] | A privacy-preserving scheme using multiple factors | Needs a trusted authority to store the vehicle's real identity |
| [5] | Intelligent scheme for black hole detection | Proper creation of a physical environment is needed |
| [6] | Review and analysis | The algorithms lack reliability |
| [7] | An unlinkable key agreement providing authentication for collision resistance | Requires vehicles to have mass storage for pseudonym certificates with corresponding private keys |
| [8] | Framework (CSEF) to examine the security of in-vehicle ECUs, consisting of asset identification, risk analysis, threat assessment, and security testing | Needs to be extended to the other ECUs in the vehicle |
| [9] | Symmetric encryption, generation of fingerprints | Maintenance of transaction logs is needed |
| [10] | Analysis using the T-test and Chi-square test | Proper resilient planning against attacks is needed |
| [11] | Studying the interactions between the safety and cybersecurity characteristics of an automotive system | The methodology did not distinguish between the effects of random failures, systematic errors, and malicious interventions on network performance metrics |
| [12] | AKAP-IoV system using the real-or-random paradigm for security | High maintenance is required |

3 Discussion

ITS is a newly invented technology, yet many of the innovations it encompasses were previously explored and evaluated, and the insight gained from them can be deployed again. Encrypted communication, strong authentication, key management, secure routing, routine audits, and private networks are a few of the methods that have been used and followed up to this point. Different metrics, such as behavioural features based on past data, resources, and location, can be considered depending on the objectives and the ongoing scenario. Some of the security methods followed to prevent vulnerabilities in ITS include the following.

Confidentiality of information: Confidentiality can be attained by employing mechanisms such as encipherment. Strong cryptographic methods can be utilised to offer an intricate, secure prototype for delicate information. At the core of cyber security lie cryptographic techniques.

The adoption of these methods in vehicular networks started in the 1990s. Lightweight encryption is the basic demand in intelligent transport systems, and at the heart of lightweight cryptography is a trade-off between lightweight operation and security. As in [1], the approaches followed for providing confidentiality mainly comprised encryption and key management. Elliptic curve cryptosystems (ECC) are an option for key management in situations where only public-key or symmetric-key techniques are available. The authors in [7] suggested a collusion-resistant, authenticated, unlinkable key agreement for VANETs. A trusted entity creates several tickets that conceal the identity of the vehicle and distributes them via a blockchain-based network. The automobile then uses homomorphic encryption to generate a variety of pseudonyms, and the RSU authenticates them using various tickets.
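As a minimal sketch of the ECC-based key management mentioned above (not the specific scheme of any surveyed paper), two vehicles can derive a shared session key via ECDH using the Python cryptography package; the curve choice and info label are illustrative assumptions:

```python
# Hypothetical sketch: ECDH key agreement producing a lightweight
# symmetric session key. No PKI or certificate handling is shown.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

vehicle_a = ec.generate_private_key(ec.SECP256R1())
vehicle_b = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
shared_a = vehicle_a.exchange(ec.ECDH(), vehicle_b.public_key())
shared_b = vehicle_b.exchange(ec.ECDH(), vehicle_a.public_key())
assert shared_a == shared_b

# Derive a 128-bit session key suitable for a lightweight cipher.
session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"v2v-session").derive(shared_a)
print(session_key.hex())
```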


Furthermore, due to the unlinkability of the V2V link, a homomorphic anonymised authentication scheme was developed to allow vehicles to perform authentication without retrieving any vehicle-specific data. Their technique has strict unlinkability and augmented anonymity, and it also increases the productivity of V2V and V2R by 50 percent and 10 percent, respectively, according to performance evaluation and security analysis.

For secure multimedia data transfer between automobiles, the authors in [11] suggested efficient techniques using symmetric encryption. The method uses less data to create fingerprints: it reduces the 3.7 × 10^5 samples of data to about 3600 samples, and FFT (Fast Fourier Transform) is employed to obtain the signal's greatest peak values. The suggested solutions are computationally efficient since they encrypt data using straightforward methods.

Authentication: A strong device-identity method can be used to impart entity and channel authentication. To secure authenticity, tampering with sensors or other devices must be avoided. A novel system known as AKAP-IoV, which provides communication security and mutual authentication between automobiles, cloud and fog servers, and roadside units, was suggested in [12]. To guarantee that AKAP-IoV was resistant to threats and weaknesses, tests were conducted using Scyther and Tamarin.

Access control: Proper policies and control rules for data access must be defined to ensure authorised access. To guarantee the privacy and security of the data, an access control mechanism must be devised that guarantees isolated access privileges to individuals.

Trust management: Once communication starts, privacy constraints cannot be introduced due to the heterogeneity of the devices. As a result, prior to beginning a conversation, trust needs to be established among the communicating entities. To increase the reliability of communication in the network, trust management models are used to prevent routing attacks and security threats. A novel trust model, a MITM-attack-resistant trust model that can identify dishonest nodes carrying out the attack and revoke their credentials, was introduced, as explained in [3]. The authors evaluated the scenario in two ways: data-centric and node-centric. Their simulation findings demonstrated that MARINE surpasses the state-of-the-art trust model for a network with 35% MITM attackers by improving recall, precision, and F-score by 18%, 15%, and 17%, respectively.

The authors in [9] examined the effects of cyberattacks on vehicular ad hoc network traffic operations. The three key stages of their process were setting up the simulation environment, making a base model, and studying and analysing the attack model. Clear model assumptions were developed before the simulation was started. The base model was produced after the simulation environment was initialised; it simulates a CAV world devoid of cyberattacks. Then, to mimic the attacks, attack models and attack-response models were created. The analysis concentrated on the typical journey-time delay for each simulation.


The suggested method incorporated V2X communication and modelled man-in-the-middle (MITM) and denial-of-service (DoS) attacks on a network of paved streets.

Network segmentation: This is another strategy to increase network efficiency and security. Network segmentation in ITS typically takes into account nodes with anonymous features, mobile connectivity, and dynamic joining. The separation of clusters of communicating cars in VANETs plays a significant role in establishing a network hierarchy. A Cyber Security Evaluation Framework (CSEF) that includes asset identification, threat analysis, risk assessment, and security testing was provided in [8] for independently examining the security of in-vehicle ECUs. As a use case, CSEF was applied to an On-Board Unit (OBU) that was already installed. The use case showed how the proposed CSEF was able to identify the OBU's threats, assets, vulnerabilities, and risks, which is crucial for guiding others who undertake security assessments.

4 Conclusion and Future Work

In this work, a comprehensive analysis of various attacks in the ITS network and the application of diverse security techniques to enhance ITS in terms of throughput and efficiency, and to defend it from attacks, is presented. Although ITS technology was created to increase traveller safety and give drivers a valuable driving experience by providing services that fit their demands, different attacks may prevent it from operating as intended. In the future, security can be enhanced by working on a proper distinction between systematic errors, random failures, and malicious interventions, so that attacks can be identified and appropriate techniques considering different security factors can be applied. We will focus on denial-of-service (DoS) attacks and black hole attacks, as these two affect the ITS network most severely. Real-time risk estimation is also needed, as security attacks change their nature.

References

1. Lakkadi S, Mishra A, Bhardwaj M (2015) Security in ad hoc networks. Am J Netw Commun 4:27–34. https://doi.org/10.11648/j.ajnc.s.2015040301.16
2. El-Rewini Z, Sadatsharan K, Selvaraj DF, Plathottam SJ, Ranganathan P (2020) Cybersecurity challenges in vehicular communications. Veh Commun 23:100214
3. Ahmad F, Kurugollu F, Adnane A, Hussain R, Hussain F (2020) MARINE: man-in-the-middle attack resistant trust model in connected vehicles. IEEE Internet Things J 7(4):3310–3322. https://doi.org/10.1109/JIOT.2020.2967568
4. Alfadhli SA, Lu S, Chen K, Sebai M (2020) MFSPV: a multi-factor secured and lightweight privacy-preserving authentication scheme for VANETs. IEEE Access 8:142858–142874. https://doi.org/10.1109/ACCESS.2020.3014038


5. Hassan Z, Mehmood A, Maple C, Khan MA, Aldegheishem A (2020) Intelligent detection of black hole attacks for secure communication in autonomous and connected vehicles. IEEE Access 8:199618–199628. https://doi.org/10.1109/ACCESS.2020.3034327
6. Chowdhury A, Karmakar G, Kamruzzaman J, Jolfaei A, Das R (2020) Attacks on self-driving cars and their countermeasures: a survey. IEEE Access 8:207308–207342. https://doi.org/10.1109/ACCESS.2020.3037705
7. Li X, Liu J, Obaidat MS, Vijayakumar P, Jiang Q, Amin R (2021) An unlinkable authenticated key agreement with collusion resistant for VANETs. IEEE Trans Veh Technol 70(8):7992–8006. https://doi.org/10.1109/TVT.2021.3087557
8. Zhang H, Pan Y, Lu Z, Wang J, Liu Z (2021) A cyber security evaluation framework for in-vehicle electrical control units. IEEE Access 9:149690–149706. https://doi.org/10.1109/ACCESS.2021.3124565
9. McManus I, Heaslip K (2022) The impact of cyberattacks on efficient operations of CAVs. Front Future Transp 3:792649. https://doi.org/10.3389/ffutr.2022.792649
10. Pethő Z, Szalay Z, Török Á (2022) Safety risk-focused analysis of V2V communication especially considering cyberattack sensitive network performance and vehicle dynamics factors. Veh Commun 37:100514
11. Kamal M, Tariq M, Srivastava G, Malina L (2021) Optimized security algorithms for intelligent and autonomous vehicular transportation systems. IEEE Trans Intell Transp Syst. https://doi.org/10.1109/TITS.2021.3123188
12. Bojjagani S, Reddy YCAPT, Anuradha T, Rao PVV, Reddy BR, Khan MK (2022) Secure authentication and key management protocol for deployment of Internet of Vehicles (IoV) concerning intelligent transport systems. IEEE Trans Intell Transp Syst 23(12):24698–24713. https://doi.org/10.1109/TITS.2022.3207593

Verify De-Duplication Using Blockchain on Data with Smart Contract Techniques for Detecting Errors on Cloud

Vishal Satish Walunj, Praveen Gupta, and Thaksen J. Parvat

Abstract Recently, cloud computing has become increasingly popular. To store excessive amounts of data, many consumers choose cloud services that provide data outsourcing. The demand for blockchain innovation, as well as the significance of its applications, has fuelled continuous study in a wide range of scientific and practical disciplines. Despite its infancy, the blockchain is seen as a forward-thinking solution to present technical difficulties such as decentralisation, identification, trust, data ownership, and information-driven decisions. Many applications rely on blockchain technology, including supply chains, the Internet of Things, multimedia, healthcare, and cloud computing. Blockchain is now one of the most sophisticated technologies for ensuring the protection of sensitive or secret data. Existing systems have drawbacks, including loss of availability, data loss and corruption, breach of privacy, and vendor lock-in. DEPSKY overcomes these restrictions, but it has significant computing costs and no error-warning system. To overcome these issues, this research proposes a unique, affordable scheme for de-duplicated data outsourcing to the cloud over a trustworthy encrypted network, with speedy recovery. The main objective is to eliminate duplicate files stored online: after each file upload by a user, the system checks for de-duplication, the quick recovery technique recovers files more quickly than existing methods, and three error detection methods then determine whether or not file shadows have been compromised. This scheme surpasses previous systems and meets all basic security requirements.

V. S. Walunj (B)
Department of Computer Engineering, Chhatrapati Shivaji Maharaj University, New Panvel, Mumbai, India
e-mail: [email protected]

P. Gupta
Computer Engineering Department, Chhatrapati Shivaji Maharaj University, New Panvel, Mumbai, India
e-mail: [email protected]

T. J. Parvat
Vidyavardhini's College of Engineering & Technology, Vasai Road, Palghar, MS, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 673, https://doi.org/10.1007/978-981-99-1745-7_64


Keywords Cloud computing · Data outsourcing · Dependable system · Deduplication

1 Introduction

SaaS can be far more user-friendly and adaptable than the conventional way of using software. As a result of the development of technology and the expansion of network bandwidth, SaaS offers an enhanced user experience, which encourages users to sign up for high-quality software services over the internet. Additionally, cloud storage services are becoming more and more commonplace in everyday life, allowing users to exchange information, back up records, and even create customised SaaS systems. Many SaaS products, including Amazon S3, Amazon EC2, Microsoft Azure Blob Storage, Dropbox, and Google Drive, have been released in recent years. These internet providers offer abundant storage space, historical data backup, and multimedia synchronisation across many devices, with data files secured through cloud services for dependability and accessibility. However, for many users, the security and dependability of data files kept in the cloud continue to be major concerns.

The four fundamental restrictions of cloud storage services that DEPSKY addressed in 2011 are detailed below. By applying Krawczyk's secret sharing technique, DEPSKY improves the cloud-of-clouds strategy: avoiding the storage of clear data (data that is not encrypted) on CSPs increases storage effectiveness, and DEPSKY questions the practice of encrypting the clear data and retaining a copy on each CSP. It deals with data corruption as well as loss of privacy; for customers of cloud services, problems like loss of availability and vendor lock-in are no longer relevant thanks to the cloud-of-clouds strategy. As indicated in DEPSKY, data shadows saved in the clouds are not secure when the cloud service provider fails to deliver the integrity attribute. When certain events occur, such as power outages, unexpected server behaviour, and malicious attacks, shadows can vanish or break. As soon as the number of broken shadows exceeds (n − t), where n is the total number of shadows generated for a data file using a secret sharing function and t is the threshold for file reconstruction, data owners will not be able to gather sufficient unbroken shadows to reconstruct the information.

Error detection is a procedure performed prior to file reconstruction. It can locate broken shadows so that the right unbroken shadows can be chosen for reconstruction of a file; even fractured shadows can be recognised by it. Without the error detection process, the most time-consuming operation, file reconstruction, would be carried out repeatedly until all of the chosen shadows happen to be unbroken; in the worst case, it must perform file reconstruction C(n, t) times. Additionally, error detection can be run occasionally or on a regular basis to check whether too many shadows have broken.


If too many shadows have broken, file reconstruction must be performed immediately to re-distribute the shadows and repair the damaged ones; otherwise, once the number of broken shadows exceeds (n − t), the file cannot be rebuilt and the irreversibly lost shadows cannot be recovered.
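A minimal sketch of the (n, t) sharing idea behind the shadows follows; this is plain Shamir sharing over a prime field, not DEPSKY's exact Krawczyk construction, and all parameters are illustrative:

```python
# Hypothetical sketch: split a secret into n shadows so that any t of
# them reconstruct it, while fewer than t reveal nothing.
import random

PRIME = 2**127 - 1  # field large enough for small secrets

def make_shadows(secret: int, n: int, t: int):
    """Split `secret` into n shadows; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shadows):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shadows):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shadows):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shadows = make_shadows(secret=42, n=4, t=3)
print(reconstruct(shadows[:3]))   # 42 — any 3 shadows suffice
print(reconstruct(shadows[1:]))   # 42
```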

2 Literature Survey

Juan Du, Daniel J. Dean, Yongmin Tan, Xiaohui Gu, and Ting Yu present IntTest in [1], which provides a ground-breaking comprehensive attestation graph analysis that can pinpoint attackers more precisely than earlier systems. By swapping the poor outputs supplied by malevolent attackers for the top-notch outcomes supplied by honest service providers, it can automatically enhance result quality. The authors tested the IntTest prototype using IBM System S stream processing applications on a production cloud computing environment. It is suitable for large-scale cloud systems because it does not call for any specialised hardware or secure kernel support.

Ghosh et al. demonstrate in [2] that a large-scale IaaS cloud can be modelled using a scalable, stochastic model-driven approach, where failures are typically handled by moving physical machines between three pools: hot (running), warm (switched on but not ready), and cold (turned off). Since monolithic models cannot scale to large systems, an interacting Markov chain-based technique is used to demonstrate the reduction in analysis complexity and, consequently, the reduction in solution time. Because the three pools interact, dependencies between the corresponding sub-models arise; these are resolved with the aid of fixed-point iteration, and the existence of a solution is established. The analytical-numeric solutions derived from the monolithic model and the proposed method are contrasted.

The Learning Automata (LA)-based QoS (LAQ) architecture, demonstrated by S. Misra, P. V. Krishna, K. Kalaiselvan, V. Saritha, and M. S. Obaidat, is capable of dealing with a variety of the challenges and demands of diverse cloud applications. The intended LAQ architecture makes sure that the available computing resources are used efficiently and that client applications are not using them excessively or inefficiently. For services to be given on an as-needed basis with certain degrees of assurance, service provisioning must be secured by ongoing resource observation and quantification. This framework aids in establishing specific assurances using these measurements in order to provide cloud services with QoS support. The system's performance is tested both with and without LA, and it is demonstrated that the LA-based resolution enhances system efficiency in terms of latency and speed-up [3].

Myung-Hoon Jeon, Byoung-Dai Lee, and Nam-Gi Kim describe in [4] a framework that enables flexible creation of media materials in a cloud computing setting. Its qualities include providing an affordable, effective media content encoding environment built on the SaaS (Software as a Service) architecture, a cloud computing service idea, and describing the media content encoding process as a service concept [5–8].


Cong Wang, Qian Wang, Kui Ren, Ning Cao, and Wenjing Lou present a flexible distributed-storage integrity auditing mechanism that makes use of distributed erasure-coded data and homomorphic tokens [9, 10]. Users may audit the cloud storage with low-cost communication and processing. The auditing outcome not only delivers fast data-fault localisation but also a strong cloud storage correctness guarantee. It makes it possible to safely and effectively update, delete, and append blocks of dynamic data [11–15].

3 Proposed System

The proposed scheme eliminates not just the three flaws of DEPSKY but also the four basic restrictions on cloud storage providers. We use a cloud-of-clouds technique and (t, L, n) ramp secret sharing to minimise shadow storage cost. We also design unique detection algorithms to look for different types of broken shadows; these algorithms can be executed individually or together. Additionally, after localising an error, it is probable that the cloud service provider will be unable to correct it. Once a broken file shadow has been identified, this problem may be solved by using a quick recovery approach to fix the broken shadow. Since reconstructing the file or gathering shadows is not required for recovery, the fast-recovery approach lowers computation and transmission costs for the user. Additionally, if there are several broken shadows, file reconstruction is allowed in order to repair the file. A check is also made for duplicate shadows so that storage is not wasted.

Algorithms

• AES: This algorithm is used for file encryption. The technique takes user-uploaded files, each with three shadow copies, encrypts the shadow copies, and stores them in a database. When a file's owner downloads it, it is decrypted.
• MD5: The MD5 (Message Digest 5) algorithm is used to check for file and file-shadow duplication in the project. Based on the information contained in the file, it generates a unique digest value, and it checks whether the digest value is present in the database before accepting a user's file upload. Tokens on shadows are created using the MD5 method and then saved in a database; a token assists users in determining whether a shadow has been compromised when they want to validate it. (A small sketch combining these two roles appears after the key-generation steps below.)

Globals (GP): The proposed design employs the CP-ABE algorithm. The blockchain network chooses a bilinear group G of prime order p with generator g, as well as two random numbers a, b ∈ Zp. The individual in charge of the data initiates the global configuration process, which takes the global parameter GP as input and returns the public key PK and master key MK.


Globals (GP): The proposed design employs the CP-ABE algorithm. The blockchain network chooses a bilinear group G of prime order p with generator g, together with two random numbers a, b ∈ Zp. The data owner initiates the global configuration process, which takes the global parameter GP as input and returns the public key PK and the master key MK. Assume that p is a prime and that G and GT are multiplicative cyclic groups of order p. A bilinear map, or bilinear pairing, is a map e: G × G → GT with the following properties:

• Bilinearity: e(u^a, v^b) = e(u, v)^ab for all u, v ∈ G and all a, b ∈ Zp.
• Non-degeneracy: if g is a generator of G, then e(g, g) is a generator of GT.
• Computability: e(u, v) can be computed efficiently for all u, v ∈ G.

In addition, let H: {0, 1}* → G be a hash function that maps an attribute to a random element of G.
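As a quick sanity check of these properties, the following sketch instantiates a symmetric pairing with the third-party charm-crypto library; the 'SS512' curve and all variable names are illustrative assumptions, not part of the proposed design.

```python
# Exercising the bilinear-map properties with charm-crypto (assumed installed);
# 'SS512' yields a symmetric pairing, so G1 plays the role of G.
from charm.toolbox.pairinggroup import PairingGroup, G1, ZR, pair

group = PairingGroup('SS512')
g = group.random(G1)                       # generator g of G
a, b = group.random(ZR), group.random(ZR)  # random exponents in Zp

# Bilinearity: e(g^a, g^b) = e(g, g)^(ab)
assert pair(g ** a, g ** b) == pair(g, g) ** (a * b)

# H: {0,1}* -> G, mapping an attribute string into the group
h_attr = group.hash("department:research", G1)
assert pair(h_attr, g ** a) == pair(h_attr, g) ** a  # bilinearity again
```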

Key Generation (g, e, GT, H): Using the key-generation process, the data owner produces the public and master keys needed to deploy the proposed system. Users send registration requests to the data owner with an attribute set that includes details such as user ID, email ID, department, and contact information; the attributes are listed as {A1, A2, A3, …, An}. The data owner takes the global parameters GP(p, G, g, e, GT, H) and the master key MK as input, assigns the user's attribute policy, and creates a secret key SK and a public key for the user identity with attributes {A1, …, An}. To finish creating the keys, the owner runs the key-generation algorithm of the proposed design. Mathematically, the process is represented as follows:

1. Global settings: (p, G, g, e, GT, H).
2. The attribute list of each user is denoted Ai = {A1, A2, A3, …, An}, indexed by {1, 2, 3, …, n}.
3. The parameter list is params = [p, g, e, g2, h, Y = e(g, g2)^α, T1, T2, T3, …, Tn], where α ← Zp and g2, h, Ti ← G are chosen at random.
4. The owner's public key is PK = {G, g, g^a, g^b, e(g, g), H}, where (a, b) ∈ Zp.
5. The owner's master key is MK = {α1, α2, …, αn}.
6. The secret key of a user with attribute set A is SK_A = {A, d1 = g^r, d2 = g2^α · h^r, di = Ti^r ∀i ∈ A}, where r ← Zp.
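A compact sketch of this key-generation flow, again with charm-crypto, following the component forms reconstructed in step 6 (d1 = g^r, d2 = g2^α · h^r, di = Ti^r); the attribute strings and the dictionary layout are illustrative assumptions rather than the authors' implementation.

```python
# Owner-side setup and per-user key derivation (sketch, assumed names).
from charm.toolbox.pairinggroup import PairingGroup, G1, ZR, pair

group = PairingGroup('SS512')
attributes = ["user_id:42", "dept:research", "role:member"]

# Global/owner parameters: g, g2, h in G, alpha in Zp, one Ti per attribute
g, g2, h = group.random(G1), group.random(G1), group.random(G1)
alpha = group.random(ZR)
Y = pair(g, g2) ** alpha                       # Y = e(g, g2)^alpha
T = {A: group.random(G1) for A in attributes}

# Per-user secret key for attribute set A, randomised by r
r = group.random(ZR)
SK = {
    "A": attributes,
    "d1": g ** r,
    "d2": (g2 ** alpha) * (h ** r),
    "d": {A: T[A] ** r for A in attributes},
}
```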

4 Methodologies Used

• File Distribution: At this stage a user wants to outsource data to the CSPs. The user first divides the data into shadows and encrypts them. To ensure that a shadow can be recovered even after an error, the user also generates a parity shadow, and from the shadows derives the tokens used for error detection: a batch-detection token, a ring-detection token, and a single-detection token. The user then assigns one shadow to each CSP and saves the error-detection tokens locally. For example, a user who wishes to deliver files F1, …, Fn to the CSPs pre-computes the necessary error-checking tokens first.
• Batch Detection: In this phase the user can check whether any of the data files stored at the CSPs has a broken shadow. The user sends a challenge to a CSP, which computes an answer over all the shadows it holds and passes the challenge on to the next CSP. The challenge-and-answer process is repeated until every CSP has been included, and the final CSP returns the aggregate answer to the user, who can then verify whether every shadow scattered across the clouds is correct.
• Ring Detection: In this phase the user can determine, for each data file, whether that file has a broken shadow. The user challenges a CSP, which computes an answer over the shadow it holds and passes the challenge to the next CSP; challenge and response are iterated around the ring until the end (a toy model of this chained challenge-response appears after this list).
• Single Detection: Here the user can judge whether one specific shadow is correct. He or she challenges the server that stores the specified shadow; the server responds according to the challenge, and from the answer the user decides whether the shadow is legitimate. This procedure must be repeated for every shadow of a data file that is to be inspected.
• Data Deduplication: Deduplication has gained importance recently as a result of the fast growth of data in cloud repositories. There are two kinds of deduplication technique: file-level and block-level. File-level deduplication eliminates backup copies at the granularity of whole files, recognising two files as identical when they have the same hash value; this method has a low computing cost but misses duplicates below the level of whole files. Block-level deduplication, another common method, divides each input file into fixed-size or variable-size blocks and uses each block's hash value to eliminate blocks that have already been uploaded to the cloud (see the block-level sketch after this list).
• Blockchain: The blockchain stores metadata about data and users in a transparent, tamper-proof structure. Every function call adds new data to the blockchain, so data exchange within the proposed design is transparent and safe for users.

The system architecture is provided in Fig. 1 below.
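To make the chained challenge-response concrete, here is a toy Python model of ring-style detection. The HMAC-based folding of each shadow into the running answer, and the CSP class itself, are our simplifications for illustration, not the paper's token construction.

```python
# Toy model of chained challenge-response detection across CSPs.
import hashlib
import hmac

class CSP:
    """A cloud service provider holding one shadow (illustrative stand-in)."""
    def __init__(self, shadow: bytes):
        self.shadow = shadow

    def respond(self, challenge: bytes) -> bytes:
        # Fold this CSP's shadow into the running answer
        return hmac.new(challenge, self.shadow, hashlib.sha256).digest()

def ring_detect(csps, challenge: bytes) -> bytes:
    """Pass the challenge around the ring; each CSP updates the answer."""
    answer = challenge
    for csp in csps:
        answer = csp.respond(answer)
    return answer

# User side: pre-compute the expected answer from the original shadows, then
# later compare it with the answer returned by the ring of CSPs.
shadows = [b"shadow-1", b"shadow-2", b"shadow-3"]
expected = ring_detect([CSP(s) for s in shadows], b"nonce-42")
actual = ring_detect([CSP(b"shadow-1"), CSP(b"TAMPERED"), CSP(b"shadow-3")],
                     b"nonce-42")
print(expected == actual)  # False: some shadow in the ring is broken
```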
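And a minimal sketch of block-level deduplication with fixed-size blocks, as described in the Data Deduplication item above; the in-memory block_index stands in for the cloud-side index, and SHA-256 is an illustrative choice of block hash.

```python
# Block-level deduplication: split each upload into fixed-size blocks, hash
# each block, and store only blocks that have not been seen before.
import hashlib

BLOCK_SIZE = 4096
block_index: dict[str, bytes] = {}  # block hash -> block contents

def dedup_upload(data: bytes) -> list[str]:
    """Return the list of block hashes that reconstructs `data`."""
    recipe = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_index:   # only new blocks consume storage
            block_index[digest] = block
        recipe.append(digest)
    return recipe

def restore(recipe: list[str]) -> bytes:
    return b"".join(block_index[d] for d in recipe)

data = b"A" * 10000 + b"B" * 10000
recipe = dedup_upload(data)
assert restore(recipe) == data
assert len(block_index) < len(recipe)  # duplicate blocks are stored once
```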


Fig. 1 System architecture

5 System Architecture

The overall design is depicted in Fig. 1.

6 Result Analysis

See Figs. 2, 3, 4, 5, 6 and Table 1.

Fig. 2 Result analysis of image (bar chart of the encryption and decryption times per operation listed in Table 1)

Table 1 Result analysis of image (time per operation)

Operations             Encryption   Decryption
Attach file            1.8          1.7
Split file             1.6          1.6
Verify deduplication   1.2          1.1
Validate file          2.4          2.3
File download          1.7          1.6

7 Security Analysis

Authentication: To obtain their public keys, users of the proposed architecture must first register. These public keys serve as the users' addresses when executing the fundamental functions of the architecture, and the blockchain network validates users before granting them access to services.

Key Security: The proposed design employs the standard CP-ABE algorithm in the blockchain network. In place of a single key-management authority, it registers many data owners to produce user keys, reducing the load and providing a stable environment. The data user's key is generated by the recognised data owner. Given the hazards of exchanging keys over a transmission channel, the data owner and the user employ the Diffie-Hellman algorithm in the key-exchange protocol. As a result, the proposed architecture provides key security.
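As a toy illustration of that key-exchange step, the sketch below runs a textbook Diffie-Hellman exchange; the tiny parameters (p = 23, g = 5) are for exposition only, and a real deployment would use standardized large groups via a vetted library.

```python
# Textbook Diffie-Hellman key agreement between the data owner and a user.
import secrets

p, g = 23, 5  # toy parameters; production needs >= 2048-bit standard groups

owner_secret = secrets.randbelow(p - 2) + 1   # data owner's private value
user_secret = secrets.randbelow(p - 2) + 1    # data user's private value

owner_public = pow(g, owner_secret, p)        # sent to the user
user_public = pow(g, user_secret, p)          # sent to the owner

# Both sides derive the same shared key without ever transmitting it
owner_key = pow(user_public, owner_secret, p)
user_key = pow(owner_public, user_secret, p)
assert owner_key == user_key
```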

8 Conclusion

A growing number of users, both individuals and businesses, now make use of cloud services, and cloud storage is on its way to becoming a commonplace service: cloud computing offers large amounts of storage space, historical data replication, and synchronisation across multiple devices. The proposed scheme not only bypasses the four storage constraints of cloud providers but also incorporates three distinct techniques for different purposes, together with a mechanism for determining whether an error has occurred and, if so, localising it. It further verifies that shadows are deduplicated so that no storage is wasted. This trustworthy cloud-based data-outsourcing scheme should be helpful to clients looking to employ cloud storage services.


Fig. 3 User registration login form

Contributions of Researchers: This study discusses techniques for overcoming the shortcomings of cloud storage systems. We proposed a design for a distributed, blockchain-based cloud storage system that offers the following benefits to companies and individual end users:
• a distributed key-generation method based on the CP-ABE algorithm;
• optimisation of the cloud storage system;
• a method for access restriction and revocation.


Fig. 4 User login form

Fig. 5 Primary cloud login form


Fig. 6 Secondary cloud login form

References
1. Gordon SD, Katz J, Liu F-H (2015) Multi-client verifiable computation with stronger security guarantees. In: Dodis Y, Nielsen JB (eds) TCC 2015, Part II, LNCS 9015. International Association for Cryptologic Research, pp 144–168
2. Nath S, Venkatesan R (2009) Publicly verifiable grouped aggregate queries on outsourced data streams. ACM Trans Database Syst (TODS) 34(3):1–42
3. Nath S, Venkatesan R (2012) Publicly verifiable grouped aggregation queries on outsourced data streams. In: International conference on data engineering. IEEE, pp 517–528
4. Catalano D, Fiore D (2013) Practical homomorphic MACs for arithmetic circuits. In: Advances in cryptology–EUROCRYPT. Springer, pp 336–352
5. Gennaro R, Wichs D (2013) Fully homomorphic message authenticators. In: Advances in cryptology–ASIACRYPT. Springer, pp 301–320
6. Backes M, Fiore D, Reischuk RM (2013) Verifiable delegation of computation on outsourced data. In: ACM conference on computer and communications security. ACM, pp 863–874


7. Fiore D, Gennaro R (2012) Publicly verifiable delegation of large polynomials and matrix computations, with applications. In: ACM conference on computer and communications security. ACM, pp 501–512
8. Parno B, Raykova M, Vaikuntanathan V (2012) How to delegate and verify in public: verifiable computation from attribute-based encryption. In: Theory of cryptography. Springer, pp 422–439
9. Parno B, Howell J, Gentry C, Raykova M (2013) Pinocchio: nearly practical verifiable computation. In: IEEE symposium on security and privacy. IEEE, pp 238–252
10. Vu V, Setty S, Blumberg AJ, Walfish M (2013) A hybrid architecture for interactive verifiable computation. In: IEEE symposium on security and privacy. IEEE, pp 223–237
11. Setty S, Vu V, Panpalia N, Braun B, Blumberg AJ, Walfish M (2012) Taking proof-based verified computation a few steps closer to practicality. In: USENIX security symposium, pp 253–268
12. Setty ST, McPherson R, Blumberg AJ, Walfish M (2012) Making argument systems for outsourced computation practical (sometimes). In: NDSS
13. Chung K-M, Kalai YT, Liu F-H, Raz R (2011) Memory delegation. In: Advances in cryptology–CRYPTO. Springer, pp 151–168
14. Papadopoulos S, Cormode G, Deligiannakis A, Garofalakis M (2013) Lightweight authentication of linear algebraic queries on data streams. In: International conference on management of data. ACM, pp 881–892
15. Choi SG, Katz J, Kumaresan R, Cid C (2013) Multi-client non-interactive verifiable computation. In: Theory of cryptography. Springer, pp 499–518

Author Index

A Aarif Ahamed, S., 491 Abd Karim Ishigaki, Shafina, 573 Aditya Shirsath, 669 Ahmad, Muhammad Anwar, 629 Akash Sinha, 819 Akhil Vinjam, 403 Ananya, J., 277 Anbarasi, A., 433 Anirudh Manoj, 683 Anuj Mahajan, 873 Apu Ahmed, Md., 359 Archita Dhande, 461 Arefin, Md. Rashedul, 359 Aruna Rao, S. L., 833 Aurangjeb Khan, 801

B Baichoo, Humaïra, 195 Balaji, C. R., 125 Balaji, K., 655 Berru, Yeferson Torres, 387 Bhaskar, N., 765 Bhavana, S., 725 Bhoomil Dayani, 63 Bibhuti Bhusan Dash, 159, 223 Boggarapu Srinivasulu, 833 Brahma Rao, K. B. V., 501

C Cale, Diego, 387 Chand Pasha Mohammed, 263 Chandrashekhar, B. H., 19

Ch. Charan, 97 Ch. Gopi Sahithi, 523 Chimbo, Verónica, 387 Chiniah, Aatish, 195 Cruz, Mia Torres-Dela, 589

D Devarsh Patel, 781 Dhinakaran, D., 277 Dhivya, R., 33 Dinesh Kumar Anguraj, 501 Divya Vadlamudi, 97 Divyesh Butani, 63 Durai, S., 491 Durga Bhavani, A., 605

E Efimov, Aleksei, 419

F Fadzli, Fazliaty Edora, 477 Faridha Banu, D., 289

G Geetha devi, K., 289 Gengaje, S. R., 709 Geriga Akanksha, 655 Gobinath, R., 237 Gokula Vishnu Kirti Damodaran, 125



H Habib, Md. Rawshan, 359 Hari Krishna, N., 403 Hariharan, B., 863 Harshal Ingale, 535 Hasan, Sakib Al, 249 Helen Parimala, E., 47 Himangi Pande, 461

I Inakoti Ramesh Raja, 523 Indrani Vasireddy, 619 Ismail, Ajune Wanis, 477, 573, 629

J Jagadeesh, J., 523 Janaki Raman Srinivasan, 125 Jasmeet Kour, 873 Jayapradha, J., 845 Jinapriya, S., 19 Joshuva Arockia Dhanraj, 125 Judgi, T., 143 Jyotsna, J., 97

K Kasula Raghu, 169 Kaustuva Chandra Dev, 223 Kavitha, M., 589 Kavitha, R., 589 Khoi, Bui Huy, 447 Khushwinder Singh, 605 Krushang Patel, 781 Kruthika, S., 511 Kumaresan, N., 289 Kumbha Prasadarao, 143

L Lakshmi Prasanna, J., 725 Lalith, U., 97 Lavanya, M., 725 Laxmi Raja, 747 Levoshich, Natalia V., 343 Likhitha Saveri, P., 643 Likith, S., 605 Lucase, L., 47

M Madhura Ingole, 535

Mahammad Firose Shaik, 523 Maheswari, S., 371 Majeti SaiRajKumar, 1 Manikandan Rajagopal, 237 Manikantha Sudarshan, A. L. V. N., 1 Manisha Bharti, 643 Manish Shashi, 211 Manju Sadasivan, 801 Manoharan, P. S., 547 Manoj Ranjan Mishra, 159 Manonmani, G., 697 Martin Parmar, 781 Meivel, S., 289 Mobusshar Islam, Md., 359 Mohamed Iqbal, M., 491 Mohammed Feroz, 419 Moreira, María Cristina, 387 Mrugendrasinh Rahevar, 781 Murugananthan, V., 589 Muthupriya, V., 655 Muthusamy, A., 181

N Namita Panda, 159 Nazarova, Natalia V., 343 Neethidevan, V., 511 Ngan, Nguyen Thi, 447 Nikhil Shinde, 669 Nikhila, M., 725 Nimsara Warnasuriya, W. M. H., 359 Nirav Bhatt, 63 Nitya Dyuthi, A., 605 Niveditha, M., 547 Nomula Ashok, 143 Noor Alleema, N., 655

O Ojasvi Ghule, 535

P Parthasarathi Pattnayak, 223 Paul John, 683 Pavan Kalyan Lingampally, 125 Pavan Vamsi Mohan Movva, 303 Ponmozhi, K., 697 Pooja Sharma, 873 Pradeepthi, A., 97 Prakash, M., 845 Pranay Patel, 781

Prasad Shelke, 669 Prasanna, R. G. V., 523 Prathik Arun, 683 Praveen Gupta, 885 Pritam Shinde, 669 Priya, K., 433 Priyadarshini, R., 547 Priyanka, S., 289 Puja Shashi, 211 Puttha Chandrasekhar Reddy, 169

R Rabinarayan Satapathy, 159 Rabinarayan Satapthy, 223 Rachana, P., 315 Radha, P., 511 Radha, V., 371 Radhika Rani Chintala, 97, 303 Raghavendra Rao Chillarige, 619 Raja, G., 1 Rajalakshmi, B., 315 Raj Busa, 63 Rajeev Wankar, 619 Rakesh, M., 1 Rakhi Bharadwaj, 669 Ramana Murthy, M. V., 765 Ramana Solleti, 765 Ramkumar, S., 237 Ramkumar Venkatasamy, 125 Reesaul, Luchmee Devi, 195 Revathi, S., 655 Ritesh Patel, 781 Roshan, A., 289 Roshnee, S. A., 277 Roy, Sibaji, 359 Rushikesh Jadhao, 535

S Sabiha Sulthana, B., 501 Sabiyath Fatima, N., 655 Sai Hari Krishna, V., 403 Sai Jatin, K., 77 Sai ShriKrishnaa, K. S., 77 Salkutsan, Sergey, 419 Samyukta Shashidharan, 77 Sandeep Kumar, 643 Sanjoy Krishna Mondol, 249 Saritha, K., 77, 683 Sastry, L. V., 523 Sathvik Bandloor, 77

Sathwik, T., 1 Sathya Priya, S., 863 Sayar, Ahmet, 737 Shahnewaz Tanvir, Md., 359 Shakira, P. V., 559, 747 Shaik Nazeer Basha, 403 Shakti Raj Chopra, 263 Shamla Mantri, 461 Shanmugapriya, N., 33 Shanoo Raghav, 683 Sharmila, S. K., 725 Sheela Chinchmalatpure, 535 Shreyansh Srivastava, 819 Shrigandhi, M. N., 709 Shruti Singh, 819 Shvetsova, Olga, 419 Sibananda Behera, 159 Siddhant Nawale, 107 Siddhesh Wani, 107 Snehitha, P., 501 Sree Padmapriya, A., 501 Srinivasan, R., 589 Suaib, Norhaida Mohd, 629 Subir Kumar Das, 329 Sudhansu Shekhar Patra, 159, 223 Suganya, S., 47 Sunandhita, B., 315 Sunil Raj, Y., 47 Sureka, K., 47 Swarna Shree, G., 289 Swati Jadhav, 107, 819 Swati Jha, 125 Swathi, K., 1 Sweta Leena, 315 T Tang, Weining, 249 Tejaswini, A., 501 Thaksen J. Parvat, 885 Thimmiraja, J., 237 Thiruvenkadam, T., 181 Tryambak Kumar Ojha, 329 U Udhaya Sankar, S. M., 277 Umadevi Ramamoorthy, 801 Utpal Chandra De, 159, 223 V Vaishnav Loya, 107

Varun Gujarathi, 107 Vasu Deva Polineni, 403 Vasumathi, M. T., 801 Venkata Chalapathi, M. M., 491 Vijayakumar, M., 181 Vishal Satish Walunj, 885 Vishal Sirvi, 819

Y Yıldız, Turgut, 737

Z Zhuravleva, Irina A., 343