Pervasive Computing and Social Networking: Proceedings of ICPCSN 2021 (Lecture Notes in Networks and Systems, 317) 9811656398, 9789811656392

The book features original papers from the First International Conference on Pervasive Computing and Social Networking (ICPCSN 2021), organized by Narasu's Sarathy Institute of Technology, Salem, India, during March 19–20, 2021.


English · 790 pages [768] · 2021


Table of contents:
Foreword
Preface
Acknowledgements
Contents
Editors and Contributors
The Implementation of Failure Mode and Effects Analysis (FMEA) of the Information System Security on the Government Electronic Procurement Service (LPSE) System
1 Introduction
2 Research Methods
2.1 Data Collection and Processing
2.2 Research Stages
2.3 Research Variables
3 Result and Discussion
3.1 Potential List of Failures
3.2 Pareto and Ishikawa Diagrams
3.3 Calculation of RPN
4 Conclusion
References
MQTT Attack Detection Using AI and ML Algorithm
1 Introduction
2 MQTT Protocol
2.1 IoT and MQTT Protocol
2.2 Security Overview of MQTT
3 Methods and Materials
3.1 MQTTset
3.2 Proposed MQTT Attack Detection Architecture
3.3 Data Discovery
3.4 Feature Encoding
3.5 Feature Selection
3.6 Classification
4 Performance Metrics
4.1 Confusion Matrix
4.2 Accuracy
5 Result and Analysis
5.1 Feature Selection
6 Discussion
7 Conclusion
References
Detection of Credit Card Fraud Using Isolation Forest Algorithm
1 Introduction
2 Literature Study
3 Materials and Methods
4 Frauds and Normal with Respect to Amount
5 Frauds and Normal with Respect to Time
6 Result and Discussions
7 Conclusion
References
Trust-Based Context-Aware Collaborative Filtering Using Denoising Autoencoder
1 Introduction
2 Related Work
3 Proposed Method
3.1 Item Splitting
3.2 Calculating Implicit Trust
3.3 Integrating Implicit & Explicit Trust Information
3.4 Recommendation Process
4 Experimental Evaluation
4.1 Dataset Description
4.2 Experimental Setup
4.3 The Effect of Layer Size and Corruption Ratio
4.4 Results and Discussion
5 Conclusion and Future Work
References
Exploration on Content-Based Image Retrieval Methods
1 Introduction
2 Related Work
2.1 Text Centered Image Retrieval
2.2 Content Centered Image Retrieval
2.3 Multimodal Fusion Image Retrieval
2.4 Semantic Grounded Image Retrieval
3 Proposed Model for Content-Based Image Retrieval
4 Enactment Assessment of Image Retrieval
5 Conclusion
References
Forward and Control Approach to Minimize Delay in Delay Tolerant Networks
1 Introduction
2 Related Work
3 Proposed Methodology
3.1 Forward and Control Technique
4 Results Analysis
5 Conclusion
References
Human Face Recognition Using Eigenface, SURF Method
1 Introduction
2 Relevant Works
3 Research Methodology
3.1 Data Collection
3.2 PCA Approach to Face Recognition
3.3 Eigenface
3.4 Speeded-Up Robust Features (SURF)
4 Experimental Results and Discussion
4.1 Expected Outcomes After Using Eigenface
4.2 Displayed Results of SURF After Using Different Levels of Dimensions
5 Conclusion
References
Multiple Cascading Algorithms to Evaluate Performance of Face Detection
1 Introduction
2 Proposed Work
2.1 Dynamic Cascading Method
2.2 Haar Cascading Method
2.3 SURF Cascading Method
2.4 Fea-Accu Cascading Method
3 Result Analysis
4 Discussion
5 Conclusion
References
Data Prediction and Analysis of COVID-19 Using Epidemic Models
1 Introduction
2 Literature Survey
3 Methodology Used
3.1 SIR Model
3.2 SEIR Model
4 Result and Discussion
5 Conclusion
References
A Comparative Analysis of Energy Consumption in Wireless Sensor Networks
1 Introduction
2 WSN Design Concentration
2.1 Hardware Design Requirements
2.2 Software Design Requirements
3 WSN Design Requirements
4 Energy Consumption Algorithms
5 Comparative Analysis
6 Conclusions
References
Performance Evaluation Among ID3, C4.5, and CART Decision Tree Algorithm
1 Introduction
2 Proposed Work
2.1 Decision Tree
2.2 ID3 (Iterative Dichotomiser 3)
2.3 C4.5 (Classification 4.5)
2.4 Classification and Regression Trees (CART)
3 Result Analysis
3.1 ID3
3.2 C4.5
3.3 CART
3.4 Algorithm Comparison
3.5 Accuracy Comparison
4 Conclusion
References
Human Face Recognition Applying Haar Cascade Classifier
1 Introduction
2 Literature Review
3 Methodology and System Overview
3.1 Dynamic Cascading Method
3.2 Train Stored Image (TSI)
3.3 Face Recognition Using LBPH
4 Results and Discussion
5 Conclusion
References
A Novel Simon Light Weight Block Cipher Implementation in FPGA
1 Introduction
2 Proposed Work
2.1 Simon Block Cipher
2.2 Round Function
2.3 Key Schedule
2.4 Round Function Algorithm
2.5 Clock Gating
2.6 Power Gating
3 Experimental Results
4 Conclusion and Future Scope
References
Assisting the Visually Challenged People Using Faster RCNN with Inception ResNet V2 Based Object Detection Model
1 Introduction
2 Literature Review
3 The Proposed Model
4 Performance Validation
5 Conclusion
References
Analysis of Kidney Ultrasound Images Using Deep Learning and Machine Learning Techniques: A Review
1 Introduction
2 US Kidney Image Processing Using Machine Learning Techniques
2.1 Pre-processing
2.2 Segmentation
2.3 Classification of Kidney Diseases
2.4 Kidney Diseases Detection
3 US Kidney Image Processing Using Deep Learning Techniques
3.1 Image Enhancement
3.2 Segmentation
3.3 Classification
3.4 Disease Detection
4 Conclusion and Future Directions
References
Big Data Mining—Analysis and Prediction of Data, Based on Student Performance
1 Introduction
2 Origin of the Data
3 Research Work
3.1 National Status
3.2 Global Status
4 Benefits
4.1 Benefits for Teachers
4.2 Benefits to the Educators
4.3 Benefits to the Students, Universities and Society
4.4 Benefits for Teachers
5 Proposed Work
6 Result Analysis/Methodology
6.1 Sampling Techniques to Be Used
6.2 Data Collection and Quiz Construction
6.3 The Statistical Test Will Be Used for Chi Square Test Analysis
6.4 Data Collection and Analysis Tools
6.5 Decision Tree Induction Algorithm of Data Mining
6.6 Result and Actions
7 Conclusion
References
A Survey Paper on Characteristics and Technique Used for Enhancement of Cloud Computing and Their Security Issues
1 Introduction
1.1 Introduction to Cloud Computing
1.2 Cloud Computing Deployment Models
1.3 Importance of Cloud Computing
1.4 Cloud Computing Architecture
1.5 Security Issues with Different Techniques Used in Cloud Computing
2 Literature Survey
2.1 Features of SDN-Based Cloud in Defeating DDoS Attack and Challenges for SDN-Based Cloud
3 Proposed Architecture
4 Conclusion and Future work
References
Audio Classification for Melody Transcription in the Context of Indian Art Music
1 Introduction
1.1 Music Information Retrieval
1.2 MIR from Vocal Melodies
1.3 Vocal Expressions in Indian Art Music
2 Related Work
3 Proposed System
3.1 YIN Algorithm for Frequency Estimation
3.2 Features Used in the Proposed System and Relevance
3.3 Decision Tree Classifier
3.4 Data set of Experimentation
3.5 System Description
4 Results and Discussion
5 Conclusion and Future Scope
References
Analysing Microsoft Teams as an Effective Online Collaborative Network Model Among Teaching and Learning Communities
1 Introduction
2 Literary Survey
3 Methodology
3.1 Questionnaire
3.2 Sample Selection
4 Analysis and Findings
4.1 Basic Functions in MS Teams
4.2 Discussion Using MS Teams
4.3 Conducting Assessments in MS Teams
4.4 Features in MS Teams
4.5 Comparison with Other Network Models
5 Conclusion
References
High Level Identification Using Palm Vein Based on Deep Neural Network
1 Introduction
2 Related Works
3 Proposed System
4 Methodology
4.1 Vascular Example Marker Calculation
4.2 Vascular Example Extractor Calculation
4.3 Pattern Diminishing Calculation
4.4 Coordinating Pictures
5 Experimental Results
6 Conclusion
References
Inventory Control with Machine Learning Approach: A Bibliometric Analysis
1 Introduction
2 Methodology
3 Result and Discussion
3.1 Data Collection
3.2 Bibliometric Mapping Based on Keywords
4 Conclusion
References
ROS-Based Robot for Health Care Monitoring System
1 Introduction
2 Literature Survey
3 Proposed Methodology
3.1 Implementation of Hardware Design
3.2 Software Description
4 Experimental Results
5 Conclusion
References
IoT-Based Fleet Tracking with Engine Control for Automobiles
1 Introduction
2 Previous Works
3 New Methodology
4 Advantages of This System
5 Equipment and Methodology
5.1 Hardware
5.2 Software
6 Working
7 Conclusion
References
A Skip-Connected CNN and Residual Image-Based Deep Network for Image Splicing Localization
1 Introduction
2 Related Works
3 Proposed Method
3.1 Pre-processing the Input Image
3.2 Proposed CNN Architecture Design
3.3 Post-processing
4 Experimental Results and Analysis
4.1 Evaluation Metrics
4.2 Evaluation Based on Various Scenarios
5 Conclusion
References
A Comparative Study of Seasonal-ARIMA and RNN (LSTM) on Time Series Temperature Data Forecasting
1 Introduction
2 Related Work
3 Proposed Work
3.1 Data Preprocessing
4 Result Analysis
5 Conclusion
References
Design and Implementation of Photovoltaic Powered SEPIC DC-DC Converter Using Particle Swarm Optimization (PSO) Technique
1 Introduction
2 PV Cell
3 SEPIC DC-DC Converter
4 Particle Swarm Optimization (PSO)
5 Proposed Particle Swarm Optimization for Maximum Power Tracking
6 Simulated Performance of Proposed System
7 Conclusion
References
A Systematic Review on Background Subtraction Model for Data Detection
1 Introduction
2 Taxonomy
2.1 Preprocessing
2.2 Background Modeling
3 Foreground Detection
4 Data Validation
5 Literature Review
6 Comparative Analysis
7 Conclusion
References
A Survey on Trust-Based Node Validation Model in Internet of Things
1 Introduction
2 Literature Survey
3 Proposed Model
4 Conclusion
References
Parameter Analysis in a Cyber-Physical System
1 Introduction
2 Related Works
3 Methodology
4 Experimental Results
5 Conclusion
References
Digitization and Data Analytics in Healthcare
1 Introduction
2 E-healthcare System
3 Problems with E-health Records
4 Security Threats and Solutions
5 Growth of Data Analytics E-healthcare in India
6 Pain Points in Data Analytics and E-healthcare Implementation in India
7 Data Breach in India: Few Instances
8 Reasons—Why Breaches Are Very Common in Healthcare Industry
9 India’s Take on Healthcare Security
10 Conclusion
References
Concept Summarization of Uncertain Categorical Data Streams Based on Cluster Ensemble Approach
1 Introduction
2 Related Work
3 Background Approach
3.1 One-Class Learning
3.2 Concept-Based Summarization Learning
4 Cluster Ensemble Approach
4.1 Problem Development and Normal Structure
4.2 Ensemble Creation Methods
4.3 Functions Consensus
4.4 Direct Methodology
4.5 Particular Dataset Sets
5 Performance Evaluation
5.1 Datasets Retrieval
6 Experimental Results
7 Conclusion
References
Fault-Tolerant Cluster Head Selection Using Game Theory Approach in Wireless Sensor Network
1 Introduction
2 Related Work
3 Proposed System
4 Cluster Formation
4.1 Cost of Cluster Head
4.2 Cost of Backup Cluster Head
4.3 Cost of Cluster Node with Cluster Head
4.4 Cost of Cluster Node with Backup Cluster Head
5 Game Theory
5.1 Nash Equilibrium
5.2 Players Utility Payoff
5.3 Player Utility Payoff Algorithm
6 Simulation and Results
6.1 Simulation Parameters
6.2 Result Analysis
7 Conclusion and Future Enhancement
References
The Prominence of Corporate Governance in Banking Sector with Reference to UAE
1 Introduction
2 Literature Review
3 Methods
4 Results and Discussion
5 Conclusion
References
Ensuring Privacy of Data and Mined Results of Data Possessor in Collaborative ARM
1 Introduction
2 Related Work
3 Levels of Process
3.1 Intra/Data Level
3.2 Inter/Pattern Level
4 Communication Between the Data Possessor
4.1 Fisher–Yates Shuffle Algorithm
5 Mining Process
5.1 Mining
6 Security Analysis
6.1 Security Under the Third-Party/Cloud Attacks
6.2 Security Under Data Possessors’ Attacks
7 Application of Proposed Approach: Medical Management
8 Performance Evaluation
8.1 Computation Cost Analysis
8.2 Communication Cost Analysis
8.3 Data Utility
9 Conclusion
References
Implementation of Load Demand Prediction Model for a Domestic Load Center Using Different Machine Learning Algorithms—A Comparison
1 Introduction
2 Related Work
3 ML Algorithms Used
3.1 Artificial Neural Networks
3.2 Regression Based ML Algorithms
3.3 Regression Tree Algorithms
3.4 SVM for Regression
3.5 Ensemble Algorithms
3.6 Gaussian Process Regression Algorithm
4 Proposed Load Demand Prediction Model Using Machine Learning Algorithms
5 Results and Discussion
6 Conclusion
References
COVID Emergency Handlers to Invoke Needful Services Dynamically with Contextual Data
1 Introduction
2 Related Work
3 Proposed System Design
3.1 Registration
3.2 Service Invocation
3.3 Providing Emergency Services
4 System Implementation Results
5 Conclusion and Future Enhancements
References
Deep Neural Models for Key-Phrase Indexing
1 Introduction
2 Related Research
3 Methodology
3.1 Preliminaries
3.2 Proposed Models
4 Result and Evaluation
5 Conclusion and Future Work
References
A Hybrid Approach to Resolve Data Sparsity and Cold Start Hassle in Recommender Systems
1 Introduction
2 Related Work
3 Research Problem (RP)
4 Proposed Methodology
4.1 Novel Hybrid Proposed Methodology for Predicting and Recommending New User Preference
4.2 The Work Flow of Hybrid Algorithm to Solve CS Hassle
4.3 Finding Similarity Metrics Between the User and Item
4.4 Illustrating Accuracy Metrics
4.5 MovieLens100K Dataset
5 Step by Step Implementation and Experimental Setup
6 Results
6.1 Experimental Results
6.2 Discussions
6.3 Advantages of Proposed Work
7 Conclusion
References
Software Effort Estimation of Teacher Engagement Application
1 Introduction
2 Literature Review
3 Methods
3.1 Phase I
3.2 Phase II
3.3 Phase III
4 Results and Analysis
5 Conclusion
References
Scheduling Method to Improve Energy Consumption in WSN
1 Introduction
2 Related Work
3 Existing Global Barrier Coverage Method
4 Proposed Scheduling Method
5 Evaluation of the Performance
6 Conclusion
References
Energy Dissipation Analysis in Micro/Nanobeam Cantilever Resonators Applying Non-classical Theory
1 Introduction
2 Expression for Thermoelastic Damping Limited Quality Factor
2.1 Stress Field in a Vibrating Cantilever Microbeam
2.2 Thermoelastic Damping Limited Quality Factor by Applying MCST
3 Results and Discussions
4 Conclusion
References
An Integrated Technique for Security of Cellular 5G-IoT Network Healthcare Architecture
1 Introduction
2 5G IOT Security Device and Component
3 Healthcare Security 5G IOT Protocol
3.1 Protocol for Restricted Application
4 Healthcare IOT Key Management Process Algorithm
4.1 Generation and Distribution Process
4.2 Protocol for Extensible Messaging and Presence
4.3 Cloud Distributed Approach 5G Era
4.4 Algorithm for Fuzzy C-means Clustering (FCMC)
5 MATLAB Synchronization
5.1 Privacy Potential Realized
5.2 TSCS and Bisection Comparison
6 Conclusion
References
Predictive Model for COVID-19 Using Deep Learning
1 Introduction
2 Background
2.1 Deep Learning
2.2 Convolution Neural Network
2.3 Generative Adversarial Networks
3 Existing Methodology
3.1 Data Generation
3.2 Auxiliary Classifier Generative Adversarial Network
3.3 Cyclic GANs
3.4 Model Architecture
4 Result
5 Conclusion and Future Scope
References
Automatic White Blood Cell Detection Depending on Color Features Based on Red and (A) in the LAB Space
1 Introduction
2 Suggested Method
3 Quality Assessment
4 Results and Discussion
5 Conclusion
References
Facial Expression and Genre-Based Musical Classification Using Deep Learning
1 Introduction
2 Related Works
3 Dataset
3.1 FER2013 Dataset for Facial Expression Detection
3.2 Kaggle GTZAN Dataset for Genre Classification
4 Proposed System
4.1 Method
5 Summary of the Work
References
Post-Quantum Cryptography: A Solution to Quantum Computing on Security Approaches
1 Introduction
1.1 Quantum Computing
1.2 Benefits of Quantum Computing
1.3 Limitations of Current Quantum Computers
2 Applications of Quantum Computing in Various Domains
3 Impact of Quantum Computing on Cryptographic Approaches
4 Survey on Post-quantum Cryptographic Methods
4.1 Hash-Based Cryptography
4.2 Code-Based Cryptography
4.3 Lattice-Based Cryptography
4.4 Multivariate Cryptography
4.5 Supersingular Elliptic Curve Isogeny Cryptography
5 Conclusion
References
A Detailed Analysis of the CIDDS-001 and CICIDS-2017 Datasets
1 Introduction
2 Description of the Datasets
2.1 CIDDS-001 Dataset
2.2 CICIDS-2017
3 Feature Ranking Models
3.1 Information Gain (IG)
3.2 Gain Ratio (GR)
3.3 Correlation Coefficient
4 Traditional Classifiers
4.1 k-Nearest Neighbor Classifier
4.2 Support Vector Machine (SVM)
4.3 Naïve Bayes Classifier
4.4 Decision Tree (J48)
5 Experimental Setup
6 Results and Discussion
6.1 CIDDS-001
6.2 CICIDS-2017
7 Conclusion
References
Single-Round Cluster-Head Selection (SRCH) Algorithm for Energy-Efficient Communication in WSN
1 Introduction
2 Related Work
3 Proposed Approach
4 Performance Evaluation
5 Conclusions and Future Directions
References
Three-Pass (DInSAR) Ground Change Detection in Sukari Gold Mine, Eastern Desert, Egypt
1 Introduction
2 Study Area and Data Acquisition
2.1 The Study Area
2.2 Data Acquisition
3 Methodology
3.1 Three-Pass Differential Interferometry Approach
3.2 Orbit and Baseline Calculation
3.3 Co-Registration
3.4 Interferograms Generation
3.5 De-Burst and Phase Filtering
3.6 Differential Interferogram Generation
3.7 Height Conversion Stage
4 Results and Discussion
5 Conclusion
References
Integral Images: Efficient Algorithms for Their Computation Systems of Speeded-Up Robust Features (SURF)
1 Introduction
2 Literature Review
3 Existing Method
4 Proposed Method
5 Results and Discussion
6 Conclusion
References
Wireless Communication Network-Based Smart Grid System
1 Introduction
2 Wireless Networking
2.1 Types of Wireless Networks
2.2 Channel Selection
2.3 IP Address
3 Solution Deployment
3.1 Architecture
3.2 Block Diagram
3.3 Proposed Modeling
3.4 Flowcharts
4 Results and Analysis
4.1 Data Flow to Distribution Substation from Home
4.2 Data Flow to Home from Distribution Substation
4.3 Average Time Delay and Message Types
4.4 Time Limits for MMS and GOOSE
5 Conclusion
6 Future Work
References
The SEPNS Model of Rumor Propagation in Social Networks
1 Introduction
2 Related Work
3 The SEPNS Model of Rumor Spreading
3.1 Sentiment in Rumor Spreading
3.2 Rumor Propagation Model
4 Methodology
4.1 Discrete Compartmental Modeling
4.2 Evaluation with Twitter Set
5 Results and Discussion
5.1 Findings on Sentimental Analysis
5.2 Comparison with Other Discrete Compartmental Models
6 Conclusion
References
Chaotic Chicken Swarm Optimization-Based Deep Adaptive Clustering for Alzheimer Disease Detection
1 Introduction
2 Related Work
3 Methodology: Chaotic Chicken Swarm Optimization-Based Deep Adaptive Clustering for Alzheimer Disease Detection
3.1 Deep Adaptive Clustering
3.2 Chicken Swarm Optimization
3.3 Chickens Movement
4 Results and Discussions
5 Conclusion
References
Standard Analysis of Document Control as Information According to ISO 27001 2013 in PT XYZ
1 Introduction
2 Literature Review
2.1 Information Security (IS)
2.2 Information Security Planning
2.3 ISO 27001
2.4 Information Security Management System (ISMS)
3 Research Methodology
3.1 Data Collection
3.2 Data Analysis Method
4 Result and Discussion
4.1 General Documentation
4.2 Creating and Updating
4.3 Control of Documented Information
5 Conclusion
References
Comparative Asset Pricing Models Using Different Machine Learning Tools
1 Introduction
2 Method of Study
2.1 Asset Pricing Models
3 Different Machine Learning Tools
3.1 Decision Trees
3.2 Neural Network
3.3 Ordinary Least Square
4 Data Specification
5 Results and Discussion
6 Conclusion
References
The Digital Fraud Risk Control on the Electronic-based Companies
1 Introduction
2 Literature Review
2.1 Organizations of Non-banks
2.2 Cybercrime
2.3 Mitigation of Risk
2.4 Abbreviations and Acronyms
3 Methodology and Resources
3.1 Resources and Methods
3.2 Systems
3.3 Actual Threats Detection
3.4 Effective Risk Management
3.5 General Computing Environment Threats
3.6 Assessment of Threats
3.7 Description of Effect
4 Criteria for Risk Assessment
4.1 Risks Related to the System
4.2 Impact Scale
4.3 Assessing Chances (Probability)
4.4 Module of Risk
4.5 Vulnerability Matrix
5 Result and Discussion
6 Conclusion
References
Retraction Note to: Software Effort Estimation of Teacher Engagement Application
Retraction Note to: Chapter “Software Effort Estimation of Teacher Engagement Application” in: G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_39
Author Index

Lecture Notes in Networks and Systems 317

G. Ranganathan · Robert Bestak · Ram Palanisamy · Álvaro Rocha, Editors

Pervasive Computing and Social Networking: Proceedings of ICPCSN 2021

Lecture Notes in Networks and Systems Volume 317

Series Editor:
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors:
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and others.

Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them.

Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/15179

G. Ranganathan · Robert Bestak · Ram Palanisamy · Álvaro Rocha Editors

Pervasive Computing and Social Networking: Proceedings of ICPCSN 2021

Editors
G. Ranganathan, Electronics and Communication Engineering, Gnanamani College of Technology, Namakkal, Tamil Nadu, India
Robert Bestak, Czech Technical University in Prague, Prague, Czech Republic
Ram Palanisamy, Business Administration Department, Gerald Schwartz School of Business, St. Francis Xavier University, Nova Scotia, NS, Canada
Álvaro Rocha, Department of Informatics Engineering, AISTI & University of Coimbra, Coimbra, Portugal

ISSN 2367-3370 · ISSN 2367-3389 (electronic)
Lecture Notes in Networks and Systems
ISBN 978-981-16-5639-2 · ISBN 978-981-16-5640-8 (eBook)
https://doi.org/10.1007/978-981-16-5640-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022, corrected publication 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

We are honored to dedicate the proceedings of the first edition of Pervasive Computing and Social Networking to all the participants, organizers and editors of ICPCSN 2021.

Foreword

On behalf of ICPCSN 2021, I am honored to welcome you to the First International Conference on Pervasive Computing and Social Networking (ICPCSN 2021), held on March 19–20, 2021, in Salem district, India. As a premier international conference, ICPCSN 2021 provides a knowledgeable forum for reporting and learning about the recent research developments and applications in all areas of pervasive computing and social networking models. I am very delighted to present the proceedings of the first series of ICPCSN.

ICPCSN is an initiative for expanding research in the areas of pervasive computing, social network mining and the profound analysis of the performance and accuracy of computing and communication systems. Although the conference is in its first year, it has already witnessed humongous growth: ICPCSN 2021 received 212 submissions, of which 56 papers were accepted. The authors of these papers come from different countries and regions.

I want to express my gratitude to the members of the program committee and the internal and external reviewers for their efforts in reviewing the submissions made to the conference. The guidance received from the Narasu's Sarathy Institute of Technology, Salem, India, has helped us in many ways, for which I am forever grateful. I also thank the invited keynote speaker, Dr. Joy Chen (Da-Yeh University, Taiwan), for sharing his valuable research insights with us.

Finally, the conference would not be successful without the contribution of novel research works from the authors. I take this opportunity to thank all the authors for their insightful research contributions and participation in ICPCSN 2021. I strongly believe that this program will stimulate research in the computing and social networking paradigm and provide research practitioners and scholars with advanced and state-of-the-art techniques, models, algorithms and tools. I feel honored to share the recent research developments in the field of pervasive computing and social networking through this exciting conference program.

Munusami Viswanathan
Principal
Narasu's Sarathy Institute of Technology
Salem, India

Preface

The First International Conference on Pervasive Computing and Social Networking (ICPCSN 2021) provides researchers, academicians and industrialists across the globe a significant opportunity to discuss and present research models and results in the areas of computing and network engineering. The first series of this conference welcomes both theoretical and practical research contributions, creating a considerable impact on the networking and computing domains. ICPCSN 2021 is sponsored by Narasu's Sarathy Institute of Technology, Salem, India, and the proceedings of ICPCSN 2021 are published by Springer.

The conference, hosted in the beautiful Salem district, Tamil Nadu, India, features keynote talks by outstanding researchers and diverse paper presentation sessions, each with six to seven research papers on significant and state-of-the-art topics in the computing and networking paradigm. The panel has been led by renowned research experts in the computing and networking paradigm from different universities and research organizations, and the committee has also invited the conference attendees to actively participate in the discussions.

The main aim of ICPCSN 2021 is to address a wide range of research in computing and networking topics, including theory, methodologies, tools and applications. Each and every paper submitted to ICPCSN 2021 was peer-reviewed by at least two technical program chairs, who evaluated the research works based on their novelty, research significance, technical contribution and real-time application. In particular, all the papers were reviewed by three to four reviewers, who provided their feedback for the further decision process. For this first edition of the conference, we received a total of 212 papers, of which 56 made it to the final program. The contribution of the authors to the conference is highly appreciated, and we are pleased to invite them to continue contributing to future editions of ICPCSN.


We welcome you all to the proceedings of the 1st ICPCSN 2021 to gain knowledge on emerging computing and networking models.

Organizing Committee—ICPCSN 2021
G. Ranganathan, Namakkal, India
Robert Bestak, Prague, Czech Republic
Ram Palanisamy, Nova Scotia, Canada
Álvaro Rocha, Coimbra, Portugal

Acknowledgements

We are pleased to acknowledge the guidance and support provided by Narasu's Sarathy Institute of Technology, Salem, India, throughout the conference event. Also, we gratefully acknowledge the professional efforts of the international conference program committee members, who have substantially contributed to ensuring the quality of the first conference series, ICPCSN 2021. We gratefully appreciate the participation and contribution of the keynote speaker, Dr. Joy Chen from the Department of Electrical Engineering, Dayeh University, Taiwan.

Additionally, we owe special thanks to the organization members of ICPCSN 2021 for delivering comprehensive conference management, from distributing the call for papers to a wider audience, inviting reviewers and handling paper submissions, to communicating with authors throughout the conference and creating a volume with the ICPCSN 2021 proceedings. We would like to acknowledge all the faculty, non-faculty and volunteers of the institution for their dedication and expertise in the management of the conference event. We also thank all the participants (authors/presenters/attendees/listeners) for actively contributing towards the success of the ICPCSN 2021 event. We would like to extend our gratitude to Springer for their invaluable publication support.


Contents

The Implementation of Failure Mode and Effects Analysis (FMEA) of the Information System Security on the Government Electronic Procurement Service (LPSE) System (Muhammad Aldenny, Hans Kristian, Ford Lumban Gaol, Tokuro Matsuo, and Andi Nugroho), p. 1
MQTT Attack Detection Using AI and ML Algorithm (Neenu Kuriakose and Uma Devi), p. 13
Detection of Credit Card Fraud Using Isolation Forest Algorithm (Haritha Rajeev and Uma Devi), p. 23
Trust-Based Context-Aware Collaborative Filtering Using Denoising Autoencoder (S. Abinaya and M. K. Kavitha Devi), p. 35
Exploration on Content-Based Image Retrieval Methods (M. Suresh Kumar, J. Rajeshwari, and N. Rajasekhar), p. 51
Forward and Control Approach to Minimize Delay in Delay Tolerant Networks (Sudhakar Pandey, Nidhi Sonkar, and Danda Pravija), p. 63
Human Face Recognition Using Eigenface, SURF Method (F. M. Javed Mehedi Shamrat, Pronab Ghosh, Zarrin Tasnim, Aliza Ahmed Khan, Md. Shihab Uddin, and Tahmid Rashik Chowdhury), p. 73
Multiple Cascading Algorithms to Evaluate Performance of Face Detection (F. M. Javed Mehedi Shamrat, Zarrin Tasnim, Tahmid Rashik Chowdhury, Rokeya Shema, Md. Shihab Uddin, and Zakia Sultana), p. 89
Data Prediction and Analysis of COVID-19 Using Epidemic Models (A. M. Jothi, A. Charumathi, A. Yuvarani, and R. Parvathi), p. 103
A Comparative Analysis of Energy Consumption in Wireless Sensor Networks (Nasser Otayf and Mohamed Abbas), p. 113
Performance Evaluation Among ID3, C4.5, and CART Decision Tree Algorithm (F. M. Javed Mehedi Shamrat, Rumesh Ranjan, Khan Md. Hasib, Amit Yadav, and Abdul Hasib Siddique), p. 127
Human Face Recognition Applying Haar Cascade Classifier (F. M. Javed Mehedi Shamrat, Anup Majumder, Probal Roy Antu, Saykot Kumar Barmon, Itisha Nowrin, and Rumesh Ranjan), p. 143
A Novel Simon Light Weight Block Cipher Implementation in FPGA (S. Niveda, A. Siva Sakthi, S. Srinitha, V. Kiruthika, and R. Shanmugapriya), p. 159
Assisting the Visually Challenged People Using Faster RCNN with Inception ResNet V2 Based Object Detection Model (S. Kiruthika Devi and C. N. Subalalitha), p. 171
Analysis of Kidney Ultrasound Images Using Deep Learning and Machine Learning Techniques: A Review (Mino George and H. B. Anita), p. 183
Big Data Mining—Analysis and Prediction of Data, Based on Student Performance (Pradip Patil and Rupa Hiremath), p. 201
A Survey Paper on Characteristics and Technique Used for Enhancement of Cloud Computing and Their Security Issues (Mahesh Bhandari, Vitthal S. Gutte, and Pramod Mundhe), p. 217
Audio Classification for Melody Transcription in the Context of Indian Art Music (Amit Rege and Ravi Sindal), p. 231
Analysing Microsoft Teams as an Effective Online Collaborative Network Model Among Teaching and Learning Communities (P. Shanmuga Sundari and J. Karthikeyan), p. 243
High Level Identification Using Palm Vein Based on Deep Neural Network (V. Nisha Jenipher, S. Princy Suganthi Bai, A. Venkatesh, K. Ravindran, and Adlin Sheeba), p. 255
Inventory Control with Machine Learning Approach: A Bibliometric Analysis (Sudimanto, Ford Lumban Gaol, Harco Leslie Hendric Spits Warnars, and Benfano Soewito), p. 265
ROS-Based Robot for Health Care Monitoring System (Kedri Janardhana, Tagaram Kondalo Rao, A. Arunraja, and E. Esakki Vigneswaran), p. 275
IoT-Based Fleet Tracking with Engine Control for Automobiles (R. Suganya, R. Nethra, K. P. Guganya, and B. Nila), p. 287
A Skip-Connected CNN and Residual Image-Based Deep Network for Image Splicing Localization (Meera Mary Isaac, M. Wilscy, and S. Aji), p. 299
A Comparative Study of Seasonal-ARIMA and RNN (LSTM) on Time Series Temperature Data Forecasting (Sumanta Banerjee and Shyamapada Mukherjee), p. 315
Design and Implementation of Photovoltaic Powered SEPIC DC-DC Converter Using Particle Swarm Optimization (PSO) Technique (A. R. Danila Shirly, R. Roshini, E. Priyanka, M. Sindhuja, and A. Steffy Jones), p. 327
A Systematic Review on Background Subtraction Model for Data Detection (Yarasu Madhavi Latha and B. Srinivasa Rao), p. 341
A Survey on Trust-Based Node Validation Model in Internet of Things (Srilakshmi Puli and C. H. Smitha Chowdary), p. 351
Parameter Analysis in a Cyber-Physical System (J. Judeson Antony Kovilpillai and S. Jayanthy), p. 361
Digitization and Data Analytics in Healthcare (Shalini Vermani and Prishu Purva), p. 373
Concept Summarization of Uncertain Categorical Data Streams Based on Cluster Ensemble Approach (K. Parish Venkata Kumar, N. Raghavendra Sai, S. Sai Kumar, V. V. N. V. Phani Kumar, and M. Jogendra Kumar), p. 385
Fault-Tolerant Cluster Head Selection Using Game Theory Approach in Wireless Sensor Network (R. Anand, P. Sudarsanam, and Manoj Challa), p. 399
The Prominence of Corporate Governance in Banking Sector with Reference to UAE (Santosh Ashok and Kamaladevi Baskaran), p. 417
Ensuring Privacy of Data and Mined Results of Data Possessor in Collaborative ARM (D. Dhinakaran and P. M. Joe Prathap), p. 431
Implementation of Load Demand Prediction Model for a Domestic Load Center Using Different Machine Learning Algorithms—A Comparison (M. Pratapa Raju and A. Jaya Laxmi), p. 445
COVID Emergency Handlers to Invoke Needful Services Dynamically with Contextual Data (S. Subbulakshmi, H. Vishnu Narayanan, R. N. Adarsh, Fawaz Faizi, and A. K. Arun), p. 469
Deep Neural Models for Key-Phrase Indexing (Saurabh Sharma, Vishal Gupta, and Mamta Juneja), p. 483
A Hybrid Approach to Resolve Data Sparsity and Cold Start Hassle in Recommender Systems (B. Geluvaraj and Meenatchi Sundaram), p. 499
RETRACTED CHAPTER: Software Effort Estimation of Teacher Engagement Application (Sucianna Ghadati Rabiha, Harco Leslie Hendric Spits Warnars, Ford Lumban Gaol, and Benfano Soewito), p. 511
Scheduling Method to Improve Energy Consumption in WSN (J. Poornimha, A. V. Senthil Kumar, and Ismail Bin Musirin), p. 523
Energy Dissipation Analysis in Micro/Nanobeam Cantilever Resonators Applying Non-classical Theory (R. Resmi, V. Suresh Babu, and M. R. Baiju), p. 539
An Integrated Technique for Security of Cellular 5G-IoT Network Healthcare Architecture (Manoj Verma, Jitendra Sheetlani, Vishnu Mishra, and Megha Mishra), p. 549
Predictive Model for COVID-19 Using Deep Learning (Hardev Goyal, Devahsish Attri, Gagan Aggarwal, and Aruna Bhatt), p. 565
Automatic White Blood Cell Detection Depending on Color Features Based on Red and (A) in the LAB Space (Tahseen Falih Mahdi, Hazim G. Daway, and Jamela Jouda), p. 579
Facial Expression and Genre-Based Musical Classification Using Deep Learning (S. Gunasekaran, V. Balamurugan, and R. Aiswarya), p. 589
Post-Quantum Cryptography: A Solution to Quantum Computing on Security Approaches (Purvi H. Tandel and Jitendra V. Nasriwala), p. 605
A Detailed Analysis of the CIDDS-001 and CICIDS-2017 Datasets (K. Vamsi Krishna, K. Swathi, P. Rama Koteswara Rao, and B. Basaveswara Rao), p. 619
Single-Round Cluster-Head Selection (SRCH) Algorithm for Energy-Efficient Communication in WSN (K. Rajammal and R. K. Santhia), p. 639
Three-Pass (DInSAR) Ground Change Detection in Sukari Gold Mine, Eastern Desert, Egypt (Sayed A. Mohamed, Ayman H. Nasr, and Hatem M. Keshk), p. 653
Integral Images: Efficient Algorithms for Their Computation Systems of Speeded-Up Robust Features (SURF) (M. Jagadeeswari, C. S. Manikandababu, and M. Aiswarya), p. 663
Wireless Communication Network-Based Smart Grid System (K. S. Prajwal, Palanki Amitasree, Guntha Raghu Vamshi, and V. S. Kirthika Devi), p. 673
The SEPNS Model of Rumor Propagation in Social Networks (Greeshma N. Gopal, G. Sreerag, and Binsu C. Kovoor), p. 695
Chaotic Chicken Swarm Optimization-Based Deep Adaptive Clustering for Alzheimer Disease Detection (C. Dhanusha, A. V. Senthil Kumar, Ismail Bin Musirin, and Hesham Mohammed Ali Abdullah), p. 709
Standard Analysis of Document Control as Information According to ISO 27001 2013 in PT XYZ (Pangondian Prederikus, Stefan Gendita Bunawan, Ford Lumban Gaol, Tokuro Matsuo, and Andi Nugroho), p. 721
Comparative Asset Pricing Models Using Different Machine Learning Tools (Abhijit Dutta and Madhabendra Sinha), p. 733
The Digital Fraud Risk Control on the Electronic-based Companies (Ford Lumban Gaol, Ananda Dessi Budiansa, Yohanes Paul Weniko, and Tokuro Matsuo), p. 741
Retraction Note to: Software Effort Estimation of Teacher Engagement Application (Sucianna Ghadati Rabiha, Harco Leslie Hendric Spits Warnars, Ford Lumban Gaol, and Benfano Soewito), p. C1
Author Index, p. 759

Editors and Contributors

About the Editors

Dr. G. Ranganathan is Principal, Ranganathan Engineering College, Coimbatore, India. He completed his Ph.D. in the Faculty of Information and Communication Engineering at Anna University, Chennai, in 2013. His research thesis was in the area of Bio Medical Signal Processing. He has a total of 29+ years of experience in industry, teaching and research. He has guided several project works for many UG and PG students in the areas of Bio Medical Signal Processing. He has published more than 35 research papers in international and national journals and conferences. He has also co-authored many books in electrical and electronics subjects. He has served as referee for many reputed international journals published by Elsevier, Springer, Taylor and Francis, etc. He holds membership in various professional bodies like ISTE, IAENG, etc., and has been actively involved in organizing various international and national level conferences, symposiums, seminars, etc.

Robert Bestak received the Ph.D. degree in Computer Science from ENST Paris, France (2003) and the M.Sc. degree in Telecommunications from Czech Technical University in Prague, CTU, Czech Republic (1999). Since 2004, he has been an Assistant Professor at the Department of Telecommunication Engineering, Faculty of Electrical Engineering, CTU. He took part in several national, EU, and third-party research projects. He is the Czech representative in the IFIP TC6 organization and vice-chair of working group TC6 WG6.8. He serves as Steering and Technical Program Committee member of many IEEE/IFIP conferences (Networking, WMNC, NGMAST, etc.) and he is a member of the editorial board of several international journals (Computers and Electrical Engineering, Electronic Commerce Research Journal, etc.). His research interests include 5G networks, spectrum management and big data in mobile networks.

Prof. Ram Palanisamy is a Professor of Enterprise Systems in the Business Administration Department at the Gerald Schwartz School of Business, St. Francis Xavier University. Dr. Palanisamy teaches courses on Foundations of Business Information Technology, Enterprise Systems using SAP, Systems Analysis and Design, SAP Implementation, Database Management Systems, and Electronic Business (Mobile Commerce). Before joining StFX, he taught courses in Management at Wayne State University (Detroit, USA), Universiti Telekom (Malaysia) and National Institute of Technology (NITT), Deemed University, India. His research interests include Enterprise Systems (ES) implementation; ES acquisition; ES flexibility; ES success; knowledge management systems; and healthcare inter-professional collaboration.

Prof. Álvaro Rocha holds the title of Honorary Professor (2019), Information Science Aggregation (2011), Ph.D. in Information Technology and Systems (2001), Master in Management Informatics (1995) and Degree in Applied Mathematics (1990). He is currently Professor at the University of Coimbra, Researcher at CISUC—Center for Informatics and Systems at the University of Coimbra, Collaborating Researcher at LIACC—Laboratory for Artificial Intelligence and Computer Science, and Collaborating Researcher at CINTESIS—Research Center for Information Technology and Systems. He is also President of AISTI—Iberian Association of Information Systems and Technologies, President of the Portuguese Chapter of the IEEE SMC Society—Systems, Man, and Cybernetics, Editor of RISTI—Iberian Journal of Information Systems and Technologies, and Editor of the Journal of Information Systems Engineering and Management. He has also served as Vice President of Experts at Horizon 2020 of the European Commission, as an Expert in the Ministry of Education, Universities and Research of the Italian Government, and as an Expert in the Ministry of Finance of the Latvian Government.

Contributors

Mohamed Abbas · College of Engineering, King Khalid University, Abha, Saudi Arabia
Hesham Mohammed Ali Abdullah · Al-Saeed Faculty for Engineering and Information Technology, Taizz, Yemen
S. Abinaya · Department of Computer Science & Engineering, Thiagarajar College of Engineering, Madurai, Tamil Nadu, India
R. N. Adarsh · Department of Computer Science and Applications, Amrita Vishwa Vidyapeetham, Amritapuri, India
Gagan Aggarwal · Department of Computer Science, Delhi Technological University, Delhi, India
M. Aiswarya · Department of Electronics and Communication Engineering, Sri Ramakrishna Engineering College, Coimbatore, India


R. Aiswarya · Department of CSE, Ahalia School of Engineering and Technology, Palakkad, Kerala, India
S. Aji · Department of Computer Science, University of Kerala, Thiruvananthapuram, India
Muhammad Aldenny · Information System Management Department, BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta, Indonesia
Palanki Amitasree · Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
R. Anand · CMR Institute of Technology, Bengaluru, India
H. B. Anita · Department of Computer Science, CHRIST (Deemed To Be University), Bengaluru, India
Probal Roy Antu · Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
A. K. Arun · Department of Computer Science and Applications, Amrita Vishwa Vidyapeetham, Amritapuri, India
A. Arunraja · Sri Ramakrishna Engineering College, Coimbatore, India
Santosh Ashok · Amity University, Dubai, UAE
Devahsish Attri · Department of Computer Science, Delhi Technological University, Delhi, India
M. R. Baiju · University of Kerala, Kerala Public Service Commission, Thiruvananthapuram, Kerala, India
V. Balamurugan · Department of ECE, Ahalia School of Engineering and Technology, Palakkad, Kerala, India
Sumanta Banerjee · National Institute of Technology Silchar, Silchar, Assam, India
Saykot Kumar Barmon · Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
B. Basaveswara Rao · Computer Center, Acharya Nagarjuna University, Guntur, Andhra Pradesh, India
Kamaladevi Baskaran · Department of Management and Commerce, Amity University, Dubai, UAE
Mahesh Bhandari · Computer Engineering and Technology, Dr. Vishwanath Karad MIT World Peace University, Pune, India
Aruna Bhatt · Department of Computer Science, Delhi Technological University, Delhi, India


Ananda Dessi Budiansa · Information System Management Department, BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta, Indonesia
Stefan Gendita Bunawan · Information System Management Department, BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta, Indonesia
Manoj Challa · CMR Institute of Technology, Bengaluru, India
A. Charumathi · School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
Tahmid Rashik Chowdhury · Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Bangladesh
A. R. Danila Shirly · Department of EEE, Loyola-ICAM College of Engineering and Technology, Chennai, India
Hazim G. Daway · Physics Department, Science College, University of Mustansiriya, Mustansiriya, Iraq
S. Kiruthika Devi · Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, India
Uma Devi · Amrita School of Arts and Sciences, Amrita Vishwa Vidyapeetham, Kochi, India
C. Dhanusha · Research Scholar, Department of MCA, Hindusthan College of Arts and Science, Coimbatore, India
D. Dhinakaran · Information and Communication Engineering, Anna University, Chennai, India
Abhijit Dutta · Department of Commerce, Sikkim University, Gangtok, India
E. Esakki Vigneswaran · Sri Ramakrishna Engineering College, Coimbatore, India
Fawaz Faizi · Department of Computer Science and Applications, Amrita Vishwa Vidyapeetham, Amritapuri, India
Ford Lumban Gaol · Computer Science Department, Binus Graduate Program—Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia
B. Geluvaraj · Garden City University, Bengaluru, India
Mino George · Department of Computer Science, CHRIST (Deemed To Be University), Bengaluru, India
Pronab Ghosh · Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
Greeshma N. Gopal · School of Engineering CUSAT, College of Engineering Cherthala, Kochi, India


Hardev Goyal · Department of Computer Science, Delhi Technological University, Delhi, India
K. P. Guganya · Department of Information Technology, Sri Krishna College of Technology, Coimbatore, India
S. Gunasekaran · Department of CSE, Ahalia School of Engineering and Technology, Palakkad, Kerala, India
Vishal Gupta · University Institute of Engineering and Technology, Panjab University, Chandigarh, India
Vitthal S. Gutte · Computer Engineering and Technology, Dr. Vishwanath Karad MIT World Peace University, Pune, India
Khan Md. Hasib · Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, Bangladesh
Rupa Hiremath · MIT ADT University Pune, Pune, India
Meera Mary Isaac · Department of Computer Science, University of Kerala, Thiruvananthapuram, India
M. Jagadeeswari · Department of Electronics and Communication Engineering, Sri Ramakrishna Engineering College, Coimbatore, India
Kedri Janardhana · Faculty of Engineering, Dayalbagh Educational Institute, Agra, India
F. M. Javed Mehedi Shamrat · Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
A. Jaya Laxmi · Department of Electrical and Electronics Engineering, Jawaharlal Nehru Technological University Hyderabad, Hyderabad, India
S. Jayanthy · Department of ECE, Sri Ramakrishna Engineering College, Coimbatore, India
P. M. Joe Prathap · Department of Information Technology, RMD Engineering College, Kavaraipettai, Tiruvallur, India
M. Jogendra Kumar · Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India
A. M. Jothi · School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
Jamela Jouda · Biological Department, Science College, University of Mustansiriya, Mustansiriya, Iraq
Mamta Juneja · University Institute of Engineering and Technology, Panjab University, Chandigarh, India


J. Karthikeyan · School of Social Sciences and Languages, Vellore Institute of Technology, Vellore, India
M. K. Kavitha Devi · Department of Computer Science & Engineering, Thiagarajar College of Engineering, Madurai, Tamil Nadu, India
Hatem M. Keshk · National Authority for Remote Sensing and Space Sciences (NARSS), Cairo, Egypt
Aliza Ahmed Khan · Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
V. S. Kirthika Devi · Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
V. Kiruthika · Department of ECE, Sri Ramakrishna Engineering College, Coimbatore, India
J. Judeson Antony Kovilpillai · Department of ECE, Sri Ramakrishna Engineering College, Coimbatore, India
Binsu C. Kovoor · School of Engineering CUSAT, College of Engineering Cherthala, Kochi, India
Hans Kristian · Information System Management Department, BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta, Indonesia
A. V. Senthil Kumar · Director, Professor, Department of MCA, Hindusthan College of Arts and Science, Coimbatore, India
M. Suresh Kumar · Department of ISE, Dayananda Sagar College of Engineering, Bangalore, India
Neenu Kuriakose · Amrita Viswa Vidyapeetham, Cochin, India
Yarasu Madhavi Latha · Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, AP, India
Tahseen Falih Mahdi · Physics Department, Science College, University of Mustansiriya, Mustansiriya, Iraq
Anup Majumder · Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh
C. S. Manikandababu · Department of Electronics and Communication Engineering, Sri Ramakrishna Engineering College, Coimbatore, India
Tokuro Matsuo · Graduate School of Industrial Technology, Advanced Institute of Industrial Technology, Japan; Department of M-Commerce and Multimedia Applications, Asia University, Taichung City, Taiwan; Advanced Institute of Industrial Technology, Shinagawa City, Japan; City University of Macau, Xian Xing Hai, Macau; Graduate School of Industrial Technology, Advanced Institute of Industrial Technology, Tokyo, Japan
Megha Mishra · Department of Science and Engineering, Shri Shankaracharya Technical Campus, CSVTU University, Bhilai, Chhattisgarh, India
Vishnu Mishra · Department of Science and Engineering, Shri Shankaracharya Technical Campus, CSVTU University, Bhilai, Chhattisgarh, India
Sayed A. Mohamed · National Authority for Remote Sensing and Space Sciences (NARSS), Cairo, Egypt
Shyamapada Mukherjee · National Institute of Technology Silchar, Silchar, Assam, India
Pramod Mundhe · Computer Engineering and Technology, Dr. Vishwanath Karad MIT World Peace University, Pune, India
Ismail Bin Musirin · Faculty of Electrical Engineering, Universiti Teknologi Mara, Shah Alam, Malaysia; Faculty of Electrical Engineering, Universiti Teknologi Mara, Johor Bahru, Malaysia
H. Vishnu Narayanan · Department of Computer Science and Applications, Amrita Vishwa Vidyapeetham, Amritapuri, India
Ayman H. Nasr · National Authority for Remote Sensing and Space Sciences (NARSS), Cairo, Egypt
Jitendra V. Nasriwala · Babu Madhav Institute of Information Technology, Uka Tarsadia University, Bardoli, India
R. Nethra · Department of Information Technology, Sri Krishna College of Technology, Coimbatore, India
B. Nila · Department of Information Technology, Sri Krishna College of Technology, Coimbatore, India
V. Nisha Jenipher · Department of Computer Science and Engineering, St. Joseph’s Institute of Technology, Chennai, India
S. Niveda · Department of ECE, Sri Ramakrishna Engineering College, Coimbatore, India
Itisha Nowrin · Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
Andi Nugroho · Computer Science Department, BINUS Graduate Program—Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia
Nasser Otayf · College of Engineering, King Khalid University, Abha, Saudi Arabia
Sudhakar Pandey · Department of Information Technology, National Institute of Technology Raipur, Raipur, India


K. Parish Venkata Kumar Department of Computer Applications, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India R. Parvathi School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India Pradip Patil Indira Institute of Management, Pune, India V. V. N. V. Phani Kumar Department of CSE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, AP, India

J. Poornimha Hindusthan College of Arts and Science and KG College of Arts and Science, Coimbatore, India K. S. Prajwal Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India M. Pratapa Raju Department of Electrical and Electronics Engineering, Jawaharlal Nehru Technological Univeristy Hyderabad, Hyderabad, India Danda Pravija Department of Information Technology, National Institute of Technology Raipur, Raipur, India Pangondian Prederikus Information System Management Department BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta, Indonesia S. Princy Suganthi Bai Department of Computer Science, Sarah Tucker College, Tirunelveli, India E. Priyanka Department of EEE, Saranathan College of Engineering, Trichy, India Srilakshmi Puli Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India Prishu Purva Apeejay School of Management, New Delhi, India Sucianna Ghadati Rabiha Computer Science Department, Binus Graduate Program—Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia; Information Systems Department, Binus Online Learning, Bina Nusantara University, Jakarta, Indonesia N. Raghavendra Sai Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India K. Rajammal Department of Computer Science and Engineering, Sir Isaac Newton College of Engineering and Technology, Nagapattinam, India N. Rajasekhar Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad, India


Haritha Rajeev Amrita School of Arts and Sciences, Amrita Vishwa Vidyapeetham, Kochi, India J. Rajeshwari Department of ISE, Dayananda Sagar College of Engineering, Bangalore, India P. Rama Koteswara Rao Department of ECE, NRI Institute of Technology, Agiripalli, Andhra Pradesh, India Rumesh Ranjan Department of Plant Breeding and Genetics, Punjab Agriculture University, Ludhiana, Punjab, India B. Srinivasa Rao Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, AP, India Tagaram Kondalo Rao Faculty of Education, Dayalbagh Educational Institute, Agra, India K. Ravindran Department of Information and Technology, Easwari Engineering College, Chennai, India Amit Rege Medi-Caps University, Indore, India R. Resmi University of Kerala, LBS Institute of Technology for Women, Poojappura, India R. Roshini Department of EEE, Saranathan College of Engineering, Trichy, India S. Sai Kumar Department of IT, Prasad V. Potluri Siddhartha Institute of Technology, Vijayawada, AP, India R. K. Santhia Department of Information Technology, Manakula Vinayagar Institute of Technology, Puducherry, India A. V. Senthil Kumar Department of Computer Application, Hindusthan College of Arts and Science, Coimbatore, India F. M. Javed Mehedi Shamrat Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh P. Shanmuga Sundari Department of Computer Science and Engineering, Sri Venkateswara College of Engineering Technology, Chittoor, Andhra Pradesh, India R. Shanmugapriya Department of ICE, Sri Ramakrishna Polytechnic College, Coimbatore, India Saurabh Sharma Thapar Institute of Engineering & Technology, Patiala, India; University Institute of Engineering and Technlogy, Panjab University, Chandigarh, India Adlin Sheeba Department of Computer Science and Engineering, St. Joseph’s Institute of Technology, Chennai, India


Jitendra Sheetlani Department of Computer Science and Engineering, Sri Satya Sai University of Technology and Medical Sciences, Sehore, India Rokeya Shema Department of Computer Science and Engineering, International University of Business Agriculture and Technology, Dhaka, Bangladesh Abdul Hasib Siddique International University of Scholars, Dhaka, Bangladesh Ravi Sindal IET DAVV, Indore, India M. Sindhuja Department of EEE, Saranathan College of Engineering, Trichy, India Madhabendra Sinha Department of Economics and Politics, Visva-Bharati University, Santiniketan, West Bengal, India A. Siva Sakthi Department of BME, Sri Ramakrishna Engineering College, Coimbatore, India C. H. Smitha Chowdary Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India Benfano Soewito Computer Science Department, Binus Graduate Program— Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia Nidhi Sonkar Department of Information Technology, National Institute of Technology Raipur, Raipur, India G. Sreerag School of Information Technology, Vellore Institute of Technology, Vellore, India S. Srinitha Department of ECE, Sri Ramakrishna Engineering College, Coimbatore, India A. Steffy Jones Department of EEE, Saranathan College of Engineering, Trichy, India C. N. Subalalitha Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, India S. Subbulakshmi Department of Computer Science and Applications, Amrita Vishwa Vidyapeetham, Amritapuri, India P. Sudarsanam BMS Institute of Technology and Management, Bangalore, India Sudimanto Computer Science Department, Bina Nusantara University, Kemanggisan, Palmerah, Jakarta, Indonesia R. Suganya Department of Information Technology, Sri Krishna College of Technology, Coimbatore, India Zakia Sultana Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh Meenatchi Sundaram Garden City University, Bengaluru, India


V. Suresh Babu APJ Abdul Kalam Technological University, Government Engineering College, Wayanad, Kerala, India K. Swathi Department of CSE, NRI Institute of Technology, Agiripalli, Andhra Pradesh, India Purvı H. Tandel Department of Information Technology, C G Patel Institute of Technology, Uka Tarsadia University, Bardoli, India Zarrin Tasnim Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh Md. Shihab Uddin Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh Guntha Raghu Vamshi Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India K. Vamsi Krishna Department of CSE, Koneru Laksmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India A. Venkatesh Department of Information and Technology, Jeppiaar Engineering College, Chennai, India Manoj Verma Computer Science and Engineering, Sri Satya Sai University of Technology and Medical Sciences, Sehore, India Shalini Vermani Apeejay School of Management, New Delhi, India Harco Leslie Hendric Spits Warnars Computer Science Department, Binus Graduate Program—Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia Yohanes Paul Weniko Information System Management Department BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta, Indonesia M. Wilscy Department of Computer Science, University of Kerala, Thiruvananthapuram, India Amit Yadav Department of Information and Software Engineering, Chengdu Neusoft University, Chengdu, China A. Yuvarani School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India

The Implementation of Failure Mode and Effects Analysis (FMEA) of the Information System Security on the Government Electronic Procurement Service (LPSE) System

Muhammad Aldenny, Hans Kristian, Ford Lumban Gaol, Tokuro Matsuo, and Andi Nugroho

Abstract This research addresses the problem of information system security and risk management using the failure mode and effects analysis (FMEA) method. Although popular in the field of industrial engineering, the FMEA method is still rarely used in research on information system objects, so it is worth exploring its use in information system security further. The object of the research reported in this paper is the government electronic procurement service (LPSE) Web site, which is suspected to be vulnerable to hacking. The variables measured in this study were the occurrence, severity (impact), and detection (detection or prevention) of each failure mode. The research data were taken mostly from direct observations. The results of data processing indicate a high level of vulnerability: during the data collection period, there were at least four security holes that could potentially cause a minimum of seven potential information system failures.

M. Aldenny (B) · H. Kristian Information System Management Department, BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] H. Kristian e-mail: [email protected] F. L. Gaol · A. Nugroho Computer Science Department, BINUS Graduate Program—Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia e-mail: [email protected] A. Nugroho e-mail: [email protected] T. Matsuo Graduate School of Industrial Technology, Advanced Institute of Industrial Technology, Tokyo, Japan e-mail: [email protected] Department of M-Commerce and Multimedia Applications, Asia University, Taichung City, Taiwan © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_1


The highest security priority focus is on system availability and system display consistency, with risk priority numbers of 576 and 400, respectively. Security recommendations for the government electronic procurement service (LPSE) Web site are also presented in this study.

Keywords Data security · Network security · Risk management · FMEA · Information system security · System failure

1 Introduction

The use of information technology (IT) is currently increasing rapidly, as IT is now used to carry out important activities. To support this, adequate IT management is needed so that the IT infrastructure can contribute to the success of the company or organization in achieving its objectives. Certainly, all forms of IT utilization and processing cannot be separated from threats to data integrity and network security. Any organization, whether profit or non-profit, government or private, large or small, will face internal and external problems in pursuing its goals, vision, and mission. These problems cause uncertainty that can affect the goals of the organization; this uncertainty is known as risk. David Vose states that risk is a negative effect of the likelihood of an unpredictable event on the achievement of an organizational goal [1]. Risk management can be applied to minimize the effects of these risks. According to Emmett Vaughan and Therese Vaughan, risk management is a scientific approach that aims to handle all risks by anticipating possible losses and implementing procedures that minimize the occurrence of losses [2]. There are various methods and tools for risk management analysis; one that is quite popular is failure mode and effects analysis (FMEA). FMEA is a structured method that can be used to identify and prioritize failure modes and then prevent them as much as possible, and it can be used to trace the sources of failure and quality problems [3]. A failure mode here is anything that falls into the category of a defect, such as a defect in the design process, a condition outside the specified requirements, or anything that causes the resulting product not to function, or to function but not as intended. This study seeks to explore the use of the FMEA method on information system objects. This choice of object is interesting because security and risk management issues in information systems are often not prioritized: a common problem is the difficulty of getting the information system owner or manager to invest in security. In 1997, Information Week magazine [4] announced the results of a survey of 1271 information system managers in the USA; only 22% of the respondents considered information system security to be vital. Most of them tended to prioritize cost reduction and competitiveness over security, even though the actual cost of repairing an information system after it has been hacked can be far greater.


Although security problems are often seen as intangible, something that cannot be directly measured in currency, the cost of security holes in an information system can be measured in real, tangible terms, for example, the losses caused by an information system that is offline and out of service for several hours. The main reason for choosing FMEA for risk management analysis is that FMEA is very useful for identifying failures: it captures potential failures, risks, and impacts and prioritizes them with a priority number called the risk priority number (RPN), which ranges from 1 to 1000. The RPN is obtained by multiplying severity, occurrence, and detection, each of which is rated on a scale of 1–10, so the maximum RPN value is 1000. This RPN value is easy to compute and realistic [5, 6]. E-procurement is the procurement of goods or services online via the Internet, where the entire process of announcement, registration, bidding, analysis, and evaluation of offers is carried out using information technology facilities [7]. E-procurement can be done in two ways: e-tendering and e-purchasing. Before the concept of e-procurement, the procurement of goods and services used a manual method in which the parties involved, such as the providers of goods and services and the procurement committee, met directly [8]. This manual process has advantages and disadvantages: the users and providers of goods and services can observe the procurement process together, but conventional procurement is less effective in terms of time and cost. From this point of view, the government finally took the positive step of implementing e-procurement for all government agencies. To support procurement activities, several government agencies established an Electronic Procurement Service Center (LPSE) [9], which manages everything related to the electronic procurement of government goods and services. The electronic procurement service (LPSE) is implemented as an electronic procurement system that facilitates the electronic auction process. The electronic procurement system application (SPSE) is an e-procurement application developed by the government goods and services procurement policy agency (LKPP) for use by agencies throughout Indonesia [10]. However, utilizing information technology in the procurement system also creates obstacles: an electronic procurement system based on information technology is by nature vulnerable in terms of information system security, so in practice this e-procurement system is often disrupted by various cyberthreats. Two factors invite these threats: LPSE handles the procurement of goods, so it is a prime target for parties interested in winning project tenders, and the network security of LPSE is not as strong as that of the private sector [11, 12]. The cyberthreat that most often disrupts the e-procurement system is hacker attacks. Recorded events include server damage due to hacker attacks, which forced many packages in the auction process to be re-tendered.


Attacks have also caused the Web site to become inaccessible so that documents could not be downloaded, prevented some users from logging in to the LPSE system during the auction process, and made the system die frequently [13]. If this problem cannot be fixed on an ongoing basis, it will pose a risk to the sustainability of the system as well as to the image of the government in society [14]. Various attempts have been made to involve staff in the safety of the Web site, such as holding computer training workshops for the employees assigned to manage it, but these are still considered insufficient to meet the overall Web site security targets, while the principle of the government electronic procurement service (LPSE) Web site is that it should be easily accessible with guaranteed availability, confidentiality, and integrity [15, 16]. The purpose of this research is to explore the use of the FMEA method in information systems and to identify potential disruptions and problems in the government electronic procurement service (LPSE) information system. The output of this research is expected to provide control recommendations for information security risk management, security policies, and standard operating procedures. So that the discussion is not too broad, the authors limit it to the evaluation of information security risk management of the government electronic procurement service (LPSE) Web site using FMEA as the analysis method.

2 Research Methods

2.1 Data Collection and Processing

The data used in this study are primary data obtained directly from observation of the object under study. Data were gathered using several methods, including observation of the government electronic procurement service (LPSE) Web site. The data collection techniques used in this study consisted of four types, namely:

(a) Study of literature. The first step is to study the theories and topics to be discussed. In this process, all theories related to the topic of information system security are collected from various sources: books, journals, the Internet, and so on.
(b) Direct observation. Direct observation of the system aims to learn how data flow into information on the government electronic procurement service (LPSE) Web site. The next part concentrates on examples of failures or discrepancies that occur, along with their types and causes.
(c) Historical data collection. The historical data collection stage is the most important in this study, because the historical data reveal the types of non-conformities, their source process, their number and specifications, and the consequences of each non-conformity, which are essential data to be analyzed.
(d) Interview. Interviews were conducted with various parties related to the government electronic procurement service (LPSE) information system, including system developers, system analysts, and system users, to collect all the data needed in this study. This is intended to give a picture of the quantity and types of failures that have occurred, along with their sources and causes, based on the knowledge of each person.

2.2 Research Stages

The flow of this research is shown in Fig. 1, step by step from the beginning to the end of the research process. The research flow starts with an analysis of existing processes by reviewing the information system flowchart of the object under study. The results of the flowchart analysis are expected to help identify security holes in the system; at this stage, interviews with the relevant resource persons are conducted to collect data. Next, the data collected from each process in the system flowchart are analyzed to form a list of potential failures that may occur. In this stage, the researcher conducts a literature study, referring to the literature on risk management and information system security, to uncover more potential failures. After the list of potential failures is formed, the next step is to collect data on the frequency of occurrence of each potential failure, based on direct observation. This frequency list is later used as a reference for a Pareto diagram, which is useful for prioritizing the potential failures: the highest priority is the failure that appears most often during the data collection interval. After the Pareto diagram is made, the next step is an Ishikawa diagram, better known as a fishbone diagram, which is used to look for the causes of each potential failure; here, a literature study and literature search are conducted to compile the list of causes. The next stage after the Ishikawa diagram is the calculation of the risk priority number (RPN), the product of severity, occurrence, and detection [17]. The RPN value lies in the range 1–1000; the higher the RPN, the higher the risk of failure. After the RPN value is calculated, the next step is to form a list of recommended actions, covering all kinds of ways to reduce the RPN value, such as prevention, detection, and the overcoming of potential failures.


Fig. 1 Research flow

After the recommended actions are carried out, the data collection stage for the frequency of potential failures is repeated to recalculate the RPN value. A significant decline in the RPN value is then expected, which also means a more secure, stable, and reliable information system.

2.3 Research Variables

The study uses one dependent variable, the risk priority number (RPN), and three independent variables: occurrence, severity, and detection. The RPN value is the product of the occurrence, severity, and detection variables (RPN = S × O × D). In general, because the FMEA method is more often used in the realm of industrial engineering, all of these variables are measured per unit of production; here, because the object under study is an information system, the units are adjusted accordingly.


The author formulates new variable scales, especially for severity and occurrence, to better suit the context of an information system. The occurrence, severity, and detection variables take ordinal values from 1 to 10. The adjusted scales can be seen in Tables 1 and 2.

Table 1 Variable severity rating scale

Scale | Severity | Information
10 | System crash | The whole system crashes; the operating system must be restarted
9 | Program crash | Application program crashes, hangs, or force-closes
8 | Non-functioning | Program features do not function at all and affect the output information produced
7 | Non-functioning | Program features do not function at all
6 | Incorrectly functioning | Program features work but are invalid or inaccurate
5 | Incorrectly functioning with workaround | Program features function but not according to usage rules or specifications
4 | Performance cost | Impact of decreased program performance
3 | Efficiency cost | Program is inefficient in CPU, memory, network, or power usage
2 | Cosmetic damage | Impact on the appearance (user interface) of the back-end and front-end systems
1 | Cosmetic damage | Impact on the appearance (user interface) of the back-end system

Table 2 Scale of occurrence assessment variables

Scale | Category | Information
10 | Extreme high | More than 5% of the system uptime
9 | Very high | More than 4.5% of the system uptime
8 | High | More than 4% of the system uptime
7 | Medium high | More than 3% of the system uptime
6 | Medium | 2.5–3% of the system uptime
5 | Medium | 2–2.4% of the system uptime
4 | Medium low | More than 2% of the system uptime
3 | Low | More than 1% of the system uptime
2 | Very low | More than 0.5% of the system uptime
1 | Remote | More than 0.1% of the system uptime


3 Result and Discussion

3.1 Potential List of Failures

After analyzing the information system of the government electronic procurement service (LPSE) Web site through direct inspection of the system and examination of its flowcharts and related documents, the authors formulated a list of potential failures that may occur. The results of the analysis are as follows:

(a) System administration cannot be accessed
(b) System not available
(c) Data do not appear after inputting
(d) Display changes/does not match
(e) The service menu cannot be accessed
(f) Failed to connect to the database
(g) The loading time is too long
(h) Potential upload failure

Based on the results of the analysis, the running Web site has several vulnerability gaps, including:

(a) There are third-party add-ons that have never been updated and that are potential security holes.
(b) Many service functions are provided. This not only places a heavy access load on the old server but also makes the Web site difficult to maintain.
(c) The information exposed to the public is too detailed. This is of no use to ordinary users but can be useful to hackers.

In addition, there is plenty of code duplication between menus and databases, leading to high coupling between classes, so the Web site is difficult to maintain and potentially contains security holes. Input data on the forms are not sanitized first (data sanitization), which makes the Web site very vulnerable to code injection attacks.
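To illustrate the data sanitization point, the following is a minimal sketch, not taken from the LPSE system itself, of how form input can be handled safely in Python. The table name, field names, and regular expression are hypothetical; the key idea is to validate input against an allow-list and to pass values to the database driver as bound parameters instead of concatenating them into the SQL string.

```python
import re
import sqlite3  # stand-in for the actual database driver

# Allow-list validation: reject input that does not match the expected shape.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def find_vendor(conn: sqlite3.Connection, username: str):
    """Look up a vendor row by username with sanitized, parameterized input."""
    if not USERNAME_RE.match(username):
        raise ValueError("invalid username format")
    # The ? placeholder lets the driver bind the value safely, so input
    # such as "x' OR '1'='1" cannot alter the query structure.
    cur = conn.execute(
        "SELECT id, name FROM vendors WHERE username = ?", (username,)
    )
    return cur.fetchone()
```

The same pattern, validation at the boundary plus parameter binding, applies to any driver or framework the site might use; it is the absence of both steps that the analysis above flags as a vulnerability.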

3.2 Pareto and Ishikawa Diagrams

The Pareto diagram is one tool that can be used to identify the priority problems that must be solved first: the problems that occur most frequently are the top priority for action. In this study, the Pareto diagram is used to identify the problems that are the top priority. Pareto diagrams are drawn as bar charts showing events sorted by frequency of occurrence, starting from the problem whose frequency is highest down to the least frequent. This Pareto diagram is based on the collected data and the frequency of occurrence of the failure modes.


Fig. 2 Potential Pareto failure diagram

From these, the failure modes to treat as the main priority are chosen using the Pareto principle, which states that around 80% of effects originate from 20% of causes [18]. Figure 2 shows the Pareto diagram of the failure frequencies. The next step after the Pareto diagram is the Ishikawa diagram, created here through a literature study. The fishbone diagram results are then used in the FMEA to analyze the causes of each selected failure mode. The causes of failure are grouped into their respective categories and drawn as fish bones, while the line in the middle represents the failure mode. Figure 3 shows the resulting Ishikawa diagram.
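As an illustration of how such a Pareto diagram can be produced, the following is a minimal sketch in Python. The failure-mode names echo the list in Sect. 3.1, but the frequency counts here are invented placeholders, not the observed LPSE data.

```python
import matplotlib.pyplot as plt

# Hypothetical frequency counts per failure mode (illustrative only).
failures = {
    "System not available": 40,
    "Display changes": 25,
    "Failed to connect to database": 12,
    "Loading time too long": 8,
    "Service menu inaccessible": 5,
}

# Sort failure modes from most to least frequent.
items = sorted(failures.items(), key=lambda kv: kv[1], reverse=True)
labels = [name for name, _ in items]
counts = [count for _, count in items]
total = sum(counts)

# Cumulative percentage, used to spot the "vital few" (~80%) failure modes.
cumulative, running = [], 0
for c in counts:
    running += c
    cumulative.append(100 * running / total)

fig, ax = plt.subplots()
ax.bar(labels, counts)
ax.set_ylabel("Frequency")
ax.tick_params(axis="x", labelrotation=30)

ax2 = ax.twinx()
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.axhline(80, linestyle="--", color="gray")  # 80% reference line
ax2.set_ylabel("Cumulative %")

plt.tight_layout()
plt.show()
```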

3.3 Calculation of RPN

After determining the priorities with the Pareto diagram and the list of causes with the Ishikawa diagram, the next step is to calculate the risk priority number (RPN), the product of the severity, occurrence, and detection variables. Two failure modes are the priorities in the Pareto diagram: the system is not available, and the display changes. Both have a very fatal impact on the functioning of the system; moreover, a changed Web site display does not only affect the overall functioning of the system but can also cause psychological effects. Both failure modes are also difficult to detect on the current system because there is no logging facility. The results of the RPN calculation can be seen in detail in Table 3: the system-not-available failure mode has an RPN of 576, and the changed-display failure mode has an RPN of 400. The RPN value lies in the range 1–1000; the higher the RPN value, the greater the potential risk.


Fig. 3 Fishbone diagram of the "system not available" failure mode

After the RPN calculation, this study also produced a list of recommended actions containing all types of actions with the potential to reduce the RPN value, whether by minimizing the impact and frequency of events or by detecting early and preventing the failure modes. It is expected that after these recommended actions are taken, the RPN value can be reduced significantly.
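The RPN arithmetic behind Table 3 is simple enough to express directly. The following is a minimal sketch using the severity, occurrence, and detection scores reported in this study; only the function and variable names are invented.

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number: the product of the three 1-10 ordinal scores."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("each score must be on the 1-10 scale")
    return severity * occurrence * detection

# Scores reported for the two priority failure modes:
print(rpn(9, 8, 8))   # system unavailable           -> 576
print(rpn(10, 5, 8))  # inconsistent system display  -> 400
```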


Table 3 RPN calculation results

Potential failure: System unavailable
Impact: Total system failure
S = 9, O = 8, D = 8, RPN = 576
Recommended actions:
– Use a cloud server to minimize downtime
– Use a firewall to prevent DDoS attacks
– Use SSL or HTTPS connections for sensitive information
– Change the system to a service-based application

Potential failure: System display is not consistent
Impact: Total system failure and psychological impact
S = 10, O = 5, D = 8, RPN = 400
Recommended actions:
– Conduct data sanitization, so that the input data are not dangerous
– Provide a log file to record important activities related to the user, the time, and update changes
– Error messages do not need to be displayed in detail
– Access rights management must be restricted, monitored, and properly managed

4 Conclusion

This research has conducted an information system security audit of the government electronic procurement service (LPSE) Web site. Based on the analysis, various security holes have been described in this study. The results also indicate a high level of vulnerability, with risk priority number (RPN) values in the range of 40–60% of the maximum, and various recommendations have been elaborated to reduce the level of vulnerability. This research has contributed a theory for scaling the severity and occurrence variables in RPN measurement so that it can be applied to information system objects. However, this study has a limitation: the RPN values could not be re-tested after the list of recommended actions was given. An RPN recalculation is needed to confirm whether the vulnerability level of each failure mode has been reduced or not. The implication of this research is the applicability of the FMEA method to information system objects, and it is hoped that in the future other researchers will further explore the calculation of the occurrence and detection variables with more objective scales.


References

1. Vose D (2008) Risk analysis: a quantitative guide, 3rd edn. Wiley, New Jersey
2. Vaughan E, Vaughan T (2013) Fundamentals of risk and insurance. Wiley, New Jersey
3. Lipol LS (2011) Risk analysis method: FMEA in the organizations. Int J Basic Appl Sci IJBAS XI(5):49–57
4. Dikmen C (1997) Information week. UBM Tech, San Francisco
5. May D (2015) The return of innovation. Cambridge J 11–17
6. Anisa (2010) Evaluasi dan analisis waste pada proses produksi kemasan dengan menggunakan methode FMEA. Jurnal Fakultas Teknik Industri Universitas Indonesia 50–62
7. Jasin M (2007) Mencegah korupsi melalui e-procurement. Komisi Pemberantasan Korupsi, Jakarta, p 3
8. Joshi G, Joshi H (2014) FMEA and alternatives versus enhanced risk assessment mechanism. Int J Comput Appl 93(14):2
9. Puspitasari NB, Martanto A (2014) Penggunaan FMEA dalam mengidentifikasi resiko kegagalan proses produksi sarung atm. JaTI Undip IX(2):93–95
10. Hanif R, Rukmi HS (2015) Perbaikan kualitas produk Keraton Luxury di PT. X dengan menggunakan metode failure mode and effect analysis dan fault tree analysis. J Online Institut Teknologi Nasional III(3):137–147
11. Mayangsari DF, Adianto H, Yuniati Y (2015) Usulan pengendalian kualitas produk isolator dengan metode failure mode and effect analysis (FMEA) dan fault tree analysis (FTA). Jurnal Online Institut Teknologi Nasional III(2):81–91
12. Hanif RY, Rukmi HS, Susanty S (2015) Perbaikan kualitas produk Keraton Luxury di PT. X dengan menggunakan metode (FMEA) dan (FTA). Jurnal Online Institut Teknologi Nasional III(3):137–147
13. Widyarto WO, Dwiputra GA, Kristiantoro Y (2015) Penerapan konsep FMEA dalam pengendalian kualitas produk dengan menggunakan metode six sigma. Jurnal Rekayasa Dan Teknik Inovasi Industri III(1):13–23
14. Desy I, Hidayanto BC, Maria Astuti H (2014) Penilaian risiko keamanan informasi menggunakan metode failure mode and effects analysis di divisi TI PT. Bank XYZ Surabaya. In: Seminar Nasional Sistem Informasi Indonesia, Surabaya
15. Mahersmi BL, Muqtadiroh FA, Hidayanto BC (2016) Analisis risiko keamanan informasi dengan menggunakan metode OCTAVE dan kontrol ISO 27001 pada Dishubkominfo Kabupaten Tulungagung. In: Seminar Nasional Sistem Informasi Indonesia, Surabaya
16. Neubauer T, Pehn M (2010) Workshop-based security safeguard selection. Int J Adv Secur III(3):123–134
17. Martin H, Priscila L (2011) The world's technological capacity to store, communicate and compute information. Science 332(6025):60–65
18. Ankunda K (2011) The application of the Pareto principle in software engineering, pp 1–12

MQTT Attack Detection Using AI and ML Algorithm

Neenu Kuriakose and Uma Devi

Abstract IoT networks are increasingly popular nowadays for monitoring critical environments of diverse nature, significantly increasing the amount of data exchanged. Because of the large number of connected IoT devices, the security of such networks and devices is a major concern. Detection systems play a critical part in the cybersecurity field: based on innovative algorithms such as machine learning, they can detect or anticipate cyberattacks and thus protect the underlying infrastructure. However, specific datasets are needed to train detection models. The proposed work uses MQTTset, a dataset focused on the MQTT protocol, which is widely adopted in IoT networks. The creation of the dataset is described, together with its validation through a hypothetical detection system that combines the legitimate dataset with cyberattacks against the MQTT network. The obtained results demonstrate how MQTTset can be used to train machine learning models for implementing detection systems in IoT settings.

Keywords MQTTset · Internet of things · Machine learning algorithm · Firefly algorithm · Random forest

1 Introduction

Internet of Things (IoT), or machine-to-machine communication over the Internet, is a concept that allows devices to communicate over the Internet. The number of IoT devices has grown: Cisco IBSG forecasts that the number of IoT devices will reach 50 billion by 2020 [1], and other research predicts that by 2020 the Internet of Things will consist of 20.4 billion units [2]. IoT plays an important role in smart city implementations, such as smart homes, smart transportation, and smart parking. In today's IoT devices, a variety of protocols are used as communication protocols.

N. Kuriakose (B) · U. Devi Amrita Vishwa Vidyapeetham, Cochin, India U. Devi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_2


The five most prominent protocols used for IoT are Hypertext Transfer Protocol (HTTP), Constrained Application Protocol (CoAP), Extensible Messaging and Presence Protocol (XMPP), Advanced Message Queuing Protocol (AMQP), and MQ Telemetry Transport (MQTT) [3]. Several considerations are taken into account when selecting a protocol: energy efficiency (total energy consumed over the assumed execution period), performance (total transmission time needed to send messages and receive acknowledgments), resource usage (CPU, RAM, and ROM usage), and reliability (the ability to avoid packet loss, for example, through QoS) [4]. Moreover, when advanced functionalities (e.g., message persistence, wills, and exactly-once delivery), reliability, and the ability to secure multicast messages are highly valued, the MQTT protocol is probably one of the best options [5]. The remaining article is structured as follows: Sect. 2 reports the MQTT protocol; Sect. 3 defines in detail the methods and materials of the proposed work; Sect. 4 presents the performance metrics; Sect. 5 analyzes the results; Sect. 6 gives the discussion; finally, Sect. 7 concludes the research work.

2 MQTT Protocol

MQ Telemetry Transport (MQTT) is a connectivity protocol using a publish/subscribe mechanism, originally designed by Andy Stanford-Clark and Arlen Nipper. It is currently maintained by the Organization for the Advancement of Structured Information Standards (OASIS), and the MQTT protocol also has a standard defined in ISO/IEC 20922:2016 (Information technology—Message Queuing Telemetry Transport (MQTT) v3.1.1). This protocol is used widely in resource-constrained IoT systems for several reasons: it is lightweight, has small bandwidth requirements, and is open and straightforward to implement [6].

2.1 IoT and MQTT Protocol

For Internet of Things (IoT) devices, connecting to the Internet is more or less a requirement, as it allows the devices to work with one another and with backend services. The basic network protocol of the Internet is TCP/IP. MQ Telemetry Transport (MQTT), which is built on top of the TCP/IP stack, has become the standard for IoT communications. MQTT can also run over SSL/TLS, a secure protocol built on TCP/IP, to guarantee that all data communication between devices is encrypted and secure. MQTT was originally invented and developed by IBM in the late 1990s; an early application was to connect pipeline sensors to satellites.


It is a messaging protocol that supports asynchronous communication between parties. An asynchronous messaging protocol decouples the message sender and receiver in both space and time, making it suitable for unreliable network environments. Despite its name, it has nothing to do with message queues; it generally operates on a publish-and-subscribe model. It became an OASIS open standard in late 2014, and it is supported in mainstream programming languages by a variety of open-source implementations.
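To make the publish/subscribe model concrete, the following is a minimal sketch using the open-source Eclipse Paho Python client (v1 API), one of the implementations alluded to above. The broker address and topic name are placeholders; any MQTT broker, such as a local Eclipse Mosquitto instance, would do.

```python
import paho.mqtt.client as mqtt

BROKER = "localhost"        # placeholder: e.g., a local Mosquitto broker
TOPIC = "home/temperature"  # placeholder topic name

def on_connect(client, userdata, flags, rc):
    # Subscribe once the TCP/IP connection to the broker is established.
    client.subscribe(TOPIC, qos=1)

def on_message(client, userdata, msg):
    # Called asynchronously for every message published on the topic.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)

# Publish a sample reading; sender and receiver are fully decoupled.
client.publish(TOPIC, payload="21.5", qos=1)
client.loop_forever()
```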

2.2 Security Overview of MQTT

As previously mentioned, MQTT features different security mechanisms, yet most of them are not configured or provided by default, for example, data encryption or device authentication. Authentication mechanisms exist, for example, using the physical address (MAC) of the device, and are controlled by the broker by registering a device's information once it attempts to connect. Access authorization can be done by the broker using a mechanism called an access control list (ACL). The ACL, as the name implies, contains records of information such as the identifiers and passwords of the various clients that are allowed to access various objects, and it can also specify what each client may perform on those objects. As indicated in Reference [4], confidentiality is a significant requirement of a secure system and can be accomplished at the application layer by encrypting the message to be published. This encryption can be implemented either client-to-broker or end-to-end. In the client-to-broker case, the broker decrypts the data being published to a topic and separately encrypts the values it needs to send to the other clients. In the end-to-end case, the broker cannot decrypt the data being published to topics and simply forwards the ciphertext to the other devices. With the latter strategy, the broker needs fewer computational resources and less energy, as it merely functions as a messenger and does not need any additional modules to encrypt or decrypt messages. There has been a lot of existing work concentrated on IoT-based attack detection [7, 8].
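As a sketch of how these mechanisms look on the client side with the Paho Python client, the snippet below enables TLS encryption and username/password authentication. The certificate path and credentials are placeholders, and the broker must be configured separately (e.g., a Mosquitto listener with TLS and an ACL file) for these options to take effect.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-01")  # placeholder client identifier

# Username/password authentication, checked by the broker (possibly via an ACL).
client.username_pw_set("sensor-01", password="placeholder-secret")

# TLS encryption on top of TCP/IP; the CA certificate path is a placeholder.
client.tls_set(ca_certs="/etc/mosquitto/certs/ca.crt")

client.connect("broker.example.local", 8883, keepalive=60)  # 8883 = MQTT over TLS
client.publish("home/door", payload="open", qos=1)
client.loop_forever()
```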

3 Methods and Materials

3.1 MQTTset

MQTTset, a dataset related to the IoT environment and specifically to the MQTT communication protocol, was used in the proposed work to provide a base dataset that the research and industrial community can use in their applications.


Fig. 1 Proposed MQTT attack detection architecture

The dataset is formed by MQTT-based IoT sensors, where every component of a real network is defined. The MQTT broker is instantiated using Eclipse Mosquitto, and the network is composed of eight sensors. The scenario is analogous to a smart home environment in which sensors collect data on temperature, light, humidity, CO gas, motion, smoke, door, and fan with varying time intervals, since each sensor's behavior differs from the others.

3.2 Proposed MQTT Attack Detection Architecture

In this proposed work, the MQTT attack is detected with a novel combination of firefly-based feature selection and random forest classification. The work also uses label-encoding-based feature encoding (Fig. 1).

3.3 Data Discovery

In this work, the MQTTset dataset has been used for the evaluation of attacks on the IoT environment. There were 2000 instances recorded, of which 1400 were used for training and 600 for testing.


3.4 Feature Encoding

Feature encoding is the process of mapping non-numeric features to numeric values. Datasets used in the field of intrusion detection typically contain continuous, discrete, and symbolic features. Most machine learning algorithms are designed to work with numeric values and are therefore incompatible with symbolic features. Consequently, an encoding scheme must be used to map all symbolic features to numeric values; in this work, label encoding has been used. Label encoding is a popular encoding technique for handling categorical variables, in which each label is assigned a unique integer based on alphabetical ordering.
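A minimal sketch of label encoding with scikit-learn is shown below. The protocol-name values are invented examples standing in for the symbolic features of the dataset.

```python
from sklearn.preprocessing import LabelEncoder

# Invented symbolic feature values standing in for a dataset column.
protocols = ["mqtt", "tcp", "mqtt", "http", "tcp"]

encoder = LabelEncoder()
encoded = encoder.fit_transform(protocols)

# Each label gets a unique integer based on alphabetical ordering:
# http -> 0, mqtt -> 1, tcp -> 2
print(list(encoder.classes_))  # ['http', 'mqtt', 'tcp']
print(list(encoded))           # [1, 2, 1, 0, 2]
```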

3.5 Feature Selection

Feature selection is a critical step in mining high-dimensional data: retaining the features that preserve the data structure while discarding redundant features is essential to improve the final performance of classification methods [9]. At the same time, a precise understanding of feature domains may require human intervention to balance the importance of structure-based features against those guided by human expertise [10]. In this work, the firefly feature selection algorithm [11] has been utilized. It is based on the biochemical and social aspects of real fireflies, which produce short, rhythmic flashes that help them attract mating partners and serve as a protective warning mechanism. The firefly algorithm (FA) couples this flashing behavior with the objective function of the problem to be optimized. Three idealized rules underlie the basic formulation of FA: (1) all fireflies are unisex, so fireflies attract one another regardless of sex; (2) attractiveness is proportional to brightness, which decreases as the distance between two fireflies increases, so the less bright firefly moves toward the brighter one, and if no brighter firefly can be found it moves randomly; (3) the brightness of a firefly is determined by the landscape of the objective function.
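The following is a simplified sketch of binary firefly feature selection, not the exact formulation of [11]: each firefly holds a real-valued position per feature, brightness is the cross-validated accuracy of a classifier trained on the features whose sigmoid-transformed position exceeds 0.5, and dimmer fireflies move toward brighter ones with the standard attractiveness update. The population size, constants, and classifier are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def mask(pos):
    """Binarize a real-valued position: keep feature j if sigmoid(pos_j) > 0.5."""
    return 1 / (1 + np.exp(-pos)) > 0.5

def brightness(pos, X, y):
    """Objective: cross-validated accuracy on the selected feature subset."""
    m = mask(pos)
    if not m.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, m], y, cv=3).mean()

def firefly_select(X, y, n_fireflies=8, n_iter=15, beta0=1.0, gamma=0.1, alpha=0.2):
    d = X.shape[1]
    pos = rng.normal(size=(n_fireflies, d))
    light = np.array([brightness(p, X, y) for p in pos])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:  # move dimmer firefly i toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(d) - 0.5)
                    light[i] = brightness(pos[i], X, y)
    return mask(pos[np.argmax(light)])  # boolean mask of selected features
```

Applied to an encoded MQTTset feature matrix, `firefly_select(X, y)` would return a boolean mask of the columns to retain before classification.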

3.6 Classification

Classification is the process of finding a model that describes and distinguishes data classes and concepts: the problem of identifying to which of a set of classes (subpopulations) a new observation belongs, on the basis of a training set of data containing observations whose class membership is known. In this work, three classification algorithms were applied to detect the MQTT attack.


The performance metrics of the three algorithms are compared for each classifier in the form of accuracy and ROC curve. In the proposed MQTT attack detection, random forest is used for classification. Random forest [12] is an ensemble classifier built from decision trees. It comprises many trees; to classify a new instance, the instance is passed down every decision tree, the forest gathers the individual classifications, and the majority vote is selected as the result. Each tree is trained on data sampled from the initial dataset, and a random subset of features is chosen at each node to grow the tree. Each tree is grown without pruning; essentially, random forest enables many weak or weakly correlated classifiers to form a strong classifier.
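A minimal sketch of this classification stage with scikit-learn is given below, assuming the encoded and feature-selected MQTTset matrix is already available and is split 1400/600 as described in Sect. 3.3. The file name and column choices are placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Placeholder: an already encoded and feature-selected MQTTset extract.
data = pd.read_csv("mqttset_encoded.csv")
X, y = data.drop(columns=["target"]), data["target"]

# 1400 training / 600 testing instances, as in the paper's setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=1400, test_size=600, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))
```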

4 Performance Metrics

4.1 Confusion Matrix

The confusion matrix is a useful tool containing information about the effectiveness of an IDS model in binary and multiclass classification. The matrix contains true positives (TP: attack data correctly classified as attack), false negatives (FN: attack data wrongly classified as normal), false positives (FP: normal data wrongly classified as attack), and true negatives (TN: normal data correctly classified as normal).

4.2 Accuracy

Accuracy is the proportion of correctly classified normal and attack data over the total number of classified data, given as follows:

Accuracy (ACC) = (TP + TN) / (TP + TN + FP + FN)
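As a sketch, the same formula computed from the confusion-matrix counts (the counts here are invented for illustration):

```python
# Invented confusion-matrix counts for illustration.
TP, TN, FP, FN = 290, 295, 5, 10

accuracy = (TP + TN) / (TP + TN + FP + FN)
print(f"ACC = {accuracy:.3f}")  # ACC = 0.975
```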

5 Result and Analysis

The MQTTset dataset was used for testing, with 2000 instances in total: training was completed with 1400 instances and testing with 600 instances. The selected features for attack detection are listed in Table 1 (Fig. 2).


Table 1 Feature selection after firefly algorithm (the original table is a numeric dump of the feature vectors retained after firefly-based selection; its column structure is not recoverable from the extracted source)

Fig. 2 Accuracy rate with confusion matrix for target class of testing and training


5.1 Feature Selection

A training accuracy of 100% was achieved with the proposed random forest model of MQTT attack detection, and the testing accuracy of the proposed work reached 99.7%. Figures 3 and 4 show the recall, precision, accuracy, and F-measure performance metrics of the proposed model for the MQTT attack. The proposed work is compared with previous work using the SVM and K-means algorithms for classifying the MQTT attack in MQTTset (Table 2).

Fig. 3 Recall and accuracy value of proposed MQTT attack detection model

Fig. 4 F-score and detection time value of proposed model

Table 2 Performance comparison of proposed work with SVM and K-means

Method | Accuracy | F1 score | Recall | Detection time
SVM | 99 | 99 | 98 | 5.6
K-means | 98 | 97 | 95 | 4.3
Random forest | 99.8 | 100 | 100 | 2.37


6 Discussion

The main objective of the work is to detect MQTT attacks in the IoT environment, and plenty of techniques have been applied to detect MQTT attacks efficiently at each stage [7, 8, 11, 13]. The MQTTset dataset is used for data discovery. Label encoding is used for feature encoding, which reduces string computation time. The firefly algorithm is used for feature selection, and the random forest classification algorithm is compared with SVM and K-means. In the future, this work can be extended to real-time data discovery, and a mobile-based alert system will be developed with ensemble learning.

7 Conclusion

The proposed work used the MQTTset dataset to detect MQTT attacks in the IoT environment. The dataset includes MQTT-related attacks and is intended for use by the security community. The proposed work used the firefly and random forest algorithms, integrating several techniques to select features and classify MQTT attacks from network behavior. The work includes a comparative study against the K-means and SVM models on the same performance metrics. The performance results show that MQTT-based IoT attacks are detected most efficiently with the random forest algorithm.

References

1. Li Y, Chi Z, Liu X, Zhu T (2018) Chiron: concurrent high throughput communication for IoT devices. In: Proceedings of the 16th annual international conference on mobile systems, applications, and services, Munich, Germany, 11–15 June 2018, pp 204–216
2. Farivar F, Haghighi MS, Jolfaei A, Alazab M (2019) Artificial intelligence for detection, estimation, and compensation of malicious attacks in nonlinear cyber-physical systems and industrial IoT. IEEE Trans Ind Inf 16:2716–2725
3. Karimipour H, Dehghantanha A, Parizi RM, Choo KKR, Leung H (2019) A deep and scalable unsupervised machine learning system for cyber-attack detection in large-scale smart grids. IEEE Access 7:80778–80788
4. Catal C, Diri B (2009) Investigating the effect of dataset size, metrics sets, and feature selection techniques on software fault prediction problem. Inf Sci 179:1040–1058
5. Tavallaee M, Bagheri E, Lu W, Ghorbani AA (2009) A detailed analysis of the KDD CUP 99 data set. In: Proceedings of the 2009 IEEE symposium on computational intelligence for security and defense applications, Ottawa, ON, Canada, 8–10 July 2009, pp 1–6
6. Moustafa N, Slay J (2015) UNSW-NB15: a comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set). In: Proceedings of the 2015 military communications and information systems conference (MilCIS), Canberra, Australia, 10–12 Nov 2015, pp 1–6


7. Smys S, Abul B, Haoxiang W (2020) Hybrid intrusion detection system for internet of things (IoT). J ISMAC 2(04):190–199
8. Ranganathan G (2020) Real time anomaly detection techniques using pyspark frame work. J Artif Intell 2(01):20–30
9. Mohammadi S, Mirvaziri H, Ghazizadeh-Ahsaee M, Karimipour H (2019) Cyber intrusion detection by combined feature selection algorithm. J Inf Secur Appl 44:80–88
10. Moustafa N, Turnbull B, Choo KKR (2018) An ensemble intrusion detection technique based on proposed statistical flow features for protecting network traffic of internet of things. IEEE Internet Things J 6:4815–4830
11. Zhang Y, Song X-F, Gong D-W (2017) A return-cost-based binary firefly algorithm for feature selection. Inf Sci 418:561–574
12. Bobrovnikova K, Lysenko S, Gaj P (2020) Technique for IoT cyberattacks detection based on DNS traffic analysis. CEUR 2623:19
13. Meidan Y, Bohadana M, Mathov Y, Mirsky Y, Shabtai A, Breitenbacher D, Elovici Y (2018) N-BaIoT—network-based detection of IoT botnet attacks using deep autoencoders. IEEE Pervasive Comput 17:12–22
14. Oluranti J, Omoregbe N, Misra S (2019) Effect of feature selection on performance of internet traffic classification on NIMS multi-class dataset. J Phys 1299:012035
15. Soni D, Makwana A (2017) A survey on MQTT: a protocol of internet of things (IoT). In: Proceedings of the international conference on telecommunication, power analysis and computing techniques (ICTPACT-2017), Chennai, India, 6–8 Apr 2017
16. Elkhadir Z, Chougdali K, Benattou M (2017) An effective cyber attack detection system based on an improved OMPCA. In: Proceedings of the 2017 international conference on wireless networks and mobile communications (WINCOM), Rabat, Morocco, 1–4 Nov 2017, pp 1–6
17. Anthi E, Williams L, Słowińska M, Theodorakopoulos G, Burnap P (2019) Supervised intrusion detection system for smart home IoT devices. IEEE Internet Things J 6:9042–9053
18. Eskandari M, Janjua ZH, Vecchio M, Antonelli F (2020) Passban IDS: an intelligent anomaly based intrusion detection system for IoT edge devices. IEEE Internet Things J 7:6882–6897
19. Zolanvari M, Teixeira MA, Gupta L, Khan KM, Jain R (2019) Machine learning-based network vulnerability analysis of industrial Internet of Things. IEEE Internet Things J 6:6822–6834

Detection of Credit Card Fraud Using Isolation Forest Algorithm

Haritha Rajeev and Uma Devi

Abstract Credit card payments are increasing day by day, driven by the expanding digitization of banking services and mobile banking applications, and a considerable number of transactions turn out to be frauds. Many different data mining algorithms are used to find fraudulent transactions, but in the case of big data such algorithms have limitations. This article discusses the Isolation Forest algorithm for detecting fraud in the credit card system and compares the result with the Local Outlier Factor algorithm. Isolation Forest performs much better than LOF when comparing the accuracy and recall of the two models: its fraud detection rate is almost 27%, compared to the LOF detection rate of only 2%, and Isolation Forest reaches an accuracy of 99.774%, greater than the 99.65% of LOF.

Keywords Credit card fraud · Isolation forest algorithm · Local outlier algorithm

1 Introduction

A credit card is a payment card issued to a customer that enables the cardholder to pay a seller for goods, on the cardholder's obligation to the issuer to repay the amount plus the agreed fees. The card issuer is typically a bank, and the cardholder can also obtain cash from the vendor or as an installment loan. As the number of credit card transactions increases, the amount of financial fraud increases annually on account of the fraudulent use of credit cards, and this has become a major issue. Credit card fraud has cost nearly $21 billion worldwide and is expected to rise to $31 billion by 2020. There are different ways to make a credit card transaction: card-present, when the card is used directly to make a payment, a withdrawal, or a transaction, and card-not-present, generally for purchases or payments made on the Internet, where a few details are required, such as the card verification value, cardholder name, PIN, or security question.


Value, cardholder name, PIN, security question). Credit card fraud happens when credit card information or a personal identification number is stolen and used without authorization to obtain money, items, and services. Cyber-criminals do not stop at card theft; they continue to pursue sophisticated procedures, so there is a genuine need for improved and dynamic strategies able to adapt to the rapid evolution of fraud patterns, and a transaction will be labeled false if the framework notices a deviation in the client's normal expenditure. A few methods from data mining are used to address the problem of credit card fraud detection. This article uses an Isolation Forest algorithm to detect the anomalies. Isolation Forest is a learning algorithm for anomaly detection that works on the principle of isolating anomalies: it isolates observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. The reasoning is that isolating anomalous observations is easier, because only a few conditions are needed to separate these cases from ordinary observations, whereas isolating typical observations requires more conditions. Owing to this, an anomaly score can be determined as the number of conditions needed to isolate a given observation. The Local Outlier Factor algorithm computes the local density deviation of a given data point with respect to its neighbors; it considers as an outlier a sample that has a substantially lower density than its neighbors. The main intentions of this article are:

1. Provide an Isolation Forest algorithm to identify the frauds in the credit card system.
2. Process the data further by searching for the main features, and take care of the imbalanced dataset issue.
3. Present a comparison of the Isolation Forest algorithm and the Local Outlier algorithm based on precision, classification performance, and learning speed.

The remainder of this article is structured as follows: the study of the literature is described in Sect. 2; materials and methods are stated in Sect. 3; Sects. 4 and 5 analyze fraudulent and normal transactions with respect to amount and time; results are discussed in Sect. 6; finally, Sect. 7 concludes the research work.

2 Literature Study

Based on AI algorithms, a few methods have been applied to ensure detection of credit card fraud, such as Support Vector Machine [SVM], Decision Tree [DT], Random Forest [RF], Naive Bayes [NB], Local Outlier Detection [LCO] and Multilayer Perceptron [MLP]. Credit card transactions can be classified as fraud or normal through the classification of the credit card exchange. One common approach classifies credit card transactions using two artificial neural algorithms, MLP and ELM, evaluated with metrics such as precision, recall, accuracy, true positive rate, false positive rate, and training time. The results showed


that MLP outperforms ELM, even when counting the prediction time for each algorithm [1]. The performance of Naïve Bayes, k-nearest neighbor, and logistic regression was assessed on the credit card fraud dataset [2]. The development of communication technologies and e-commerce has made the credit card the most popular method of payment for both online and everyday purchases. Thus, security in this framework is highly needed to forestall fraudulent transactions. Fraudulent transactions in credit card data exchange are increasing every year. Toward this end, scientists are also trying novel procedures to identify and prevent such frauds. Nevertheless, there is a persistent need for techniques that precisely and productively recognize these frauds. One article proposes a scheme for distinguishing frauds in credit card data which utilizes a Neural Network (NN), Auto Encoder (AE), Local Outlier Factor (LOF), Isolation Forest (IF), and K-Means clustering [3]. In [4], a step-by-step process is described to identify the frauds in credit card data; it builds component extractors through statistical strategies to highlight distinctive anomalous behaviors in various indicators and later utilizes the extracted feature data for the construction and prediction of iForest, by consolidating the specific component extractors. The authors of [5] present a theoretical framework that describes the effectiveness of isolation-based approaches from a distribution viewpoint. The outcome indicated that the Isolation Forest strategy accomplished better detection speed and an accuracy of 99.78%, compared with other AI classifiers such as LCO. The CFLDOF algorithm is introduced to advance the LDOF algorithm by pruning the dataset with clustering feature trees [6]. Distance-based outlier detection is a significant data mining method that finds anomalous data objects according to some distance function; however, when this strategy is applied to datasets whose density distribution is uneven, the detection efficiency and results are usually not ideal [7]. The LOCBC algorithm provides another calculation strategy for the local outlier coefficient, and the number of dataset scans is less than that of the Relative Density-Based K-Nearest Neighbors (RDBKNN) clustering algorithm for the overall density calculation [8–10].

3 Materials and Methods

This section discusses the dataset, pre-processing, and experiments.

A. Dataset

The dataset consists of credit card transactions made in September 2013 by European cardholders, derived from the lab of Hewlett-Packard and taken from the UCI AI repository. It comprises 28 numerical input variables (V1, …, V28) which are the result of a PCA transformation; 'Time', the number of seconds elapsed between each transaction and the first transaction in the dataset; the 'Amount' of the transaction; and finally 'Class', which indicates whether the exchange is fraudulent or not. It consists of 284,807 transactions, 492 of which are fraudulent.


Fig. 1 Flow chart for credit card fraud detection system

B. Pre-processing Data

There have been 492 frauds out of 284,807 transactions, representing 0.172% of all transactions in the dataset. The number of records per class is calculated with respect to frequency (Figs. 1 and 2). Normal transactions number more than 250,000, whereas fraudulent transactions are very few, making the dataset imbalanced. Isolation Forest and Local Outlier algorithms are therefore applied directly to handle this particular issue. The evaluation is based on the condition that 1 denotes a fraudulent and 0 a normal transaction. Using matplotlib, the fraudulent and normal transactions were plotted with respect to Amount and Time (Figs. 3 and 4).
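As an illustration, the class imbalance described above can be inspected with a few lines of pandas. This is a minimal sketch assuming the public dataset file creditcard.csv with columns Time, V1–V28, Amount, and Class (1 = fraud, 0 = normal), matching the description in Sect. 3; it is not the authors' exact code.

```python
import pandas as pd

# Load the 2013 European cardholder dataset (assumed file name).
df = pd.read_csv("creditcard.csv")

counts = df["Class"].value_counts()        # 0 = normal, 1 = fraud
print(counts)                              # ~284,315 normal vs 492 fraud
print("fraud share: {:.3%}".format(counts[1] / len(df)))   # ~0.172%

# Summaries behind Figs. 3-6: amounts and times split by class.
fraud, normal = df[df["Class"] == 1], df[df["Class"] == 0]
print(fraud["Amount"].describe())          # fraudulent amounts tend to be small
print(normal["Amount"].describe())
```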


Fig. 2 Level of fraudulent transactions in the dataset

Fig. 3 Frauds with respect to amount

4 Frauds and Normal with Respect to Amount

5 Frauds and Normal with Respect to Time

Here, the amounts of the fraudulent transactions are small, while normal transactions are far more numerous; with respect to time, a greater number of the plotted transactions are fraudulent (Figs. 5 and 6).

C. Experiment


Fig. 4 Normal with respect to amount

Fig. 5 Fraud transactions with respect to time

Fig. 6 Normal transactions with respect to time



Fig. 7 An illustration of isolating a non-anomalous point in a 2D Gaussian distribution

The Isolation Forest algorithm follows the same principle as the Random Forest algorithm. It exploits the tendency of anomalous instances in a dataset to be easier to separate from the rest of the sample than normal points. The algorithm operates with a sampling method: it arbitrarily chooses an attribute and randomly chooses a split value for the attribute between its maximum and minimum. An illustration of isolating a point in a 2D Gaussian distribution is given in Fig. 7 for a non-anomalous point and Fig. 8 for a point that is more likely to be an anomaly. It is obvious from the images how anomalies require fewer random partitions to be isolated, compared with typical points. From a mathematical perspective, recursive partitioning can be represented by a tree-like structure named an Isolation Tree, while the number of partitions needed to isolate a point can be interpreted as the length of the path within the tree to reach a terminating node starting from the root. For instance, the path length of point x_i in Fig. 7 is greater than the path length of x_j in Fig. 8. More formally, let X = {x_1, …, x_n} be a set of d-dimensional points and X' ⊂ X a subset of X. An Isolation Tree (iTree) is defined as a data structure with the following characteristics:

1. For every node T in the tree, T is either an external node with no children, or an internal node with one 'test' and exactly two daughter nodes (T_l, T_r).
2. A test at node T consists of an attribute q and a split value p such that the test q < p determines the traversal of a data point to either T_l or T_r.


Fig. 8 An illustration of isolating an anomalous point in a 2D Gaussian distribution

To build an iTree, the algorithm recursively divides X' by arbitrarily choosing an attribute q and a split value p until either (i) the node has just one instance or (ii) all data at the node have identical values. The Isolation Forest 'separates' observations by randomly choosing a feature and afterwards arbitrarily choosing a split value between the maximum and minimum values of the chosen feature. Since recursive partitioning can be represented by a tree structure, the number of splits needed to isolate a sample is equivalent to the path length from the root node to the terminating node. The average path length over the forest of such random trees is a measure of normality and serves as the decision function. Random partitioning produces noticeably shorter paths for anomalies. Hence, when a forest of random trees collectively produces shorter path lengths for particular samples, they are highly likely to be anomalies (Fig. 9). The LOF algorithm is an unsupervised anomaly detection technique that computes the local density of a given data point with respect to its neighbors; it considers as an anomaly a sample that has a substantially lower density than its neighbors.
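Both detectors described above are available in scikit-learn. The following minimal sketch (synthetic 2D Gaussian data echoing Figs. 7 and 8, with assumed parameter values, not the authors' experiment) shows how they are fitted and how their labels are read.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(42)
# Normal points from a 2D Gaussian plus a few injected anomalies,
# mirroring the illustration in Figs. 7 and 8.
X = np.vstack([rng.normal(0.0, 1.0, size=(1000, 2)),
               rng.uniform(-6.0, 6.0, size=(10, 2))])

iso = IsolationForest(n_estimators=100, contamination=0.01, random_state=42)
iso_labels = iso.fit_predict(X)            # -1 = anomaly, +1 = normal
# Shorter average path length across the trees => lower score => anomaly.
iso_scores = iso.score_samples(X)

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
lof_labels = lof.fit_predict(X)            # -1 = anomaly, +1 = normal

print("IsolationForest flagged:", int((iso_labels == -1).sum()))
print("LOF flagged:", int((lof_labels == -1).sum()))
```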

6 Result and Discussions

This work is executed in Google Colaboratory using a GPU, implemented with Python, to recognize fraudulent transactions. Given the class imbalance ratio, the proposed work recommends measuring the accuracy using the area under the


Fig. 9 Isolation forest

precision-recall curve (AUPRC); plain confusion-matrix accuracy is not meaningful for unbalanced classification (Figs. 10 and 11). Feature extraction is a process of dimensionality reduction by which an initial set of raw data is reduced to more manageable groups for processing. A characteristic of these enormous data sets is the large number of variables, which requires a great deal of computing resources to process. Feature extraction is the name for strategies that select and/or combine variables into features, effectively

Fig. 10 Tree-like representation of isolation forest


Fig. 11 Representation of local outlier factor

reducing the amount of data that must be processed while still accurately and completely describing the initial data collection. Instead of all the data, a sample of the data is considered: the size of the dataset is large, so it takes more time to pre-process. In this case, the Local Outlier algorithm is applied to a small portion of the data, namely 1% of the whole dataset. From the sample, the numbers of fraudulent and normal records are determined, and the outlier fraction is also considered. This yields 49 fraud cases and 28,432 valid cases. Apart from that, the correlation of all the features with the class variable is also examined and displayed in different colors. Afterwards, independent and dependent features are created. Since the dataset is imbalanced, Isolation Forest and Local Outlier algorithms are applied (Figs. 12 and 13).
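The evaluation step described above can be sketched as follows; the sampling fraction, column names, and label mapping (Isolation Forest returns −1 for anomalies, mapped to the fraud label 1) are assumptions consistent with the text, not the authors' exact script.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.metrics import accuracy_score, classification_report

df = pd.read_csv("creditcard.csv")
sample = df.sample(frac=0.01, random_state=1)      # 1% of the full dataset
X = sample.drop(columns=["Class"]).values
y = sample["Class"].values
outlier_fraction = y.sum() / float((y == 0).sum()) # fraud / valid cases

iso = IsolationForest(contamination=outlier_fraction, random_state=42)
y_pred = np.where(iso.fit_predict(X) == -1, 1, 0)  # map -1 (anomaly) -> fraud

print("errors:", int((y_pred != y).sum()))
print("accuracy:", accuracy_score(y, y_pred))
print(classification_report(y, y_pred))
```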

Fig. 12 Classification report of isolation forest


Fig. 13 Classification report of local outlier factor

A total of 73 errors are made by the isolation tree versus 97 errors by the Local Outlier Factor. Isolation Forest has a greater accuracy of 99.774% compared with 99.6% for LOF. When looking at error, precision, and recall for the two models, Isolation Forest performed far better than the LOF, as can be seen from the detection rate of fraud cases being around 27% versus an LOF detection rate of just 2%.

7 Conclusion

Fraud detection in the credit card system is a big issue, and different data mining algorithms are utilized to handle it. To detect fraudulent credit card transactions, the Isolation Forest algorithm and LCO were employed. The Isolation Forest technique works better, determining nearly 30% of fraud cases. Various measurements, for example accuracy, recall, precision, true positive rate, false positive rate matrix, and classification time, are calculated to display the outcomes, which indicated that Isolation Forest has a higher anomaly score than LCO, and LCO has a higher error rate than Isolation Forest. The primary issue with the Isolation Forest is that the way the trees are grown introduces a bias, which is likely to reduce the reliability of the anomaly scores for ranking the data. A two-layer hierarchical method is used to overcome this problem. The main characteristic of this approach is to obtain accuracy and to detect the outliers in a large dataset accurately within a short time. In the future, this work can be extended based on ensemble methods.


References 1. El hlouli FZ, Riffi J, Mahraz MA, El Yahyaouy A, Tairi H (2020) Credit card fraud detection based on multilayer perceptron and extreme learning machine architectures. In: 2020 International conference on intelligent systems and computer vision (ISCV), Fez, Morocco, pp 1–5. https://doi.org/10.1109/ISCV49265.2020.9204185 2. Popat RR, Chaudhary J (2018) A survey on credit card fraud detection using machine learning. 2018 2nd International conference on trends in electronics and informatics (ICOEI), Tirunelveli, pp 1120–1125. https://doi.org/10.1109/ICOEI.2018.855393 3. Rai AK, Dwivedi RK (2020) Fraud detection in credit card data using unsupervised machine learning based scheme 4. Lucas Y et al (2019) Dataset shift quantification for credit card fraud detection. In: 2019 IEEE second international conference on artificial intelligence and knowledge engineering (AIKE), Sardinia, Italy, pp 97–100. https://doi.org/10.1109/AIKE.2019.00024 5. Chun-Hui X, Chen S, Cong-Xiao B, Xing L (2018) Anomaly detection in network management system based on isolation forest. In: 2018 4th Annual international conference on network and information systems for computers (ICNISC), Wuhan, China, pp 56–60. https://doi.org/10. 1109/ICNISC.2018.00019 6. Chen H, Zhao S, Bao H, Kang H (2017) Research of local outlier mining algorithm based on spark. In: 2017 First international conference on electronics instrumentation & information systems (EIIS), Harbin, pp 1–4. https://doi.org/10.1109/EIIS.2017.8298559 7. Yu B, Song M, Wang L (2009) Local isolation coefficient-based outlier mining algorithm. In: 2009 International conference on information technology and computer science, Kiev, pp 448–451. https://doi.org/10.1109/ITCS.2009.230 8. Qiu B, Chenke J, Shen J (2006) Local outlier coefficient-based clustering algorithm. In: 2006 6th World congress on intelligent control and automation, Dalian, pp 5859–5862.https://doi. org/10.1109/WCICA.2006.1714201 9. Sakr M, Atwa W, Keshk A (2018) Sub-grid partitioning algorithm for distributed outlier detection on big data. In: 2018 13th International conference on computer engineering and systems (ICCES), Cairo, Egypt, pp 252–257. https://doi.org/10.1109/ICCES.2018.8639409 10. Huang W, Wu D , Ren J (2009) An outlier mining algorithm in high-dimension based on singleparameter-k local density, In: 2009 Fourth international conference on innovative computing, information and control (ICICIC), Kaohsiung, pp 1192–1195. https://doi.org/10.1109/ICICIC. 2009.98 11. Zhang G, Xu M, Zhang Y, Fan Y (2019) Improved hyperspectral anomaly target detection method based on mean value adjustment. In: 2019 10th Workshop on hyperspectral imaging and signal processing: evolution in remote sensing (WHISPERS), Amsterdam, Netherlands, pp 1–4.https://doi.org/10.1109/WHISPERS.2019.8921003 12. Yu X, Li X, Dong Y, Zheng R (2020) A deep neural network algorithm for detecting credit card fraud. In: 2020 International conference on big data, artificial intelligence and internet of things engineering (ICBAIE), Fuzhou, China, pp 181–183. https://doi.org/10.1109/ICBAIE 49996.2020.00045 13. Wang C, Wang Y, Ye Z, Yan L, Cai W, Pan S (2018) Credit card fraud detection based on whale algorithm optimized BP neural network. In: 2018 13th International conference on computer science & education (ICCSE), Colombo, pp 1–4. https://doi.org/10.1109/ICCSE.2018.8468855 14. Buschjäger S, Honysz P-J, Morik K (2020) Generalized isolation forest: some theory and more applications extended abstract. 
In: 2020 IEEE 7th International conference on data science and advanced analytics (DSAA), Sydney, Australia, pp 793–794. https://doi.org/10.1109/DSAA49 011.2020.00120 15. Budiarto EH, Erna Permanasari A, Fauziati S (2019) Unsupervised anomaly detection using K-means, local outlier factor and one class SVM. In: 2019 5th International conference on science and technology (ICST), Yogyakarta, Indonesia, pp 1–5. https://doi.org/10.1109/ICS T47872.2019.9166366

Trust-Based Context-Aware Collaborative Filtering Using Denoising Autoencoder

S. Abinaya and M. K. Kavitha Devi

Abstract In recent times, extensive studies have been initiated to leverage deep learning strategies to enhance context-aware recommendation. Classical collaborative filtering approaches have shown potency in a wide variety of recommendation activities; however, they are inadequate to grasp dynamic interactions between people and products, in addition to the data sparsity and cold start problems. There is indeed a burst of attention in applying deep learning to recommendation systems owing to its nonlinear modeling potential. In this article, we implement the idea of denoising autoencoders for personalized context-aware recommendation. In specific, the proposed method splits item ratings according to all contextual conditions, resulting in fictive items that are fed into a denoising autoencoder augmented with trust information to overcome sparsity, referred to as Item Splitting_Trust-based collaborative filtering using denoising autoencoder (IS_TDAE). Thereby, IS_TDAE is able to predict context-based item preference under every possible context situation and to suggest recommendations according to the current context situation of the target user. Experiments conducted on two public datasets demonstrate that the proposed model significantly outperforms the state-of-the-art recommenders in the top-N recommendation task.

Keywords Context-aware recommender systems · Collaborative filtering · Item splitting · Denoising autoencoder

1 Introduction

Recommender system mechanisms are prominent tools to help users manage vast repositories and mitigate the adverse impacts of information overload. In numerous domains and web services, including Netflix, Amazon.com, YouTube, Twitter and Pandora, they have proved to be beneficial. A recommendation system offers a personalized set of items that would attract a customer, e.g., videos, products,


music, or news. This is achieved by creating frameworks that gain knowledge from users' past interactions with items. A lot of immense work has been done along this route to develop various collaborative filtering [1] and content-based filtering algorithms [2], or their combination in hybrid recommendation systems [3]. A common feature of these methods is that they predominantly rely on consumer and product modeling, such as low-rank structures or certain latent factors of a user × item matrix. At the same time, an essential consideration is the background circumstance that applies when providing a recommendation. The context may be the place, the time the item is consumed, the companion, the emotion of the customer, etc. Preferences of the user about the item might differ as the context varies. Context-aware recommendation systems (CARS) extend conventional recommendation methods by taking into account the individual context in which an object is consumed by the user and modifying the recommendation appropriately. The methods to create context-aware recommendations are contextual modeling, post-filtering and pre-filtering [4]. Before implementing a traditional recommendation approach, the pre-filtering approaches employ contextual data to filter out the unnecessary user-item preferences. Post-filtering techniques, by contrast, add contextual data to the output of traditional methods. In contextual modeling, contextual data is combined with consumer and product information within the recommendation process. Data sparseness is a key challenge in implementing contextual modeling and pre-filtering approaches. While considering contextual modeling, the issue is that there is a considerable increase in computational complexity owing to the inclusion of additional data dimensions. The process of item splitting [5] recognizes just a single aspect of context, the one which most biases the ratings. In order to reduce the diversity of context dimensions, an alternative method was introduced [6], where the related context variables are clustered together and every cluster is treated as a new component. However, taking into account just a single or a small number of contextual factors can incur the loss of relevant information. Now with the rise of social media, trust-aware recommendations have been gaining yet more attention in recent times. Based on the observation that the preferences of customers are often triggered by their friends [7, 8], various works have been suggested to incorporate trust knowledge into the recommendation system [9–12]. Those findings suggest that trust interactions are successful in assisting to model consumer preference and increase the efficiency of recommendations. Based on these motivations, trust-based context-aware collaborative filtering using a denoising autoencoder is introduced in this paper, which is designed to mitigate the adverse impacts of sparse data. Our approach utilizes an item splitting process to model context, which creates fictive items by splitting actual item ratings depending on all variations of contextual factors.

Inspired by the principle of the autoencoder, which reconstructs input data via a restrictive neural network, a model with a restricted shared layer is generated that merges the input rating data over fictive items together with the explicit trust data and the trust relationships that occur implicitly between customers, derived using similarity metrics to mitigate the sparsity of the explicit


trust data. Here, the implicit and explicit trust values are integrated into both the input and hidden layers of the autoencoder in order to capture the nonlinear connection between the rating and the trust data. Hence, the proposed IS_TDAE has the ability to learn the nonlinear relationship from both the ratings given to context-based items and the trust data, and thereby predicts the context-based preference of the user for an item under all possible contextual situations. In summary, the major contributions of this paper are as follows:

• To the best of our knowledge, trust-based context-aware collaborative filtering using denoising autoencoder (IS_TDAE) is the first model which computes the context-based preference of the user for an item under all possible combinations of context conditions using a denoising autoencoder.
• Second, in order to complement the meager explicit trust data, implicit trust values between consumers are extracted based on similarity indices, and IS_TDAE combines the implicit and explicit trust data along with the ratings of the fictive items.
• Lastly, trust values are integrated in both the input and hidden layers of the autoencoder to capture the nonlinear correlation between the rating and the trust data.
• The use of the proposed approach in the recommendation generation process indicates a major increase in the estimation accuracy of its top-N products relative to the existing methodology.

2 Related Work

The significance of including context while providing recommendations to the user was perceived not long after the creation of recommender frameworks. The seminal work on CARS [13] proposed utilizing extra data dimensions, besides consumers and products, to represent contextual data. It is essential to properly design and integrate contextual data into the recommendation mechanism in order to produce appropriate context-based recommendations. Context modeling techniques are divided into three major strategies: contextual modeling, post-filtering and pre-filtering [1]. To produce precise suggestions based upon the context, it is critical to suitably model and combine relevant data into the recommendation framework. A particular category of pre-filtering method called "item splitting" [5] separates a product into fictive items on the basis of the contextual situation that best segregates the ratings of the original product. Symmetrically, consumers can also be separated on the basis of the context of the usage of products. Concerning the progress of matrix factorization methods for collaborative filtering [14], an extension to multidimensional tensors that includes contextual features was proposed [15], and various factorization-based techniques [16–18] have been introduced which intend to diminish the computational complexity. With the accelerated growth of social media platforms, recommendation procedures that integrate trust data have gained a significantly promising future


[7]. Observing that the preferences of the user are often influenced by their peers, certain trust-data-based approaches have been developed. A propagation algorithm to establish relationships of trust was addressed in [12]. Conversely, only minimal explicit trust data exists for use in such algorithms. Therefore, taking into account implicit information is a smart alternative to identify consumer preferences more specifically [19, 20]. For example, SVD++ focuses on a singular value decomposition (SVD) mechanism that integrates both implicit feedback and ratings [19], and [20] proposed another approach called TrustSVD that incorporates both implicit and explicit user preferences based on trust interactions and ratings to generate optimal recommendations. The drawback is that these basic, standard linear transformations cannot address complex semantic relations in a substantive way. The advancement of deep learning has demonstrated that the potency of neural networks exceeds that of traditional methodologies for many pattern detection tasks [34], e.g., object detection, image recognition [21, 35], or neural machine translation. In particular, a framework [22] has been designed to use a denoising autoencoder to learn user features from the rating information, which suggests that deep learning strategies have a strong ability to boost recommendations. In [23], it was proposed to infuse user-specific vectors into the hidden layer of the autoencoder to discover precisely the consumer preferences for top-N suggestion. Apparently, these approaches are impaired by the meager rating information, which has a potentially negative effect on the output of the recommendation. In [24], the model learns implicit insight from created user labels to deal with the sparseness issue. The best approach to resolve the data sparsity issue is to integrate adjunct information together with the rating data [25, 26]. The model CDL [26] uses a denoising autoencoder to obtain compact representations from content data; these representations are then closely incorporated with matrix factorization to develop consumer preference for recommendations. The VBPR method [25] uses visual attributes acquired by a convolutional neural network (CNN) to significantly enhance top-N recommendation in the Bayesian Personalized Ranking (BPR) framework [27]. All the mentioned findings suggest that the use of additional information for deep learning-based recommendations is a significant concept.

3 Proposed Method

The framework of the proposed system IS_TDAE is shown in Fig. 1. As shown in Fig. 1, the process of item splitting is performed to produce context-based fictive items. Secondly, the trust-based denoising autoencoder is introduced, and finally, based on the actual concrete score predictions, top-N recommendation is achieved according to the exact current context situation of the active user with high accuracy.


Fig. 1 Framework of the proposed system using IS_TDAE

3.1 Item Splitting

Let $S = \{S_1, S_2, \ldots, S_K\}$ denote the set of $K$ users, $I = \{I_1, I_2, \ldots, I_N\}$ denote the set of $N$ items, and $M_1, M_2, \ldots, M_j$ denote $j$ context dimensions, where each context dimension has $K_{M_1}, K_{M_2}, \ldots, K_{M_j}$ context conditions, respectively. As an example, Table 1 shows a sparse rating matrix $D_0$ which consists of users' ratings of items given under certain context situations.

Table 1 User × item × context matrix of ratings

User  Item  Rating  Time     Season  Day_info
S1    I1    5       Weekend  Winter  Festival
S1    I1    4       Weekday  Spring  Normal
S2    I1    3       Weekend  Summer  Festival
S2    I1    4       Weekday  Spring  Normal
S3    I1    3       Weekend  Summer  Festival
S3    I2    2       Weekend  Summer  Festival


Table 2 Item splitting

User  Item  Rating
S1    f1    5
S1    f3    4
S2    f2    3
S2    f3    4
S3    f2    3
S3    f4    2
S1    f2    ϕ
S3    f24   ϕ

This multidimensional matrix over different context factors is transformed into a two-dimensional matrix by the process of item splitting, which thus creates the fictive items that are the integration of a contextual condition and an original item [5]. Particularly, the fictive items are created from the context combinations and split items as in [28]. First, the Cartesian product of all the context factors or dimensions is calculated: a compound context factor $M$ with context condition set $X_M$ is built as $X_M = X_{M_1} \times X_{M_2} \times \cdots \times X_{M_j}$, whose elements are combinations of contextual conditions from the initial context dimensions. Secondly, the fictive item set $f_i$ of size $Q = N \times |X_M|$ is calculated by deriving the Cartesian product between the context dimension $M$ and the set of items $I$. Finally, the multidimensional rating matrix is converted to a two-dimensional user × fictive item rating matrix by removing the contexts from the actual transactional data and substituting the fictive item set $F$ in place of the item set $I$. By the above-mentioned procedure, a new set of fictive items of size $Q$ is created, which is greater than the number of original items $N$ and therefore makes the matrix sparser. The proposed IS_TDAE method predicts the user rating for all fictive items, i.e., in every possible context situation of the user, by overcoming the sparsity problem. Table 2 demonstrates the result of applying item splitting to the data illustrated in Table 1; for example, a new fictive item $f_1$ is produced by taking the Cartesian product of item $I_1$ and the context situation "Weekend, Winter, Festival" (Table 3).

Table 3 User × fictive item matrix of ratings

      f1  f2  f3  f4  f5  …  f24
S1    5   ϕ   4   ϕ   ϕ   …  ϕ
S2    ϕ   3   4   ϕ   ϕ   …  ϕ
S3    ϕ   3   ϕ   2   ϕ   …  ϕ
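To make the splitting concrete, the following is a minimal sketch in Python with pandas, matching the implementation environment described in Sect. 4.2; the column names and the fictive-item encoding are illustrative assumptions, not the authors' exact code. It turns Table 1-style transactions into the user × fictive item matrix of Table 3.

```python
import pandas as pd

# Toy transactional data in the shape of Table 1 (hypothetical column names).
ratings = pd.DataFrame({
    "user":     ["S1", "S1", "S2", "S2", "S3", "S3"],
    "item":     ["I1", "I1", "I1", "I1", "I1", "I2"],
    "rating":   [5, 4, 3, 4, 3, 2],
    "time":     ["Weekend", "Weekday", "Weekend", "Weekday", "Weekend", "Weekend"],
    "season":   ["Winter", "Spring", "Summer", "Spring", "Summer", "Summer"],
    "day_info": ["Festival", "Normal", "Festival", "Normal", "Festival", "Festival"],
})

# A fictive item pairs an item with one combination of context conditions;
# concatenating the fields realizes the Cartesian product
# item × time × season × day_info over the observed transactions.
ratings["fictive_item"] = (ratings["item"] + "|" + ratings["time"] + "|"
                           + ratings["season"] + "|" + ratings["day_info"])

# Collapse the multidimensional matrix into a 2D user × fictive-item matrix;
# missing entries (the ϕ cells of Table 3) become NaN.
R = ratings.pivot_table(index="user", columns="fictive_item", values="rating")
print(R)
```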


3.2 Calculating Implicit Trust

To address the sparsity of the available explicit trust data, most existing methods also take implicit trust data into account. However, discovering further implicit trust relations among users remains a challenge. A general approach to this problem is to use similarity measures to determine user relationships. In [29], the trust degree between two users was calculated using Pearson's correlation coefficient, which yields an unreliable final similarity of value 1 even when only one co-rating exists between the two users. Therefore, [30] addressed this issue while determining the similarity and normalized the similarity to [0, 1]:

$$
\mathrm{Sim}_{u_i,u_j} =
\begin{cases}
1, & i = j \\[4pt]
\left(1 - \dfrac{1}{m}\right)\dfrac{\mathrm{pcc}(u_i, u_j) + 1}{2}, & i \neq j
\end{cases}
\tag{1}
$$

where $m$ denotes the number of co-ratings between the two users. Similarly, [30] suggested a similarity threshold $\theta$ to predict implicit trust data, where an implicit trust link is accepted only if the similarity value exceeds the threshold $\theta$:

$$
t_{u_i,u_j} =
\begin{cases}
1, & \mathrm{Sim}_{u_i,u_j} \geq \theta \\
0, & \text{otherwise}
\end{cases}
\tag{2}
$$

where $t_{u_i,u_j}$ indicates the binary implicit trust between users $u_i$ and $u_j$. In the proposed IS_TDAE method, the implicit trust data is calculated over the user ratings to expand the existing trust data and obtain a denser trust matrix [31]. An important note is that the implicit trust is computed from the preference vectors of two diverse users; when the value of $\theta$ is smaller, more implicit trust links between users can be extracted. To obtain the optimal value of $\theta$, various values of $\theta$ were investigated on IS_TDAE, and the experimental outcome of IS_TDAE was observed.
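As a minimal illustration of Eqs. (1) and (2), the following sketch (a hypothetical helper, not the authors' code) computes binary implicit trust from a rating matrix; the threshold value and the handling of users with fewer than two co-ratings are assumptions.

```python
import numpy as np

def implicit_trust(R, theta=0.5):
    """Binary implicit-trust matrix following Eqs. (1)-(2).

    R: user x item rating matrix with np.nan for missing entries.
    """
    n = R.shape[0]
    T = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                T[i, j] = 1                 # self-trust, Eq. (1) first case
                continue
            co = ~np.isnan(R[i]) & ~np.isnan(R[j])
            m = int(co.sum())               # number of co-rated items
            if m < 2:
                continue                    # assumed: too few co-ratings
            pcc = np.corrcoef(R[i, co], R[j, co])[0, 1]
            if np.isnan(pcc):
                continue                    # constant vectors give no signal
            sim = (1 - 1 / m) * (pcc + 1) / 2   # Eq. (1), normalized to [0, 1]
            T[i, j] = int(sim >= theta)          # Eq. (2)
    return T
```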

3.3 Integrating Implicit & Explicit Trust Information

Generally, collaborative filtering algorithms perform predictions by utilizing the ratings given by the user for a wide range of products. However, the use of rating values alone is extremely restrictive due to the sparsity problem. Hence, when more data can be collected for the item or the user, the performance of the recommendation task can be improved. To enhance the accuracy of the proposed IS_TDAE model, the explicit user trust information from social networks and the similarity values obtained using Eq. (2) as implicit trust are therefore integrated, together with the user rating information, as input into the denoising autoencoder [22]. An effective method in training the model is to insert both the implicit and explicit trust data into the input layer and then produce the predicted ratings in the output layer. Meanwhile, the correlations between the trust data and the ratings are extremely nonlinear, with diverse distributions. Hence, to overcome this issue and to discover further attributes from the trust data, the explicit and implicit trust data are injected into both the input layer and the hidden layer of the autoencoder, as in Eq. (3):

$$
\hat{R}_s = \rho\!\left(Z'\left[\rho\!\left(Z\,[r_i, x_i, e_i] + d\right),\, x_i,\, e_i\right] + d'\right)
\tag{3}
$$

where $\hat{R}_s$ is the complete representation at the output layer, $Z \in \mathbb{R}^{H\times(Q+Y+T)}$ and $Z' \in \mathbb{R}^{N\times Q}$ are the weight matrices, $r_i \in \mathbb{R}^{Q}$ are the sparse rows of the rating vector $R_s$, $x_i \in \mathbb{R}^{Y}$ and $e_i \in \mathbb{R}^{T}$ are the explicit and implicit trust data for $u_i$, $d \in \mathbb{R}^{H}$ and $d' \in \mathbb{R}^{M}$ are the bias vectors, and $\rho$ denotes the hyperbolic tangent function. On the other hand, if the trust data dimension $(Y+T)$ is extremely large, it is difficult for the autoencoder to use these data efficiently. Thus, a constraint is imposed that the input layer dimension $Q$ should be much larger than the hidden layer dimension $H$, which in turn should be much larger than the explicit and implicit trust data dimension $Y+T$ [32], i.e., $Q \gg H \gg (Y+T)$. Finally, a context-based personalized rating value $\hat{R}$ over the entire fictive item set $F$ for a user $u_i$ is predicted using the autoencoder, which is trained by back-propagation. To learn compact representations, the loss function in Eq. (4), combining the objective function and the regularization terms of the proposed IS_TDAE model, is utilized to train the model:

$$
L = \ell_\alpha\!\left(R_s, \hat{R}_s\right) + \frac{\lambda}{2}\,\Omega\!\left(Z, Z', d, d'\right)
\tag{4}
$$

where $\ell_\alpha(\cdot)$ indicates the loss function measuring the reconstruction errors of the ratings, which can be written as in Eq. (5):

$$
\ell_\alpha\!\left(R_s, \hat{R}_s\right)
= \alpha \sum_{\hat{R}'_{si}\,\in\, C(\hat{R}'_s)} \left(\hat{R}'_{si} - R_{si}\right)^2
+ (1-\alpha) \sum_{R_{si}\,\in\, N(R_s)} \left(\hat{R}_{si} - R_{si}\right)^2
\tag{5}
$$

where $\alpha$ is the hyper-parameter of the denoising error, $1-\alpha$ is the hyper-parameter of the reconstruction error, $R'_s \in \mathbb{R}^{M}$ is the corrupted version of $R_s$, $C(\hat{R}'_s)$ is the set of corrupted elements of $R_s$, $N(R_s)$ is the set of unaltered elements of $R_s$, and $\hat{R}'_{si}$ and $R_{si}$ are the $i$th network outputs. $\Omega(\cdot)$ is a regularization term that makes use of the $\ell_2$ norm and is defined as in Eq. (6):

$$
\Omega(\cdot) = \|Z\|_F^2 + \|Z'\|_F^2 + \|d\|_F^2 + \|d'\|_F^2
\tag{6}
$$

where $\lambda$ is the hyperparameter employed to control the degree of regularization, which influences the model's generalizability; therefore, it is essential to determine a suitable value of $\lambda$ from the experimental findings. The parameters are updated using the ADAM optimizer, a variant of stochastic gradient descent.
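A minimal Keras sketch of the architecture in Eq. (3) is given below. The layer sizes are placeholders, and the plain MSE loss stands in for the α-weighted denoising loss of Eq. (5), so this is an assumption-laden illustration rather than the authors' implementation; the optimizer, activation, and weight decay follow the settings reported in Sect. 4.2.

```python
import tensorflow as tf
from tensorflow.keras import Model, layers, regularizers

Q, Y, T, H = 2000, 50, 50, 100   # assumed dims: fictive items, explicit/implicit trust, hidden

r_in = layers.Input(shape=(Q,), name="corrupted_ratings")   # r_i (noisy)
x_in = layers.Input(shape=(Y,), name="explicit_trust")      # x_i
e_in = layers.Input(shape=(T,), name="implicit_trust")      # e_i

# Encoder: rho(Z [r; x; e] + d), trust injected into the input layer
h = layers.Dense(H, activation="tanh",
                 kernel_regularizer=regularizers.l2(2e-4))(
        layers.Concatenate()([r_in, x_in, e_in]))

# Decoder: trust injected again into the hidden layer, as in Eq. (3)
out = layers.Dense(Q, activation="tanh",
                   kernel_regularizer=regularizers.l2(2e-4))(
        layers.Concatenate()([h, x_in, e_in]))

model = Model([r_in, x_in, e_in], out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.002),
              loss="mse")   # plain MSE stands in for Eq. (5)
```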

3.4 Recommendation Process

The recommendation for the active user $u_a$ is generated by the following steps (a sketch of the selection step is given after the list):

1. For the active user $u_a$, the two-dimensional dense predicted rating vector $\hat{R}_s$ over the fictive items $f_i$ is mapped back to the multidimensional space of actual items $I_i$ and context situations $m \in (M_1 \times M_2 \times \cdots \times M_n)$.
2. Suggest the N actual items with the highest predicted scores as recommendations for the active user $u_a$ according to their corresponding current contextual situation $m$.
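A minimal sketch of this selection step (with a hypothetical data layout pairing each fictive item with its actual item and context tuple) could look as follows.

```python
def recommend_top_n(r_hat, fictive_items, current_context, n=10):
    """r_hat: predicted dense scores for one user over all fictive items;
    fictive_items: list of (actual_item, context_tuple) pairs aligned
    with r_hat; current_context: the active user's context situation m."""
    # Keep only fictive items matching the user's current context situation.
    candidates = [(item, r_hat[idx])
                  for idx, (item, ctx) in enumerate(fictive_items)
                  if ctx == current_context]
    # Rank the corresponding actual items by predicted score.
    candidates.sort(key=lambda pair: -pair[1])
    return [item for item, _ in candidates[:n]]
```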

4 Experimental Evaluation

4.1 Dataset Description

The proposed IS_TDAE model is compared with the existing state-of-the-art approaches using real-world public datasets, namely the Epinions and Ciao datasets. Both datasets include trust relationships as well as rating values and are crawled from two well-known e-commerce websites, Epinions.com and Ciao.com. On those websites, users rate each product with an integer value ranging from 1 to 5 and create trust connections with other users by adding them to their trust lists. Social network trust links are given in a binary format, where 1 stands for trust and 0 for unnoticed relationships. Based on the Date field in the Ciao and Epinions datasets, on which the rating was given to the item, three context factors or dimensions were obtained, namely Time, Season and Day_info. Time has four context conditions (Afternoon, Morning, Night, Evening), Season takes four values (Summer, Rainy, Winter, Spring), and Day_info has two values (Festival, Normal). Table 4 displays the statistics of these two datasets.

Table 4 Statistics of Epinions and Ciao

Dataset                 Ciao     Epinions
Number of users         7375     22,166
Number of items         106,797  296,277
Number of ratings       284,086  922,267
Number of social links  111,781  300,548
Rating sparsity         0.036%   0.014%
Social sparsity         0.205%   0.061%

4.2 Experimental Setup

The experimentation was conducted by splitting the dataset into 60% for training, 20% for cross-validation, and 20% for testing. The proposed IS_TDAE model is implemented in Python using the Keras API from the TensorFlow library. The values in the rating vector are normalized to the range −1 to +1 in all datasets. For all comparative techniques, the hyperparameters are carefully tuned using the GridSearchCV approach according to the relevant reference on every dataset, so that each technique produces its best value for a fair comparison. In the training phase, the IS_TDAE model was found to converge at an epoch value of 300; different learning rates (0.1, 0.02, 0.002, 0.001, 0.01) were experimented with, and the optimum was found at learning rate η = 0.002. Similarly, the network is optimized with the Adam optimizer with a minibatch size of 40. The hyperbolic tangent is adopted as the transfer function since the data are normalized to −1 to +1, and weight decay l2(0.0002) is added for regularization. The evaluation metrics include P@10, R@10, NDCG@10 and MAP@10 [33], which are standard metrics for top-N item recommendation (a sketch of these metrics follows the baseline list below). The following baseline models are evaluated with our own implementation; the parameter settings of the existing methods are kept similar to the proposed system to the extent possible:

1. U-AutoRec [22]: A collaborative autoencoder that takes only the k-hot encoded rating vector of the user given to the items as input.
2. CDAE [23]: A collaborative denoising autoencoder that appends an additional input, in the form of a user hidden factor, to the rating vector for recommendation.
3. TDAE [31]: A trust-aware denoising autoencoder that combines the user ratings of the actual items and the explicitly obtained trust values to increase the performance of the recommender system.
4. TDAE++ [31]: The same as TDAE, except that it overcomes the sparsity of the explicitly obtained trust values by incorporating a similarity measure as implicit trust together with the rating values given to the actual items.
5. IS_TDAE: The proposed methodology, which combines the implicit and explicit trust values jointly with the ratings given by the user for the context-based fictive items, enhancing top-N recommendation according to the current context situation of the user.


4.3 The Effect of Layer Size and Corruption Ratio

The latent representation of the input in the hidden layer has the greatest effect on the performance of the model, i.e., it is related to the number of nodes in the hidden layer. The model is evaluated with different values of k to obtain the optimum number of hidden nodes. Figure 2 depicts the experimental results. From the plots, it is identified that at k = 100 the model's performance levels off, and there is no substantial improvement beyond that at the cost of increased training time. The risk of overfitting in the neural network is largely determined by the corruption ratio when the training sample is small. Therefore, the hyper-parameter α, which restricts the effect of adding noise to the autoencoder, is varied over different values to observe the effect of the corruption ratio. The initial value of α was set to 0.1 and then gradually increased to 0.9. Figure 3 indicates that efficiency improves as α takes lower values. The proposed model performs best at α = 0.6 and α = 0.8 for the Ciao and Epinions datasets, respectively.

Fig. 2 Influence of layer size: a Epinions, b Ciao

Fig. 3 Influence of alpha (α) a Epinions, b Ciao


4.4 Results and Discussion

The results of the respective methodologies are shown in Tables 5 and 6, with the optimal outcomes marked in boldface. A general finding is that the precision, recall, NDCG, and MAP metrics were consistent across the datasets, aside from a few exceptions. The proposed model performed well on the Ciao and Epinions datasets, which demonstrates its effectiveness on top-N recommendation tasks. From the results in Tables 5 and 6, it is observed that, among the compared baselines, the proposed method, which merges both implicit and explicit trust information along with context-based ratings on fictive items into the denoising autoencoder, performs at least 6.3% better than the others, since it has the potential to discover stronger semantic features with its nonlinear activation functions and can more efficiently retrieve features from the ratings of the context-based fictive items. TDAE works better than the other neural network-based models since the autoencoder does not only take advantage of explicit data but also helps to discover implicit trust similarity between consumers. In comparison, the TDAE and TDAE++ models, which integrate trust relationships, perform at least 6.5% better than CDAE on the Ciao and Epinions datasets. As a result, it is shown that on datasets with stronger trust values or finer relational information among users, the proposed IS_TDAE model achieves better accuracy by recommending appropriate items according to the current contextual situation of the user.

Table 5 Experimental results–Ciao dataset

Method     P@10    R@10    MAP@10  NDCG@10
U-AutoRec  0.0698  0.0711  0.0215  0.0456
CDAE       0.0869  0.0871  0.0293  0.0511
TDAE       0.0912  0.0965  0.0312  0.0547
TDAE++     0.0943  0.0985  0.0329  0.0563
IS_TDAE    0.1056  0.1093  0.0334  0.0578

Table 6 Experimental results–Epinions dataset

Method     P@10    R@10    MAP@10  NDCG@10
U-AutoRec  0.7655  0.7897  0.0126  0.0297
CDAE       0.0821  0.0863  0.0159  0.0306
TDAE       0.0942  0.0984  0.0166  0.0315
TDAE++     0.0983  0.0995  0.0174  0.0319
IS_TDAE    0.1067  0.1098  0.0185  0.0327


5 Conclusion and Future Work

In this paper, a new form of context-aware recommendation has been proposed. By converting the actual multidimensional rating matrix into a two-dimensional rating matrix using a refined item splitting technique, the proposed approach utilizes all available contextual details for the rating prediction task. Additionally, to overcome the cold start and data sparsity issues, the proposed IS_TDAE model integrates the implicit and explicit trust values together with the context-based rating information of the user to enhance the accuracy of context-aware recommender systems. In future work, several potential improvements could boost the performance of the proposed method. Firstly, other kinds of attributes, such as text feedback and item details, may be used as side or additional information. Secondly, special kinds of autoencoders, such as a sparse or stacked autoencoder, could be utilized in this model. Thirdly, different kinds of neural networks, such as recurrent neural networks (RNN) or convolutional neural networks (CNN), can be utilized to improve the accuracy.

References 1. Goldberg D, Nichols D, Oki BM, Terry D (1992) Using collaborative filtering to weave an information tapestry. Commununications of the ACM 35(12):61–70 2. Balabanovic M, Shoham Y (1997) Fab: Content-based, collaborative recommendation. Communun ACM 40(3):66–72 3. Burke R (2002) Hybrid recommender systems: Survey and experiments. User Model User-Adap Inter 12(4):331–370 4. Adomavicius G, Tuzhilin A (2011) Context-aware recommender systems. In: Recommender systems-handbook, pp 217–253. Springer, Boston, MA 5. Baltrunas L, Ricci F (2009) Context-based splitting of item ratings in collaborative filtering. In: Proceedings of the third ACM conference on recommender systems, pp 245–248 6. Yin H, Cui B (2016) Spatio-temporal recommendation in social media, 1st edn. Springer Publishing Company, Incorporated 7. Scott J (1988) Social network analysis. Sociology 22(1):109–127 8. Ma H (2014) On measuring social friend interest similarities in recommender systems. In: Proceedings of the 37th international ACM SIGIR conference on research & development in information retrieval, New York, NY, USA, pp 465–474 9. Guo G, Zhang J, Yorke-Smith N () TrustSVD: Collaborative filtering with both the explicit and implicit influence of user trust and of item ratings. In: Twenty-Ninth AAAI conference on artificial intelligence, pp 123–129 10. Yang B, Lei Y, Liu D, Liu J (2013) Social collaborative filtering by trust. In: Proceedings of the twenty-third international joint conference on artificial intelligence, Beijing, China, 2013, pp 2747–2753 11. Ma H, Zhou D, Liu C, Lyu MR, King I (2011) Recommender systems with social regularization, in: proceedings of the fourth ACM international conference on web search and data mining, New York, NY, USA, pp 287–296 12. Jamali M, Ester M (2010) A matrix factorization technique with trust propagation for recommendation in social networks. In: Proceedings of the Fourth ACM conference on recommender systems, New York, NY, USA, pp 135–142


13. Abinaya S, Kavitha Devi MK (2021) Enhancing top-N recommendation using stacked autoencoder in context-aware recommender system. Neural Process Lett 53(3):1865–1888 14. Koren Y, Bell R, Volinsky C (2009) Matrix factorization techniques for recommender systems. Computer 42(8):30–37 15. Karatzoglou A, Amatriain X, Baltrunas L, Oliver N (2010) Multiverse recommendation: Ndimensional tensor factorization for context-aware collaborative filtering. In Proceedings of the fourth ACM conference on recommender systems (pp. 79–86). ACM. 16. Baltrunas L, Ludwig B, Ricci F (2011) Matrix factorization techniques for context aware recommendation. In: Proceedings of the fifth ACM conference on recommender systems, pp 301–304. ACM 17. Zou B, Li C, Tan L, Chen H (2015) GPUTENSOR: Efficient tensor factorization for contextaware recommendations. Inf Sci 299:159–177 18. Rendle S, Gantner Z, Freudenthaler C, Schmidt-Thieme L (2011) Fast contextaware recommendations with factorization machines. In: Proceedings of the 34th international ACM SIGIR conference on research and development in information retrieval, pp 635–644 19. Koren Y (2008) Factorization meets the neighborhood: a multifaceted collaborative filtering model. In: ACM SIGKDD international conference on knowledge discovery and data mining, pp 426–434 20. Guo G, Zhang J, Yorke-Smith N (2015) Collaborative filtering with both the explicit and implicit influence of user trust and of item ratings. In: 29th AAAI conference on artificial intelligence. AAAI Press, pp 123–129 21. He KM, Zhang XY, Ren SQ, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, pp 770–778 22. Sedhain S, Menon A K, Sanner S, Xie L X (2015) Autorec: autoencoders meet collaborative filtering. In: Proceedings of the 24th international conference on world wide web, New York, pp 111–112 23. Wu Y, DuBois C, Zheng A X, Ester M (2016) Collaborative denoising auto-encoders for top-n recommender systems. In: Proceedings of the ninth ACM international conference on web search and data mining, New York, pp 153–162 24. Pan YT, He FZ, Yu HP (2019) A novel enhanced collaborative autoencoder with knowledge distillation for top-n recommender systems. Neurocomputing 332:137–148 25. He RN, Lin CB, Wang JG, McAuley JL (2016) Sherlock: sparse hierarchical embeddings for visually-aware one-class collaborative filtering. In: Proceedings of the twenty-fifth international joint conference on artificial intelligence, New York, pp 3740–3746 26. Wang Z J, Yang Y, Hu Q M, He L. An empirical study of personal factors and social effects on rating prediction. In: Pacific-Asia conference on knowledge discovery and data mining, pp 747–758. Springer 27. Rendle S, Balby Marinho L, Nanopoulos A, Schmidt-Thieme L (2009) Learning optimal ranking with tensor factorization for tag recommendation. In: Proceedings of the 15th ACM SIGKDD international conference on knowledge discovery and data Mining, New York, 2009, pp 727–736 28. Phuong TM, Phuong ND (2019) Graph-based context-aware collaborative filtering. Expert Syst Appl 126:9–19 29. Papagelis M, Plexousakis D, Kutsuras T (2005) Alleviating the sparsity problem of collaborative filtering using trust inferences. In: 3rd International conference, iTrust 2005, proceedings DBLP, pp 224–239 30. Wang J, Hu J, Qiao S, Sun W, Zang X, Zhang B (2016) Recommendation with implicit trust relationship based on users similarity. 
In: International conference on manufacturing science and information engineering (ICMSIE), pp 373–378 31. Wang M, Wu Z, Sun X, Feng G, Zhang B (2019) Trust-aware collaborative filtering with a denoising autoencoder. Neural Process Lett 49(2):835–849 32. Strub F, Gaudel R,Mary J (2016) Hybrid recommender system based on autoencoders. In: The workshop on deep learning for recommender systems. ACM, pp 11–16


33. Manning CD, Schütze H, Raghavan P (2008) Introduction to information retrieval. Cambridge University Press 34. Pandian MD (2019) Sleep pattern analysis and improvement using artificial intelligence and music therapy. J Artif Intell 1(02):54–62 35. Bashar A (2019) Survey on evolving deep learning neural network architectures. J Artif Intell 1(02):73–82

Exploration on Content-Based Image Retrieval Methods

M. Suresh Kumar, J. Rajeshwari, and N. Rajasekhar

Abstract In recent years, progress in computer technology and multimedia applications has led to the production of massive numbers of digital images and huge image databases, and these are growing rapidly. There are numerous different areas in which image retrieval plays a critical part, such as medical systems, forensic labs, and tourism promotion. Thus, retrieval of similar images is a challenge. To cope with this rapid growth in digital sources, it is essential to develop content-based image retrieval (CBIR) schemes that can operate on large databases. Intelligent or deep learning approaches to content-based searching are required to answer a search query with the correct visual content in a reasonable amount of time. Several smart methods have been proposed by researchers for effective and robust content-based image retrieval, and with these approaches a number of techniques have been adapted for efficient image retrieval. In this article, a review is presented of the various methods that have been utilized, beginning with image retrieval using visual features, including the Bayesian learning algorithm, self-organizing maps, decision trees, relevance feedback, genetic programming, navigation pattern mining, association-based image retrieval, and artificial neural network (ANN) methodologies. The latest techniques, such as deep learning and ensemble learning methods with larger numbers of layers, presently suit the retrieval of images from large databases best. In this work, the goal is to highlight the efforts of researchers who conducted strong work in this area and to provide a clear perception of intelligent content-based image retrieval methods.

Keywords Image retrieval · Image data store · Texture based · Color based · Shape based · Feature extraction · Artificial neural networks · Deep learning


1 Introduction

Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision methods to image retrieval, that is, the process of searching for digital images in a large image database. Content-based means that the search analyzes the contents of the image rather than the metadata such as keywords, labels, or descriptions connected with the image. The word "content" in this context might denote colors, shapes, textures, or any further evidence that can be obtained from the image itself. CBIR is needed since searches that rely purely on metadata are dependent on annotation quality and completeness (Fig. 1). In content-based image retrieval systems, the most effective and simplest searches are the color-based ones, though these approaches can be enhanced if some pre-processing steps are applied. In the pre-processing procedure, image classification is investigated; in CBIR, image classification has to be computationally fast and efficient. One method is based on image histogram features, whose foremost benefit is the very fast generation and comparison of the employed feature vectors [1]. Various image retrieval techniques have been developed by investigators and researchers, broadly texture-based, color-based, and shape-based retrieval schemes; the most significant and widely utilized image retrieval methods are text-centered image retrieval, content-centered image retrieval, multimodal fusion image retrieval, semantic-grounded image retrieval, and relevance feedback image retrieval [2]. CBIR denotes the ability to retrieve images based on image content. In one proposed work, an approach to CBIR for numerous catalog images relies on human-in-the-loop machine learning and computer vision: expert-level human collaboration is utilized for resolving that aspect of the problem, and machine learning procedures are employed to permit the scheme to be adapted

Fig. 1 Overall system of content-based image retrieval. Source https://en.wikipedia.org/wiki/Content-based_image_retrieval


to different image domains. Experimental outcomes are reported for the domain of high-resolution images of flowers. The results demonstrate the effectiveness of the human-in-the-loop method for image description and the capability of the method to adapt the retrieval process to the image domain through the application of machine learning procedures [3]. Audio-visual content analysis is useful in different real-life computer vision applications, and digital images constitute a major portion of multimedia data. In CBIR and image classification-based models, high-level image representations are expressed in the form of feature vectors that consist of numerical values, and analysis shows that there is an important gap between image feature representation and human visual understanding. One proposed work presents a comprehensive assessment of recent developments in CBIR and image representation, examining the main features of various image retrieval and image representation models from low-level attribute extraction to recent contextual deep learning methods. Significant concepts and main research studies based on CBIR and image representation are discussed in detail, and future research directions are identified to stimulate further investigation in this area [4]. Satellite image classification (SIC) is one of the most important such applications; it remains a challenging task due to the various types of data recovered from satellites, as well as illumination issues and the earth's atmosphere, which have an impact on any kind of SIC application, since they determine what kind of labels can be derived from the satellite imagery. The descriptors mined from satellite data play an important role in SIC training; in order to obtain high-level structures that best describe the content of the image, low-level structures must be combined with machine learning methods [5].
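The histogram-based search mentioned above [1] can be prototyped in a few lines. The following is a minimal illustrative sketch, not the implementation from any cited work: it computes a quantized RGB histogram per image with NumPy and ranks database images by histogram intersection. The bin count and the choice of intersection as the similarity measure are illustrative assumptions.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Quantized RGB histogram of an H x W x 3 uint8 image, normalized to sum to 1."""
    # Map each 0-255 channel value to one of `bins` levels.
    quantized = (image.astype(np.int32) * bins) // 256
    # Combine the three channel levels into a single bin index per pixel.
    idx = quantized[..., 0] * bins * bins + quantized[..., 1] * bins + quantized[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

def rank_database(query: np.ndarray, database: list[np.ndarray]) -> list[int]:
    """Return database indices ordered from most to least similar to the query."""
    q = color_histogram(query)
    scores = [histogram_intersection(q, color_histogram(img)) for img in database]
    return sorted(range(len(database)), key=lambda i: scores[i], reverse=True)
```

The histograms are cheap to precompute for the whole collection, which is exactly the "fast generation and comparison of feature vectors" advantage that the histogram-based approach claims.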

2 Related Work

CBIR provides the means for effective retrieval of images from these enormous image databases. One proposed hybrid feature-based CBIR system is evaluated using numerous objective measures. Spatial domain features, including the color auto-correlogram, color moments, and HSV histograms, together with frequency domain features such as moments over the stationary wavelet transform (SWT) and features from the Gabor wavelet transform, are exploited. Furthermore, to improve precision, binarized statistical image features and the color and edge directivity descriptor are employed in developing a working CBIR system. Numerous distance measures are used for retrieval. The investigations were carried out on the WANG database, which comprises 1000 images from 10 dissimilar groups. Experimental results show that the proposed technique attains improved precision compared with other established methods [6]. In the present period of digital communication, the use of digital


images has increased for expressing, sharing, and interpreting information. While working with digital images, quite regularly it is required to search for a particular image for a specific situation based on the visual contents of the image. This task looks easy when dealing with tens of images, but it becomes more challenging when the number of images grows from a few to a huge number, and the same content-based searching task becomes extremely complex when the number of images is in the millions. To deal with this situation, some intelligent method of content-based searching is required to satisfy the search demand with correct visual content in a reasonable amount of time. Researchers have proposed several clever methods for well-organized and robust content-based image retrieval. One such investigation aims to highlight the efforts of researchers who carried out notable work and to provide a clear perspective on intelligent content-based image retrieval methods [7]. Remote sensing applications have received substantial research attention lately. Regardless of the appearance of numerous satellite image applications, there is a continuous demand for satellite image classification schemes. A satellite image classification system aims to discriminate between the objects present in the image. This is exceedingly challenging since the coverage area of the satellite is large, so the objects appear correspondingly small, which makes the process of object discrimination complex. Moreover, classification correctness is a significant factor that the classification scheme must achieve. Taking these challenges into account, one work presents a satellite image classification scheme that can distinguish among plants, land, and water bodies. The goal is met by dividing the mechanism into three significant stages: satellite image preprocessing, feature extraction, and classification. The preprocessing stage denoises the image with a known filtering mechanism, and the contrast is enhanced by the Contrast Limited Adaptive Histogram Equalization (CLAHE) method. Since the satellite image retains many significant characteristics, numerous features such as first-order statistics, GLCM, and LWT features are extracted. The feature vector is constructed from these features and an SVM is trained. The performance of the proposed method is observed to be acceptable in terms of sensitivity, specificity, and accuracy [8]. Image classification is the practice of assigning land cover classes to pixels; for instance, classes include water, urban, forest, agriculture, and parkland [9]. With the development and expansion of multimedia technology in the domain of CBIR, numerous advanced information retrieval systems have become prevalent and have brought new advances in fast and effective retrieval. One proposed work deliberates and compares methods of picture classification in CBIR, and also presents SVM and a Bayesian classifier for precise and efficient retrieval of images [10]. Another work explores a comparison among soft computing-based iterative image fusion methods through various quality evaluation parameters. Experimental results obtained from the proposed technique demonstrate that iterative image fusion using the fuzzy approach and iterative image fusion using neuro-fuzzy logic can efficiently preserve subtle information while increasing the spatial information for various applications [11]. Pixel-level image fusion using the fuzzy method for different applications


has been presented. All the results obtained and discussed with this technique concern the same scene. In order to assess the outcomes and compare the approaches, evaluation criteria and evaluation metrics are employed. The experimental outcomes demonstrated that the proposed method gives a substantial enhancement in the performance of the fusion scheme [12]. A fuzzy-based method has also been developed to merge images from different devices in order to improve visualization, along with a comparison between DWT-based image fusion and fuzzy-based fusion using various quality assessment parameters. Experimental outcomes show that the proposed technique can efficiently preserve the spectral information while improving the spatial resolution of remote sensing images [13].

2.1 Text Centered Image Retrieval

Text-based image retrieval is also called description-based image retrieval. It is used to retrieve XML documents containing images based on the textual information associated with them for a specific multimedia query. To overcome the limitations of CBIR, TBIR represents the visual content of images by manually assigned keywords/labels.

2.2 Content Centered Image Retrieval

In content-based image retrieval, images are analyzed and retrieved based on the similarity of their visual contents to a query image, using features of the image. A feature extraction module is used to extract low-level image features from the images in the collection. Typically, the extracted image features comprise color, texture, and shape (Fig. 2).

2.3 Multimodal Fusion Image Retrieval

Here, retrieval combines data fusion and machine learning procedures. Data fusion, also known as the combination of evidence, is a method of reconciling multiple sources of evidence. Employing multiple modalities can exploit the skimming effect, the chorus effect, and the dark horse effect.


Fig. 2 Different image retrieval approaches: text centered, content centered, multimodal fusion, semantic grounded, and relevance feedback image retrieval

2.4 Semantic Grounded Image Retrieval

Image retrieval based on the contextual meaning of images is currently being explored by numerous investigators. This is one of the efforts to close the semantic gap problem. Leveraged knowledge-based image retrieval has been applied to generate contextual labels from a set of comparable pictures, in order to develop a technique for annotating 3D computed tomography images of the liver. Improved SVM and KNN searches have been proposed for this purpose, and the technique attained higher annotation precision than other established approaches [16] (Fig. 3).

3 Proposed Model for Content-Based Image Retrieval

The proposed method is carried out in the steps given below:

1. Image acquisition: The multispectral image is acquired in MATLAB using its digital image processing toolbox, where the data is stored in separate matrices of multiple layers (with intensity values between 0 and 255).
2. Feature extraction: Features of the images are extracted using GLCM, PCA, and DWT, a neural network is used for prediction, and the similarity is found using all the comparison parameters; a sketch of this step is given after the list below. Principal component analysis (PCA) is a mathematical tool to extract specific structures, known as eigenfaces, from image data. In this work, wavelets are used as raw data to acquire the principal components.


Fig. 3 Block diagram for the image retrieval system: image → feature extraction → training and testing → neural network → retrieved images

3. Training the model:
(a) Comprehend how an ANN is trained using the perceptron learning rule.
(b) Elucidate the application of the Adaline rule in training an ANN.
(c) Designate the procedure of minimizing cost functions using the gradient descent rule.
(d) Analyze how the learning rate is adjusted to make an ANN converge.
(e) Discover the layers of an artificial neural network (ANN).
4. Testing the model:
(a) The resulting classifier is applied to unlabeled images to resolve whether each image belongs to the positive or the negative group.
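To make the feature-extraction and training steps concrete, the following is a minimal, self-contained sketch, not the authors' implementation: it computes a small set of GLCM texture features with plain NumPy and trains a simple linear classifier by gradient descent on the resulting feature vectors. The quantization level, co-occurrence offset, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def glcm_features(gray: np.ndarray, levels: int = 8) -> np.ndarray:
    """Contrast, energy, and homogeneity from a GLCM with offset (0, 1)."""
    q = (gray.astype(np.int32) * levels) // 256          # quantize to `levels` gray levels
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # horizontal co-occurrences
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

def train_linear_classifier(X: np.ndarray, y: np.ndarray,
                            lr: float = 0.1, epochs: int = 200) -> np.ndarray:
    """Gradient descent on logistic loss; y holds 0/1 labels. Returns weights [w | b]."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])        # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))                # sigmoid predictions
        w -= lr * Xb.T @ (p - y) / len(y)                # average gradient step
    return w

def predict(w: np.ndarray, X: np.ndarray) -> np.ndarray:
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w > 0).astype(int)                      # 1 = positive class
```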

Table 1 summarizes a wide-ranging review of neural network-based and other image retrieval methods, a significant part of content-based image retrieval. Three main groups of advanced methods, namely supervised learning, unsupervised learning, and relevance feedback approaches, aimed at decreasing the gap between visual features and image concepts, have been examined. Moreover, numerous recommendations have been made based on the applications, advantages, limitations, and final outcomes of the present approaches (Table 1).


Table 1 Assessment of neural network-based and other image retrieval methods

Application: NN-based image retrieval using Symlet transforms. Advantages: Performance enhancement. Limitations: Absence of algorithm details. Results: Can be a good candidate for modern CBIR systems.

Application: Intelligent object recognition and retrieval system. Advantages: Highly flexible. Limitations: Difficult to implement all properties of the human visual system. Results: A reputable method for image retrieval.

Application: Use of self-organizing neural networks for image retrieval. Advantages: Enhanced efficiency. Limitations: Complexity. Results: Self-organizing NNs can be efficiently utilized in image retrieval applications.

Application: Image retrieval using graph-based segmentation. Advantages: Ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions. Limitations: Algorithm details are not comprehensible. Results: Implementation is problematic because of the lack of algorithm details.

Application: Image retrieval technique for 3D model collections. Advantages: 3D model image retrieval. Limitations: 3D data is quite difficult to handle. Results: A good method for working with 3D model images.

Application: Multi-instance scheme for image classification. Advantages: User-attention-centered classification by an artificial agent. Limitations: Training on different datasets can take different amounts of time. Results: A good learning framework to classify images against a user query.

Application: Cubic splines-based image retrieval. Advantages: Correctness with improved performance. Limitations: Database size; not for video. Results: The system can be a good candidate for image retrieval.

Application: Online image retrieval system. Advantages: Implemented on the web and tested in detail. Limitations: Numerous search processes make the system complex. Results: From a user's perspective, a scheme with tested results is always better than systems with fewer analyses.

Application: Content-based image retrieval with multiple structures of interest. Advantages: Supports multiple structures of interest and retrieval. Limitations: Images of human structures can differ in outline over time; thus, the system is very complex to implement. Results: Overall a respectable system, but very difficult to realize.

Application: Adaptive learning of similarity matching attributes. Advantages: The system's matching procedure changes at run time for better relevance. Limitations: Unsuitable retrieval measures can put the system in an improper matching state. Results: A practical method that still needs performance improvement.

Application: Agent-based interleaved image retrieval. Advantages: A reduced search space makes the interleaved retrieval even quicker. Limitations: Coordination of the agents produces an organizational overhead. Results: With enhanced coordination, the method's performance can be increased.

Application: Dynamic updating of similarity matching criteria. Advantages: The search algorithm adapts its matching standards at run time. Limitations: No control over the search features. Results: Human participation slows the retrieval procedure and makes it a non-automated scheme.

Application: Dynamic incorporation of user search criteria. Advantages: An intelligent UI decreases human participation time. Limitations: Complexity due to the dynamic nature. Results: The system can be automated if human participation is not compulsory, which makes the method more practical.

Application: Art image retrieval. Advantages: Images with high visual semantics are easily retrieved. Limitations: The inclusion of semantic variables requires human participation. Results: A good method for retrieving art images.

Application: Image classification and retrieval system. Advantages: A condensed search universe gives relevance as well as efficiency. Limitations: Image classification can take an extended period if the database is enormous. Results: A contracted window for searching an image makes the system logically intelligent.

Application: Content-based image retrieval system integrating wavelet-based image sub-blocks with dominant colors and texture analysis. Advantages: The proposed techniques outperform other retrieval schemes in terms of average precision and average recall. Limitations: Need to reduce the semantic gap between the local features and the high-level user semantics to achieve higher accuracy. Results: An integrated matching scheme based on the Most Similar Highest Priority (MSHP) principle is used to compare the query and database images.

Application: Effective content-based image retrieval: combination of quantized histogram texture features in the DCT domain. Advantages: A combination of texture features results in effective image retrieval. Limitations: The task needs more time as the database size grows. Results: The proposed method produces results showing that the combination of numerous quantized histogram features gives good retrieval performance compared with using a single or smaller set of texture features.


4 Enactment Assessment of Image Retrieval

Evaluating the efficiency of an image retrieval procedure is challenging: it comes down to how well the search procedure can retrieve images similar to a given input query image. The central question is how to decide whether a retrieved image is relevant. Whether two images are similar depends purely on the user's perception; even if different users offer different opinions in the same situation, human perception can easily discern the similarity between similar inputs. Two evaluation parameters, recall and precision, are used here to assess the efficiency of this feature-based image retrieval tool.

(a) Recall

Recall measures the ability of a system to present all relevant items. It is computed as:

$$\text{Recall} = \frac{\text{Number of retrieved images that are also relevant}}{\text{Total number of relevant images}}$$

Recall answers the question: how close am I to retrieving all good matches?

(b) Precision

Precision measures the ability of a system to present only relevant items. It is computed as:

$$\text{Precision} = \frac{\text{Number of retrieved images that are also relevant}}{\text{Total number of retrieved images}}$$

Precision answers the question: how close am I to retrieving only good matches?

Instead of using a single feature for image retrieval, a combination of these features should be used to increase the retrieval capability. Retrieval efficiency and effectiveness can be further improved if the image collection is pre-sorted and clustered using machine learning techniques. Images with high similarity in the feature space will be clustered together and will result in a smaller search space. This will significantly improve the search time and precision.
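As a concrete illustration of these two measures, the following is a small sketch (not tied to any particular system in this survey) that computes precision and recall for a ranked retrieval result, given the set of images known to be relevant to the query. The cutoff parameter k is an illustrative assumption.

```python
def precision_recall(retrieved: list[str], relevant: set[str], k: int | None = None):
    """Precision and recall for a ranked list of retrieved image IDs.

    retrieved: image IDs ordered by decreasing similarity to the query.
    relevant:  IDs of all images relevant to the query (ground truth).
    k:         optional cutoff; if given, only the top-k results are scored.
    """
    results = retrieved[:k] if k is not None else retrieved
    hits = sum(1 for img_id in results if img_id in relevant)
    precision = hits / len(results) if results else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 3 of the top-5 results are relevant, out of 4 relevant images overall.
p, r = precision_recall(["a", "b", "c", "d", "e"], {"a", "c", "e", "z"}, k=5)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.60, recall=0.75
```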

5 Conclusion

Previous research on content-based image retrieval has ranged from various conventional methods of image depiction to standard machine learning methods in numerous


domains. The improvement of feature representation in terms of feature dimensions will provide a solid basis for classification-based model information while avoiding issues such as overtraining. Current research on CBIR has shifted to the practice of deep learning, which has shown good outcomes on numerous image collections and has outstripped hand-crafted features, subject to the state of refinement of the system. The surveyed techniques are utilized to retrieve information from catalog images and to provide better accuracy for a given query image. This study will assist scientists working in this field in evaluating images by identifying similar images in a large number of documents, which may help with decision support. The performance of such systems is analyzed based on the parameters accuracy, precision, and error. Large image collections and powerful processing machines are the core necessities for any deep learning system, and creating large-scale labeled image sets for training a deep network is a difficult and time-consuming process. Consequently, the performance assessment of ANNs and deep, advanced, and intelligent learning on huge databases of unlabeled images in an unsupervised setting is likewise one of the conceivable upcoming research directions in this area. Additionally, it is conceivable to improve the retrieval performance using other machine learning procedures along with a reduction in the retrieval time. These methods may be useful for multi-dimensional searching and mining of objects in the coming years, as well as tools for examining different images and hyperspectral satellite images.

References

1. Sergyan S (2008) Color histogram features based image classification in content-based image retrieval systems. In: International symposium on applied machine intelligence and informatics
2. Shubhankar Reddy K, Sreedhar K (2016) Image retrieval techniques: a survey. Int J Electron Commun Eng 9:19–27
3. Vijaya Arjunan R, Vijaya Kumar V (2009) Image classification in CBIR systems with color histogram features. In: International conference on advances in recent technologies in communication and computing
4. Latif A, Rasheed A, Sajid U, Ahmed J, Ali N, Ratyal NI, Zafar B, Dar SH, Sajid M, Khalil T (2019) Content-based image retrieval and feature extraction: a comprehensive review. Math Probl Eng
5. Altaei MSM, Ahmed SM (2018) Satellite image classification using multi features based descriptors. Int Res J Adv Eng Sci 3(2):87–94
6. Mistry Y, Ingole DT, Ingole MD (2018) Content based image retrieval using hybrid features and various distance metric. J Electr Syst Inf Technol 5(3):874–888
7. Yasmin M, Mohsin S, Sharif M (2014) Intelligent image retrieval techniques: a survey. J Appl Res Technol 12(1):87–103
8. Dixit A, Hedge N, Reddy B (2017) Texture feature based satellite image classification scheme using SVM. Int J Appl Eng Res 12:3996–4003
9. https://gisgeography.com/image-classification-techniques-remote-sensing/
10. Lal N, Gupta N, Sinhal A (2012) A review of image classification techniques in content based image retrieval. Int J Comput Sci Inf Technol 3(5):5182–5184


11. Rao DS, Seetha M, Hazarath M (2012) Iterative image fusion using neuro fuzzy logic and applications. In: Proceedings of the 2012 international conference on machine vision and image processing (MVIP), Taipei, China, 14–15 Dec 2012, pp 121–124
12. Dammavalam S, Maddala S, Krishna Prasad MHM (2011) Quality evaluation measures of pixel-level image fusion using fuzzy logic. Springer, Berlin, Heidelberg, vol 7076, pp 485–493
13. Dammavalam S, Maddala S, Krishna Prasad MHM (2011) Quality evaluation measures of pixel-level image fusion using fuzzy logic. In: International conference on swarm, evolutionary, and memetic computing, pp 485–493
14. Marshall AM, Gunasekaran S, A survey on image retrieval methods
15. Ahmed G, Barskar R (2011) A study on different image retrieval techniques in image processing. Int J Soft Comput Eng 247–251
16. Kumar A, Dyer S, Kim J, Li C, Leong PHW, Feng D (2016) Adapting content-based image retrieval techniques for the semantic annotation of medical images. Computerized Med Imaging Graphics 49:37–45

Forward and Control Approach to Minimize Delay in Delay Tolerant Networks

Sudhakar Pandey, Nidhi Sonkar, and Danda Pravija

Abstract Delay-tolerant networks (DTNs), popularly called intermittently connected networks, are used in many real-time challenging environments, such as during natural calamities, where the Internet cannot be used. Buffer management and routing are significant issues in delay-tolerant networks. Although many routing algorithms have been developed recently, most of them do not include a buffer management scheme. In the proposed work, a new approach for buffer management termed the forward and control technique is presented, which performs irrespective of the underlying routing algorithm. The main aim of this approach is to increase the delivery rate and to reduce the delay. This approach is evaluated against three popularly known protocols. Results show that the forward and control technique achieves a delivery rate of 85% with a delay of 4000 s when the buffer size is 50 MB.

Keywords Delay-tolerant networks (DTNs) · Buffer management · ONE simulator · Relay selection · Delay

1 Introduction

Delay-tolerant networks (DTNs) are a kind of challenging network, evolving day-to-day, that exhibits attributes diverse from other networks, such as large delays and intermittent connectivity; an end-to-end path between the root node and the target node may not exist. Delay-tolerant networks are popularly used in challenging and difficult environments where there is no assurance of continuous network connectivity. Sensor networks, interplanetary networks, military ad hoc networks, and networks operating in extreme terrestrial conditions and during natural calamities are examples of delay-tolerant networks [1]. The main issues in DTNs are buffer management and routing. Although there are many routing protocols, they generally lack a proper buffer management scheme. In general, due to the random mobility or movement of the nodes, it is difficult to schedule and store messages in a buffer [2]. Most of the routing

S. Pandey · N. Sonkar · D. Pravija (B)
Department of Information Technology, National Institute of Technology Raipur, Raipur, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_6


protocols store messages in the buffer, hold a message while the node moves, and forward it to other nodes when they are encountered. This in turn leads to the storage of an extreme number of copies of the messages, which causes congestion and buffer overflow; thus, the performance of delay-tolerant networks suffers due to improper management of the buffer. Hence, it is necessary to use efficient buffer management schemes along with routing protocols in order to enhance the delivery rate and to guarantee reliability. Here, a new approach for buffer management termed the forward and control technique is proposed. The main purpose of this approach is to increase the delivery ratio, decrease the delay, and control buffer overflow by utilizing a minimum number of resources. This is achieved by considering a few conditions while forwarding a message. Further, this approach is evaluated using the ONE simulator, which is popularly used for testing this type of approach and protocol [3], and compared with some popularly known protocols such as Max-Prop, spray and wait, and PRoPHET. In the future, this work can be extended by adding other parameters and soft computing techniques [4], considering the energy consumption [5] of the protocol as one important parameter. The next section reviews some other buffer management schemes and protocols; the later sections present the proposed approach, followed by the simulation results and the conclusion.

2 Related Work

Buffer management technology is an approach that uses and manages various resources in different conditions [6]. These techniques are formulated to decide which messages need to be forwarded and which need to be dropped when there is a buffer overflow. A few buffer management techniques are listed below; a sketch of these drop policies follows the list.

(a) DO, drop oldest: In this technique, the message with the least remaining lifetime (time to live) is dropped [7].
(b) DF, drop front: This technique is based on first in first out (FIFO); the message that entered the buffer first is deleted first [8].
(c) DLS, drop largest size: The message that has the largest size is dropped first [9].
(d) DL, drop last: In this technique, the most recently received message is dropped.
(e) DY, drop youngest: The message with the largest remaining lifetime (TTL) is deleted [10].
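A compact way to see how these policies differ is to express each one as a rule that selects the victim message from a full buffer. The following is a minimal sketch under assumed message fields (arrival order, size, and remaining TTL); it is illustrative and is not code from the cited papers or from the ONE simulator:

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    size: int             # bytes
    arrival_seq: int      # smaller = entered the buffer earlier
    remaining_ttl: float  # seconds until the message expires

# Each policy picks the message to drop when the buffer is full.
DROP_POLICIES = {
    "DO":  lambda buf: min(buf, key=lambda m: m.remaining_ttl),  # drop oldest (least TTL left)
    "DF":  lambda buf: min(buf, key=lambda m: m.arrival_seq),    # drop front (FIFO)
    "DLS": lambda buf: max(buf, key=lambda m: m.size),           # drop largest size
    "DL":  lambda buf: max(buf, key=lambda m: m.arrival_seq),    # drop last (newest arrival)
    "DY":  lambda buf: max(buf, key=lambda m: m.remaining_ttl),  # drop youngest (most TTL left)
}

def make_room(buffer: list[Message], needed: int, capacity: int, policy: str) -> None:
    """Drop messages according to `policy` until `needed` bytes fit in `capacity`."""
    while sum(m.size for m in buffer) + needed > capacity and buffer:
        buffer.remove(DROP_POLICIES[policy](buffer))
```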

Here, some of the significant protocols such as epidemic, spray and wait, Max-Prop, direct delivery, and max delivery were studied.

Epidemic [11] is a flooding-based routing protocol in which every node aims to transmit its messages to all encountered nodes until the message reaches the destination. One of the important and significant


resources for epidemic routing is the buffer size. Research [11] has shown that the inclusion of an efficient buffer management policy can improve this protocol's delivery ratio. Spray and wait [12] is an L-copy flooding protocol: a node transmits L/2 of its message copies to an encountered neighboring node and waits. If the message is transferred to the target node, the transmission is done; if the message does not reach the destination node, the node again transfers L/2 of its copies to the next encountered node. Max-Prop [13] is also a flooding-based protocol that schedules both the packets to be dropped, on the basis of their delivery likelihood, and the packets to be transmitted, on the basis of their hop count. Direct delivery [14] is a forwarding-based routing protocol in delay-tolerant networks; the message packet is transmitted directly to the destination node, which avoids relaying and replicating messages. Max delivery [15] is an approach to buffer management in DTNs that aims to increase the delivery ratio and reduce the use of resources. Its objective is to raise the message delivery ratio of the network by implementing forwarding, cleaning, and dropping components. In the forwarding component, each node has three significant queues with whose help the transmission of messages takes place. The dropping component drops messages from the queue when new messages are waiting to enter. In the cleaning component, messages that have reached the destination are permanently removed. In general, DTN protocols [16] are strongly dependent on underlying characteristics such as the random mobility and movement of the nodes. Hence, in order to evaluate these protocols, a proper simulation tool is needed; this can be done using the ONE simulator [3], which is designed to evaluate DTN protocols. In this way, the environment is simulated in the ONE simulator in order to obtain accurate results.

3 Proposed Methodology

The proposed method aims to send messages as fast as possible from the root node to the target node using a limited number of resources while, at the same time, controlling buffer overflow. Reaching the target node as fast as possible in turn results in a decrease in the delay. To achieve this, the proposed method is implemented using the forward and control technique. The flowchart of the proposed work is shown in Fig. 1.


Fig. 1 Flowchart for forward and control technique



3.1 Forward and Control Technique

The forward and control technique is implemented by following a set of significant rules; a sketch of the forwarding decision appears after the list.

1. When a node transmits from its buffer, the message at the top of the buffer is forwarded.
2. The root node (or any node) will forward a message to a neighboring (encountered) node if and only if the encountered node's buffer has the capacity to store the message.
3. During transmission, whenever a node encounters another node, both share information regarding their memory size (sharable information). In this way, the encountered nodes calculate the remaining buffer size by subtracting the occupied buffer size from the total buffer size.
4. Each node maintains a table of the remaining buffer size available at every instant of time, named the RB vector table.
5. The remaining buffer size (RB) table stores a key-value pair for every neighboring node, mapping each node to its available remaining buffer size.
6. If the encountered node's buffer is full, the node transmits the message to the neighboring node that has the maximum remaining buffer capacity.
7. If the time to live (TTL) of a message expires at a node, the message is deleted.
8. If a copy of the message is lost during transmission due to network issues, another copy of the message is delivered; once a copy reaches the destination, the destination node sends an acknowledgment that the message has been received, and all the copies of the message held by the root node are deleted.
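The following is a minimal sketch of rules 2-6, not the authors' simulator code: each node keeps an RB vector table mapping neighbor IDs to their last-reported free buffer space, and the forwarding decision falls back to the neighbor with the most free space when the encountered node cannot hold the message. All field and function names are illustrative assumptions.

```python
class Node:
    def __init__(self, node_id: str, capacity: int):
        self.node_id = node_id
        self.capacity = capacity                   # total buffer size in bytes
        self.buffer: list[tuple[str, int]] = []    # (message_id, size), index 0 = top
        self.rb_table: dict[str, int] = {}         # neighbor id -> last-known free space

    def free_space(self) -> int:
        return self.capacity - sum(size for _, size in self.buffer)

    def exchange_buffer_info(self, other: "Node") -> None:
        """Rules 3-5: on contact, both nodes update their RB vector tables."""
        self.rb_table[other.node_id] = other.free_space()
        other.rb_table[self.node_id] = self.free_space()

    def choose_next_hop(self, encountered: "Node", msg_size: int) -> str | None:
        """Rules 2 and 6: prefer the encountered node if it can store the message;
        otherwise pick the known neighbor with the maximum remaining buffer."""
        self.exchange_buffer_info(encountered)
        if self.rb_table[encountered.node_id] >= msg_size:
            return encountered.node_id
        candidates = {n: rb for n, rb in self.rb_table.items() if rb >= msg_size}
        return max(candidates, key=candidates.get) if candidates else None
```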

As shown in Fig. 2, #4 represents the number of copies of the message present at the root node A, and #50 represents the size of the message. T0, T1, T2, and T3 are time instances; B, C, D, and E are intermediate nodes; and P is the destination of the message. Along the transmission path, node A encounters node B, node B encounters nodes C and D, node C encounters nodes B and E, node D encounters nodes B and P, and node E encounters nodes C and P.

Fig. 2 Steps of our approach

(a) At time instance T0, node A has a message that needs to be delivered to node P. Node A finds node B in its transitive path and, according to the proposed methodology, each node has a table that stores the remaining buffer size of the encountered nodes.
(b) At time instance T1, with the help of this table, node A finds that node B's buffer capacity is not sufficient to store the message. Hence, node A forwards the message to node C (a neighboring node of B), which has the maximum buffer capacity at that instance of time. In this way, a copy of the message is delivered to node C, and node A now holds three copies of that message; the remaining buffer size of node A increases as one of the copies is forwarded, and the remaining buffer size of node C decreases.
(c) At time instance T2, node C transmits the message to node E, and node E receives the message.
(d) At time instance T3, node E forwards the message to the target node P. As the message is delivered, node P sends an acknowledgment that it has received the message, and all the remaining copies held by node A are deleted.

4 Results Analysis

For the simulation of the delay-tolerant network, the opportunistic network environment (ONE) simulator is used for this protocol. Table 1 lists the parameters used in the simulation of the protocol. Two conditions are taken into consideration to check the performance of the protocol.

A. Number of Nodes

Figure 3 depicts the bar graph of delivery ratio versus the number of nodes. The number of nodes ranges from 10 to 50; as the count of nodes increases from 10 to 50, the delivery ratio of the network increases.

Table 1 Experiment parameters

Message size: 250 kB–1 MB
Buffer size: 10–50 MB
Transmission range: 100 m
Time to live: 300 min
Number of nodes: 10–50
Simulation time: 12 h
Mobility model: Shortest path map
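For readers reproducing the experiment, the parameters of Table 1 translate almost directly into a ONE simulator settings file. The fragment below is a hedged sketch that follows the key-naming conventions of ONE's default_settings.txt; the scenario name and the router class FNCRouter stand in for the proposed scheme and are assumptions, not names from the paper.

```
# Sketch of a ONE simulator settings fragment matching Table 1.
Scenario.name = forward-and-control-test
Scenario.endTime = 43200                  # 12 h, in seconds

Group.movementModel = ShortestPathMapBasedMovement
Group.router = FNCRouter                  # hypothetical router implementing the scheme
Group.bufferSize = 50M                    # varied 10M-50M across runs
Group.msgTtl = 300                        # minutes
Group.nrofHosts = 50                      # varied 10-50 across runs
Group.nrofInterfaces = 1
Group.interface1 = btInterface
btInterface.type = SimpleBroadcastInterface
btInterface.transmitRange = 100           # meters

Events.nrof = 1
Events1.class = MessageEventGenerator
Events1.size = 250k,1M                    # message size range
Events1.prefix = M
```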

Fig. 3 Delivery ratio versus nodes

Figure 4 depicts the bar graph of delay versus the number of nodes. The number of nodes ranges from 10 to 50; as the count of nodes increases from 10 to 50, the delay in the network decreases.

B. Buffer Size

Figure 5 depicts the bar graph of delivery ratio versus buffer size. The buffer size ranges from 10 to 50 MB; as the buffer size increases from 10 to 50 MB, the delivery ratio of the network also increases. Figure 6 depicts the bar graph of delay versus buffer size over the same range; as the buffer size increases from 10 to 50 MB, the forward and control protocol shows the least delay among the compared protocols. Figure 7 shows the API model of the proposed work.


Fig. 4 Delay versus nodes

Fig. 5 Delivery ratio versus buffer size

5 Conclusion

In the proposed work, a new approach named the forward and control technique for buffer management in delay-tolerant networks is suggested. The aim is to deliver messages as fast as possible from the root node to the target node and to control buffer overflow by utilizing a minimum number of resources. Additionally, the approach is assessed against some notable protocols, for example PRoPHET, spray and wait, and Max-Prop, by considering various boundaries and different metrics from the technical literature. Results show that the approach achieves a successful delivery rate of 85%, which is greater than that of the other significant protocols such as PRoPHET, spray and wait, and Max-Prop, with a delay of 4000 s when the buffer size is 50 MB. In the future, the challenge is to add different types of parameters to minimize the delay time


Fig. 6 Delay versus buffer size

Fig. 7 API model of our proposed work

and increase the message delivery ratio while using fewer resources. The energy consumption of the protocol will also be considered. Self-learning technologies like machine learning and artificial intelligence can also be applied to obtain the best parameters for the approach, giving maximum performance. To obtain more accurate results, the approach can be evaluated by applying it in the physical world.


References

1. McMahon A, Farrell S (2009) Delay- and disruption-tolerant networking. IEEE Internet Comput 13(6):82–87. https://doi.org/10.1109/MIC.2009.127
2. Ezife F, Li W, Yang S (2017) A survey of buffer management strategies in delay tolerant networks. In: 2017 IEEE 14th international conference on mobile ad hoc and sensor systems (MASS), Orlando, FL, pp 599–603. https://doi.org/10.1109/MASS.2017.103
3. Keränen A, Ott J, Kärkkäinen T (2009) The ONE simulator for DTN protocol evaluation
4. Haoxiang W, Smys S (2020) Soft computing strategies for optimized route selection in wireless sensor network. J Soft Comput Paradigm (JSCP) 2(01):1–12
5. Anand JV (2020) Trust-value based wireless sensor network using compressed sensing. J Electron 2(02):88–95
6. Jain S, Chawla M (2014) Survey of buffer management policies for delay tolerant networks. J Eng 3:117–123. https://doi.org/10.1049/joe.2014.0067
7. Rashid S, Ayub Q, Zahid MSM et al (2013) Message drop control buffer management policy for DTN routing protocols. Wireless Pers Commun 72:653–669
8. Grundy AM (2012) Congestion control framework for delay-tolerant communications. PhD thesis, University of Nottingham, July 2012
9. Krifa A, Barakat C, Spyropoulos T (2008) Optimal buffer management policies for delay tolerant networks. In: 2008 5th annual IEEE communications society conference on sensor, mesh and ad hoc communications and networks, San Francisco, CA, pp 260–268. https://doi.org/10.1109/SAHCN.2008.40
10. Mundur P, Seligman M, Lee G (2008) Epidemic routing with immunity in delay tolerant networks. In: MILCOM 2008, IEEE military communications conference, San Diego, CA, pp 1–7. https://doi.org/10.1109/MILCOM.2008.4753334
11. Vahdat A, Becker D (2000) Epidemic routing for partially-connected ad hoc networks. Accessed 01 Dec 2020 (Online)
12. Spyropoulos T, Psounis K, Raghavendra CS (2005) Spray and wait: an efficient routing scheme for intermittently connected mobile networks. In: Proceedings of the ACM SIGCOMM 2005 workshop on delay-tolerant networking, WDTN 2005, pp 252–259. https://doi.org/10.1145/1080139.1080143
13. Burgess J, Gallagher B, Jensen D, Levine BN (2006) MaxProp: routing for vehicle-based disruption-tolerant networks. https://doi.org/10.1109/INFOCOM.2006.228
14. Oda T, Elmazi D, Spaho E, Kolici V, Barolli L (2015) A simulation system based on ONE and SUMO simulators: performance evaluation of direct delivery, epidemic and energy aware epidemic DTN protocols. In: 2015 18th international conference on network-based information systems, Taipei, pp 418–423. https://doi.org/10.1109/NBiS.2015.64
15. Nănău C-Ş (2020) MaxDelivery: a new approach to a DTN buffer management. In: 2020 IEEE 21st international symposium on "a world of wireless, mobile and multimedia networks" (WoWMoM), Cork, Ireland, pp 60–61. https://doi.org/10.1109/WoWMoM49955.2020.00023
16. Li Y, Hui P, Jin D, Chen S (2015) Delay-tolerant network protocol testing and evaluation. IEEE Commun Mag 53(1):258–266. https://doi.org/10.1109/MCOM.2015.7010543

Human Face Recognition Using Eigenface, SURF Method

F. M. Javed Mehedi Shamrat, Pronab Ghosh, Zarrin Tasnim, Aliza Ahmed Khan, Md. Shihab Uddin, and Tahmid Rashik Chowdhury

Abstract Identification using face biometrics is a complicated and exciting problem in computer vision and pattern recognition. Facial recognition is one application of biometrics, used in video inspection, biometric authentication, surveillance, and so on. Many techniques for detecting facial biometrics have been studied in the past three years. However, factors such as shifting lighting, the landscape, the nose being farther from the camera, the background being farther from the camera causing blurring, and the noise present render the previous approaches inadequate. To address these problems, numerous works with sufficient clarification of this research subject are introduced in this paper. This paper analyzes the multiple methods researchers use in their various studies to solve the different types of problems faced during facial recognition. A new technique is implemented to project the feature space onto an abstract component subset. Principal component analysis (PCA) is used to analyze the features, and using the speeded up robust features (SURF) technique and eigenfaces, identification and matching are done, respectively. Thus, improved accuracy and an almost similar recognition rate are obtained from the acquired research results based on the facial image dataset taken from the ORL database.

Keywords Human face detection · Face recognition · Facial detection · Eigenface · Methods · SURF

F. M. Javed Mehedi Shamrat (B) · Z. Tasnim
Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
P. Ghosh · A. A. Khan · Md. S. Uddin
Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
T. R. Chowdhury
Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Bangladesh
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_7


1 Introduction

Face recognition is an optical pattern recognition problem [1]. Given a random image as input, a face recognition system explores the database and identifies the person as output. Usually, a face identification system contains four components [2], as shown in Fig. 1: detection, alignment, feature extraction, and matching. The refining steps of localization and normalization (face detection and alignment) are performed before face identification (extraction and matching) is done [3]. Face detection separates the facial region from the background, while face tracking equipment [4] is used to trace recognized faces in a video. Face alignment focuses on acquiring a more precise localization and thus on normalizing faces, whereas face detection gives only a rough estimate of each detected face's position and scale. Facial elements such as the eyes, mouth, nose, and facial outline are located [5]. Using these location points, the input face image is standardized with respect to geometrical properties such as pose and size using geometrical transforms, and the face then undergoes further normalization with respect to photometrical properties such as illumination and grayscale. After standardizing the face geometrically and photometrically, feature extraction is executed to provide the information that helps differentiate between the faces of various persons and that is stable with respect to the geometrical and photometrical variations [6]. In face matching, the feature vector obtained from the input face is compared with those of the registered faces in the database. If a match is found with sufficient confidence, the system outputs the identity of the face; if no match is found, the face is declared unknown. Artificial intelligence has been used effectively to solve signal processing problems for 20 years [7]. Researchers have suggested various models of artificial neural networks, and it is a tough task to figure out the most reliable neural network model for solving real-life problems. The following building blocks typically comprise facial recognition systems: • Face detection. The first and most important stage of facial recognition is face detection, which is used to recognize faces in photographs. It is a special case of object detection. A face detector locates the faces within an image and, if any face is found, returns the coordinates of a bounding box for each of them. This is depicted in Fig. 2a. • Face alignment. The objective of face alignment is to scale and crop facial images in a uniform way, using a group of reference points located at fixed positions in the image. A group of facial landmarks is located in this method using a landmark detector, and we look for the optimal affine transformation fitted to the reference

Fig. 1 Infrastructure of a face recognition system


Fig. 2 a Face detector. b Aligned faces and reference points

Fig. 3 Flow diagram of PCA approach

points for 2D alignment. Two facial images aligned using the same reference points are shown in Fig. 2b. Face frontalization (changing the pose of a face to frontal) can be implemented by other, 3D-based alignment algorithms, e.g., [8]. • Feature extraction: When representing the face, the pixel values of a facial image are transformed into a compact and distinctive feature vector called a template. Logically, each face of the same individual should map to similar feature vectors. • Face matching: In the face matching process, a similarity score is obtained by comparing two templates; the score indicates the probability that they belong to the same individual. Face representation is indeed the most vital element of a face identification system, and the literature review in Sect. 2 focuses on it. The primary features of the current study are: • In this system, two key techniques for face recognition, eigenface and SURF, are demonstrated with the help of PCA components. • Four different numbers of eigenvectors (6, 10, 20, and 190) are computed and evaluated using the Euclidean and Manhattan distances.


• In addition, the Euclidean and Manhattan distances are used to show the predicted accuracy for five different persons based on various types of input images, making this a distinctive approach. • Both SURF (64- and 128-dimensional) and SIFT (128-dimensional) are examined on different dimensions, along with doubled image sizes for those dimensions. The remaining sections of this paper are organized as follows. Comprehensive related work on face recognition is presented in Sect. 2. In Sect. 3, a sufficient description of the collected dataset is given and a descriptive analysis of the introduced features is explained with the required working diagrams. An intuitive comparison of results on the given dataset is presented with the aid of graphs in Sect. 4. The final Sect. 5 summarizes the overall idea of this paper.
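As a concrete illustration of the face detection building block described above, the following is a minimal sketch using OpenCV's stock Haar cascade detector; it is an illustrative stand-in, not the detector used in this paper, and the image file name is hypothetical.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def detect_faces(image_path: str):
    """Return a list of (x, y, w, h) bounding boxes for detected faces."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # scaleFactor and minNeighbors are common defaults, tuned per application.
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(boxes)

for (x, y, w, h) in detect_faces("subject01.png"):  # hypothetical file name
    print(f"face at x={x}, y={y}, size={w}x{h}")
```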

2 Relevant Works

Multiple existing face recognition studies use PCA (eigenfaces) for face identification; some of the current works are illustrated here. In [9], the authors introduced updated procedures, or scores [10, 11], for the uniformity of the face to make the investigation easier. Scores are calculated using only the pixel data of the images in the database (and the weighted mean of the pixel data). A 3D face database is used to eliminate undesirable errors in the calculation of uniformity caused by issues in 2D images, i.e., illumination. Based on the scores, statistical tests [12] are carried out on different subgroups of the database to differentiate the uniformity of the face, and the face recognition results are then compared across similar subgroups. A significant variation in face uniformity scores between the subgroups of men and women is observed, and the face identification results are contrasted. The database is then split into the most uniform and the least uniform subjects based on the uniformity scores, and the face identification outcomes are contrasted. The authors found that using uniformity in face identification with the mean-half-face is helpful for their analysis. They discovered statistical significance between male and female subjects' face uniformity in the three-dimensional database, including variation in face identification precision [13]. With the full face, the least uniform subjects produce greater face identification precision than the most uniform subjects. Nevertheless, face identification precision is globally increased when the mean-half-face is used in the experiments instead of the full face. A computerized face identification system was built in [14] to analyze its possible use for office door access control. The eigenfaces procedure based on principal component analysis (PCA) and artificial neural networks [15] were used. Training images can be acquired offline, from prerecorded and cropped facial images, or online, by using the system's face recognition and identification training components on live front-facing images. For a rotational angle of the person's head between −20° and +20°, the device can recognize the face at a reasonable rate at a distance


of 40–60 cm from the camera frame. The experimental result confirmed the impact of illumination and pose on the facial recognition device. The authors of [16] used principal component analysis (PCA) to obtain attributes of facial images, and the sparse representation-based classification (SRC) algorithm is employed to implement face identification. The experimental outcome shows that when the ideal representation is properly sparse, it can be effectively resolved using convex optimization, which is referred to as an l1-minimization problem. Furthermore, the homotopy algorithm can efficiently solve the l1-minimization problem; hence, the sparse coefficients are employed for determining the object classes. The authors of [8] suggest a strategy based on an information theory approach, in which facial images are decomposed into a small set of characteristic feature images known as "eigenfaces," which in reality are the principal components of the preliminary training set of facial images. Identification is carried out by projecting a new image into the subspace spanned by the eigenfaces ("face space") and then comparing the location of the face in the face space with the locations of the faces of known persons. The eigenface method is an effective way to discover the lower-dimensional space. In effect, eigenfaces are the eigenvectors that represent each dimension of the face space and can be used as distinct face attributes. Any face in the face collection may be described as a linear combination of these singular vectors, which are, to be exact, the eigenvectors of the covariance matrix. By capturing the significant characteristics, the eigenfaces play an essential role while reducing the input size of the neural network.

3 Research Methodology

3.1 Data Collection

The test was carried out using the ORL database (face data) [17]. The training database contains 190 images of 38 people (5 images for each person), and 40 images of different people (38 familiar and 2 unfamiliar) are present in the test database. Each photograph shows the subject in an upright, front-view posture. Each picture has the same dark, uniform background and dimensions of 92 × 112 pixels. Besides, each picture is grayscale (gray intensity values are taken as image attributes).

3.2 PCA Approach to Face Recognition

Principal component analysis converts a set of data derived from possibly correlated variables [18] into a collection of values of uncorrelated variables called principal components. The number of components may be smaller than or equal to the initial number of variables [19]. The first principal component has the largest possible


variance. Under the constraint that it has to be orthogonal to the previous components, each of the following components has the greatest possible variance [20]. We want to find the principal components of the covariance matrix of facial images, in this case its eigenvectors. The first thing we need to do is to build a training data set. The fundamental steps are depicted in Fig. 3, which shows the overall working process. At first, the collected images are separated into two parts, training and testing, to calculate the covariance. After that, both the eigenvectors and eigenvalues are computed from the covariance matrix. In the following stage, when the matrix has been calculated, the face image is read. Subsequently, after calculating the feature vector of each image, the result is computed using distances between feature vectors. If A and B are two D-dimensional vectors, the distance between them can be measured as follows [21], given by Eqs. (1) and (2):

$$\text{Manhattan distance:} \quad d(A, B) = \sum_{i=1}^{D} |a_i - b_i| \qquad (1)$$

$$\text{Euclidean distance:} \quad d(A, B) = \sqrt{\sum_{i=1}^{D} (a_i - b_i)^2} \qquad (2)$$
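These two distances are simple to implement directly. The following is a small sketch (illustrative, not the paper's code) that computes both measures of Eqs. (1) and (2) with NumPy and uses them for nearest-neighbor matching of projected feature vectors:

```python
import numpy as np

def manhattan(a: np.ndarray, b: np.ndarray) -> float:
    """Eq. (1): sum of absolute coordinate differences."""
    return float(np.abs(a - b).sum())

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    """Eq. (2): square root of the sum of squared differences."""
    return float(np.sqrt(((a - b) ** 2).sum()))

def nearest_neighbor(query: np.ndarray, gallery: np.ndarray,
                     labels: list[str], metric=euclidean) -> str:
    """Return the label of the gallery vector closest to the query."""
    dists = [metric(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]
```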

3.3 Eigenface

Eigenface employs an appearance-based method in computer vision to identify human faces. It captures the variation in a training collection of face images, which is afterwards used to encode and compare images [22]. Eigenfaces are the principal components of the distribution of faces. The goal of the principal component analysis (PCA) is to capture the total variation in the preliminary group of pictures and to describe this variation using a few variables [23]. This is a dimensionality reduction process that focuses on retaining the minimal number of important facial data components [22]. An eigenvector is a vector that does not change its direction under the associated linear transformation, and eigenfeatures are the combination of eigenvectors that select one element from facial image space [23]. The covariance matrix C is computed and, using the following Eqs. (3) and (4) [24], the eigenvectors $e_i$ and eigenvalues $\lambda_i$ are found:

$$C = \frac{1}{M} \sum_{n=1}^{M} \phi_n \phi_n^T = A A^T \qquad (3)$$

$$C e_i = \lambda_i e_i \qquad (4)$$


If $v_i$ and $\mu_i$ are the eigenvectors and eigenvalues of the matrix $A^T A$ [24], then, as shown in Eq. (5):

$$A^T A v_i = \mu_i v_i \qquad (5)$$

After multiplying both sides of Eq. (5) by A, we get Eq. (6):

$$A A^T A v_i = \mu_i A v_i \qquad (6)$$

Applying $C = A A^T$ in Eq. (6), Eq. (7) becomes:

$$C (A v_i) = \mu_i (A v_i) \qquad (7)$$

Each image of the training set is transformed into a vector P, reduced by the mean value (denoted $\Psi$), and projected with the matrix of eigenvectors E, as shown in Eq. (8):

$$\omega = E^T (P - \Psi) \qquad (8)$$

It is evident that after separating the collected images into training and testing parts, the covariance is computed. Subsequently, the eigenvalues and eigenvectors are calculated before performing the projection. Furthermore, all of the intermediate results are saved in a specific folder after the expected outcomes of the projection are obtained. All of the necessary descriptions are given in Fig. 4 for a better understanding of the overall process.

Fig. 4 Required steps of eigenface technique
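The eigenface computation of Eqs. (3)-(8) can be sketched compactly with NumPy. The snippet below is an illustrative reconstruction, not the authors' code: it uses the $A^T A$ trick of Eqs. (5)-(7) so that only an M × M eigenproblem is solved for M training images, and keeps the top k components; k and the array shapes are assumptions.

```python
import numpy as np

def train_eigenfaces(images: np.ndarray, k: int = 20):
    """images: (M, H*W) array, one flattened grayscale face per row.

    Returns (mean_face, eigenfaces, weights) where eigenfaces is (k, H*W)
    and weights holds each training face projected as in Eq. (8).
    """
    mean_face = images.mean(axis=0)                  # Psi, the average face
    A = (images - mean_face).T                       # columns phi_n, shape (H*W, M)
    # Eq. (5): eigenvectors of the small M x M matrix A^T A.
    mu, v = np.linalg.eigh(A.T @ A)
    order = np.argsort(mu)[::-1][:k]                 # keep the k largest eigenvalues
    # Eq. (7): A v_i are eigenvectors of C = A A^T; normalize to unit length.
    E = A @ v[:, order]
    E /= np.linalg.norm(E, axis=0)
    weights = (images - mean_face) @ E               # Eq. (8): omega = E^T (P - Psi)
    return mean_face, E.T, weights

def project(face: np.ndarray, mean_face: np.ndarray, eigenfaces: np.ndarray):
    """Eq. (8) for a single probe image."""
    return eigenfaces @ (face - mean_face)
```

A probe face is then classified by projecting it with `project` and finding the nearest training weight vector under the Manhattan or Euclidean distance of Eqs. (1) and (2).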


3.4 Speed up Robust Features (SURF)

SURF [25] is a standard method that comprises a detector and a descriptor of interest points. The detector finds the interest points in the image, and the descriptor describes the features of the interest points and constructs their feature vectors [26].

(1) Interest point detection: The SURF detector is based on the Hessian matrix. Given a point X = (x, y) in an image I, the Hessian matrix H(X, σ) at X at scale σ is defined as follows [26, 27], given in Eq. (9):

$$H(X, \sigma) = \begin{bmatrix} L_{xx}(X, \sigma) & L_{xy}(X, \sigma) \\ L_{xy}(X, \sigma) & L_{yy}(X, \sigma) \end{bmatrix} \qquad (9)$$

where $L_{xx}(X, \sigma)$ is the convolution of the Gaussian second-order derivative $\frac{\partial^2}{\partial x^2} g(\sigma)$ with the image I at point X, and likewise for $L_{xy}(X, \sigma)$ and $L_{yy}(X, \sigma)$.

(2) Interest point description: An interest point in a picture is a point whose neighborhood is distinctive. A two-step strategy is usually used to detect and describe such a point:

A. Feature detector: an algorithm that takes an image as input and outputs a set of regions ("local features").

B. Descriptor function: a descriptor computed on a detector-specified image region. Descriptors are built by extracting rectangular regions around the interest points. The regions are divided into 4 x 4 subregions [26, 27]. Each subregion is described by the vector in Eq. (10), shown in Fig. 5:

V = \left( \sum d_x, \; \sum d_y, \; \sum |d_x|, \; \sum |d_y| \right)   (10)

In face recognition, SURF characteristics can be derived from photographs through SURF detectors and descriptors, analogous to SIFT features. Interest points are first extracted from each face picture after preprocessing such as normalization and histogram equalization. This results in between 30 and 100 interest points per photo. The SURF feature vectors [28] of these interest points are then computed to characterize the picture, and these feature vectors are normalized to unit length. These characteristics are person-specific, since each person's picture varies in the number and position of the points selected by the SURF detector and in the characteristics measured by the SURF descriptor around these points.
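Assuming the OpenCV contrib build (SURF is patented and lives in the `xfeatures2d` module, which some builds only expose with the non-free flag enabled), the extraction and normalization steps described above might look roughly as follows; the file path and Hessian threshold are illustrative.

```python
import cv2
import numpy as np

# Load a face image in grayscale and apply the preprocessing
# mentioned in the text (histogram equalization)
img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical path
img = cv2.equalizeHist(img)

# SURF lives in the opencv-contrib xfeatures2d module; some builds
# require OPENCV_ENABLE_NONFREE because the algorithm is patented
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
surf.setExtended(True)  # True -> 128-dimensional descriptors, False -> 64

keypoints, descriptors = surf.detectAndCompute(img, None)

# Normalize each descriptor to unit length, as described in the text
descriptors = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
print(len(keypoints), descriptors.shape)
```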


Fig. 5 Gaussian second-order partial derivatives and pattern

4 Experimental Results and Discussion

4.1 Expected Outcomes After Using Eigenface

Some images from the training database are displayed in Figs. 6 and 7, where all 190 eigenvalues are demonstrated. Every eigenvalue belongs to one eigenvector and shows to what extent images from the training database differ from the average image in that direction. It is observed that only 10% of the vectors have considerable eigenvalues, whereas the remaining vectors have eigenvalues close to 0. Eigenvectors with insignificant eigenvalues need not be considered, because they do not contain critical image data. In Fig. 8, the first three eigenfaces are displayed as outputs, and these images are quite similar to the input images taken from the described dataset. The calculated results are displayed for different numbers of principal components. Among all of the displayed results, the highest results (approximately 98.5%) for both Manhattan and Euclidean distances were for 190 eigenvectors.


Fig. 6 Training images

Fig. 7 Eigenvalues


Fig. 8 Graphical representation of eigenfaces

In comparison, the lowest outcomes were observed for six eigenvectors, about 86.5% and 76.7%, respectively. Considering all listed eigenvector counts, the recognition rate with the Manhattan distance was considerably higher than with the Euclidean distance, except for the results with 20 eigenvectors, where the Euclidean distance was almost 0.9% higher. These results are summarized in Fig. 9. Euclidean and Manhattan distances were calculated on different images of five persons to obtain the recognition rate. After evaluating all of them, the highest recognition rate for both distances (Euclidean and Manhattan) was for the fifth person, around 98.5% and 98.8%, while the lowest rate was for the first person (82.5% and 86.5%). Besides, Fig. 10 shows that both the Euclidean and Manhattan distances give the same result (94%) for the fourth person.

Fig. 9 Examined results of face recognition by eigenface


Fig. 10 Examined results per person for different numbers of images by eigenface

The corresponding Fig. 11 is plotted for various image sizes. If the image size is too small, there is not enough data to process; if it is too large, it takes too much time to read and process the image data. So the ideal size is 227 x 227. For all of these images, the highest accuracy [29, 30], just over 99.50%, was obtained for the 227 x 227 image size; on the other hand, the lowest result was observed for the 300 x 300 size, whose accuracy was close to 97%.

Fig. 11 Accuracy based on image size

4.2 Displayed Results of SURF After Using Different Levels of Dimensions

Our test results with the proposed method were compared with the SIFT approach. Here, 64 and 128 indicate the dimensionality of the feature vectors, and dbl indicates that the given image was doubled in size before feature extraction. After generating the combined outputs of SURF (64 and 128 dimensions) and the SIFT approach (128 dimensions), threshold values of approximately 0.55 were observed for the first three consecutive features, and around 0.5 for the last three features of the SURF and SIFT techniques. These outcomes are clearly shown in Fig. 12. The identification rates for all types of attributes are given in Fig. 13. It is clear that the identification rates of SURF-64 and SIFT-128 are the same. The 128-dimensional SURF (SURF-128) attribute vectors are a bit better than SURF-64 and SIFT-128. In the case of the N-dimensional attribute sets, identification with doubled attribute sets exceeds non-doubled attribute sets, but the attribute sets "doubled" with 53 dimensions are weaker than the "non-doubled" sets with 64 dimensions. The SURF descriptor was designed to be applied to high-dimensional data, which this particular dataset cannot exhibit with more than 64 dimensions. A doubled image gives rise to more interest points than the non-doubled image, which means 128 dimensions provide more discriminative data in matching than 64 dimensions.

Fig. 12 Different ratios of thresholds


Fig. 13 Comparison of recognition rate between SURF and SIFT features

5 Conclusion

Modern technology is all about performance and speed [31, 32]. Today is a scientific and technical era. For present-day culture, new technology is a great blessing [33-36]. In every aspect of our lives, we see the application of new technologies [37, 38]. Without science and technology, we cannot conceive of our daily life. This research focuses on different face recognition methods. To identify human faces, the eigenface technique is used here. Besides, the SURF process is also demonstrated. The accuracy of each approach is compared with other approaches and among the techniques themselves. It can be seen from the comparison that each approach has its own value, which depends on the state of the data. All approaches demonstrate positive progress in identifying individual faces in any given setting.

References

1. Mahmud F, Khatun MT, Zuhori ST, Froge S, Aktar B, Pal M (2015) Face recognition using principle component analysis and linear discriminant analysis. In: Electrical engineering and information communication technology (ICEEICT), international conference on, pp 1-4
2. Anggo M, Arapu L (2018) Face recognition using fisherface method. In: 2nd international conference on statistics, mathematics, teaching, and research, IOP conference series: journal of physics: conference series, vol 1028, pp 012119. https://doi.org/10.1088/1742-6596/1028/1/012119
3. Kaur R, Himanshi E (2015) Face recognition using principal component analysis. In: IEEE international advance computing conference, pp 585-589


4. Manoharan S (2019) Study on hermitian graph wavelets in feature detection. J Soft Comput Paradigm (JSCP) 1(01):24-32
5. Singh S, Prasad SVAV (2018) Techniques and challenges of face recognition: a critical review. Procedia Comput Sci 143:536-543
6. Javed Mehedi Shamrat FM, Allayear SM, Alam MF, Jabiullah MI, Ahmed R (2019) A smart embedded system model for the AC automation with temperature prediction. In: Singh M, Gupta P, Tyagi V, Flusser J, Ören T, Kashyap R (eds) Advances in computing and data sciences. ICACDS 2019. Communications in computer and information science, vol 1046. Springer, Singapore. https://doi.org/10.1007/978-981-13-9942-8_33
7. Smys S, Chen JIZ, Shakya S (2020) Survey on neural network architectures with deep learning. J Soft Comput Paradigm (JSCP) 2(03):186-194
8. Kshirsagar VP, Baviskar MR, Gaikwad ME (2011) Face recognition using eigenfaces. In: Computer research and development (ICCRD), 2011 3rd international conference, vol 2, pp 302-306, 11-13 March 2011
9. Ahmed MR, Ali MA, Ahmed N, Zamal MFB, Javed Mehedi Shamrat FM (2020) The impact of software fault prediction in real-world application: an automated approach for software engineering. In: Proceedings of 2020 the 6th international conference on computing and data engineering (ICCDE 2020). Association for Computing Machinery, New York, NY, USA, pp 247-251. https://doi.org/10.1145/3379247.3379278
10. Javed Mehedi Shamrat FM, Tasnim Z, Ghosh P, Majumder A, Hasan MZ (2020) Personalization of job circular announcement to applicants using decision tree classification algorithm. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1-5. https://doi.org/10.1109/INOCON50539.2020.9298253
11. Javed Mehedi Shamrat FM, Ghosh P, Sadek MH, Kazi MA, Shultana S (2020) Implementation of machine learning algorithms to detect the prognosis rate of kidney disease. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1-7. https://doi.org/10.1109/INOCON50539.2020.9298026
12. Ghosh P et al (2021) Efficient prediction of cardiovascular disease using machine learning algorithms with relief and LASSO feature selection techniques. IEEE Access. https://doi.org/10.1109/ACCESS.2021.3053759
13. Ghosh P et al (2020) Expert model of cancer disease using supervised algorithms with a LASSO feature selection approach. Int J Electric Comput Eng 11
14. Ibrahim R, Zin ZM (2011) Study of automated face recognition system for office door access control application. In: Communication software and networks (ICCSN), 2011 IEEE 3rd international conference, pp 132-136, 27-29 May 2011
15. Ghosh P, Anjum AA, Karim A, Junayed MS, Hasan MZ, Hasib MZ, Bin Emran AN (2020) A comparative study of different deep learning model for recognition of handwriting digits. In: International conference on IoT based control networks and intelligent systems (ICICNIS 2020)
16. Gan J, Wang P (2011) A novel model for face recognition. In: System science and engineering (ICSSE), 2011 international conference on, pp 482-486, 8-10 June 2011
17. ORL database. Available at http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html. Accessed on 13 Dec 2020
18. Ravi K, Kttswamy M (2014) Face recognition using PCA and eigenface approach. Int J Mag Eng Manag Res 1(10)
19. Sazzadur Rahman AKM, Javed Mehedi Shamrat FM, Tasnim Z, Roy J, Hossain SA (2019) A comparative study on liver disease prediction using supervised machine learning algorithms. Int J Sci Technol Res 8(11):419-422. ISSN: 2277-8616
20. Chakraborty D, Saha SK, Bhuiyan MA (2015) Face recognition using eigenvector and principle component analysis. Int J Comput Appl 50(10):42-49
21. Perlibakas V (2004) Distance measures for PCA-based face recognition. Pattern Recogn Lett 25(6):711-724
22. Javed Mehedi Shamrat FM, Raihan MA, Sazzadur Rahman AKM, Mahmud I, Akter R (2020) An analysis on breast disease prediction using machine learning approaches. Int J Sci Technol Res 9(02):2450-2455. ISSN: 2277-8616


23. Javed Mehedi Shamrat FM, Tasnim Z, Mahmud I, Jahan MN, Nobel NI (2020) Application of K-means clustering algorithm to determine the density of demand of different kinds of jobs. Int J Sci Technol Res 9(02):2550-2557. ISSN: 2277-8616
24. Slavković M, Jevtić D (2012) Face recognition using eigenface approach. Serb J Electric Eng 9(1):121-130
25. Bay H, Ess A, Tuytelaars T, Van Gool L (2008) Speeded-up robust features (SURF). Comput Vis Image Underst 110(3):346-359
26. Du G, Su F, Cai A (2009) Face recognition using SURF features. In: MIPPR 2009: pattern recognition and computer vision, proceedings of SPIE 7496. https://doi.org/10.1117/12.832636
27. Kim D, Dahyot R (2008) Face components detection using SURF descriptors and SVMs. In: International machine vision and image processing conference 2008. https://doi.org/10.1109/IMVIP.2008.15
28. Carro RC, Larios JMA, Huerta EB, Caporal RM, Cruz FR (2015) Face recognition using SURF. ICIC 2015, Part I, LNCS 9225:316-326. https://doi.org/10.1007/978-3-319-22180-9_31
29. Ghosh P, Hasan MZ, Dhore OA, Mohammad AA, Jabiullah MI (2018) On the application of machine learning to predicting cancer outcome. In: Proceedings of the international conference on electronics and ICT-2018, organized by Bangladesh Electronics Society (BES), Dhaka, Bangladesh, pp 60, 25-26 November 2018
30. Javed Mehedi Shamrat FM, Asaduzzaman M, Sazzadur Rahman AKM, Tusher RTH, Tasnim Z (2019) A comparative analysis of parkinson disease prediction using machine learning approaches. Int J Sci Technol Res 8(11):2576-2580. ISSN: 2277-8616
31. Shamrat FMJM, Nobel NI, Tasnim Z, Ahmed R (2020) Implementation of a smart embedded system for passenger vessel safety. In: Saha A, Kar N, Deb S (eds) Advances in computational intelligence, security and internet of things. ICCISIoT 2019. Communications in computer and information science, vol 1192. Springer, Singapore. https://doi.org/10.1007/978-981-15-3666-3_29
32. Javed Mehedi Shamrat FM, Allayear SM, Jabiullah MI (2018) Implementation of a smart AC automation system with room temperature prediction. J Bangladesh Electron Soc 18(1-2):23-32. ISSN: 1816-1510
33. Javed Mehedi Shamrat FM, Sazzadur Rahman AKM, Tasnim Z, Hossain SA (2020) An offline and online-based android application "Travel Help" to assist the travelers visually and verbally for outing. Int J Sci Technol Res 9(01):1270-1277. ISSN: 2277-8616
34. Javed Mehedi Shamrat FM, Asaduzzaman M, Ghosh P, Sultan MD, Tasnim Z (2020) A web based application for agriculture: "Smart Farming System". Int J Emerg Trends Eng Res 8(06):2309-2320. ISSN: 2347-3983
35. Javed Mehedi Shamrat FM, Tasnim Z, Sazzadur Rahman AKM, Nobel NI, Hossain SA (2020) An effective implementation of web crawling technology to retrieve data from the World Wide Web (www). Int J Sci Technol Res 9(01):1252-1256. ISSN: 2277-8616
36. Javed Mehedi Shamrat FM, Mahmud I, Sazzadur Rahman AKM, Majumder A, Tasnim Z, Nobel NI (2020) A smart automated system model for vehicles detection to maintain traffic by image processing. Int J Sci Technol Res 9(02):2921-2928. ISSN: 2277-8616
37. Ahmed MR, Javed Mehedi Shamrat FM, Ali MA, Mia MR, Khatun MA (2020) The future of electronic voting system using Block chain. Int J Sci Technol Res 9(02):4131-4134. ISSN: 2277-8616
38. Javed Mehedi Shamrat FM, Tasnim Z, Nobel NI, Ahmed MR (2019) An automated embedded detection and alarm system for preventing accidents of passengers vessel due to overweight. In: Proceedings of the 4th international conference on big data and internet of things (BDIoT'19), vol 35. Association for Computing Machinery, New York, NY, USA, pp 1-5. https://doi.org/10.1145/3372938.3372973

Multiple Cascading Algorithms to Evaluate Performance of Face Detection

F. M. Javed Mehedi Shamrat, Zarrin Tasnim, Tahmid Rashik Chowdhury, Rokeya Shema, Md. Shihab Uddin, and Zakia Sultana

Abstract This paper intends to evaluate previous works on different cascading classifiers for human face detection on image data. This paper includes the working process, efficiency, and performance comparison of different cascading methods. These methods are the dynamic cascade, Haar cascade, SURF cascade, and Fea-Accu cascade. Each cascade classifier is described in this paper with its working procedure and mathematical derivation as well. Each technique is backed with proper data and examples. The accuracy rate of each method is given with comparisons to analyze the performance of the methods. In this literature, the human face detection process using cascading classifiers on image data is studied. From the study, the performance rates and comparisons of different cascading techniques are highlighted. This study will also help to determine which methods should be used for achieving high accuracy depending on the data and circumstances.

Keywords Face detection · Dynamic cascading · Haar cascading · SURF cascading · Fea-Accu cascading

F. M. Javed Mehedi Shamrat (B) · Z. Tasnim
Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh

T. R. Chowdhury
Department of Computer Science and Engineering, Islamic University of Technology, Gazipur, Bangladesh

R. Shema
Department of Computer Science and Engineering, International University of Business Agriculture and Technology, Dhaka, Bangladesh

Md. S. Uddin · Z. Sultana
Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_8


1 Introduction

With the rapid growth of technology and the increase of computational power, machines have become more intelligent every day, with the ability to make their own decisions. Machines can interact with humans by recognizing the person to be interacted with, listening, analyzing, and reacting as needed. To recognize a person to be interacted with, the procedure of face recognition is widely used. Face detection is a technology that determines the location and size of a human face in a digital image [1]. With emerging computer vision technologies, cameras, and surveillance systems, facial detection through image and video processing has become a huge research field. Facial recognition systems have been used to automatically detect, recognize, and distinguish a person in a video source or image. This technique has been used in live systems, most specifically in forensic analysis and security systems [2]. Face localization is the first phase in automated face recognition. Its reliability affects the performance and usability of the whole face detection system [3]. Face detection is further classified into face detection in images and real-time face detection [4]. For face detection, cascade classifiers are widely and efficiently used. The basis of cascade face detection is the rapid rejection of the large majority of background regions in the initial stages to lessen the amount of computation in the later stages [5]. However, several cascading methods can be applied to training data to get results. In this paper, a few cascading methods used to detect human faces from image data will be discussed. At the same time, their performance will be evaluated to observe efficiency (Fig. 1).

2 Proposed Work

2.1 Dynamic Cascading Method

In the study [6], the dynamic cascade algorithm addresses the challenge of training face detectors using data sets with a large quantity of positive and negative samples. Here, a small subset of the training data called the "dynamic working set" is used for boosting training. Rong Xiao et al. gathered over 45,000 pictures from the Web, about 20,000 of which contain faces. From this set, 40,857 face pictures were labeled and cropped. A positive set of 531,141 samples was produced by adding small random variations in shift, scale, and rotation. Of these, a set of 40,000 randomly selected samples is used for validation. Correspondingly, around 10 billion negative samples are gathered from more than 25,000 pictures without faces, from which around 50 million negative samples are randomly cropped and shuffled to serve as the online negative dataset. Figure 2 shows the images.


Fig. 1 Proposed system architecture

Four training sets were randomly sampled from the positive dataset and the online negative dataset, as shown in Fig. 3. For each dataset, a detector is trained with a static set of training parameters (α = 0, Dtarget = 0.98, fu = 0.3). Another detector with positive bootstrapping on the dataset D is likewise trained. Figure 4 shows the efficiency of using positive bootstrapping, which is one of the unique features of the dynamic cascade that enables efficient distributed training, allowing the use of huge amounts of positive and negative training samples.
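A rough sketch of the sampling idea behind the dynamic working set: at each training round a small working set is redrawn from the huge positive and negative pools, so the full pools never need to fit in memory. The pool and working-set sizes below are illustrative, not the values used in [6].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the huge sample pools described above
positive_pool = np.arange(500_000)    # indices of positive samples
negative_pool = np.arange(5_000_000)  # indices of online negative samples

def draw_working_set(pos_size=10_000, neg_size=10_000):
    """Randomly sample a small 'dynamic working set' for one boosting round."""
    pos = rng.choice(positive_pool, size=pos_size, replace=False)
    neg = rng.choice(negative_pool, size=neg_size, replace=False)
    return pos, neg

# A fresh working set is drawn each round, keeping memory usage bounded
pos, neg = draw_working_set()
```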


Fig. 2 Faces in the training set

Fig. 3 Graphical representation of training sets

2.2 Haar Cascading Method

The research [7] uses an open-source computer vision library called OpenCV [8] to recognize human faces. Paul Viola and Michael Jones [9] initially introduced this approach. Haar features are the main component of the Haar classifier for analyzing the human face. Haar features are used to recognize the presence of features in the image [10]. Every feature generates a single value,


Fig. 4 Performance comparison on various sizes of training data

which is the difference between the sum of pixel values under the white rectangle and the sum of pixel values under the dark rectangle of Fig. 5. The accuracy [7] of human face detection is calculated via Eq. (1):

\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}   (1)

where TP (true positive) is the number of faces correctly detected as faces, TN (true negative) is the number of non-faces correctly identified as non-faces, FN (false negative) is the number of faces erroneously identified as non-faces, and FP (false positive) is the number of non-faces erroneously detected as faces.

Fig. 5 Haar features


Fig. 6 Human detection results

From Fig. 6, it can be seen that the accuracy of the human face detection method is 71.48%. The statistical outcomes show that the FP rate is 33.82%, which affects the resulting system accuracy.
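OpenCV ships pretrained Haar cascades, so the detection step described in this section can be reproduced in a few lines. The image path is hypothetical and the `detectMultiScale` parameters are common defaults, not necessarily those used in [7].

```python
import cv2

# Load OpenCV's bundled pretrained frontal-face Haar cascade
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("group_photo.jpg")            # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # Haar cascades run on grayscale

# scaleFactor and minNeighbors trade off recall against false positives (FP)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(faces)} face(s) detected")
```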

2.3 SURF Cascading Method

SURF [11] is a local feature descriptor that reflects the shape and texture around feature points. It is invariant and offers excellent processing speed. The SURF descriptor is used to extract gradient information, which is robust to the rotation of faces, as reported by Siquan Hu et al. in [12]. As shown in Fig. 7, the detection window size is 40 x 40, and the size of the feature rectangle gradually increases

Fig. 7 Flowchart of feature selection and calculation for SURF cascade


Fig. 8 Recall models of FDDB

from 8 x 8 to 40 x 40 and slides within the 40 x 40 detection window, producing a large number of candidate features. Each feature is divided into 2 x 2 or 1 x 4 image blocks, and the filter [−1, 0, 1] is used to compute the gradient information in the horizontal, vertical, and diagonal directions of the pixels. The values |d_x| ± d_x are calculated to get the eight-dimensional gradient of the image block, which is summed as \sum (|d_x| + d_x) and \sum (|d_x| - d_x) to acquire 32-dimensional vectors for each feature. There are normally three ways to choose the best feature from the candidate features. The first is minimizing the sum of the absolute values of the weighted errors of all samples, as in Eq. (2):

\sum_{i=1}^{N} w_i |h_i - y_i|   (2)

The second is minimizing the sum of the weighted squared errors of all samples, as in Eq. (3):

\sum_{i=1}^{N} w_i (h_i - y_i)^2   (3)

The third is maximizing the value of the AUC, which combines the classifiers obtained in the previous training, as in Eq. (4):

J\left( H_{i-1} + h_j(x, w) \right)   (4)
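A minimal sketch of the first selection criterion, Eq. (2): each candidate feature's weak hypothesis is scored by its weighted absolute error, and the feature with the lowest score is kept. The candidate outputs, labels, and weights here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

n_samples, n_features = 1000, 50
y = rng.integers(0, 2, n_samples)          # labels y_i in {0, 1}
w = np.full(n_samples, 1.0 / n_samples)    # boosting weights w_i
# h[j, i]: output of the weak hypothesis for candidate feature j on sample i
h = rng.random((n_features, n_samples))

def weighted_abs_error(h_j):
    # Eq. (2): sum_i w_i * |h_i - y_i|
    return np.sum(w * np.abs(h_j - y))

scores = np.array([weighted_abs_error(h[j]) for j in range(n_features)])
best_feature = int(np.argmin(scores))      # feature with the lowest weighted error
print(best_feature, scores[best_feature])
```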


Fig. 9 Accumulated reject rate

Figures 8 and 9 illustrate that the cross-validation and AUC-based SURF cascade model has a higher rejection rate in the first few stages and a higher final TPR. The 99% rejection rate indicates that this model has strong learning power.

2.4 Fea-Accu Cascading Method

According to Shengye Yan et al. [13], the key idea of the Fea-Accu cascade is to reuse the features learned previously rather than the previously learned weak classifiers and strong classifiers. For the Fea-Accu cascade, there are 20,000 face samples and 10,000 non-face samples at each learning stage. All samples are resized to 24 x 24. Figure 10 shows confidence versus sample number, where it is seen that the Fea-Accu cascade yields a strong classifier with high discrimination. To quantify the discriminative power numerically, the Bhattacharyya distance [14] is computed for the Fea-Accu strong classifier using Eq. (5):

B = \frac{1}{2} \ln \frac{\sigma_1^2 + \sigma_2^2}{2 \sigma_1 \sigma_2}   (5)

To check the accuracy, a detector is trained using the Fea-Accu cascade. The numbers of features for each stage of the cascade detector are shown


Fig. 10 Confidence versus sample number

in Fig. 11. Training a Fea-Accu cascade detector with a false alarm rate of 1/1,000,000 takes only approximately 10 h.

Fig. 11 Features versus layer


3 Result Analysis

According to Rong Xiao et al. [6], the dynamic cascading method is compared to other methods on the widely used CMU + MIT frontal face dataset. The findings in Fig. 12 demonstrate that the performance of the dynamic cascade technique is virtually the same as the state of the art; it is second only to the soft cascade method. In [12], the comparison with the Haar cascade model, as seen in Fig. 13, indicates that the SURF cascade has improved considerably at the points FP = 0 and FP = 100, and it has only five stages with 105 weak classifiers, whereas the Haar cascade model has 24 stages with 2916 weak classifiers. The true positive rate for the SURF cascade frontal detector model is 77.9%; just 2% of the frontal faces in the FDDB dataset are missed. Shengye Yan et al. in [13] have demonstrated that the Fea-Accu cascade method gets the best performance. In Fig. 14, the ROC curves of the soft cascade, nested cascade, and Viola et al.'s method are compared with the Fea-Accu cascade to compare the performance.

Fig. 12 Performance comparison for dynamic cascade


Fig. 13 SURF cascade model and Haar cascade model comparison

Fig. 14 ROC curves of Fea-Accu cascade method compared to others

4 Discussion

It is seen that the detection performance and precision can be improved greatly using large-scale training data [15-22]. This implies that, when comparing various algorithms, the same training data must be used. Otherwise, the comparison would be pointless, as different sizes of training data or different


contents will give incomparable observations, according to [6]. Therefore, it is necessary to create a large standard training dataset for the research community to share [23-27].

5 Conclusion

This paper is focused on various cascading methods for human face detection. It reviews previous works on the dynamic, Haar, SURF, and Fea-Accu cascading methods for face detection [28-30]. Here, each of the techniques is described in brief to give an understanding of its working process [31-36]. Each method has a different performance rate, which is portrayed in this paper as well. Performance comparisons of these methods with other commonly and widely used cascading methods are illustrated; however, direct comparisons among the four cascading methods themselves are not demonstrated due to the lack of a dataset with the same features. These comparisons will be helpful for making decisions while working with cascading classifiers.

References

1. Kumar A, Kaur A, Kumar M (2018) Face detection techniques: a review. Springer Nature B.V. https://doi.org/10.1007/s10462-018-9650-2
2. Qasim NJ, Al_Barazanchi I (2019) Unconstrained joint face detection and recognition in video surveillance system. J Adv Res Dyn Control Syst 11(01-Special Issue)
3. Li SZ, Wu J, Face detection. https://doi.org/10.1007/978-0-85729-932-1_11
4. Akanksha, Kaur J, Singh H (2018) Face detection and recognition: a review. In: 6th international conference on advancements in engineering & technology (ICAET-2018), Sangrur, 23-24 Feb 2018
5. Yang H, Wang XA, Cascade classifier for face detection. J Algor Comput Technol. https://doi.org/10.1177/1748301816649073
6. Xiao R, Zhu H, Sun H, Tang X (2007) Dynamic cascades for face detection. In: Proceedings IEEE international conference on computer vision. https://doi.org/10.1109/ICCV.2007.4409043
7. Priadana A, Habibi M, Face detection using Haar cascades to filter selfie face image on Instagram. https://doi.org/10.1109/ICAIIT.2019.8834526
8. OpenCV (2018) Available from https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html
9. Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: Computer vision and pattern recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE computer society conference, vol 1. IEEE, pp I-I
10. Sing V, Shokeen V, Singh B (2013) Face detection by haar cascade classifier with simple and complex backgrounds images using opencv implementation. Int J Adv Technol Eng Sci 11(12):33-38
11. Li E, Yang L, Wang B, Li J, Peng Y (2012) SURF cascade face detection acceleration on sandy bridge processor. In: IEEE computer society conference on computer vision and pattern recognition workshops


12. Hu S, Zhang C, Liu L (2017) Analysis and improvement of face detection based on SURF cascade. In: IOP conference series: journal of physics: conference series, vol 887, pp 012027. https://doi.org/10.1088/1742-6596/887/1/012027
13. Wang S, Shan S, Chen X, Gao W (2009) Fea-Accu cascade for face detection. In: Proceedings ICIP international conference on image processing. https://doi.org/10.1109/ICIP.2009.5413674
14. Theodoridis S, Koutroumbas K (2003) Pattern recognition. Elsevier Science, pp 177-179
15. Javed Mehedi Shamrat FM, Asaduzzaman M, Sazzadur Rahman AKM, Tusher RTH, Tasnim Z (2019) A comparative analysis of parkinson disease prediction using machine learning approaches. Int J Sci Technol Res 8(11):2576-2580. ISSN: 2277-8616
16. Javed Mehedi Shamrat FM, Sazzadur Rahman AKM, Tasnim Z, Hossain SA (2020) An offline and online-based Android application "TravelHelp" to assist the travelers visually and verbally for outing. Int J Sci Technol Res 9(01):1270-1277. ISSN: 2277-8616
17. Javed Mehedi Shamrat FM, Allayear SM, Alam MF, Jabiullah MI, Ahmed R (2019) A smart embedded system model for the AC automation with temperature prediction. In: Singh M, Gupta P, Tyagi V, Flusser J, Ören T, Kashyap R (eds) Advances in computing and data sciences. ICACDS 2019. Communications in computer and information science, vol 1046. Springer, Singapore. https://doi.org/10.1007/978-981-13-9942-8_33
18. Ghosh P, Azam S, Jonkman M, Karim A, Javed Mehedi Shamrat FM, Ignatious E, Shultana S, Beeravolu AR, Boer FD, Efficient prediction of cardiovascular disease using machine learning algorithms with relief and LASSO feature selection techniques. IEEE Access. https://doi.org/10.1109/ACCESS.2021.3053759
19. Javed Mehedi Shamrat FM, Tasnim Z, Nobel NI, Ahmed MR (2019) An automated embedded detection and alarm system for preventing accidents of passengers vessel due to overweight. In: Proceedings of the 4th international conference on big data and internet of things (BDIoT'19). Association for Computing Machinery, New York, NY, USA, Article 35, pp 1-5. https://doi.org/10.1145/3372938.3372973
20. Shamrat FMJM, Nobel NI, Tasnim Z, Ahmed R (2020) Implementation of a smart embedded system for passenger vessel safety. In: Saha A, Kar N, Deb S (eds) Advances in computational intelligence, security and internet of things. ICCISIoT 2019. Communications in computer and information science, vol 1192. Springer, Singapore. https://doi.org/10.1007/978-981-15-3666-3_29
21. Ahmed MR, Ali MA, Ahmed N, Zamal MFB, Javed Mehedi Shamrat FM (2020) The impact of software fault prediction in real-world application: an automated approach for software engineering. In: Proceedings of 2020 the 6th international conference on computing and data engineering (ICCDE 2020). Association for Computing Machinery, New York, NY, USA, pp 247-251. https://doi.org/10.1145/3379247.3379278
22. Sazzadur Rahman AKM, Javed Mehedi Shamrat FM, Tasnim Z, Roy J, Hossain SA (2019) A comparative study on liver disease prediction using supervised machine learning algorithms. Int J Sci Technol Res 8(11):419-422. ISSN: 2277-8616
23. Javed Mehedi Shamrat FM, Mahmud I, Sazzadur Rahman AKM, Majumder A, Tasnim Z, Nobel NI (2020) A smart automated system model for vehicles detection to maintain traffic by image processing. Int J Sci Technol Res 9(02):2921-2928. ISSN: 2277-8616
24. Ghosh P et al (2020) Optimization of prediction method of chronic kidney disease with machine learning algorithms. In: The 15th international symposium on artificial intelligence and natural language processing (iSAI-NLP 2020) and international conference on artificial intelligence & internet of things (AIoT 2020)
25. Javed Mehedi Shamrat FM, Ghosh P, Sadek MH, Kazi MA, Shultana S (2020) Implementation of machine learning algorithms to detect the prognosis rate of kidney disease. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1-7. https://doi.org/10.1109/INOCON50539.2020.9298026
26. Ghosh P, Karim A, Atik ST, Afrin S, Saifuzzaman M (2020) Expert model of cancer disease using supervised algorithms with a LASSO feature selection approach. Int J Electric Comput Eng 11(3)


27. Javed Mehedi Shamrat FM, Tasnim Z, Mahmud I, Jahan MN, Nobel NM (2020) Application of K-means clustering algorithm to determine the density of demand of different kinds of jobs. Int J Sci Technol Res 9(02):2550-2557. ISSN: 2277-8616
28. Sungheetha A, Sharma R (2020) A novel capsnet based image reconstruction and regression analysis. J Innov Image Proc (JIIP) 2(03):156-164
29. Javed Mehedi Shamrat FM, Raihan MA, Sazzadur Rahman AKM, Mahmud I, Akter R (2020) An analysis on breast disease prediction using machine learning approaches. Int J Sci Technol Res 9(02):2450-2455. ISSN: 2277-8616
30. Ghosh P, Hasan MZ, Jabiullah MI (2018) A comparative study of machine learning approaches on dataset to predicting cancer outcome. J Bangladesh Electron Soc 18(1-2):81-86. ISSN: 1816-1510
31. Javed Mehedi Shamrat FM, Tasnim Z, Ghosh P, Majumder A, Hasan MZ (2020) Personalization of job circular announcement to applicants using decision tree classification algorithm. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1-5. https://doi.org/10.1109/INOCON50539.2020.9298253
32. Ghosh P, Hasan MZ, Dhore OA, Mohammad AA, Jabiullah MI (2018) On the application of machine learning to predicting cancer outcome. In: Proceedings of the international conference on electronics and ICT-2018, organized by Bangladesh Electronics Society (BES), Dhaka, Bangladesh, pp 60, 25-26 Nov 2018
33. Ghosh P, Hasan MZ, Atik ST, Jabiullah MI (2019) A variable length key based cryptographic approach on cloud data. In: 2019 international conference on information technology (ICIT), Bhubaneswar, India, pp 285-290. https://doi.org/10.1109/ICIT48102.2019.00057
34. Javed Mehedi Shamrat FM, Tasnim Z, Sazzadur Rahman AKM, Nobel NI, Hossain SA (2020) An effective implementation of web crawling technology to retrieve data from the World Wide Web (www). Int J Sci Technol Res 9(01):1252-1256. ISSN: 2277-8616
35. Chen JIZ (2020) Smart security system for suspicious activity detection in volatile areas. J Inf Technol 2(01):64-72
36. Javed Mehedi Shamrat FM, Allayear SM, Jabiullah MI (2018) Implementation of a smart AC automation system with room temperature prediction. J Bangladesh Electron Soc 18(1-2):23-32. ISSN: 1816-1510

Data Prediction and Analysis of COVID-19 Using Epidemic Models

A. M. Jothi, A. Charumathi, A. Yuvarani, and R. Parvathi

Abstract The coronavirus epidemic was declared a global public health emergency by the World Health Organization in the second week of March 2020. This disease, which started in China in December 2019, has already ravaged the globe, including India. The first case in India was reported on 23rd February 2020, and cases later crossed nearly 6,000 in a day. The complete lockdown of the country for 21 days and the quick isolation of infected cases were the energetic steps taken by the authorities. In this work, the Indian COVID dataset is taken for analysis and prediction. Two epidemic models, SIR and SEIR, are used to analyze the dataset and are compared, determining that the SEIR model gives better predictions than SIR for our dataset.

Keywords COVID-19 · Susceptible · Infectious · Recovery (SIR) · Susceptible exposed infectious recovery (SEIR)

1 Introduction

Toward the end of 2019, the novel coronavirus (COVID-19) spread broadly in China, and an enormous number of individuals became infected. At present, the domestic outbreak has been effectively controlled, while the new coronavirus is spreading quickly in other territories. Europe has now become the focal point of the current episode of new pneumonia. Then, on March 11, the World Health Organization (WHO) proclaimed the new pneumonia outbreak a "worldwide pandemic."

A. M. Jothi (B) · A. Charumathi · A. Yuvarani · R. Parvathi
School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
e-mail: [email protected]
A. Charumathi
e-mail: [email protected]
A. Yuvarani
e-mail: [email protected]
R. Parvathi
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_9


The new coronavirus has posed a great threat to the health and safety of individuals everywhere in the world because of its astonishing spreading force and potential harm. The examination of the domestic and global epidemics and the future development trend has become an interesting topic of current research. Presently, there are endless dashboards and statistics about the coronavirus spread available all over the web. This work is an endeavour of data modeling and investigation of the coronavirus (COVID-19) spread with the assistance of data science and data analysis in Python code. This examination will assist us in finding the basis behind common ideas about the infection spread from purely a dataset point of view. Thus, this investigation combines data retrieval and analysis. The data is split over the following four records:

(a) Confirmed cases
(b) Death cases
(c) Recovered cases
(d) Country cases

2 Literature Survey

[1, 2] provide a quick survey of results on the classical SIR structure and variants considering heterogeneity in contact rates. [3, 4] show that calibrating the classical model to data generated by a heterogeneous model can lead to forecasts that are biased in several ways and to misrepresentation of the forecast uncertainty. In [5, 6], various recent economic examinations of the COVID-19 epidemic build on a standard homogeneous SIR model. In [7, 8], a population of unit mass is considered. At each time t_s, every member of the population is in one of three states: Susceptible, Infectious, or Recovered. These are written as S(t_s), I(t_s), and R(t_s) for the parts in each state at time t_s. The dynamics move individuals between the Susceptible, Infected, and Recovered parts. In [9, 10, 11], they propose constraints involving state variables in an optimal control problem based on a compartmental S-E-I-R design. In [12, 13], they analyze such problems when mixed state-control constraints are used to impose upper limits on the available vaccinations at each instant of time. They research the possibility of imposing upper limits on the number of susceptible individuals with and without constraints on the number of vaccinations available. The mixed state constraint is maintained during the whole course of a vaccination program. For such a problem, they obtain a closed form of the optimal control. They also propose two models with state constraints, but no analytical examination is conducted. In [14, 15], the simulation results give a prediction picture of the number of COVID-19 cases in Indonesia in the coming days; the simulation results additionally


show that the vaccine can accelerate COVID-19 recovery and that maximum isolation can slow the spread of COVID-19.

3 Methodology Used

Disease spreading, information spreading, and vaccination analysis are the best-known applications of epidemic models. In this work, three datasets were collected from Kaggle: covid_19_india.csv, State-wise Testing Details.csv, and countriesaggregated.csv. Data analysis between "Confirmed" and "Cured" and between "Confirmed" and "Deaths" was done using the covid_19_india.csv dataset, which contains fields named Serial no, Date, Time, State/Union Territory, Confirmed Indian National, Confirmed Foreign National, Cured, Deaths, and Confirmed (see Fig. 1 for results). Next, the state-wise distribution data is considered to do a comparative analysis between "Positive" and "Negative" cases (see Fig. 2 for results). Then data analysis of COVID-19 for the country-wise distribution is performed, and a graphical representation between "Confirmed" and "Deaths" is shown in Fig. 3.
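A minimal sketch of the first analysis step with pandas; the file and column names follow the Kaggle dataset description above (the exact column spellings in the CSV may differ), and the plotting choices are illustrative.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Columns per the dataset description: Sno, Date, Time, State/UnionTerritory,
# ConfirmedIndianNational, ConfirmedForeignNational, Cured, Deaths, Confirmed
df = pd.read_csv("covid_19_india.csv", parse_dates=["Date"], dayfirst=True)

# Aggregate per date across states, then compare Confirmed vs Cured / Deaths
daily = df.groupby("Date")[["Confirmed", "Cured", "Deaths"]].sum()

daily.plot(y=["Confirmed", "Cured"], title="Confirmed vs Cured (India)")
daily.plot(y=["Confirmed", "Deaths"], title="Confirmed vs Deaths (India)")
plt.show()
```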

3.1 SIR Model

The S-I-R model is the most basic compartmental model, and various models are derivatives of this fundamental structure. The model includes the three compartments given in Fig. 4:

S (Susceptible): the number of susceptible individuals. When a susceptible and an infectious individual come into "infectious contact", the susceptible individual contracts the disease and transitions to the infectious compartment. S = S(t) is the number of susceptible individuals.

Fig. 1 Comparison of "Confirmed" and "Cured" as well as "Confirmed" and "Deaths" using India dataset

[Methodology flowchart: Data Collection → Prediction Model (SIR Model / SEIR Model) → Result]

Fig. 2 Analysis between "Positive" and "Negative" cases using state-wise dataset

Fig. 3 Country-wise data analysis

Fig. 4 SIR (Susceptible, Infectious and Recovered) diagram



I (Infectious): the number of infectious individuals. These are individuals who have been infected and are capable of infecting susceptible individuals. I = I(t) is the number of infected individuals.

R (Recovered): the number of removed (and immune) or deceased individuals. These are individuals who have been infected and have either recovered from the infection and entered the removed compartment or passed away. R = R(t) is the number of recovered individuals. It is assumed that the number of deaths is negligible with respect to the total population. This compartment may also be called "recovered" or "immune".

New infections happen because of contact between infectives and susceptibles. In this basic model, the rate at which new infections happen is βIS for some positive constant β. When a new infection happens, the infected individual moves from the susceptible class to the infective class. In this basic model, there is no other way individuals can enter or leave the susceptible class, so the first differential equation is Eq. (1):

\frac{dS}{dt} = -\beta I S   (1)

The other process that can happen is that infective individuals enter the removed class, which is assumed to occur at rate γI for some positive constant γ. Consequently, the other differential equations, Eqs. (2)-(4), are as follows:

\frac{dI}{dt} = \beta I S - \gamma I   (2)

\frac{dR}{dt} = \gamma I   (3)

\frac{dS}{dt} + \frac{dI}{dt} + \frac{dR}{dt} = -\beta I S + (\beta I S - \gamma I) + \gamma I = 0   (4)

This description of the S-I-R model is shown in Fig. 4. A few limitations of the SIR model are as follows. The SIR model assumes that every individual is moving and has an equal chance of contact with every other individual in the population, irrespective of the space or distance between different individuals. It is assumed that the transmission rate stays constant throughout the period of the pandemic. Besides, this model does not account for those infected who have been diagnosed and quarantined; it treats them the same as individuals who have not been quarantined, so both are considered to have the same transmission rate.
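Equations (1)-(3) can be integrated numerically. The sketch below uses SciPy's `odeint` with illustrative values for β, γ, and the initial conditions; these are assumptions, not parameters fitted to the Indian dataset.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    dS = -beta * I * S              # Eq. (1)
    dI = beta * I * S - gamma * I   # Eq. (2)
    dR = gamma * I                  # Eq. (3)
    return dS, dI, dR

# Illustrative values: normalized population, one initial infection in 1000
beta, gamma = 0.5, 0.1
y0 = (0.999, 0.001, 0.0)
t = np.linspace(0, 100, 1000)

S, I, R = odeint(sir, y0, t, args=(beta, gamma)).T
print(f"peak infected fraction: {I.max():.3f}")
```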


Fig. 5 SEIR Model (Susceptible, Exposed, Infectious and Recovered) diagram

3.2 SEIR Model

For many significant infections, there is a substantial incubation period during which individuals have been infected but are not yet infectious themselves. The compartment E holds individuals during this period. This process is shown in Fig. 5. By assuming that the incubation period is a random variable with an exponential distribution with parameter α (i.e., the average incubation period is α^{-1}), and also assuming the presence of vital dynamics with birth rate λ equal to death rate μ, the model is described as:

\frac{dS}{dt} = \lambda N - \mu S - \frac{\beta I S}{N}   (5)

\frac{dE}{dt} = \frac{\beta I S}{N} - (\mu + \alpha) E   (6)

\frac{dI}{dt} = \alpha E - (\gamma + \mu) I   (7)

\frac{dR}{dt} = \gamma I - \mu R   (8)

Numerous diseases have a latent phase during which the individual is infected but not yet infectious. This delay between the acquisition of infection and the infectious state can be incorporated within the SIR model by adding a latent/exposed population, E, and letting infected (but not yet infectious) individuals move from S to E and from E to I.
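The SEIR system of Eqs. (5)-(8) extends the same numerical setup with the exposed compartment E; again, every parameter value below is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, N, beta, gamma, alpha, lam, mu):
    S, E, I, R = y
    dS = lam * N - mu * S - beta * I * S / N   # Eq. (5)
    dE = beta * I * S / N - (mu + alpha) * E   # Eq. (6)
    dI = alpha * E - (gamma + mu) * I          # Eq. (7)
    dR = gamma * I - mu * R                    # Eq. (8)
    return dS, dE, dI, dR

N = 1_000_000
beta, gamma, alpha = 0.5, 0.1, 0.2   # alpha = 1 / (average incubation period)
lam = mu = 0.0                       # no vital dynamics for a short outbreak
y0 = (N - 1, 0, 1, 0)
t = np.linspace(0, 200, 2000)

S, E, I, R = odeint(seir, y0, t, args=(N, beta, gamma, alpha, lam, mu)).T
print(f"peak infectious count: {I.max():.0f}")
```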

4 Result and Discussion

A comparative study between the SIR (Susceptible-Infected-Recovered with immunity) model and the Susceptible-Exposed-Infected-Recovered (SEIR) model was performed (see Figs. 4 and 5 for the model structures). As a result, it was found that the SEIR model performs better, as it additionally includes an exposed group of individuals. The SEIR model is a compartmental numerical model for the prediction of coronavirus


epidemic dynamics incorporating pathogens in the environment and interventions. As a category of quarantined (exposed) individuals is involved, the SEIR model is more efficient. The result for the S-I-R model is shown in Fig. 6, and the result for the S-E-I-R model is shown in Fig. 7.

Fig. 6 SIR Model

Fig. 7 SEIR Model


Fig. 8 Counts of susceptible, infected, and recovered cases

Fig. 9 Counts of susceptible, infected, exposed, and recovered cases

The SIR and SEIR models were compared for the susceptible, infected, and recovered people in some of the countries, as shown in Figs. 8 and 9.

5 Conclusion

In this article, through analyzing the current data of the Hubei epidemic situation, the corresponding model is set up, and later the simulation is carried out. Here, a comparative investigation of COVID-19 models was considered, namely the SIR model and the SEIR model. Furthermore, the development trend of the current epidemic data was predicted, and it was found that imposing controls would have a significant effect on the epidemic. Analyses and predicted data through state-wise and nation-wise views were examined. Finally, this article can make some contributions to the world's response to this epidemic and provide some references for future enhancement.


References

1. Ellison G (2020) Implications of heterogeneous SIR models for analyses of COVID-19. National Bureau of Economic Research, June 2020
2. Wangping J, Ke H, Yang S (2020) Extended SIR prediction of the epidemics trend of COVID-19 in Italy and compared with Hunan, China. frontiersin
3. Ji C, Jiang D (2014) Threshold behaviour of a stochastic SIR model. In: Applied mathematical modelling. Elsevier
4. Leung TC (2020) Comparing the change of R0 for the COVID-19 pandemic in 8 countries using SIR model for specific periods
5. Chen YC, Lu PE, Chang CS (2020) A time-dependent SIR model for COVID-19 with undetectable infected persons. IEEE
6. Hasan S, Al-Zoubi A, Freihet A (2019) Solution of fractional SIR epidemic model using residual power series method
7. Chen X, Li J, Xiao C, Yang P (2020) Numerical solution and parameter estimation for uncertain SIR model with application to COVID-19. In: Fuzzy optimization and decision making
8. Suba M, Shanmugapriya R, Balamuralitharan S (2019) Current mathematical models and numerical simulation of SIR model for coronavirus disease
9. Biswas MHA, Paiva LT, de Pinho MdR (2014) A SEIR model for control of infectious diseases with constraints. Math Biosci Eng 11(4)
10. Lekone PE (2006) Statistical interface in a stochastic epidemic SEIR model with control intervention: Ebola as a case study
11. Kamrujjaman Md, Ghosh U, Islam MdS (2020) Pandemic and the dynamics of SEIR model: case COVID-19
12. Pandey G, Chaudhary P, Pal S (2020) SEIR and regression model based COVID-19 outbreak predictions in India
13. Kutrolli G, Kutrolli M (2020) The origin, diffusion and the comparison of Ode numerical solutions used by SIR model in order to predict SARS-CoV-2 in Nordic countries
14. Annas S, Pratama MI, Rifandi M, Sanusi W (2020) Stability analysis and numerical simulation of SEIR model for pandemic COVID-19 spread in Indonesia
15. Kaddar A, Abta A, Alaoui HT (2011) A comparison of delayed SIR and SEIR epidemic models. Researchgate.net

A Comparative Analysis of Energy Consumption in Wireless Sensor Networks

Nasser Otayf and Mohamed Abbas

Abstract Over the last decades, manufacturers, researchers, and users have paid considerable attention to the wireless sensor network (WSN), especially for management and monitoring tasks requiring a long lifetime and greater reliability and durability to effectively collect data in a variety of environments. The main goal in the creation of applications and protocols is to minimize energy usage while maximizing network lifetime. This paper highlights several research recommendations on energy usage in WSNs, with assessments of various energy consumption and cluster stability schemes. The research then conducted a comparative study between the protocols and algorithms that effectively contribute to reducing energy consumption in these types of networks. The results of that comparison showed the ability of those algorithms and protocols to reduce that energy, but in varying proportions. It can be concluded that a significant reduction in energy consumption can be obtained by combining a number of these protocols.

Keywords WSN · Energy consumption · Efficient clustering · Protocols · Algorithms

1 Introduction

Most studies have established different approaches, algorithms, clustering schemes, and protocols to achieve maximum energy efficiency with respect to energy consumption in wireless sensor technology. A wireless sensor network (WSN) consists of several nodes used for multi-hop communication [1]. For monitoring physical and environmental conditions such as temperature, sound, vibration, pressure, motion, and pollutants, wireless sensor networks may be defined as self-configuring wireless networks, free of all infrastructure, which track data or transmit it through the network to a central point or sink. The user-to-network interface is the base station or sink.

N. Otayf (B) · M. Abbas
College of Engineering, King Khalid University, Abha, Saudi Arabia
M. Abbas
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_10


Fig. 1 Wireless sensor network infrastructure

The relevant network information can be collected by injecting queries and gathering data at the sink. In general, a wireless sensor network has thousands of nodes. Sensor nodes can communicate by radio. A wireless node includes sensing modules, transceivers, and power components. Every node within the WSN is limited in processing speed, storage, and bandwidth. Figure 1 shows the infrastructure of a wireless sensor network.

Often, after deployment, sensor nodes are responsible for self-organizing a robust multi-hop network infrastructure. Wireless sensor technology is being used in most application areas, including the environment, the military, building automation, agriculture, and industry, and must ensure optimum performance [2]. The WSN should work without delay or with low delay and provide accurate data with minimal energy consumption. The capacity to work under minimal energy consumption has, however, remained one of the major performance limits of WSNs. The energy consumption in WSNs is related to three processes, namely sensing, data processing, and data transmission. The energy consumption of the three processes depends on the particular requirements of the application, including the number of nodes involved, the power supplied, the service life of the sensors, the data to be sensed and its timing, the position of the sensors, and the environment and context in which the sensors are working [3]. Most wireless sensor network researchers and practitioners face the challenge of achieving low energy consumption in WSNs while ensuring stable communication, fast transmission of data, and good sensing, with additional challenges in area of coverage, storage, and scalability. Energy is the big problem in all wireless sensor networks. The energy cost is related to sensor node operation, data transmission, and data processing. The high energy consumption of WSNs poses a major challenge, since the sensors rely on limited-capacity batteries, which is problematic in most WSN applications where simple replacement of the batteries is not possible. The explanation behind the high energy consumption of the sensors is the electronic communication processes, whereby energy consumption increases with the amount of data transmitted, the distance between the sender and the recipient, and the number of node collisions [4]. The other problem in WSNs leading to high energy consumption is the coverage area.
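The dependence of transmission energy on data volume and distance described above is commonly captured by the first-order radio model used across the WSN literature (e.g., in LEACH-style analyses). The constants below are typical textbook values, given as assumptions rather than figures from this paper:

```python
E_ELEC = 50e-9      # J/bit consumed by transmitter/receiver electronics
EPS_AMP = 100e-12   # J/bit/m^2 consumed by the transmit amplifier

def tx_energy(k_bits, d_meters):
    """Energy to transmit k bits over distance d (free-space, d^2 path loss)."""
    return E_ELEC * k_bits + EPS_AMP * k_bits * d_meters**2

def rx_energy(k_bits):
    """Energy to receive k bits."""
    return E_ELEC * k_bits

# Doubling the distance quadruples the amplifier term
for d in (10, 20, 40):
    print(d, tx_energy(2000, d))  # 2000-bit packet
```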


Sensors are equipped with an accurate but restricted view within a certain range. Increasing the coverage area beyond the limit at which the sensors retain precision causes the sensors to consume high energy in their processes. The energy consumption for electronic communication processes increases with the distance between the receiver and the transmitter, as given in [4]. Scalability is another daunting problem in WSN energy use. The network shall remain scalable, such that services can be provided at an acceptable level when the network size changes or increases. Sensor networks involve thousands of sensor nodes, which makes it difficult to ensure stable operation [4]. The last challenge for WSNs is to ensure quality of service (QoS). Service quality remains challenging in most applications because it requires the processing and transmission of sensed data within a given period. Time constraints usually conflict with the energy conservation needed for the lifespan of the WSN [5]. Therefore, collection and transmission processes will need to consume high energy for quality of service to be achieved, which also decreases the lifespan of the WSN. The rest of the paper is structured as follows. Section 2 describes the design concentrations of the WSN. Section 3 defines the specifications for WSN design. Section 4 addresses recent trends and a critical review of past WSN studies. The findings of the comparison are discussed in Sect. 5, while Sect. 6 concludes the study.

2 WSN Design Concentration

Each layer of the WSN system design has an energy-consuming portion, so energy conservation is feasible during the different design phases. By optimizing software and implementing appropriate techniques, energy consumption can be reduced and the lifetime extended. A WSN performs a variety of functions in the course of meeting the criteria for providing application features. The sensor node typically contains multiple hardware modules, each of which plays a particular role in the larger effort to fulfill requirements; these units include communication, processing, sensing, and control [6], and the hardware modules comprise multiple devices. During the design process, two methods can in principle be used as part of the energy-saving effort: the hardware concentration approach and the software concentration approach. The first involves selecting the hardware components that best handle the functionality at the lowest energy expense; the hardware focus is generally on the processing and communication units due to the wide range of available options.

2.1 Hardware Design Requirements

Hardware scaling techniques are also used as a means to control node capacity. Three components can be scaled: the sensors, the radio, and the CPU [7]. Hardware scaling involves modifying hardware settings and the relevant parameters, adjusting the frequency, rate, or voltage to increase the sensor node's lifetime.

The techniques for energy optimization can be used at different abstraction layers, from the device level to the system level [8]. Idle blocks in the device are disabled, and the supply voltage is scaled to match the required power mode. In the processing unit, dynamic voltage scaling (DVS) can be used to minimize the amount of energy consumed without undermining device performance [9]. Alternative approaches, such as profiled power management [10], use voltage and frequency scaling to decrease the amount of energy used in conjunction with the power level in the node.

2.2 Software Design Requirements

Optimizing software is a post-design approach that can be used for both current and new hardware designs. During software optimization, algorithmic modifications are performed to reduce energy consumption without sacrificing functional efficiency. Different strategies—including data gathering, routing, data reduction, clustering, duty cycling, and data processing—can be used at all levels to minimize energy consumption while maintaining consistent performance [11]. This strategy usually involves frequently turning tools on and off, such as the node's radio transceiver; disabled resources are restarted when invoked again. If the data packet sizes are small, the energy needed to start the transceiver is higher than that needed for idle operation [12]. The combination of an energy-aware routing approach and a low-power mode algorithm [13] will ensure energy efficiency. Various methods derived from duty cycling may take topology into account; these include location-based and connection-based techniques. In a location-driven system, nearby nodes are put into sleep mode on the assumption that the data in the vicinity does not differ substantially. One downside of this strategy is that the radio spectrum can be underused, because communication capabilities are restricted to local grids and neighborhoods [14]. LEACH-CS suggested that some nodes remain active based on data importance while the majority of the nodes stay in sleep mode. A virtual grid and mobile sink routing algorithm has also been proposed, in which nodes selected by the virtual infrastructure hold the sink's last position; this algorithm achieved better energy consumption and latency performance than comparable algorithms [15].

3 WSN Design Requirements

The primary objective of wireless sensor networks is to replace wired networks, primarily in fields where wired network implementation is complicated by geographical location or too expensive to meet changing needs and usage.


Extending network lifetime, in addition to speed and power, is the key challenge of a wireless sensor network in comparison with conventional wired networks. Throughout, the emphasis is on optimizing channels while using the minimum number of nodes [16]. Coverage in WSNs is based on sensors that can respond to different physical stimuli, such as pressure, sound, smoke, and heat, transforming the stimuli into corresponding signals that are mapped to sensor information. WSNs contain different types of sensing nodes that are used to detect and monitor certain activities, and the sensors' coverage ability is significant for achieving optimum energy consumption. A wireless sensor network has a base station that needs a continuous power supply, while the nodes have limited resources: limited memory, limited processing capacity, and limited harvesting capability, which largely determines longevity over the complete network life. The lifetime of a sensor node is restricted by its power backup for a given period of measurement and communication, and by its distance from a given node. A node is considered dead, and handled accordingly, when its power backup is depleted below the threshold. Energy conservation methods thus represent the key challenge for wireless sensor networks, and numerous approaches to solving the problem have been identified [17]. Figure 2 shows the percentage of energy consumed by various sensor node activities during the active time period, as available in the literature; it is found that more than 50% of the energy is used by the sensor node during communication. The maximum coverage area is used for communication, through which network access within the deployed wireless sensor network is maintained. Data collection and aggregation routinely compile the sensed data from different sensor nodes to be sent to the base station for processing. The wireless sensor nodes are energy-constrained and remote.

Fig. 2 Energy consumption by various activities of a WSN


Using all wireless sensor nodes principally for data transmission directly to the base station is inefficient. In addition, if multiple sensing nodes sense the same event, the data becomes large and redundant. Procedures are therefore required to combine the redundant data at specified wireless sensor nodes, which reduces the total number of packets and, as expected, the bandwidth and energy consumption. Clustering algorithms play a key role in data collection and aggregation with maximized area coverage. Clustering groups sensor nodes along an action path under a given cluster head in a large-scale sensor network. The energy limitation of sensor nodes mainly contributes to restricted network lifetime, which is why clustering is seen as necessary: it decreases the overall energy use of the network and further improves the network lifetime. Moreover, as soon as a sensor node drains its resources, packets are no longer received and transmitted, the network becomes disconnected, and further coverage area is lost. Once data is aggregated, data transmission from the sensor nodes to the base station is maximized through energy-efficient route discovery and extended network life. A balanced multi-path routing algorithm was proposed, based on the residual energy and hop count of each node, to discover and integrate the best routes into the routing table; the key principles of this algorithm are ant colony optimization (ACO) and automated network modeling [18].

4 Energy Consumption Algorithms

Achieving optimum energy consumption in wireless sensor networks requires clusters and protocols designed to increase energy efficiency and improve network lifetime. Energy consumption in wireless sensor networks is mostly associated with the number and distance of data transmissions: nodes that are far from the base station, or that carry more information-forwarding tasks, consume more energy than others, so some nodes exhaust their energy before the rest. Various studies have identified the application of clustering routing protocols as a solution for achieving energy balance and optimum energy consumption. This section analyzes various routing protocols and clustering schemes that have been proposed and implemented in various studies. The improved harmony search (IHS) algorithm was implemented in a predetermined scheme involving a node deployment strategy and equation improvements where the number of selections increases [19]; additionally, the sink node position was changed to find the optimal position and reduce node energy consumption. Both experimental results respond positively, reducing energy consumption as the range increases. A new shuffled frog-leaping algorithm (SFLA) model has also been proposed. The model mainly provides a mathematical representation of the energy consumption in the physical layer, with the transmission power, reception power, and signal bandwidth, and builds on the WSN's total energy consumption and transmission power [20] for the objective optimization function of energy consumption balance.


Another energy balance scheme is the new grid-based hybrid network deployment (GHND) architecture, which ensures energy efficiency and balance in wireless sensor networks. A fuzzy-based data fusion method has been suggested for WSNs, which increases QoS and decreases energy consumption [21]. An energy-efficient broadcasting scheme has been proposed for smart industrial wireless sensor networks [22]; the scheme implements a controlled broadcast radius, improving the network upgrade functionality. The nodes use their residual energy in the data collection cycle to increase the likelihood of receiving packets and to minimize the delay in packet transmission by extending the broadcast range, i.e., the transmission power [23]. A stable, energy-efficient data transformation structure using the TinyMD5 algorithm has been proposed for wireless sensor networks [24]. A dual-mode interest forwarding (DMIF) scheme is proposed in [25]; DMIF consists of two hybrid transmission modes designed to preserve and balance power consumption through a range of energy-efficient mechanisms, including multi-faceted mode changes, flood protection, storm prevention, packet deletion, and energy weight factors. An energy-efficient cluster-based scheduling method that balances sensor network service life and efficiency [26] is another highlight. A new link scheduling scheme based on NSGA-II (Non-dominated Sorting Genetic Algorithm II) has also been introduced; the scheme is based on an optimal routing tree that achieves minimum scheduling time and energy consumption in wireless sensor networks [27]. A WSN energy model was proposed that specifies the energy consumption at each node, calculating the energy of the key operations performed while the routing protocol is running. Experimental findings show that the MPH routing protocol consumes 16%, 13%, and 5% less energy than AODV, DSR, and ZTR, respectively, and shows just 2% and 3% higher energy consumption than the PEGASIS and LEACH protocols [28]. An approach to reduce energy consumption and extend network lifetime has been proposed; it augments the energy balancing in clusters among all sensor nodes to minimize the energy dissipation during network communications, based on a cluster head selection method [29]. The QoS-assured multi-objective hybrid routing algorithm (Q-MOHRA) was shown to outperform the simple hybrid routing protocol (SHRP) and the dynamic multi-objective routing algorithm (DyMORA): its packet delivery ratio is 24.31% higher than SHRP's and 11.86% higher than DyMORA's [30]. Power consumption in a WSN is related to the network node and is calculated in [31], where the distributed coordination function and handshaking are used. Let TS, TI, TR and TX be the fractions of time spent by the interface in each of the possible power states—sleep, idle, receive and transmit, with powers PS, PI, PR and PX, respectively; these fractions of time sum to one. The model of the average power consumed (P) by the interface is then:

P = TS × PS + TI × PI + TR × PR + TX × PX

If a node with very low reserve power is chosen as a cluster head, it can be depleted before gathering all the data from neighboring nodes.


The reserve power fitness (F) is given in [32] as follows:

F = Σ (k = 1 to N) [ REk − (ET + |C|k × ER + ED) ]

where REk is the residual energy of the k-th cluster head; ET, ER and ED correspond to the energy for data transmission, reception and aggregation, respectively; and |C|k is the coverage area of the k-th cluster head. Below is a comparison of flat routing schemes based on sensor use and cluster stability; for the hierarchical routing schemes, the energy-based performance is illustrated in Table 2.
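As a minimal, hedged sketch (not from the cited sources), the two models above can be expressed in Python; the state names and argument shapes below are assumptions for illustration only:

```python
def average_power(t, p):
    """P = TS*PS + TI*PI + TR*PR + TX*PX, with the time fractions t summing to 1."""
    states = ("sleep", "idle", "receive", "transmit")
    assert abs(sum(t[s] for s in states) - 1.0) < 1e-9  # fractions must sum to one
    return sum(t[s] * p[s] for s in states)

def reserve_power_fitness(re, cov, e_t, e_r, e_d):
    """F = sum over cluster heads k of RE_k - (ET + |C|_k * ER + ED)."""
    return sum(re_k - (e_t + c_k * e_r + e_d) for re_k, c_k in zip(re, cov))

# Example: an interface that sleeps 80% of the time (illustrative numbers).
P = average_power({"sleep": 0.8, "idle": 0.1, "receive": 0.05, "transmit": 0.05},
                  {"sleep": 0.02, "idle": 0.5, "receive": 1.0, "transmit": 1.4})
```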

5 Comparative Analysis

Table 1 is illustrated by Fig. 3, which covers the energy consumption of the different routing protocols. According to Table 2, the SAERP scheme has the lowest energy consumption among the schemes covered, while EACBRS has the best cluster stability, as shown in Figs. 4 and 5. Compared to the other schemes covered, the ORW scheme is considered the most effective at reducing energy consumption and delay in a wireless sensor network. Three main aspects make ORW the most efficient energy optimization scheme for WSNs: it preserves reliability, and it balances energy efficiency against throughput by using opportunistic routing, which increases energy quality relative to other schemes. Opportunistic routing also brings low latency to duty-cycled networks, limiting the number of nodes that simultaneously overhear a packet and thereby reducing energy consumption. ORW further uses a lightweight algorithm that provides unique forwarder selection and delays the decision to pick a forwarder until the packet has been received, which minimizes energy consumption.

Table 1 Flat routing schemes

Scheme | Description | Energy consumption per sensor node (A) | Cluster stability
ORW | Decreases delay and duty cycle without sacrificing energy efficiency and reliability | 20 | 15
REL | Energy efficiency and load balancing using end-to-end link quality for IoT applications | 15 | 18
FACOR | Energy efficient, reliable, with good end-to-end delay | 15 | 18
OD-PRRP | Reduces control overhead without compromising cost and energy efficiency | 20 | 18


Fig. 3 Energy consumption for different routing schemes

Table 2 Hierarchical routing schemes

Scheme | Description | Energy consumption per sensor node (A) | Cluster stability per second
LLEAP | Three-phase cluster and tree formation for balanced routing | 17 | 15
ERP | Efficient cluster formation using cohesion and cluster separation metrics | 15 | 18
EACBRS | Energy-efficient route selection with congestion control | 17 | 19
SHEAR | Two-level cluster formation with energy-efficient path selection | 13 | 15
SAERP | Clustering which increases the stability period while decreasing the instability period | 12 | 14
ICP | Efficient clustering with fast setup time and lower overheads | 16 | 14

From Fig. 6, under these criteria, the ORW scheme had the best performance score, followed by REL, FACOR, and OD-PRRP, respectively.


Fig. 4 Energy consumption in hierarchical routing schemes

Fig. 5 Cluster stability in hierarchical routing schemes

Fig. 6 Energy consumption and cluster stability


Fig. 7 Energy consumption in WSN routing protocols

Comparing the overall performance among the hierarchical schemes, LLEAP performs best. Figure 7 shows an energy consumption comparison between MPH, ZTR, AODV, and DSR. The practical analysis was done by a reference comparison between the algorithms and protocols used for intra-network data transfer. It refers to the statistical analysis of the performance of these algorithms with respect to two measures: the energy consumption of each element in the circuit and the stability data for each algorithm. This statistical analysis aims to present the comparison in a precise way that highlights the efficiency of each algorithm.

6 Conclusions

This study illustrates the variables to consider while developing a WSN. As shown in the report, there is potential for energy conservation at multiple levels and design stages. When designing WSN hardware modules, energy consumption is a fundamental factor: devices that implement high functionality consume high power. Different protocols and schemes were also explained.


This study is characterized by a recent standard comparison, based on studies conducted in the past five years, between the latest algorithms used to transmit wireless sensor network (WSN) data. It is the first comparison that takes into consideration both the stability and the power consumption of each algorithm. Regarding future research directions: the batteries supply a finite quantity of energy in the wireless sensor network, and it is not easy to swap or recharge them, so the sensor nodes are energy-restricted. For wireless sensor networks, energy-compliant and energy-aware protocols at each communication layer are therefore very important. Energy-efficient routing plays a very critical role and is considered one of the most important and serious elements of such a network. Most power is consumed in communications, when sensor nodes transmit or receive information. In this case, an energy-efficient scheme that minimizes overall energy consumption and increases energy efficiency will be implemented to provide power conservation and the best solution for the network's limited lifespan. A neural network (NN) optimization algorithm will be used to set the objective function for minimizing the total energy consumption of the WSN under a given threshold.

Acknowledgements The authors are thankful to the Electrical Engineering Department, College of Engineering at King Khalid University, for supporting this research.

References 1. Del-Valle-Soto C, Max-Perera C, Nolazco-Flores J, Velázquez R, Rossa-Sierra A (2020) Wireless sensor network energy model and its use in the optimization of routing protocols. Energies 13(728). https://doi.org/10.3390/en13030728 2. Zhou Y, Yang L, Yang L, Ni M (2019) Novel energy-efficient data gathering scheme exploiting spatial-temporal correlation for wireless sensor networks. Hindawi Wirel Commun Mob Comput. https://doi.org/10.1155/2019/4182563 3. Carlos-Mancilla M, López-Mellado E, Siller M (2016) Wireless sensor networks formation: approaches and techniques. Hindawi Publ Corporation J Sens. https://doi.org/10.1155/2016/ 2081902 4. Talib M (2018) Minimizing the energy consumption in wireless sensor networks. J Babylon Univ Pure Appl Sci 26(1). https://doi.org/10.29196/jub.v26i1.349 5. Alshehri A, Lin S, Akyildiz I (2017) Optimal energy planning for wireless self-contained sensor networks in oil reservoirs. In: IEEE ICC 2017 ad-hoc and sensor networking symposium 6. Pandey S, Jain R, Kumar S (2018) An efficient data aggregation algorithm with gossiping for smart transportation system. In: International conference on communication, networks and computing. Springer 7. Gaber T et al (2018) Trust-based secure clustering in WSN-based intelligent transportation systems. Comput Netw 146:151–158 8. Sabor N et al (2017) A comprehensive survey on hierarchical-based routing protocols for mobile wireless sensor networks: review, taxonomy, and future directions. Wirel Commun Mobile Comput. https://doi.org/10.1155/2017/2818542 9. Tunca C et al (2015) Ring routing: an energy-efficient routing protocol for wireless sensor networks with a mobile sink. IEEE Trans Mob Comput 14(9):1947–1960


10. Wang J et al (2017) Energy-efficient cluster-based dynamic routes adjustment approach for wireless sensor networks with mobile sinks. J Supercomput 73(7):3277–3290 11. Akbar M et al (2016) Sink mobility aware energy-efficient network integrated super heterogeneous protocol for WSNs. EURASIP J Wirel Commun Netw 2016(1):66 12. Zhang J, Tang J, Chen F (2016) Energy-efficient data collection algorithms based on clustering for mobility-enabled wireless sensor networks. In: International conference on cloud computing and security. Springer 13. Zhao H et al (2015) Energy-efficient topology control algorithm for maximizing network lifetime in wireless sensor networks with mobile sink. Appl Soft Comput 34:539–550 14. Miao Y et al (2018) Time efficient data collection with mobile sink and vMIMO technique in wireless sensor networks. IEEE Syst J 12(1):639–647 15. Yarinezhad R, Sarabi A (2018) Reducing delay and energy consumption in wireless sensor networks by making virtual grid infrastructure and using mobile sink. AEU Int J Electron Commun 84:144–152 16. Zhao M, Yang Y, Wang C (2015) Mobile data gathering with load balanced clustering and dual data uploading in wireless sensor networks. IEEE Trans Mob Comput 14(4):770–785 17. Saranya V, Shankar S, Kanagachidambaresan G (2018) Energy efficient clustering scheme (EECS) for wireless sensor network with mobile sink. Wirel Pers Commun 100(4):1553–1567 18. Laouid A, Dahmani A, Bounceur A, Euler R, Lalem F, Tari A (2017) A distributed multi-path routing algorithm to balance energy consumption in wireless sensor networks. Ad Hoc Netw 64:53–64 19. Halim NH, Isa A, Hamid A, Isa I, Mahyuddin M, Saat S, Zin M (2018) A pre-defined scheme for optimum energy consumption in wireless sensor network. J Telecommun Electron Comput Eng 10(1–8) 20. Zhou C, Wang M, Qu W, Lu Z (2018) A wireless sensor network model considering energy consumption balance. Hindawi Math Prob Eng. https://doi.org/10.1155/2018/8592821 21. Collotta M, Pau G, Bobovich A (2017) A fuzzy data fusion solution to enhance the QoS and the energy consumption in wireless sensor networks. Hindawi Wirel Commun Mob Comput. https://doi.org/10.1155/2017/3418284 22. Chen J, Jia J, Dai E, Wen Y, Dazhe Z (2015) Bicriteria optimization in wireless sensor networks: link scheduling and energy consumption. Hindawi Publ Corporation J Sens. https://doi.org/10. 1155/2015/724628 23. Chen Z, Liu A, Li Z, Cjoi Y, Sekiya H, Li J (2017) Energy-efficient broadcasting scheme for smart industrial wireless sensor networks. Hindawi Mob Inform Syst. https://doi.org/10.1155/ 2017/7538190 24. Farman H, Javad H, Ahmad J, Jan B, Zeeshan M (2016) Grid-based hybrid network deployment approach for energy efficient wireless sensor networks. Hindawi Publ Corporation J Sens. https://doi.org/10.1155/2016/2326917 25. Gao S, Zhang H, Zhang B (2016) Energy efficient interest forwarding in NDN-based wireless sensor networks. Hindawi Publ Corporation Mob Inform Syst. https://doi.org/10.1155/2016/ 3127029 26. Rahhala H, Ramadan RA (2015) A novel multi-threshold energy (MTE) technique for wireless sensor networks. In: International conference on communication, management and information technology (ICCMIT 2015). Procedia Comput Sci 65:25–34 27. Xiao L, Wu F, Yang D, Zhang T, Zhu X (2015) Energy efficient wireless sensor network modelling based on complex networks. Hindawi Publ Corporation J Sens. https://doi.org/10. 1155/2016/3831810 28. 
Del-Valle-Soto C, Mex-Perera C, Nolazco-Flores J, Velázquez R, Rossa-Sierra A (2020) Wireless sensor network energy model and its use in the optimization of routing protocols. Energies 13:728 29. Elshrkawey M, Elsherif SM, Wahed ME (2017) An enhancement approach for reducing the energy consumption in wireless sensor networks. J King Saud Univ Comput Inform Sci 30:259– 267


30. Kulkarni N, Prasad N, Prasad R (2018) Q-MOHRA: QoS assured multi-objective hybrid routing algorithm for heterogeneous WSN. Wirel Pers Commun 100:255–266 31. John T, Ukwuoma HC, Danjuma S, Ibrahim M (2016) Energy consumption in wireless sensor network. Comput Eng Intell Syst 7:63–67 32. Lee J, Chim S, Park H (2019) Energy-efficient cluster-head selection for wireless sensor networks using sampling-based spider monkey optimization. Sensors (Basel, Switzerland) 19

Performance Evaluation Among ID3, C4.5, and CART Decision Tree Algorithm F. M. Javed Mehedi Shamrat, Rumesh Ranjan, Khan Md. Hasib, Amit Yadav, and Abdul Hasib Siddique

Abstract Data is the most valuable resource in the present day. Classifying data and using the classified data to make decisions holds the highest priority. Computers are trained to manage data automatically using machine learning algorithms, producing judgments as outputs. Several data mining algorithms are available for classification, such as artificial neural networks, the nearest neighbor rule, and Bayesian classifiers, but decision tree mining is the most commonly used. Data can be classified easily using the decision tree classification learning process: the tree is trained on a training dataset and then applied to a test set from which a result is expected. Three decision tree algorithms (ID3, C4.5, and CART) are extensively used, all based on Hunt's algorithm. This paper focuses on the differences between the working processes, significance, and accuracy of the three algorithms. A comparative analysis among the algorithms is illustrated as well.

Keywords ID3 · C4.5 · CART · Algorithm · Classification · Machine learning · Decision tree

F. M. Javed Mehedi Shamrat (B) Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh R. Ranjan Department of Plant Breeding and Genetics, Punjab Agriculture University, Ludhiana, Punjab, India e-mail: [email protected] K. Md. Hasib Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, Bangladesh A. Yadav Department of Information and Software Engineering, Chengdu Neusoft University, Chengdu, China A. H. Siddique International University of Scholars, Dhaka, Bangladesh © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_11


1 Introduction

Since the mid-1950s, artificial intelligence has been a significant research area, and machine learning (ML) is a vital part of it. Machine learning refers to a machine's ability to learn: "machine learning algorithms are established so that machines are able to learn by themselves and provide high performance" [1, 2]. Nowadays, deep learning is also having an impact [3]. Among the many supervised algorithms, the decision tree is the most popular for its simplicity and efficiency. "Decision trees gained their popularity because they follow a flow framework similar to human logic and reasoning" [4–7]. As per [8], "the accuracy of a decision tree is equivalent to or higher than other classification models" [9], "because decision trees do not need a large number of parameters to give an outcome" [10]. "Decision trees have been utilized as reliable procedures for decision making on account of their graphical structures, which can be easily understood and applied. Decision trees are characterized by partitioning varied information according to similarities so that the information becomes increasingly homogeneous with respect to the target variable" [11, 12]. The key aim of this analysis is to examine the various algorithms. Among the many algorithms, ID3, C4.5, and CART are the most prominent. Each algorithm has its own strengths and flaws, and each was built to be more efficient than the last. The three algorithms have an overall high efficiency rate and low execution time. This paper discusses the properties and working processes of decision tree learning algorithms. Based on prior research, an analysis is given to help understand the algorithms, differentiate their structures, and examine their accuracy rates. Figure 1 indicates the organization of the system.

2 Proposed Work

2.1 Decision Tree

As the name suggests, the algorithm helps make a decision, and its structure is similar to that of a tree. A decision tree contains nodes that form a rooted tree: it is a directed tree with a node called the "root" that has no incoming edges, while every other node has exactly one incoming edge. A node with outgoing edges is called an internal or test node; all other nodes are called leaves, also known as terminal or decision nodes [13]. Each leaf node is assigned a class label. The decision tree is constructed from the training set [14]. Such a decision tree is shown in Fig. 2.


Fig. 1 Entire system diagram

Fig. 2 Decision tree structure


2.2 ID3 (Iterative Dichotomiser 3)

Quinlan Ross introduced the Iterative Dichotomiser 3 (ID3) decision tree algorithm in 1986. It is applied sequentially and is based on Hunt's algorithm. The fundamental idea of the ID3 algorithm is to build the decision tree by performing a top-down, greedy search through the given sets, testing each attribute at each tree node [15]. In the algorithm, the attribute with the highest information gain is chosen as the split attribute. Information gain is used to construct the tree that classifies the test data after training. The tree stops growing when information gain reaches zero or all instances belong to a single target class [14].

Entropy: The ID3 algorithm, a straightforward decision tree method, uses the entropy-based definition of information gain as its splitting criterion. Entropy is the degree of randomness of the data and characterizes its purity: if the entropy is zero, the sample is homogeneous; otherwise it is impure [14]. The entropy for n-class classification is defined as [16]:

Entropy(S) = Σ (i = 1 to n) −p_i log2(p_i)    (1)

Information Gain: The attribute with the highest information gain is chosen as the best splitting attribute [14]. It is computed as follows [19]:

Gain(p, T) = Entropy(S) − Σ (j = 1 to n) p_j Entropy(S_j)    (2)
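As a hedged, minimal illustration (not part of the paper), Eqs. (1) and (2) can be computed in Python as follows; `rows` is assumed to be a list of dicts mapping attribute names to categorical values:

```python
import math
from collections import Counter

def entropy(labels):
    """Eq. (1): Entropy(S) = sum_i -p_i * log2(p_i) over the class proportions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Eq. (2): Entropy(S) minus the weighted entropy of each subset S_j."""
    n = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute], []).append(label)
    return entropy(labels) - sum((len(s) / n) * entropy(s)
                                 for s in subsets.values())
```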

The ID3 algorithm builds a decision tree from the information gain computed on the training data and then applies the tree to classify the test data. Algorithm 1 shows the pseudo-code, with non-target attributes R, target attribute C, and training records S.

Algorithm 1: ID3
Inputs: R: a collection of non-target attributes, C: target attribute, S: training data.
Output: returns a decision tree
Step 1: Initialize an empty tree;
Step 2: If S is empty, then
Step 3: Return a single node with the value Failure
Step 4: End if
Step 5: If S consists of records that all have the same target value, then


Step 6: Return a single node with this value
Step 7: End if
Step 8: If R is empty, then
Step 9: Return a single node with the most common value of the target attribute in S.
Step 10: End if
Step 11: D ← the attribute with the largest Gain(D, S) among all the attributes of R
Step 12: {d_j | j = 1, 2, …, m} ← the attribute values of D
Step 13: {S_j | j = 1, 2, …, m} ← the subsets of S consisting of the records with attribute value d_j for D, correspondingly
Step 14: Return a tree whose root is D, whose arcs are labeled d_1, d_2, …, d_m, and which leads to the sub-trees ID3(R − {D}, C, S_1), ID3(R − {D}, C, S_2), …, ID3(R − {D}, C, S_m).
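A compact, hedged Python rendering of Algorithm 1, reusing the entropy and information_gain helpers sketched above; the nested-dict tree representation is an assumption for illustration:

```python
def id3(rows, labels, attributes):
    """Recursive ID3: returns a class label or a nested {attribute: {value: subtree}} dict."""
    if len(set(labels)) == 1:                      # Steps 5-6: a single target value
        return labels[0]
    if not attributes:                             # Steps 8-9: majority value in S
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: information_gain(rows, labels, a))
    tree = {best: {}}                              # Steps 11-14: branch on best attribute
    for value in {row[best] for row in rows}:
        keep = [i for i, row in enumerate(rows) if row[best] == value]
        tree[best][value] = id3([rows[i] for i in keep],
                                [labels[i] for i in keep],
                                [a for a in attributes if a != best])
    return tree
```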

2.3 C4.5 (Classification 4.5)

Quinlan Ross created the C4.5 (Classification 4.5) algorithm in 1993 as the successor to the ID3 algorithm. Unlike ID3, in C4.5 pruning is performed by replacing an internal node with a leaf node, thereby reducing the error rate [17]. It has an upgraded approach to tree pruning [19] that reduces misclassification errors due to noise and excessive detail in the training set. The C4.5 algorithm includes the concepts of information gain ratio and continuous attributes [18]. It extends ID3 with different pruning techniques to avoid overfitting the tree. C4.5 uses the gain ratio impurity measure to evaluate the splitting attribute. The C4.5 algorithm has the following advantages [15, 17]:

• Handles attributes with differing costs.
• Handles training data with missing attribute values by marking them with "?"; the missing attribute values are not used in the gain and entropy calculations.
• Handles both continuous and discrete attributes by creating a threshold and then splitting the list into records whose attribute value is above the threshold and those whose value is less than or equal to it.
• C4.5 goes back through the tree once it has been created and attempts to prune it.

Information Gain Ratio: This ratio builds on the idea of information gain, using the following formula [18]:

gain ratio(A) = Gain(A) / SplitI(A)    (3)


in which

SplitI(A) = Σ (j = 1 to v) −p_j log2(p_j)    (4)
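Continuing the same illustrative sketch, Eqs. (3) and (4) extend the information gain helper; again an assumption-laden example, not the authors' code:

```python
def gain_ratio(rows, labels, attribute):
    """Eqs. (3)-(4): Gain(A) / SplitI(A), SplitI being the entropy of the split sizes."""
    n = len(labels)
    sizes = Counter(row[attribute] for row in rows).values()
    split_info = -sum((c / n) * math.log2(c / n) for c in sizes)
    if split_info == 0:          # attribute has a single value: no useful split
        return 0.0
    return information_gain(rows, labels, attribute) / split_info
```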

2.4 Classification and Regression Trees (CART)

CART is based on Hunt's algorithm and can be applied sequentially. Classification tree construction in CART is based on binary splitting of attributes using the Gini index [15, 17]. The Gini index is an impurity-based measure that gauges the divergence between the probability distributions of the target attribute's values. The following equation defines the Gini index [13]:

Gini(y, S) = 1 − Σ (c_j ∈ dom(y)) ( |σ_{y=c_j} S| / |S| )^2    (5)

The regression analysis feature is used to approximate a dependent variable given a set of predictor variables over a given period. CART supports continuous and nominal attribute data and has an average processing speed [15].

CART Operation: CART constructs a binary decision tree using training data with known classifications. The numbers of entities in the two subgroups defined at each binary split, corresponding to the two branches emerging from each intermediate node, become progressively smaller [20]. The measure of impurity or entropy at node t, denoted i(t), is given by the following equation:

i(t) = − Σ (j = 1 to k) p(w_j | t) log p(w_j | t)    (6)

where p(w_j | t) is the proportion of patterns x_i belonging to class w_j at node t.
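For completeness, a one-function sketch of Eq. (5) under the same illustrative assumptions as the earlier snippets:

```python
def gini(labels):
    """Eq. (5): Gini = 1 - sum over classes c_j of (|S restricted to c_j| / |S|)^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())
```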


3 Result Analysis

3.1 ID3

Figure 3 shows the weather condition data, which is classified by a decision tree to determine which weather is suitable for playing outside. Using the ID3 decision tree algorithm, as given in Algorithm 1, the following results are obtained for the four attributes:

Gain (S, Outlook) = 0.246
Gain (S, Wind) = 0.048
Gain (S, Temperature) = 0.0289
Gain (S, Humidity) = 0.1515

Outlook has the highest information gain, which makes it the ideal root node of the decision tree. After implementation, the values are divided as in Fig. 4. Finally, the tree shown in Fig. 5 is formed.

Fig. 3 Dataset 1


Fig. 4 ID3 decision tree root node

Fig. 5 ID3 decision tree

3.2 C4.5

Replacing the humidity conditions with their actual values, we get Fig. 6. This table is used to demonstrate the C4.5 algorithm. In C4.5, information gain is calculated as in ID3, except for continuous-valued attributes: these are first sorted in ascending order and repeated values are removed. Then, information gain is calculated using Eqs. 1 and 2. The details after the estimation are given in Table 1; the highest is Gain (S, Humidity) = 0.102.


Fig. 6 Dataset 2

Table 1 Information gain for C4.5

Humidity | Info (S, T) | Gain
65 | 0.892 | 0.048
70 | 0.925 | 0.015
75 | 0.8950 | 0.045
78 | 0.85 | 0.09
80 | 0.838 | 0.102
85 | 0.915 | 0.025
90 | 0.929 | 0.011
95 | 0.892 | 0.048
96 | 0.94 | 0

Info(S) = −8/13 × log2(8/13) − 5/13 × log2(5/13) = 0.961
Info(Outlook, S) = 5/13 × Entropy(S_Sun) + 3/13 × Entropy(S_Overcast) + 5/13 × Entropy(S_Rain) = 0.747
Entropy(S_Sun) = −2/5 × log2(2/5) − 3/5 × log2(3/5) = 0.9710
Entropy(S_Overcast) = −3/3 × log2(3/3) − 0/3 × log2(0/3) = 0
Entropy(S_Rain) = −3/5 × log2(3/5) − 2/5 × log2(2/5) = 0.9710
Gain(Outlook) = 13/14 × (0.961 − 0.747) = 0.199
Therefore, we use Outlook as the root node in Fig. 7.


Fig. 7 The root node of the C4.5 decision tree

Table 2 Construction attribute data

Code | Defect type
A47 | Failure to inspect construction works and equipment
A75 | Failure to log the construction journal
A81 | Lack of quality-control statistical analysis
B1 | Nonstandard material driving
B4 | Debris on the concrete surface
B285 | Failure to install required fall-protection facilities

3.3 CART

Using the CART algorithm, a decision tree is constructed with four attributes. The analyzed data are based on a construction project in which the following defects are considered (Table 2). The root node shows several grades as percentages. The improvement occurs in three layers and is illustrated in Fig. 8.

3.4 Algorithm Comparison

The authors of [15, 17] illustrate a clear difference between the ID3, C4.5, and CART algorithms, shown in tabular format below (Table 3).


Fig. 8 Structure of the CART algorithm

Table 3 Comparisons among ID3, C4.5, and CART

Features | ID3 | C4.5 | CART
Type of data | Categorical | Continuous and categorical | Continuous and nominal attribute data
Speed | Low | Faster than ID3 | Average
Boosting | Not supported | Not supported | Supported
Pruning | No | Pre-pruning | Post-pruning
Missing values | Cannot deal with | Cannot deal with | Can deal with
Formula | Uses information entropy and information gain | Uses split info and gain ratio | Uses Gini index
Procedure | Top-down decision tree construction | Top-down decision tree construction | Constructs a binary decision tree

Machine learning algorithms are used to make better decisions with more accurate results and a reduced workload. Moreover, decision trees and other machine learning algorithms are widely used in systems such as intrusion detection frameworks, email gateways for spam detection, and smart environment monitoring [21–24]. Each algorithm is designed to address the problems in the dataset and to ensure they do not affect the outcome of the algorithm.

Table 4 Comparison of precision among ID3 and C4.5

Size of dataset | ID3 accuracy (%) | C4.5 accuracy (%)
14 | 94.15 | 96.2
24 | 78.47 | 83.52
35 | 82.2 | 84.12

3.5 Accuracy Comparison

Table 4 shows the difference between ID3 and C4.5 accuracies for various dataset sizes from [19]; the contrast can be seen vividly in Fig. 9. As the accuracy rate increases, the execution time of the algorithms is seen to decrease in comparison. For the various amounts of data in the dataset, the execution time comparison between ID3 and C4.5 is given in Table 5 and Fig. 10 [19]. Table 6 illustrates the accuracy in terms of TP and FP rates, where TP and FP refer to True Positive and False Positive; it shows the three ML techniques producing predictive models with the best class-wise accuracy [19]. A graphical view of the numbers is given in Fig. 11.

Fig. 9 Accuracy rate of ID3 and C4.5

Table 5 ID3 and C4.5 execution time

Size of dataset | ID3 execution time (s) | C4.5 execution time (s)
14 | 0.215 | 0.0015
24 | 0.32 | 0.17
35 | 0.39 | 0.23


Fig. 10 Assessment of execution time for ID3 and C4.5

Table 6 Accuracy of TP and FP

Algorithm | Class | TP rate | FP rate
ID3 | Pass | 0.714 | 0.184
ID3 | Promoted | 0.625 | 0.232
ID3 | Fail | 0.786 | 0.061
C4.5 | Pass | 0.745 | 0.209
C4.5 | Promoted | 0.517 | 0.213
C4.5 | Fail | 0.786 | 0.092
CART | Pass | 0.809 | 0.349
CART | Promoted | 0.31 | 0.18
CART | Fail | 0.643 | 0.105

Fig. 11 Accuracy of TP and FP

Table 7 Accuracy rate

Algorithm | Correctly classified instances (%) | Incorrectly classified instances (%)
ID3 | 52.0833 | 35.4167
C4.5 | 45.8333 | 54.1667
CART | 56.2500 | 43.7500

Fig. 12 The classifier’s accuracy is represented in the form of a graph

Table 7 shows the algorithms' accuracy in terms of correctly and incorrectly classified instances, with a visual representation in Fig. 12, as illustrated in [19]. From Table 7 and Fig. 12, we see that C4.5 has the lowest accuracy rate and CART has the highest, with 56.25% correctly classified instances. However, ID3 has the fewest incorrectly classified instances. CART therefore classifies the most instances correctly, whereas ID3 misclassifies the fewest. This accuracy rate is obtained by comparing the execution time and TP/FP of each algorithm on the same scale and dataset.

4 Conclusion

From the analysis of ID3, C4.5, and CART, it can be concluded that decision tree learning algorithms provide a high accuracy rate. However, each algorithm should be applied according to the condition of the dataset. For a common dataset, ID3 will provide a satisfactory result, but if pruning of the tree is necessary, C4.5 will deliver the expected result. If the dataset contains impurities, the CART algorithm will use the Gini index to binary-split the attributes. This study shows that the algorithms have great potential for performance prediction: ID3, C4.5, and CART achieve 52.0833%, 45.8333%, and 56.2500% correct classifications, respectively. A higher performance rate corresponds to a lower execution time. The C4.5 algorithm, however, provides the most accurate result for a small dataset compared to the others.


References 1. Manlangit S, Azam S, Shanmugam B, Karim A (2019) Novel machine learning approach for analyzing anonymous credit card fraud patterns. Int J Electron Commerce Stud 10(2):175–202. https://doi.org/10.7903/ijecs.1732 2. Ghosh P, Azam S, Jonkman M, Karim A, Shamrat FMJM, Ignatious E, Shultana S, Beeravolu AR, De Boer F Efficient prediction of cardiovascular disease using machine learning algorithms with relief and LASSO feature selection techniques. IEEE Access. https://doi.org/10.1109/ACC ESS.2021.3053759 3. Foysal MFA, Islam MS, Karim A, Neehal N (2019) Shot-Net: a convolutional neural network for classifying different cricket shots. Commun Comput Inform Sci Rec Trends Image Process Pattern Recogn 111–120. http://doi-org-443.webvpn.fjmu.edu.cn/10.1007/978-981-13-91811_10 4. Shamrat FMJM, Asaduzzaman Md, Rahman AKMS, Tusher RTH, Tasnim Z (2019) A comparative analysis of parkinson disease prediction using machine learning approaches. Int J Sci Technol Res 8(11):2576–2580. ISSN: 2277-8616 5. Williams FM, Rothe H, Barrett G, Chiodini A, Whyte J, Cronin MT, Yang C (2016) Assessing the safety of cosmetic chemicals: consideration of a flux decision tree to predict dermally delivered systemic dose for comparison with oral TTC (Threshold of Toxicological Concern). Regul Toxicol Pharmacol 76:174–186 6. Rahman AKMS, Shamrat FMJM, Tasnim Z, Roy J, Hossain SA (2019) A comparative study on liver disease prediction using supervised machine learning algorithms. Int J Sci Technol Res 8(11):419–422. ISSN 2277-8616 7. Wang X, Liu X, Pedrycz W, Zhang L (2015) Fuzzy rule based decision trees. Pattern Recognit 48:50–59 8. Lim T, Loh W, Shih Y (1997) An empirical comparison of decision trees and other classification methods. Technical report, Department of Statistics, University of Wisconsin, Madison, WI, USA, 1997 9. Karim A, Azam S, Shanmugam B, Kannoorpatti K, Alazab M (2019) A comprehensive survey for intelligent spam email detection. IEEE Access 7:168261–168295.https://doi.org/10.1109/ ACCESS.2019.2954791 10. Vieira EMA, Neves NTAT, Oliveira ACC, Moraes RM, Nascimento JA (2018) Avaliação da performance do algoritmo J48 para construção de modelos baseados em árvores de decisão. Rev Bras Comput Apl 10:80–90 11. Kaur D, Bedi R, Gupta SK (2015) Review of decision tree data mining algorithms: ID3 and C4.5. In: Proceedings of international conference on information technology and computer science, 11–12 July 2015. ISBN 9788193137307 12. Karim A, Azam S, Shanmugam B, Kannoorpatti K (2020) Efficient clustering of emails into spam and ham: the foundational study of a comprehensive unsupervised framework. IEEE Access 8:154759–154788. https://doi.org/10.1109/ACCESS.2020.3017082 13. Kumar N, Obi Reddy GP, Chatterji S, Sarkar D (2012) An application of ID3 decision tree algorithm in land capability classification. Agropedology 22(J):35–42 14. Shamrat FMJM, Mahmud I, Rahman AKMS, Majumder A, Tasnim Z, Nobel NI (2020) A smart automated system model for vehicles detection to maintain traffic by image processing. Int J Sci Technol Res 9(2):2921–2928. ISSN 2277-8616 15. Dai Q, Zhang C, Wu H (2016) Research of decision tree classification algorithm in data mining. Int J Database Theor Appl 9(5):1–8. https://doi.org/10.14257/ijdta.2016.9.5.01 16. Hssina B, Merbouha A, Ezzikouri H, Erritali M A comparative study of decision tree ID3 and C4.5. (IJACSA) Int J Adv Comput Sci Appl. Special Issue on Advances in Vehicular Ad Hoc Networking and Applications 17. 
Bittencourt HR, Clarke RT (2004) Feature selection by using classification and regression trees (CART)


18. Shamrat FMJM, Ghosh P, Sadek MH, Kazi MA, Shultana S (2020) Implementation of machine learning algorithms to detect the prognosis rate of kidney disease. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1–7.https://doi.org/10.1109/ INOCON50539.2020.9298026 19. Shamrat FMJM, Asaduzzaman Md, Ghosh P, Sultan MdD, Tasnim Z (2020) A web based application for agriculture: “Smart Farming System”. Int J Emerg Trends Eng Res 8(6): 309– 2320. ISSN 2347-3983. https://doi.org/10.30534/ijeter/2020/18862020 20. Saleh AJ, Karim A, Shanmugam B, Azam S, Kannoorpatti K, Jonkman M, Boer FD (2019) An intelligent spam detection model based on artificial immune system. Information 10(6):209. https://doi.org/10.3390/info10060209 21. Shamrat FMJM, Tasnim Z, Nobel NI, Ahmed MdR (2019) An automated embedded detection and alarm system for preventing accidents of passengers vessel due to overweight. In: Proceedings of the 4th international conference on big data and Internet of Things (BDIoT’19). Association for Computing Machinery, New York, NY, USA, Article 35, pp 1–5. https://doi. org/10.1145/3372938.3372973 22. Shamrat FMJM, Allayear SM, Alam MF, Jabiullah MI, Ahmed R (2019) A smart embedded system model for the AC automation with temperature prediction. In: Singh M, Gupta P, Tyagi V, Flusser J, Ören T, Kashyap R (eds) Advances in computing and data sciences. ICACDS 2019. Communications in computer and information science, vol 1046. Springer, Singapore. https://doi.org/10.1007/978-981-13-9942-8_33 23. Shamrat FMJM, Nobel NI, Tasnim Z, Ahmed R (2020) Implementation of a smart embedded system for passenger vessel safety. In: Saha A, Kar N, Deb S (eds) Advances in computational intelligence, security and Internet of Things. ICCISIoT 2019. Communications in computer and information science, vol 1192. Springer, Singapore. https://doi.org/10.1007/978-981-153666-3_29 24. Ma L, Destercke S, Wang Y (2016) Online active learning of decision trees with evidential data. Pattern Recogn 52:33–45 25. Ahmed MdR, Ali MdA, Ahmed N, Zamal MdFB, Shamrat FMJM (2020) The impact of software fault prediction in real-world application: an automated approach for software engineering. In: Proceedings of 2020 the 6th international conference on computing and data Engineering (ICCDE 2020). Association for Computing Machinery, New York, NY, USA, pp 247–251. https://doi.org/10.1145/3379247.3379278 26. Shamrat FMJM, Tasnim Z, Ghosh P, Majumder A, Hasan MZ (2020) Personalization of job circular announcement to applicants using decision tree classification algorithm. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1–5.https:// doi.org/10.1109/INOCON50539.2020.9298253 27. Liang C, Shanmugam B, Azam S, Karim A et al (2020) Intrusion detection system for the internet of things based on blockchain and multi-agent systems. Electronics 9(7):1120. https:// doi.org/10.3390/electronics9071120

Human Face Recognition Applying Haar Cascade Classifier F. M. Javed Mehedi Shamrat, Anup Majumder, Probal Roy Antu, Saykot Kumar Barmon, Itisha Nowrin, and Rumesh Ranjan

Abstract Human face recognition is a method of identification or verification that tests identity. The technique relies essentially on two stages: face detection and face recognition. Facial recognition refers to a computer system, with several applications, in which human faces can be identified in images. Usually, facial identification is achieved using "right" data from full-frontal facial photographs, although there are many situations in which full frontal faces are not available; partial faces captured by CCTV cameras are a good example. Consequently, the use of partial facial data as samples is still, to a large extent, an unexplored field of research in the computer-based face recognition problem. This research concentrates on face recognition using partial facial data. A framework built on the Haar Cascade Classifier is proposed, and its performance is evaluated through critical analysis. The proposed face detection method has three phases: the face data gathering (FDG) phase, the train the stored image (TSI) phase, and face recognition using the local binary patterns histograms (LBPH) algorithm (FRUL); the classifier computation is tested by splitting it into four stages. In this analysis, Haar feature selection is applied to complete the detection phase, along with integral image generation, AdaBoost training, and cascading classifiers.


To complete this project's human-protection facial recognition framework with face detection, local binary patterns histograms (LBPH) is used to estimate the model. In LBPH, a few parameters are used, and a dataset is obtained by applying the algorithm; the final computational part is obtained by applying the LBPH operation and extracting the histograms.

Keywords Image processing · Face recognition · Human face · Cascade classifier

1 Introduction

A neural network is an AI technique that has been successfully applied to the analysis of visual imagery; it has several applications in facial recognition, video analysis, recommender systems, and natural language processing. Here, face detection and recognition for pedestrians through visual analysis are discussed. Face recognition is an increasingly multidisciplinary area of investigation, with public-security implications for identifying and examining a person in a photo. Face detection and recognition is a form of biometric software capable of identifying and verifying an individual by examining and analyzing patterns based on the person's facial structure. Facial recognition compares the data against a database of known faces to find a match, in order to identify and validate individuals from a source providing digital images, for instance people who wish to enter an educational campus. Facial recognition technology is used in various ways, such as payments, security improvement, criminal identification, marketing, healthcare, and so on. Facial identification and recognition can be used to control campus access. Unauthorized persons occasionally breach university grounds, which may be dangerous for the campus and its students. Expelled students can be enrolled immediately on the university surveillance list in case they attempt to return. By easily identifying rusticated students, dangerous criminals, and other known risks, classrooms, auditoriums, and computer laboratories can recognize such persons and deny them entry. The smart access control platform can further protect locations such as workplaces, gyms, libraries, restaurants, bars, and athletic training facilities. Access to testing centers, hospitals, and other sensitive places can be closely monitored using face recognition, and the system can integrate with existing security networks. The proposed system can be used to identify people's faces at universities, schools, workplaces, train stops, bus stations, and homes, and it distinguishes suspect faces. The system ensures the protection of every campus where it is introduced, operating competently to secure a university setting; it substantially decreases unauthorized entry and helps to secure the area.


Fig. 1 Face recognition example [1]

be trained in the face recognition process. For planning, a face scale of 150 × 150 pixels was selected. In various situations, the faces of one person is used as positive samples and the faces of other people as negative samples. In the face recognition level, the detected face is processed for the entire image (Fig. 1). The remaining article will address the literary review in Sect. 2. The proposed System Methodology and System Overview is represented in Sect. 3, and in Sect. 4 discussed Result and Implementation of the entire system. Finally, in Sect. 5, the conclusion of the research work is presented.

2 Literature Review

Ilias Maglogiannis et al. [2] developed an integrated system for the identification of emotions using Markov random fields. The technique consisted of three components and used color images. Markov random fields implementing skin detection were used for image segmentation and skin identification. Color images of human faces were presented as the training set, and the HLV color space was used to detect and extract the predefined positions of the eyes and mouth. Tomas Markciniak et al. [3] used a continuous method to assess the efficiency of face detection and recognition from low-resolution images, with face detection supported by different programs. Zhi et al. [4] suggested genetic-algorithm-based face recognition to improve data protection, based on identity verification to guarantee security in areas such as finance, national security, and justice. Shonal Chaudhry et al. [5] suggested a face detection and recognition algorithm for a portable visual assistive device operating in an unconstrained environment, with the machine intelligence module as the core component; they built their proposal using performance-driven CNNs [26] and cascade classifiers. A technique to optimize the performance of the face detection and recognition method was suggested by Alireza Tofighi et al. [6].

146

F. M. Javed Mehedi Shamrat et al.

two basic pieces. Right off the bat, faces were recognized, and then distinct appearances were viewed. In the recognition phase, they used the Gaussian skin color model coupled with the AdaBoost calculation for skin color division. He et al. [7] noticed a strategy for halfway face detection called the Dynamic feature Matching (DFM) in a related way. For transfer highlights to FCN, they suggested a VGG face model. Their suggested methodology improved the precision of classification. Long et al. developed subclass pooling for grouping (SCP) in [8] to take care of the problem of double detention. In a planning package, they used minimal knowledge. Peng et al. [9] suggested a technique for strengthening delegate images, Locality Constrained Collective Representation (LCCR). They used LCCR and extended it to many databases. Three facial focus photos have been used, including eyes, nose, and lips. Hair identification is focused on the recognition of photographs of hair. The test results revealed that their party participants improved the amount of right skin, mouth, and hair choices from the group. From these research gaps, a system that can identify human faces through Haar Cascade Classifier is proposed and developed. Incidentally, starting late the display of proposed procedures rots strikingly while overseeing exceptional hindrances in a face. A few previous talks indicate that closeness has any one of the reserves of being a core element of affirmation in facial recognition. As the goal face picture is fractured obstructed, with airs and with the age of the topic, the pace of closeness of the image varies. Besides, machine learning-based algorithms, both general and deep learning, is not only having a huge impact on image processing but also in the fields of Cybersecurity, Health Informatics, Environment Monitoring, etc., [10, 11, 27].

3 Methodology and System Overview

The performance of existing strategies deteriorates significantly when confronted with extreme challenges. Earlier observations indicate that proximity is an important factor in detecting a face: the degree of image similarity varies as the target face is partially blocked, with pose, and with the individual's age. Figure 2 depicts the diagram of the entire proposed system. The entire system operates in three phases:

(a) Face data gathering (FDG) phase,
(b) Train the stored image (TSI) phase,
(c) Face recognition using the local binary patterns histograms (LBPH) algorithm.

3.1 Face Detection Using the Haar Cascade Classifier

Face detection means locating the faces of a person in a whole image, a recording, or a real-time video. Here, faces are detected by using the Haar cascade classifier; a brief detection sketch follows the list below. The Haar cascade classifier works in four steps:


Fig. 2 Diagram of the proposed system

(a) Haar feature selection
(b) Creating an integral image
(c) AdaBoost training
(d) Cascading classifiers
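As a concrete illustration of this detection step, the following sketch uses the pretrained frontal-face cascade shipped with OpenCV; the file names and the detectMultiScale parameter values are illustrative choices, not values taken from the paper.

```python
# A minimal face-detection sketch with OpenCV's pretrained Haar cascade.
import cv2

# Load the pretrained frontal-face cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("input.jpg")                   # any test image (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # cascades work on grayscale

# Slide the detector across the image at multiple scales.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```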

This algorithm requires several positive and negative photos to identify faces: positive images contain the faces to be detected, while negative images contain no face. These photos are required to train the classifier. A Haar feature placed at a region of the detection window sums the pixel intensities of neighboring rectangular regions and examines the differences between these totals (Fig. 3). The detector uses integral images, and a very large number of features arise. AdaBoost is then used to select the best features and to train the classifier on them. A technique called boosting is used to train each stage: boosting builds a strong classifier as a weighted


Fig. 3 Haar-Cascade classifier [12]

combination of decisions made by weak learners, so that a reliable classifier can be trained. Each stage of the classifier labels the region defined by the current position of the sliding window as either positive or negative: positive means a face was found at that location, negative means it was not. If the label is negative, classification of this region is finished and the detector moves the window to the next location; if the label is positive, the region is passed to the next stage. The cascade classifier must first be trained on the positive and negative images. Viola and Jones introduced summed-area tables, which they called integral images. An integral image can be defined as a two-dimensional lookup table in the form of a matrix with the same size as the original image, where each element contains the cumulative sum of the pixels of the original image above and to its left. This enables the sum of any rectangular zone in the picture, at any location or size, to be computed using only four lookups:

Sum = I(C) + I(A) − I(B) − I(D)    (1)

Here A, B, C, D are the corner points of the rectangle in the integral image I. Each Haar-like feature can involve more than four lookups, depending on how it is defined: Viola and Jones' two-rectangle features require six lookups, three-rectangle features need eight, and four-rectangle features need nine. In the mathematical subject of wavelet theory, a cascade algorithm is an iterative computational approach for computing the function values of the basic scaling and wavelet functions of a discrete wavelet transform. It begins with values on a sparse sequence of sampling points and successively generates values for more densely spaced sampling points. Since it repeats the same procedure on the output of the previous step, it is known as the cascade algorithm.
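The four-lookup computation of Eq. (1) can be sketched in a few lines; the corner naming follows the equation, while the array sizes and the random test image are illustrative.

```python
# A sketch of Eq. (1): after one pass to build the integral image, the sum
# of any rectangular region needs only four table lookups.
import numpy as np

def integral_image(img):
    # Cumulative sum over rows and columns; pad with a zero row/column so
    # lookups at the rectangle's top-left corner need no special cases.
    ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, top, left, height, width):
    # Sum = I(C) + I(A) - I(B) - I(D), with A the top-left corner, B the
    # top-right, D the bottom-left and C the bottom-right of the rectangle.
    return (ii[top + height, left + width] + ii[top, left]
            - ii[top, left + width] - ii[top + height, left])

img = np.random.randint(0, 256, (6, 6))
ii = integral_image(img)
assert rect_sum(ii, 1, 2, 3, 3) == img[1:4, 2:5].sum()
```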


From the filter coefficients {h} and {g}, the iterative algorithm produces successive approximations to ψ(t) or φ(t). If the algorithm converges to a fixed point, that fixed point is the basic scaling function or wavelet [10]. The iterations can be specified by

φ^(k+1)(t) = Σ_{n=0}^{N−1} h[n] √2 φ^(k)(2t − n)    (2)

In the frequency domain, the iteration takes the form

φ^(k+1)(ω) = (1/√2) H(ω/2) φ^(k)(ω/2)    (3)

In this form, the limit can be regarded as an infinite product:

φ^(∞)(ω) = ∏_{k=1}^{∞} (1/√2) H(ω/2^k) · φ^(∞)(0)    (4)

If such a limit exists, the continuous scaling function is

φ(ω) = ∏_{k=1}^{∞} (1/√2) H(ω/2^k) · φ^(∞)(0)    (5)

The limit does not depend on the initial shape assumed for φ^(0)(t); the algorithm converges efficiently to φ(t), even when φ(t) is discontinuous. The wavelet can then be produced from this scaling function:

ψ(t) = Σ_{n=−∞}^{∞} g[n] √2 φ(2t − n)    (6)

In the frequency domain, a successive approximation can also be derived.
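A minimal numeric sketch of the time-domain iteration in Eq. (2) is shown below; the Haar filter coefficients are chosen only because their fixed point (the box function) is easy to verify, and the iteration count is arbitrary.

```python
# A numeric sketch of the cascade iteration in Eq. (2): each pass upsamples
# the current approximation and filters it with sqrt(2)*h[n]. With the Haar
# coefficients used here the iterates converge to the box scaling function.
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)    # Haar scaling filter (illustrative)

phi = np.array([1.0])                    # phi^(0): initial guess
for _ in range(8):                       # a few cascade iterations
    up = np.zeros(2 * len(phi))          # zero-insertion realizes phi^(k)(2t - n)
    up[::2] = phi
    phi = np.sqrt(2) * np.convolve(up, h)

# phi now samples phi^(k) on a grid 2^8 times finer than the starting grid.
print(phi[:8])
```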

3.2 Train Stored Image (TSI)

The system must prepare a database of trained face data in the "yml" format; during recognition, a detected face is compared against this prepared data.
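A minimal training sketch is given below, assuming the opencv-contrib-python package (which provides the cv2.face module) and a hypothetical dataset folder of cropped grayscale faces named user.<id>.<n>.jpg; the folder layout and naming scheme are assumptions, not details from the paper.

```python
# Train an LBPH recognizer and save the trained data to a "yml" file.
import os
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()

samples, ids = [], []
for name in os.listdir("dataset"):                 # assumed folder layout
    user_id = int(name.split(".")[1])              # same person -> same ID
    img = cv2.imread(os.path.join("dataset", name), cv2.IMREAD_GRAYSCALE)
    samples.append(img)
    ids.append(user_id)

recognizer.train(samples, np.array(ids))
recognizer.write("trainer.yml")                    # the "yml" file the text mentions
```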


3.3 Face Recognition Using LBPH

The local binary patterns histograms estimation proceeds through the following steps: setting the parameters, training the algorithm, applying the LBP operation, extracting the histogram, and performing the recognition.

I. Parameters:

The algorithm considers four parameters of the local binary patterns histogram:

(a) Radius: the radius used to construct the circular local binary pattern around the central pixel; it is usually set to 1.
(b) Neighbors: the number of sample points used to build the local binary pattern around the neighborhood; it is usually set to 8.
(c) Grid X: the number of cells in the horizontal direction. With more cells the grid becomes finer and the resulting feature vector is higher-dimensional; it is usually set to 8.
(d) Grid Y: the number of cells in the vertical direction. More cells give a progressively finer grid and higher-dimensional components; it is usually set to 8.

II. Training Algorithm:

Initially, the algorithm must be trained. A dataset with the facial images of the people to be recognized is used, and every image of the same person must carry the same ID (Fig. 4).

III. Applying the LBP Operation:

This is the main computational step: creating an intermediate image that describes the original image in a better way by highlighting its facial characteristics. Using the Radius and Neighbors parameters, the algorithm applies a sliding-window concept.

Fig. 4 Training dataset


Step by step, the operation can be described as follows:

(a) The algorithm divides the image into a grid of small matrices; each matrix covers 3 × 3 pixels.
(b) The central value of each matrix is taken as a threshold.
(c) Each of the eight neighbors of the central value is assigned a binary value: 1 if it is equal to or greater than the threshold, and 0 if it is smaller.
(d) The matrix is now composed of binary numbers (ignoring the central value).
(e) The binary values of each position are concatenated, line by line, into one binary number, which is converted to decimal and set as the central value of the matrix.
(f) The result is a new image that represents the characteristics of the original image better.

IV. Extracting the Histogram:
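Steps (a)–(f) translate almost directly into code; the following sketch is a straightforward, unoptimized rendering with an assumed clockwise neighbor ordering.

```python
# A direct sketch of the LBP operation: threshold each 3x3 neighborhood
# against its central pixel and read the eight binary results as a decimal.
import numpy as np

def lbp_image(gray):
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the eight neighbors, walked clockwise from the top-left.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for dy, dx in ring:
                # Shift in a 1 when the neighbor is >= the central value.
                code = (code << 1) | (gray[y + dy, x + dx] >= center)
            out[y - 1, x - 1] = code
    return out
```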

Using Grid X and Grid Y, the intermediate image is divided into a grid of cells. For each cell a histogram with 256 positions is computed, each position reflecting the occurrences of one pixel intensity. All cell histograms are then concatenated to construct one new, much larger histogram; this final histogram represents the characteristics of the specific image (Fig. 5).

V. Performing the Recognition:

At this point the algorithm has been trained, and one histogram has been produced to represent each image in the database. When a new image is given as input, the same steps are re-run and a new histogram is created; to find the matching image, this histogram is compared with the histograms of the database images [13–16].
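Steps IV and V can be sketched as follows; the 8 × 8 grid matches the defaults mentioned above, and the function names are illustrative.

```python
# Build the concatenated grid histogram of an LBP image and compare two
# such histograms with the Euclidean distance.
import numpy as np

def grid_histogram(lbp, grid_x=8, grid_y=8):
    h, w = lbp.shape
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp[gy * h // grid_y:(gy + 1) * h // grid_y,
                       gx * w // grid_x:(gx + 1) * w // grid_x]
            hists.append(np.bincount(cell.ravel(), minlength=256))
    return np.concatenate(hists)            # one large descriptor

def distance(h1, h2):
    # Lower distance (lower "confidence" value) means a closer match.
    return np.linalg.norm(h1.astype(float) - h2.astype(float))
```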

Fig. 5 Extract histogram from original image (LBPH phase)


The method returns the image with the nearest histogram. Various measures can be used to compute the difference between two histograms, such as the Euclidean or chi-square distance; here, the Euclidean distance is used to separate two histograms from one another. The output of the comparison is the ID of the image with the nearest histogram, together with the computed distance, which can be used as a confidence measure. A lower confidence value indicates a smaller difference between the two histograms and is therefore preferred over a higher one; if the confidence is below a given threshold, the face is considered accurately recognized. The data flow diagram of the entire system is shown in Fig. 6. From a video shot, the camera detects a face and transforms the face image into a matrix [17–21], which is clustered and stored in the face database. When a match happens, the information about the recognized individual is retrieved from the dataset [22–26].

Fig. 6 Data flow diagram of the entire system


4 Results and Discussion

At the moment of forming the dataset, each person is assigned an ID number (Table 1). During identification, when the picture of the test individual matches the dataset, the machine immediately identifies the face using the suggested algorithm. A total of 200 images of one user were taken, and all images were grouped under one user ID. The images were captured from real-time video and cropped to a size of 150 × 150 pixels, then stored in the database with a unique user ID, as shown in Fig. 7. A "yml" file is created that contains the matrices of all images; these matrices are utilized for comparing images, as shown in Fig. 8. In this experiment, the tests were performed with the Python programming language and the OpenCV 3.4.1 module, on a desktop computer with an AMD Ryzen 7 3800X 8-core processor @ 3.90 GHz and 32 GB of internal RAM. The proposed system needs 5–6 s to process an image and produce the output.

Fig. 7 Recognized faces

Table 1 Sample database of stored data

No    Information factor
1     Shamrat
2     Arup
3     Saykot
4     Antu
5     Subham
6     Anup
7     Mehedi
8     Hasan


Fig. 8 Matrix of dataset in YML file

Device requirements and internet connectivity play an important role in this framework. Several experiments were performed to analyze the system's performance, and after several tests the output met the expected results. In test 1, faces already present in the database storage were tested; the accuracy of test 1 is 100%, and the system can detect the faces and identify the persons, as shown in Fig. 9. In test 2, the same three faces were tested in different environments with different backgrounds; the output accuracy of test 2 is also 100%, and the system can still detect the faces and identify the same persons, as depicted in Fig. 10. Figure 11 shows that the result is as satisfying as expected. The testing phase considered 200 images per person with several exterior appearances in order to expose defects of the machine; in every case, the device produced excellent results. It can distinguish several faces in one scene and accurately interpret distinct faces side by side. It can

Fig. 9 Recognized faces in test 1


Fig. 10 Recognized faces in test 2

Fig. 11 The detection result of test 1 and test 2

also report the name and other stored information of an individual who is recognized in the database [27].

5 Conclusion

Human face detection can be used in different fields, such as law enforcement, equity arrangements, identification recovery, and biometrics. Facial identification and recognition technology can be integrated into university campus solutions to ensure student protection. For chosen locations, such as university campuses, facial recognition based on face detection was established in this work. The primary objective is to identify unwanted persons and keep them out of the protected area. An efficient framework was developed for facial recognition that identifies people's faces. By detecting the illegal movement of outsiders with the suggested methodology, internal campus crimes can be minimized, while awareness, human protection, and enforcement of the law are expanded by adopting the proposed work.


In the future, artificial intelligence can be layered over the proposed system to differentiate objects such as humans and animals in order to strengthen security surveillance.

References

1. Manoharan S (2019) Smart image processing algorithm for text recognition, information extraction and vocalization for the visually challenged. J Innov Image Process (JIIP) 1(01):31–38
2. Kumar A, Kaur A, Kumar M (2018) Face detection techniques: a review. Springer Nature B.V. https://doi.org/10.1007/s10462-018-9650-2
3. Maglogiannis I, Vouyioukas D, Aggelopoulos C. Face detection and recognition of natural human emotion using Markov random fields. https://doi.org/10.1007/s0077900701650
4. Marciniak T, Chielewska A, Wechan R, Parzych M, Dabrowski A. Influence of low resolution of images on reliability of face detection and recognition. https://doi.org/10.1007/s1104201315688
5. Zhi H, Lui S (2018) Face recognition based on genetic algorithm. J Vis Commun Image R. https://doi.org/10.1016/j.jvcir.2018.12.012
6. Manoharan S (2019) Image detection, classification and recognition for leak detection in automobiles. J Innov Image Process (JIIP) 1(02):61–70
7. Tofighi A, Monadjemi SA. Face detection and recognition using skin color and AdaBoost algorithm combined with Gabor features and SVM classifier
8. Savvides M et al. Dynamic feature matching (DFM) for partial face recognition. https://doi.org/10.1109/TIP.2018.2870946
9. Shamrat FMJM, Allayear SM, Alam MF, Jabiullah MI, Ahmed R (2019) A smart embedded system model for the AC automation with temperature prediction. In: Singh M, Gupta P, Tyagi V, Flusser J, Ören T, Kashyap R (eds) Advances in computing and data sciences. ICACDS 2019. Communications in computer and information science, vol 1046. Springer, Singapore. https://doi.org/10.1007/978-981-13-9942-8_33
10. Karim A, Azam S, Shanmugam B, Kannoorpatti K (2020) Efficient clustering of emails into spam and ham: the foundational study of a comprehensive unsupervised framework. IEEE Access 8:154759–154788. https://doi.org/10.1109/ACCESS.2020.3017082
11. Liang C, Shanmugam B, Azam S, Karim A et al (2020) Intrusion detection system for the Internet of Things based on blockchain and multi-agent systems. Electronics 9(7):1120. https://doi.org/10.3390/electronics9071120
12. Haar Cascade Classifier Image. https://www.google.com/search?q=haar+cascade+classifier+image&source=lnms&tbm=isch&sa=X&v
13. Shamrat FMJM, Tasnim Z, Nobel NI, Ahmed MdR (2019) An automated embedded detection and alarm system for preventing accidents of passengers vessel due to overweight. In: Proceedings of the 4th international conference on big data and Internet of Things (BDIoT'19). Association for Computing Machinery, New York, NY, USA, Article 35, pp 1–5. https://doi.org/10.1145/3372938.3372973
14. Shamrat FMJM, Nobel NI, Tasnim Z, Ahmed R (2020) Implementation of a smart embedded system for passenger vessel safety. In: Saha A, Kar N, Deb S (eds) Advances in computational intelligence, security and Internet of Things. ICCISIoT 2019. Communications in computer and information science, vol 1192. Springer, Singapore. https://doi.org/10.1007/978-981-15-3666-3_29
15. Ahmed MdR, Ali MdA, Ahmed N, Zamal MdFB, Shamrat FMJM (2020) The impact of software fault prediction in real-world application: an automated approach for software engineering. In: Proceedings of 2020 the 6th international conference on computing and data engineering (ICCDE 2020). Association for Computing Machinery, New York, NY, USA, pp 247–251. https://doi.org/10.1145/3379247.3379278


16. Shamrat FMJM, Tasnim Z, Ghosh P, Majumder A, Hasan MZ (2020) Personalization of job circular announcement to applicants using decision tree classification algorithm. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1–5. https://doi.org/10.1109/INOCON50539.2020.9298253
17. Karim MA, Karim A, Azam S, Ahmed E, Boer FD, Islam A, Nur FN (2021) Cognitive learning environment and classroom analytics (CLECA). Innovative data communication technologies and application (IDCTA), vol 59. Springer
18. Shamrat FMJM, Asaduzzaman Md, Rahman AKMS, Tusher RTH, Tasnim Z (2019) A comparative analysis of Parkinson disease prediction using machine learning approaches. Int J Sci Technol Res 8(11):2576–2580. ISSN 2277-8616
19. Rahman AKMS, Shamrat FMJM, Tasnim Z, Roy J, Hossain SA (2019) A comparative study on liver disease prediction using supervised machine learning algorithms. Int J Sci Technol Res 8(11):419–422. ISSN 2277-8616
20. Shamrat FMJM, Raihan MdA, Rahman AKMS, Mahmud I, Akter R (2020) An analysis on breast disease prediction using machine learning approaches. Int J Sci Technol Res 9(2):2450–2455. ISSN 2277-8616
21. Shamrat FMJM, Asaduzzaman Md, Ghosh P, Sultan MdD, Tasnim Z (2020) A web based application for agriculture: "smart farming system". Int J Emerg Trends Eng Res 8(6):2309–2320. ISSN 2347-3983. https://doi.org/10.30534/ijeter/2020/18862020
22. Kathed A, Azam S, Shanmugam B, Karim A, Yeo KC, De Boer F, Jonkman M (2019) An enhanced 3-tier multimodal biometric authentication. In: 2019 international conference on computer communication and informatics (ICCCI). IEEE, pp 1–6. https://doi.org/10.1109/ICCCI.2019.8822117
23. Shamrat FMJM, Tasnim Z, Mahmud I, Jahan N, Nobel NI (2020) Application of k-means clustering algorithm to determine the density of demand of different kinds of jobs. Int J Sci Technol Res 9(2):2550–2557. ISSN 2277-8616
24. Shamrat FMJM, Ghosh P, Sadek MH, Kazi MA, Shultana S (2020) Implementation of machine learning algorithms to detect the prognosis rate of kidney disease. In: 2020 IEEE international conference for innovation in technology (INOCON), Bangluru, pp 1–7. https://doi.org/10.1109/INOCON50539.2020.9298026
25. Shamrat FMJM, Tasnim Z, Rahman AKMS, Nobel NI, Hossain SA (2020) An effective implementation of web crawling technology to retrieve data from the world wide web (www). Int J Sci Technol Res 9(1):1252–1256. ISSN 2277-8616
26. Ghosh P et al. Efficient prediction of cardiovascular disease using machine learning algorithms with relief and LASSO feature selection techniques. IEEE Access. https://doi.org/10.1109/ACCESS.2021.3053759
27. Shamrat FMJM, Mahmud I, Rahman AKMS, Majumder A, Tasnim Z, Nobel NI (2020) A smart automated system model for vehicles detection to maintain traffic by image processing. Int J Sci Technol Res 9(2):2921–2928. ISSN 2277-8616

A Novel Simon Light Weight Block Cipher Implementation in FPGA S. Niveda, A. Siva Sakthi, S. Srinitha, V. Kiruthika, and R. Shanmugapriya

Abstract For resource-constrained devices, a lightweight block cipher algorithm is highly indispensable: for low-resource devices, the use of lightweight block ciphers ensures cost-efficiency. The most important problem in VLSI design implementation is balancing parameter constraints such as power consumption, device utilization, and latency. Lightweight cryptography employs cryptographic algorithms that help to reduce resource constraints. The primary objective of this paper is to design an efficient Simon lightweight block cipher implemented in FPGA. The conventional Simon structure implementing the XOR operations consumes more power and has a critical path delay; in the proposed system, clock gating and power gating methodologies are utilized to reduce the delay and power consumption, respectively. The design is implemented on Virtex-5 and Artix-7 FPGA devices and its performance is analyzed. From the obtained results, it is evident that the proposed methodology is efficient in terms of adaptive security level and is able to encrypt longer messages depending upon the sizes of the encryption and decryption keys and blocks. Based on the obtained results, it is clearly seen that the execution time and power consumption are highly reduced.

Keywords Simon block cipher · Security · Clock gating · Power gating

S. Niveda (B) · S. Srinitha · V. Kiruthika Department of ECE, Sri Ramakrishna Engineering College, Coimbatore, India e-mail: [email protected] S. Srinitha e-mail: [email protected] V. Kiruthika e-mail: [email protected] A. Siva Sakthi Department of BME, Sri Ramakrishna Engineering College, Coimbatore, India e-mail: [email protected] R. Shanmugapriya Department of ICE, Sri Ramakrishna Polytechnic College, Coimbatore, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_13


1 Introduction

Security plays a vital role in all blooming applications. The increasing interconnectedness of embedded systems creates demand for cryptographic components. Strong cipher encryption algorithms are essential building blocks, but their requirements vary according to the application, and it is sometimes very difficult to identify all the requirements of a cipher [1, 2]. Different block ciphers with various design structures are available. In almost all hardware and software systems the advanced encryption standard (AES) algorithm is used and employed in block ciphers, but AES is too expensive for low-resource devices. In order to reduce device utilization, more lightweight block ciphers were suggested [3, 4]. Lightweight block cipher algorithms provide secured communication in sensor networks. Simon supports block sizes of 32–128 bits and key lengths of 64–256 bits. Since it is suitable for lightweight hardware implementation, it is applicable in embedded CPUs, where it occupies a small area.

Early SIMON block cipher papers describe implementations in which the power consumption, area, and clock rate do not reach the expected level of output. A cipher can be represented through algebraic equations and then solved algebraically by using an "Algebraic Differential Fault Attack" (ADFA): the attacker joins the algebraic technique and the differential fault attack to break the cipher. To break the SIMON cipher, three attacks using unique methodologies that combine ADFA with a bit-flip fault model have been proposed [4]. An ultra-low-power SIMON block cipher with a 32-bit plaintext and a 64-bit key was built by leveraging adiabatic theory; that work mainly focuses on the heat transfer in design structures and achieves a huge reduction, but its latency of about 704 clock cycles is highly undesirable [5]. Variable key sizes and variable block sizes are used in both the Simon and Speck algorithms, and they can be implemented in parallel to test their functionality [6]. A Simon cipher with a hash function was also proposed, because the Simon block cipher gives a large output and works for both fast and slow clock systems; the main modifications to the Simon cipher scheme use a counter and a round schedule, which are essential for a construction implemented on 2-block cryptographic data. The major drawback there is that the hardware implementation loses potential at the expense of security. Parallel implementations of security algorithms focus on simplicity, security, and conflicting goals in cryptographic design [7]; such designs improve flexibility with design simplicity but do not reach the expected efficiency range. The above-mentioned papers reveal possible improvements in security level, throughput efficiency, and area coverage separately, but do not show the expected level of output in power dissipation and clock cycle count [8]. Compared to those papers, the proposed Simon block cipher balances all the essentials at an acceptable cost and mainly reduces the power dissipation of the architecture by using clock and power gating techniques.


2 Proposed Work

2.1 Simon Block Cipher

The Simon block cipher operates on a block length of 2n, where n denotes the number of bits in one word [9, 10]. Using a Feistel network, the computational process is implemented serially and is organized to perform encryption on a large amount of data. The Feistel network methodology splits the data block into two pieces of equal size and applies the encryption process over several rounds [5]. Different cipher algorithms are compared and their performances analyzed in [11, 12]. Simon blocks involve operations such as AND, OR, shifting, and XOR. Figure 1 represents the start state, which indicates the beginning of the text-conversion process. The input plaintext is passed to the encryption process, which must validate the plaintext value; otherwise it returns to the encryption request process. Once the block is valid, the accepted plaintext value goes to the key expansion process. In the key schedule process, the key is generated and combined with the plaintext. If the combined data lies in the valid range, it is encrypted; the encrypted data emerges as the cipher text converted from the plaintext, and the process ends.

2.2 Round Function

The round function is used as a building block for iterated block cipher operations. Most block cipher algorithms work iteratively: the plaintext is transformed into cipher text through a continuous transformation, called the round function, and each iteration is called a round. The round function structure is depicted in Fig. 2. Initially, the signals are given as inputs to the multiplexers, with the Start signal set to 1; after the initial computation, the Start signal is set to 0 while the remaining rounds are evaluated. Registers are used to store the intermediate results. Finally, the value of the cipher text C = xur||xlr is calculated. In each round, three cyclic shift operations and bit-wise logical operations are carried out. The bit-wise logical operations are represented using a tree structure, which reduces latency, and the cyclic shifts are implemented as hard-wired rotations, so the redundant hardware requirement is reduced.
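For readers who want the round function in software form, the sketch below implements one Simon round as given in the published Simon specification (rotations by 1, 8 and 2, an AND, and XORs); the 16-bit word size corresponds to the Simon 32/64 variant, and the round keys are assumed to come from a separately implemented key schedule.

```python
# A software sketch of one Simon round (Feistel update).
N = 16                     # word size in bits (Simon 32/64 variant)
MASK = (1 << N) - 1

def rol(x, r):
    # Cyclic left shift of an N-bit word.
    return ((x << r) | (x >> (N - r))) & MASK

def simon_round(xu, xl, k):
    # The upper word is mixed by (S^1 x AND S^8 x) XOR S^2 x, XORed into
    # the lower word with the round key, then the halves swap.
    f = (rol(xu, 1) & rol(xu, 8)) ^ rol(xu, 2)
    return (xl ^ f ^ k) & MASK, xu

def encrypt(xu, xl, round_keys):
    for k in round_keys:   # round_keys assumed precomputed by the schedule
        xu, xl = simon_round(xu, xl, k)
    return xu, xl          # cipher text C = xu || xl
```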


Fig. 1 Work flow of Simon algorithm

2.3 Key Schedule

A key schedule algorithm expands the small master key into a long expanded key of hundreds or thousands of bits for use in the cryptographic algorithm. The master key is essential for the key generation process, while the plaintext passes through all the rounds. The master key must be kept private and maintained at a high security level: it is known only to the user and not to any third party, and if it is revealed, the information passed between the users is exposed. This key is responsible for random round-key generation; the generated keys are combined with the plaintext data as it passes through all the rounds.


Fig. 2 Round function structure

Completion of the final round gives the cipher text as output. Registers are used to store the previous round results. Figure 3 shows the key generation process for the conversion of plaintext to cipher text; the master key is a secret key, and the count value represents the number of rounds that have occurred and the number of keys generated (Fig. 4). The key scheduling process of SIMON takes a master key and generates a sequence of key words k0 to kr−1, where r denotes the number of rounds. Key generation proceeds in a random-looking manner and depends on the key and block data sizes.

2.4 Round Function Algorithm

The Simon encryption algorithm converts plaintext data into cipher text data through the round process. The data is divided into an upper and a lower half and passed into the first round with the round key k0.

Input: 2n-bit plaintext
Output: 2n-bit cipher text

The steps of the algorithm are as follows:

Step 1: For i from 1 to r do
Step 2: Xui = Xli−1 XOR ((Xui−1 …

… CBCH > CCN.

TE(CH, BS) = εelec + εamp × d_CH^4    (5)

TE(BCH, BS) = εelec + εamp × d_BCH^4    (6)

TE(CN, CH) = εelec + εamp × d_CN^2    (7)

TE(CN, BCH) = εelec + εamp × d_BCN^2    (8)

RE(CH) = εelec    (9)

RE(BCH) = εelec    (10)

εelec and εamp [14] are the basic energy consumption constants of the sensor circuitry for transmitting and receiving data, and d is the distance between the transmitter and the receiver.
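Read directly, Eqs. (5)–(10) say that transmission energy grows with d^4 on the long CH/BCH-to-base-station links and with d^2 inside a cluster, while reception costs only the electronics constant; the sketch below encodes this, with placeholder constant values that are not taken from the paper.

```python
# A direct reading of Eqs. (5)-(10) for the energy model.
E_ELEC = 50e-9    # J/bit, electronics constant (illustrative value)
E_AMP = 100e-12   # J/bit, amplifier constant (illustrative value)

def tx_energy(d, long_range):
    # d^4 path loss for CH/BCH-to-BS links, d^2 inside a cluster.
    exponent = 4 if long_range else 2
    return E_ELEC + E_AMP * d ** exponent

def rx_energy():
    # Eqs. (9)-(10): the receive cost for CH and BCH is the electronics term.
    return E_ELEC

# Example: a cluster head 120 m from the base station.
print(tx_energy(120.0, long_range=True))
```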

5 Game Theory

In a formal game-theoretic approach, a game contains a set of players with a set of possible strategic options si, represented as S. The main aim of the game is to optimize the utility of the players. The notation of the game is as follows [18]:

1. Players (P) = pi, i = 1, 2, 3, …, n: the set of players;
2. Actions (A) = aj, j = 1, 2, 3, …, m: the set of actions;
3. Strategy (S) = sk, k = 1, 2, 3, …, q: the set of strategies;
4. Payoff utility (U): calculates the payoff of each player;
5. Player decision: acts as a decision agent that makes an optimized decision during the game.

Game: the formal description of a strategic situation in the environment.
Rationality: players are called fair if they play such that their payoff is maximized; the rationality of all players is assumed to be common knowledge.
Strategy: one of the possible actions in the given action set.
Payoff: a payoff is an amount often referred to as a utility; for a player, it ranks the desirability of an outcome. Payoffs are weighted with probabilities when the outcome is random, and the expected payoff is closely linked to the player's attitude toward risk.

The setting is expressed as an N-player anti-coordination game with three symmetric strategies. Let the game be defined as G = {N, S, U}, where N is the number of players participating in the game, every player has the same strategy set S, and U gives the utility of the respective strategies. Therefore, the strategy set is S = {Cluster Head, Backup Cluster Head, Cluster Node} = {CH, BCH, CN}. A strategy refers to the way a cluster node may declare itself a CH or decline to be a CH in the N-player strategy game. In the gaming environment, the cost of CH and the cost of BCH are defined in terms of distance and the available energy of the cluster nodes. Every player can choose to serve as the cluster head, taking on further tasks for its cluster nodes, or decline to be a cluster head. If more than one player opts to become a cluster head or backup cluster head in close physical proximity, smaller clusters occur. If none of the nodes wants to be a cluster head or a backup cluster head, all the cluster nodes suffer a payoff of 0, and no node sends any information to the base station. The utility function for a specific strategy of a cluster node is denoted by U(Si) and defined by:


U(Si) = CCH,  for Sj = CH and CCH > TE
U(Si) = CBCH, for Sj = BCH and CCH ≥ CBCH > TE
U(Si) = CCN,  for Sj = CN and TE > CCN > 0
U(Si) = 0,    for Sj = CN ∀N, CCN ≅ 0    (11)

Table 1 Payoff matrix of two players with different strategy game

Player (Pi) \ Player (Pj)   CH                 BCH                 CN
CH                          (CHi,1, CHj,1)     (CHi,2, BCHj,1)     (CHi,3, CNj,1)
BCH                         (BCHi,1, CHj,2)    (BCHi,2, BCHj,2)    (BCHi,3, CNj,2)
CN                          (CNi,1, CHj,3)     (CNi,2, BCHj,3)     (CNi,3, CNj,3)
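Equation (11) is a simple piecewise rule and can be sketched as follows; TE and the costs CCH, CBCH, CCN are assumed to be computed elsewhere from distance and residual energy, as described above.

```python
# A sketch of the utility function in Eq. (11).
def utility(strategy, c_ch, c_bch, c_cn, t_e):
    # c_ch, c_bch, c_cn: strategy costs; t_e: threshold energy (assumed
    # computed elsewhere from distance and residual energy).
    if strategy == "CH" and c_ch > t_e:
        return c_ch
    if strategy == "BCH" and c_ch >= c_bch > t_e:
        return c_bch
    if strategy == "CN" and t_e > c_cn > 0:
        return c_cn
    return 0    # no node willing to serve: zero payoff
```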

The proposed method uses a game over the strategies of nodes becoming a cluster head, backup cluster head, or cluster member to manage energy consumption in wireless sensor networks (WSNs). Every node plays over three strategies and can act as any one of cluster head (CH), backup cluster head (BCH), or cluster node (CN). A cluster node may thus choose whether or not to become a cluster head. Sometimes nodes act as cluster heads but do not send any information to the base station and are unable to obtain any utility payoff. The best response for every CN is to act as a selfish node when it has not been chosen as cluster head, while the nodes with the highest residual energy are announced as cluster heads in every cluster. Each node gains utility through the backup cluster head or cluster head roles, whose costs are denoted CCH and CBCH, respectively. Here, a minimum three-node network is considered, in which every node maximizes its utility payoff through the choice of CH or BCH in the cluster. Assuming that CH, BCH, and CN stand for a node selecting itself as cluster head, backup cluster head, or cluster node, respectively, Table 1 shows the interaction between two players with their possible strategy options; the payoffs are listed as (row, column), where CCH > CBCH > CCN.

5.1 Nash Equilibrium

In the Nash gaming environment [19], solution concepts are formed from pure and mixed strategies for two players. The proposed method supports only a mixed equilibrium, due to the presence of three strategies in a 2-player environment.


5.2 Players Utility Payoff

Let us assume that the ith player (Pi) chooses CH with probability α1, BCH with probability α2, and CN with probability 1 − α1 − α2. Similarly, the jth player (Pj) chooses CH with probability β1, BCH with probability β2, and CN with probability 1 − β1 − β2. The utility payoffs of the players are calculated as follows:

Ui(CH) = α1CHi,1 + α2CHi,2 + (1 − α1 − α2)CHi,3    (12)

Ui(BCH) = α1BCHi,1 + α2BCHi,2 + (1 − α1 − α2)BCHi,3    (13)

Ui(CN) = α1CNi,1 + α2CNi,2 + (1 − α1 − α2)CNi,3    (14)

Uj(CH) = β1CHj,1 + β2CHj,2 + (1 − β1 − β2)CHj,3    (15)

Uj(BCH) = β1BCHj,1 + β2BCHj,2 + (1 − β1 − β2)BCHj,3    (16)

Uj(CN) = β1CNj,1 + β2CNj,2 + (1 − β1 − β2)CNj,3    (17)

Based on the Nash equilibrium, from Eqs. (12), (13), and (14) we can write

Ui(CH) = Ui(BCH) = Ui(CN)    (18)

and from Eqs. (15), (16), and (17)

Uj(CH) = Uj(BCH) = Uj(CN)    (19)

The above linear equations have the four unknowns α1, α2, β1, β2, so each indifference condition is split into two equations:

α1CHi,1 + α2CHi,2 + (1 − α1 − α2)CHi,3 = α1BCHi,1 + α2BCHi,2 + (1 − α1 − α2)BCHi,3    (20)

α1CHi,1 + α2CHi,2 + (1 − α1 − α2)CHi,3 = α1CNi,1 + α2CNi,2 + (1 − α1 − α2)CNi,3    (21)

β1CHj,1 + β2CHj,2 + (1 − β1 − β2)CHj,3 = β1BCHj,1 + β2BCHj,2 + (1 − β1 − β2)BCHj,3    (22)


β1CHj,1 + β2CHj,2 + (1 − β1 − β2)CHj,3 = β1CNj,1 + β2CNj,2 + (1 − β1 − β2)CNj,3    (23)

If player Pi or Pj chooses to play with only the two strategies CH and BCH, with probabilities β and 1 − β, then the system reduces to the following equations:

βCHj,1 + (1 − β)CHj,2 = βBCHj,1 + (1 − β)BCHj,2    (24)

βCHj,1 + (1 − β)CHj,2 = βCNj,1 + (1 − β)CNj,2    (25)
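Numerically, Eqs. (20)–(21) are two linear equations in α1 and α2 and can be solved directly; in the sketch below the payoff values are made up solely so that the indifference system has an interior solution, and the helper name is illustrative.

```python
# Solve Eqs. (20)-(21) for the mixing probabilities (alpha1, alpha2).
import numpy as np

CH = np.array([2.0, 4.0, 6.0])    # payoffs CH_{i,1..3} (hypothetical)
BCH = np.array([3.0, 3.5, 5.0])   # payoffs BCH_{i,1..3} (hypothetical)
CN = np.array([5.0, 3.0, 2.5])    # payoffs CN_{i,1..3} (hypothetical)

def indifference_row(a, b):
    # Rearrange U(a) = U(b) into coefficients for alpha1, alpha2 and a
    # right-hand side, using the (1 - alpha1 - alpha2) weight on column 3.
    d = a - b
    return [d[0] - d[2], d[1] - d[2]], -d[2]

(r1, c1), (r2, c2) = indifference_row(CH, BCH), indifference_row(CH, CN)
alpha = np.linalg.solve(np.array([r1, r2]), np.array([c1, c2]))
print(alpha, 1 - alpha.sum())     # alpha1, alpha2, and the CN probability
```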

If any player mixes strategies against the other players in a Nash equilibrium, the payoff coefficients of the matrix above are identical along a row or a column, and the mixing player may choose any of the mixed strategies involved to reach the Nash equilibrium. The cluster head is selected as the node with the maximum cluster-head utility payoff among all the cluster nodes:

U(CH) = max{U1(CH), U2(CH), …, Un(CH)}    (26)

Similarly, the backup cluster head is selected as the node with the maximum backup-cluster-head utility payoff among all the cluster nodes:

U(BCH) = max{U1(BCH), U2(BCH), …, Un(BCH)}    (27)

5.3 Player Utility Payoff Algorithm

The algorithm is summarized below (see Fig. 2):

(a) Start
(b) Deploy the wireless sensor nodes with initial energy
(c) Formulate clusters with neighbor-node information
(d) Design a gaming environment and initialize N players
(e) Initialize the threshold energy TE and set i = 1
(f) Evaluate the node energy RE and initialize j = i + 1
(g) Select the players Pi and Pj
(h) Design game strategies for players Pi and Pj
(i) Calculate the strategy payoff for players Pi and Pj
(j) Apply the mixed Nash equilibrium and calculate the utility payoff of each player Pi and Pj
(k) If the utility payoff is greater, update the player's utility payoff
(l) Repeat steps (g) to (k) until j < N


Fig. 2 Proposed Nash equilibrium algorithm

(m) Repeat steps (f) to (l) until i < N
(n) Select the cluster head (CH) and backup cluster head (BCH) based on the maximum utility payoff
(o) Stop.
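Steps (f)–(n) amount to evaluating every node's payoffs and taking the maxima of Eqs. (26)–(27); a compact sketch follows, with placeholder node IDs and utility values.

```python
# Select CH and BCH by the maximum utility payoffs of Eqs. (26)-(27).
def select_heads(nodes):
    # nodes: list of (node_id, u_ch, u_bch) tuples from the payoff step
    ch = max(nodes, key=lambda n: n[1])        # Eq. (26): max U(CH)
    rest = [n for n in nodes if n[0] != ch[0]]
    bch = max(rest, key=lambda n: n[2])        # Eq. (27): max U(BCH)
    return ch[0], bch[0]

nodes = [(1, 0.42, 0.31), (2, 0.55, 0.29), (3, 0.38, 0.47)]
print(select_heads(nodes))    # -> (2, 3)
```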

6 Simulation and Results

6.1 Simulation Parameters

Every simulation round represents one period in the lifetime of the sensor nodes. The simulation parameters used for the implementation are shown below; these parameters are set in the same way as in reference [20] and are used in many WSN research papers. The parameters and their values are:


Area: 500 × 500 m
Sensor nodes: 300
Initial energy of cluster nodes: 0.6 J
Energy required per bit transferred: 5 × 10−8 J/bit
Energy required per bit received: 5 × 10−8 J/bit
Simulation rounds: until all nodes become dead, or 5000
Packet size, CH to BS: 6400 bytes
Packet size, nodes to CH: 200 bytes
Node communication range: 100 m
Cluster head communication range: 500 m
MAC interface: Mac/802.11Ext
Antenna type: Antenna/OmniAntenna
Routing protocol: fault-tolerant clustering using game theory

6.2 Result Analysis

The simulations were run for both the GCEEC [21] and FTCHG protocols with a probability of 0.3, with 3% of the cluster nodes becoming CHs and 2% becoming BCHs in the total simulation environment. The experimental results compare the number of dead nodes in every round for the proposed FTCHG as well as GCEEC. The list of dead nodes includes CHs and sensor nodes that become dead due to energy depletion or device failure, as observed in the reliability function. FTCHG gives better results than GCEEC in the analysis of network lifetime during the simulation. The proposed method has dynamic CH selection using the BCH, which is elected as CH by the game theory using the highest BCH payoff utility. The CNs communicate with the CH, which then transmits the packets to the BS; the CH requires more energy because all packets are delivered to the BS through it. The proposed game-theoretic algorithm identifies a better CH from among the BCHs when the existing CH fails (Figs. 3 and 4).


Fig. 3 FTCHG simulation setup for 300 nodes with Sink

Fig. 4 GCEEC simulation setup for 300 nodes with Sink

7 Conclusion and Future Enhancement

In the current generation, networks support wide wireless communication and many applications. The main challenging task in wireless sensor networks is increasing the network lifetime by carefully utilizing the sensor node batteries. WSNs are managed by implementing suitable energy consumption algorithms to improve the lifetime of the network. The proposed game-theory-based fault-tolerant cluster head selection supports a better network lifetime compared to the existing GCEEC algorithm. The game-theoretic algorithm is implemented with backup cluster heads (BCH) and cluster heads (CH), so that the BCH resumes the services of the CH when the CH fails or dies. The complete explanation of the FTCHG and GCEEC protocols was implemented and presented in this paper. Based on the comparison of the simulation results, the FTCHG algorithm performs better than GCEEC. The simulation results clarify the numbers of alive and dead nodes under the FTCHG and GCEEC algorithms, including the non-cooperative game theory approach for WSNs. Future work may attempt to design game-theoretic wireless sensor routing algorithms for various scenarios, such as energy variation and mobility conditions.

Fig. 5 Average energy comparison (average energy in joules per round):

Rounds   FTCHG   GCEEC
0        0.60    0.50
250      0.49    0.33
500      0.39    0.23
750      0.32    0.18
1000     0.27    0.14
1250     0.24    0.12
1500     0.21    0.10
1750     0.19    0.09
2000     0.17    0.07
2250     0.15    0.07
2500     0.14    0.06
2750     0.13    0.05
3000     0.12    0.05
3250     0.11    0.04
3500     0.10    0.04
3750     0.10    0.03
4000     0.09    0.03
4250     0.08    0.03
4500     0.08    0.02
4750     0.07    0.02
5000     0.07    0.02

Fig. 6 Alive and dead nodes comparison (number of nodes per round):

Rounds   Dead FTCHG   Alive FTCHG   Dead GCEEC   Alive GCEEC
0        0            300           0            300
250      0            300           1            299
500      27           273           85           215
750      64           236           116          184
1000     101          199           146          154
1250     126          174           168          132
1500     137          163           186          114
1750     145          155           198          102
2000     153          147           208          92
2250     163          137           218          82
2500     174          126           220          80
2750     181          119           231          69
3000     186          114           235          65
3250     196          104           239          61
3500     200          100           243          57
3750     205          95            247          53
4000     206          94            251          49
4250     212          88            254          46
4500     218          82            257          43
4750     221          79            261          39
5000     225          75            266          34

Fig. 7 Cluster heads comparison (number of cluster heads per round):

Rounds   FTCHG   GCEEC
0        50      62
250      52      60
500      51      40
750      51      50
1000     37      33
1250     38      28
1500     37      19
1750     31      18
2000     40      22
2250     29      15
2500     28      19
2750     21      19
3000     22      15
3250     23      13
3500     18      12
3750     24      12
4000     14      9
4250     21      8
4500     15      10
4750     12      6
5000     16      5

Fig. 8 Packets to cluster heads comparison (number of packets per round):

Rounds   FTCHG   GCEEC
0        250     238
250      248     239
500      212     175
750      185     134
1000     162     121
1250     136     104
1500     126     95
1750     124     84
2000     107     70
2250     108     67
2500     98      61
2750     98      50
3000     92      50
3250     81      48
3500     82      45
3750     71      41
4000     80      40
4250     67      38
4500     67      33
4750     67      33
5000     59      29

Fig. 9 Packets to base station (number of packets per round):

Rounds   FTCHG   GCEEC
0        75      68
250      70      66
500      66      46
750      56      56
1000     42      39
1250     43      34
1500     42      25
1750     36      24
2000     45      28
2250     34      21
2500     33      25
2750     26      25
3000     27      21
3250     28      19
3500     23      18
3750     29      18
4000     19      15
4250     26      14
4500     20      16
4750     17      12
5000     14      8

References

1. Esmaeeli M, Ghahroudi SAH (2016) Improving energy efficiency using a new game theory algorithm for wireless sensor networks. Int J Comput Appl (0975–8887) 136(12)
2. Taheri Y, Garakani HG, Mohammadzadeh N (2016) A game theory approach for malicious node detection in MANETs. J Inf Sci Eng 32:559–573
3. Brown D, Shi Y (2020) A distributed density-grid clustering algorithm for multi-dimensional data. In: 2020 10th annual computing and communication workshop and conference (CCWC), Las Vegas, NV, USA, pp 0001–0008. https://doi.org/10.1109/CCWC47524.2020.9031132
4. Thushara K, Raj JS (2013) Dynamic clustering and prioritization in vehicular ad-hoc networks: zone based approach. Int J Innovation Appl Stud 3(2):535–540. ISSN 2028-9324
5. Kumarawadu P, Dechene DJ, Luccini M, Sauer A (2008) Algorithms for node clustering in wireless sensor networks: a survey. In: Proceedings of IEEE
6. Boyinbode O, Le H, Mbogho A, Takizawa M, Poliah R (2010) A survey on clustering algorithms for wireless sensor networks. In: Proceedings of 2010 13th international conference on network-based information systems, Takayama, Japan, pp 358–364
7. Kaur S, Mir RN (2016) Clustering in wireless sensor networks—a survey. IJ Comput Network Inf Secur 6:38–51
8. Bhavana V, Rathi J, Rakshith Reddy K, Madhavi K (2018) Energy efficiency routing protocols in wireless sensor networks—a comparative study. Int J Pure Appl Math 118(9):585–592
9. Alghamdi TA (2020) Energy efficient protocol in wireless sensor network: optimized cluster head selection model. Telecommun Syst 74:331–345
10. El Assari Y, Al Fallah S, El Aasri J, Arioua M, El Oualkadi A (2020) Energy-efficient multi-hop routing with unequal clustering approach for wireless sensor networks. Int J Comput Networks Commun (IJCNC) 12(3)
11. Liu Q, Liu M (2017) Energy-efficient clustering algorithm based on game theory for wireless sensor networks. Int J Distrib Sensor Networks 13(11)
12. Mishra M, Panigrahi CR, Sarkar JL, Pati B (2015) GECSA: a game theory based energy efficient cluster-head selection approach in wireless sensor networks. In: International conference on man and machine interfacing (MAMI), pp 1–5


13. Attiah A, Chatterjee M, Zou CC (2017) A game theoretic approach for energy-efficient clustering in wireless sensor networks. In: 2017 IEEE wireless communications and networking conference (WCNC)
14. Azharuddin M, Kuila P, Jana PK (2015) Energy efficient fault tolerant clustering and routing algorithms for wireless sensor networks. Comput Electr Eng 41:177–190
15. Yang L, Lu Y, Xiong L, Tao Y, Zhong Y (2017) A game theoretic approach for balancing energy consumption in clustered wireless sensor networks. Sensors 17:2654
16. Farman H, Javed H, Ahmad J, Jan B, Zeeshan M (2016) Grid-based hybrid network deployment approach for energy efficient wireless sensor networks. J Sens 2016(2326917):14
17. Chen Z, Shen H (2018) A grid-based reliable multi-hop routing protocol for energy-efficient wireless sensor networks. Int J Distrib Sensor Networks 14(3)
18. Habib MA, Moh S (2019) Game theory-based routing for wireless sensor networks: a comparative survey. Appl Sci 9(14):2896
19. Hu S, Wang X (2020) Game theory on power control in wireless sensor networks based on successive interference cancellation. Wireless Pers Commun 111:33–45
20. Liu X, Wu J (2019) A method for energy balance and data transmission optimal routing in wireless sensor networks. Sensors (Basel) 19(13):3017
21. Qureshi KN, Bashir MU, Lloret J, Leon A (2020) Optimized cluster-based dynamic energy-aware routing protocol for wireless sensor networks in agriculture precision. Hindawi J Sens 2020(9040395):19

The Prominence of Corporate Governance in Banking Sector with Reference to UAE

Santosh Ashok and Kamaladevi Baskaran

S. Ashok
Amity University, Dubai, UAE
K. Baskaran (B)
Department of Management and Commerce, Amity University, Dubai, UAE
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_33

Abstract In the last decade, corporate governance has come to the forefront in every sector. Many developments across the corporate world at large have compelled every organization within the corporate space to identify and adapt. Owing to the extreme emphasis on corporate governance in the UAE banking sector, this study identifies the guidelines followed and examines whether banks are in complete adherence. The analysis of the factors affecting corporate governance within the UAE provides a path for evaluating and reaching a verdict about corporate governance within the banking industry in the UAE and its role within every banking institution. This study further solidifies the fact that ethical practices are quintessential for any industry, irrespective of its nature, given its proportion of any country's economic growth, and depicts the positive aspects of corporate governance practices within the UAE banking system. The results highlight that banks are exploring ways to improve methods that boost productivity in corporate governance practices within organizations, specifically for banks that have a poor record of giving worth and respect to stakeholders' goals over a longer period.

Keywords Stakeholders · Corporate goals · Productivity · Central bank · Accountability · Mismanagement · Fraudulent transactions

1 Introduction

Corporate governance is a concept that has instigated a huge shift within the corporate globe: the exercise of corporates abiding by ethical principles. It is indispensable and is the root of any corporate reform across industries, and the practice is not limited by the size of a company. Cases surrounding the corporate world have shaped an atmosphere of growing relevance for corporate governance. In comparison with other segments

within the finance field, banking has undergone considerable change, which has made the operational side of banks more coherent and credible. Considering the need for uprightness across the banking sector in the UAE, this study focuses on the prominence of corporate governance there. The respect of a bank's stakeholders is paramount for existence within the market, as is the level of the banks' collaboration with the UAE Central Bank for active compliance with the code of conduct and the standard operating procedures that ensure long-term stability and civility in day-to-day work. The implication of this research can be directly related to how banks observe corporate governance methodologies at the time of lending money to clients. The study is also critical for how banks ponder their roadmap and the approach they wish to adopt, keeping in view the customers' dominance within the extensive banking system of the UAE. It holds meaning in any scenario because of the unambiguity that banks can contribute through non-stop appraisal of staff on how corporate governance is established in conformance with government directives on banking. The background is that banks are, to a greater extent, alert to the influence of their contributors and their knack for affecting customary habits. Even though COVID-19 has affected the banks within the UAE, this study is decisive from the angle of the relentless priority that banks must give, on a frequent basis, to the transmission of corporate governance. To put it in context, the study is far-reaching because the efficiency of a bank's activities lies in how it arrives at plans and takes decisions in light of its conformity with ethical acts and corporate governance measures. To summarize, the research addresses facets that must be familiar to the banks and cannot be ignored for assuring compliance with corporate governance protocols, examines the banks' adeptness at centralizing corporate governance issues in the course of fulfilling their vision, and considers what the banks can do to amend the mechanisms already set. In view of the resolve of banks towards corporate governance in general, this study can be very beneficial to existing professionals and youngsters who are keen to learn about corporate governance and its application across the banks within the UAE and to foster their careers within the banking field. As stockholders survey the banks where they hold stakes, this report corroborates how the banks within the UAE have realized the demand for corporate governance and its doctrines [1]. The research can also be a basis for further examination in any study of corporate governance issues and their ability to alter the landscape of banking engagements within the UAE.

2 Literature Review

The literature review highlights corporate governance and its complexion within the UAE banking arena, and will also be useful for conducting comprehensive research in the future. Past research studies about corporate governance in banks

were administered in a fragmented manner, in realization of the persistent changes that have occurred. The rising awareness of corporate governance has made every person engaged in the banking occupation comprehend the incentives for practicing corporate governance within their banks on a methodical basis, guaranteeing integrity and sincerity when supervising crucial activities, and creating a favourable impression of their banks and of the overall banking specialization. Earlier researchers had performed research on corporate governance but concentrated on other areas instead of confining themselves to one expertise, and the nature of corporate governance varies across kinds of organizations. Compared to previous research studies, which looked into the basic tenets of corporate governance, the Venulex Legal Summaries (2010) research study argues about how the banks perceive corporate governance internally as well as within the respective industrial field in the UAE, and how the same can be recommended further. In comparison with prior research performed under this topic, statistical methods are devised here for evaluating the data obtained through primary sources. A prompt reply is imperative for speedy sanctioning of corporate governance within the banks. This research study can be profitable for individuals who are itching to work in banks. Corporate governance has gripped the corporate world and has forced the consideration of corporate governance within banks as a research objective; the norms of the research objective are bound to change over time and are not static. The aftermath of any distress experienced by banks does not take away their onus of realizing their statutory obligations. The relationship among the dependent and independent variables, and how it reflects on the study, can be observed.

Banks have chosen to diversify their incomes to compete in a competitive world. Owing to this diversification policy, the banks that opt to work in the competitive world are more efficient and stable in proving their increased performance and risk-adjusted returns. Such competition paves the way for banks to diversify their revenues through traditional and non-traditional activities [2]. The size of the bank has a greater impact on the volatility of return on assets, with a negative coefficient, which proves that large-scale banks are not concerned about investment in high-risk assets [3]. These findings relate to the outputs of Maudos [4], who documents an increase in risk when banks increase non-interest income in their revenue concentration. Nisar et al. [3] observe that banks can get positive outputs if they diversify their revenues mainly into non-interest sources of income. Doumpos et al. [5] explain the profitability and fruitfulness of revenue diversification of banks for developing and developed nations, concluding that it is more fruitful for developing nations. Foos et al. [6] discover through analysis that loan growth decreases the risk of interest income. Non-interest income is a critical component of a bank's income diversification strategy [7]. Emerging countries have typically displayed financial markets with rapidly evolving economies while making political, social, and economic progress in recent decades [8]. The capital adequacy ratio is important because increasing the capital adequacy limit helps to mitigate bank risk [9].
Bank size and bank loans are also important factors in managing bank risk.


De Jonghe [10] and Fiordelisi et al. examine revenue diversification and bank risk in European banks and find a strong link; Gurbuz et al. come to similar conclusions. According to Nisar et al. [11], banks can produce positive outputs by diversifying their revenue sources away from interest. This suggests that bank diversification strategies aid in improving financial outputs, which reduces risk. According to DeYoung and Rice [12], banks with well-managed operations rely on non-traditional income sources less than banks with inefficient management practices, implying that reduced interest-source income has inverse agency effects. The earnestness and determination shown towards the corporate governance rules and regulations set within every bank can be gauged in this research, giving a clear depiction of how banks are responding to the wave of corporate governance. This research is insightful for gauging how essential corporate governance is within banks across the UAE, with the help of the responses received, when evaluated against other research studies with the same objective. This study will also help in determining the confidence that ordinary men and women place in the banks within the UAE. The drawbacks of the corporate governance machinery within banks in the UAE can also be identified, which can help future research in the collective endeavour to make corporate governance practice a certainty within the banking field. To nail down what more can be done, this study also creates an opportunity for present and prospective personnel within every bank in the UAE to contribute and offer suggestions to the senior management of their respective banks, and to pledge their cooperation in creating a better mechanism of corporate governance in the years to come.

3 Methods

The research methodology of this study on corporate governance within the UAE banking sector involves quantitative techniques found suitable for this research. This study will support enthusiastic researchers who are attracted to the banking specialization. The research is designed to outline how corporate governance can disrupt and entice banks to locate their flaws and strengthen their association with those affected, safeguarding their dominance in the long run. Data was obtained from primary sources, which include family and friends within the banking circle and other colleagues within respective banks. The tests used in this study were correlation and the independent samples t-test. IBM SPSS was utilized for analysing the data collected for this research [13]. Levene's test of homogeneity of variance determines whether a relevant difference between the two groups in this research exists or not. It also shows whether the assumption of equal variance between the two groups considered in the scope of this study is violated, and aids in scrutinizing whether the groups can be compared. This test can only be adopted in the case of an independent samples t-test.


For rejecting the null hypothesis, the significance limit was kept at 0.05, giving a confidence interval of 95%. The primary data for the research came from relatives, friends and colleagues in their respective banks, who are familiar with the way banks compete within the UAE. The reactions of respondents about the steps approved by the banks for assessing where they rank on corporate governance were secured via questionnaires. However, even though the study covers points and explanations essential for an inclusive future research agenda, there were shortcomings faced during this research. Firstly, the number of responses accumulated is less than 30, thereby ruling out a test such as the z-test. Secondly, this research is not broad in nature and is confined to the banking setup within the UAE; other countries and professions that have inculcated corporate governance programmes and legalized this phenomenon are not covered, given the research problem of this study. Thirdly, other statistical tools such as ANOVA and the z-test could not give an ideal representation of the observations that would manifest in the study when compared with those actually used (the independent samples t-test and correlation). The tests overlooked in this study could not reveal the influence of every variable on the final outcome of this research [14]; this can serve as a signal for people who are eager to explore the countless perspectives on this subject in more detail. Fourthly, a detailed research effort would necessitate a lengthier time span to benefit avid researchers intent on witnessing the forms of corporate governance and its feats across banks; the ongoing predicament does not give the affiliated parties in-depth clarity about the ranking of UAE banks in contrast to other banks across the world. Lastly, there were fewer female than male participants, so a diverse outlook was neglected, making it difficult to garner thoughts about how corporate governance may be viewed in future. A comparative survey was a hurdle which was arduous to get around, as it did not synchronize with the elements reckoned to be feasible. Tables 1 and 2 below present the descriptive statistics and the hypothesis testing using the independent samples test.
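As a concrete illustration of the testing procedure described above, the following is a minimal sketch of the Levene's test plus independent samples t-test workflow, using Python's SciPy library in place of IBM SPSS; the two response arrays are invented placeholders sized like the study's groups (21 male, 8 female), not the actual questionnaire data.

import numpy as np
from scipy import stats

# Placeholder Likert-scale responses, not the study's data.
male = np.array([4, 5, 4, 3, 4, 5, 4, 4, 5, 3, 4, 4, 5, 4, 4, 3, 5, 4, 4, 4, 5])
female = np.array([5, 4, 5, 4, 5, 4, 5, 5])

# Levene's test (mean-centered, as in SPSS): checks equal-variance assumption.
lev_stat, lev_p = stats.levene(male, female, center='mean')
equal_var = lev_p >= 0.05  # equal variances assumed if we fail to reject

# Independent samples t-test: pooled or Welch variant per Levene's result.
t_stat, t_p = stats.ttest_ind(male, female, equal_var=equal_var)

print(f"Levene p = {lev_p:.3f}, equal variances assumed: {equal_var}")
print(f"t = {t_stat:.2f}, two-tailed p = {t_p:.3f}")
if t_p < 0.05:
    print("Reject the null hypothesis at the 5% level.")
else:
    print("Fail to reject the null hypothesis at the 5% level.")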

4 Results and Discussion

Tables 1 and 2 depict that there is no significant difference in the thinking of male and female employees in UAE banks on whether the corporate governance mechanisms are capable enough to deal with frauds and mismanagement. The p value in this case is 0.243, which is more than the criterion of 0.05, so we have failed to reject the null hypothesis. Equal variance is assumed among the groups, thereby allowing a comparison between them. For corporate governance resulting in accountability and honesty amongst the banks within the UAE, the p value is 0.258, which is again more than the criterion.


Table 1 Group statistics

Particulars | Gender: N, Mean, Std. deviation, S.E. mean
Corporate governance mechanisms are capable of dealing with frauds and mismanagement | Male: 21, 4.14, 0.79, 0.17 | Female: 8, 4.63, 0.52, 0.18
The bank has contemplated its stakeholder's well-being as per central bank | Male: 21, 4.00, 0.95, 0.21 | Female: 8, 4.13, 0.35, 0.13
Serious efforts are undertaken within the bank for incorporating corporate governance mechanisms | Male: 21, 4.19, 0.40, 0.09 | Female: 8, 4.50, 0.53, 0.19
Corporate governance has resulted in accountability and honesty in all banks | Male: 21, 4.10, 0.44, 0.10 | Female: 8, 4.13, 0.64, 0.23
The composition of board of directors is adequate for taking any major decision within the bank | Male: 21, 4.10, 0.54, 0.12 | Female: 8, 4.63, 0.74, 0.26
Whistle-blower policy is strong enough to protect whistle-blowers from reporting unethical activities | Male: 21, 4.14, 0.57, 0.13 | Female: 8, 4.00, 0.00, 0.00
Company secretary exists within the bank to provide guidance with regards to company law matters | Male: 21, 3.57, 1.03, 0.22 | Female: 8, 4.00, 0.00, 0.00
Employees can participate fully within the bank for its effectiveness | Male: 21, 4.10, 0.62, 0.14 | Female: 8, 4.38, 0.52, 0.18
Shareholders have equal voting rights in matters placed in the annual general meeting | Male: 21, 4.00, 0.77, 0.17 | Female: 8, 4.63, 0.52, 0.18
Bank get their accounts and other financial statements audited by external auditors as per schedule | Male: 21, 3.86, 0.85, 0.19 | Female: 8, 4.38, 0.52, 0.18
Legal and other requirements are followed by the bank | Male: 21, 4.33, 0.48, 0.11 | Female: 8, 4.13, 0.64, 0.23
Decisions taken by the board of directors are made after due recognition of the concerns of stakeholders | Male: 21, 4.24, 0.54, 0.12 | Female: 8, 4.38, 0.52, 0.18
Regular training given to employees across departments about the working of corporate governance mechanisms of the bank | Male: 21, 4.14, 0.48, 0.10 | Female: 8, 4.25, 0.46, 0.16
Trust levels placed within the corporate governance process of the bank is high | Male: 21, 3.81, 0.81, 0.18 | Female: 8, 3.88, 0.35, 0.13

Table 2 Independent samples test (Levene's test for equality of variances; t-test for equality of means; 95% CI of the difference)

The corporate governance mechanisms are capable of dealing with frauds and mismanagement
  Equal variances assumed: F 1.42, Sig 0.243; t −1.59, df 27.00, Sig. (2-tailed) 0.124, mean difference −0.48, std. error difference 0.30, 95% CI [−1.11, 0.14]
  Equal variances not assumed: t −1.91, df 9.59, Sig. (2-tailed) 0.070, mean difference −0.48, std. error difference 0.25, 95% CI [−1.01, 0.04]
The bank has contemplated its stakeholder's well-being as per central bank
  Equal variances assumed: F 20.81, Sig 0.000; t −0.36, df 27.00, Sig. (2-tailed) 0.722, mean difference −0.13, std. error difference 0.35, 95% CI [−0.84, 0.59]
  Equal variances not assumed: t −0.52, df 10.18, Sig. (2-tailed) 0.609, mean difference −0.13, std. error difference 0.24, 95% CI [−0.62, 0.37]
Serious efforts are undertaken within the bank for incorporating corporate governance mechanisms
  Equal variances assumed: F 4.63, Sig 0.041; t −1.69, df 27.00, Sig. (2-tailed) 0.102, mean difference −0.31, std. error difference 0.18, 95% CI [−0.69, 0.07]
  Equal variances not assumed: t −1.49, df 26.99, Sig. (2-tailed) 0.168, mean difference −0.31, std. error difference 0.21, 95% CI [−0.77, 0.15]
Corporate governance has resulted in accountability and honesty in banks
  Equal variances assumed: F 1.34, Sig 0.258; t −0.14, df 27.00, Sig. (2-tailed) 0.887, mean difference −0.03, std. error difference 0.21, 95% CI [−0.45, 0.39]
  Equal variances not assumed: t −0.12, df 19.62, Sig. (2-tailed) 0.906, mean difference −0.03, std. error difference 0.25, 95% CI [−0.58, 0.52]
The composition of board of directors is adequate for taking major decisions within the bank
  Equal variances assumed: F 1.59, Sig 0.218; t −2.13, df 27.00, Sig. (2-tailed) 0.043, mean difference −0.53, std. error difference 0.25, 95% CI [−1.04, −0.02]
  Equal variances not assumed: t −1.84, df 9.94, Sig. (2-tailed) 0.096, mean difference −0.53, std. error difference 0.29, 95% CI [−1.18, 0.11]
Whistle-blower policy is strong enough to protect whistle-blowers from reporting unethical activities
  Equal variances assumed: F 8.48, Sig 0.007; t 0.70, df 27.00, Sig. (2-tailed) 0.492, mean difference 0.14, std. error difference 0.20, 95% CI [−0.28, 0.56]
  Equal variances not assumed: t 1.14, df 20.00, Sig. (2-tailed) 0.267, mean difference 0.14, std. error difference 0.13, 95% CI [−0.12, 0.40]
Company secretary exists within the bank to provide guidance with regards to company law matters
  Equal variances assumed: F 17.95, Sig 0.000; t −1.17, df 27.00, Sig. (2-tailed) 0.254, mean difference −0.43, std. error difference 0.37, 95% CI [−1.17, 0.33]
  Equal variances not assumed: t −1.91, df 20.00, Sig. (2-tailed) 0.071, mean difference −0.43, std. error difference 0.22, 95% CI [−0.90, 0.04]
Employees can participate fully within the bank for its effectiveness
  Equal variances assumed: F 0.06, Sig 0.815; t −1.12, df 27.00, Sig. (2-tailed) 0.271, mean difference −0.28, std. error difference 0.25, 95% CI [−0.79, 0.23]
  Equal variances not assumed: t −1.23, df 15.28, Sig. (2-tailed) 0.239, mean difference −0.28, std. error difference 0.23, 95% CI [−0.77, 0.21]
Shareholders have equal voting rights in matters placed in the annual general meeting
  Equal variances assumed: F 0.50, Sig 0.484; t −2.10, df 27.00, Sig. (2-tailed) 0.045, mean difference −0.63, std. error difference 0.30, 95% CI [−1.24, −0.01]
  Equal variances not assumed: t −2.51, df 19.16, Sig. (2-tailed) 0.021, mean difference −0.63, std. error difference 0.25, 95% CI [−1.15, −0.10]
Bank get their accounts and other financial statements audited by external auditors as per schedule
  Equal variances assumed: F 3.30, Sig 0.080; t −1.60, df 27.00, Sig. (2-tailed) 0.122, mean difference −0.52, std. error difference 0.32, 95% CI [−1.18, 0.15]
  Equal variances not assumed: t −1.98, df 21.10, Sig. (2-tailed) 0.060, mean difference −0.52, std. error difference 0.26, 95% CI [−1.06, 0.02]
Legal and other requirements are followed by the bank
  Equal variances assumed: F 0.00, Sig 0.950; t 0.95, df 27.00, Sig. (2-tailed) 0.351, mean difference 0.21, std. error difference 0.22, 95% CI [−0.24, 0.66]
  Equal variances not assumed: t 0.83, df 10.19, Sig. (2-tailed) 0.424, mean difference 0.21, std. error difference 0.25, 95% CI [−0.35, 0.76]
Decisions taken by the board of directors are made after due recognition of the concerns of stakeholders
  Equal variances assumed: F 0.09, Sig 0.767; t −0.62, df 27.00, Sig. (2-tailed) 0.542, mean difference −0.14, std. error difference 0.22, 95% CI [−0.59, 0.32]
  Equal variances not assumed: t −0.63, df 13.19, Sig. (2-tailed) 0.540, mean difference −0.14, std. error difference 0.22, 95% CI [−0.61, 0.33]
Regular training given to employees across departments about the working of corporate governance mechanisms of the bank
  Equal variances assumed: F 0.14, Sig 0.715; t −0.55, df 27.00, Sig. (2-tailed) 0.591, mean difference −0.11, std. error difference 0.20, 95% CI [−0.51, 0.30]
  Equal variances not assumed: t −0.54, df 26.29, Sig. (2-tailed) 0.590, mean difference −0.11, std. error difference 0.19, 95% CI [−0.53, 0.31]
Higher trust levels placed within corporate governance process of the bank
  Equal variances assumed: F 1.44, Sig 0.240; t −0.22, df 27.00, Sig. (2-tailed) 0.829, mean difference −0.07, std. error difference 0.30, 95% CI [−0.68, 0.55]
  Equal variances not assumed: t −0.30, df 13.09, Sig. (2-tailed) 0.765, mean difference −0.07, std. error difference 0.22, 95% CI [−0.51, 0.38]

There is a significant difference between male and female employees' views on the banks contemplating their stakeholders' well-being as laid out by the central bank. For the serious efforts of the banks in incorporating corporate governance mechanisms, the p value is 0.041, which is less than the criterion; the variance of the two groups fluctuates significantly, so equal variance cannot be assumed, leaving little scope for comparison between them. The p value for the composition of the board being adequate for taking any major decision within the bank is 0.218, which is more than the criterion; equal variance is therefore assumed, making a comparison between the two groups possible. The p value for the whistle-blower policy being strong enough to protect whistle-blowers reporting unethical activities is 0.007, which is less than the criterion. There is a significant difference in the views of male and female employees on a company secretary being at hand to advise on company law affairs and on the bank accounts and financial statements being audited by external auditors as per timelines. There is no significant difference in the feelings of male and female employees on full participation in the bank for reaping effectiveness, or on legal and other requirements being followed by the bank. Likewise, there is no significant difference between male and female employees on decisions being taken by the board after due recognition of stakeholders' concerns, on corporate governance training within the bank, or on the higher trust levels they place in the corporate governance process of the bank. Table 3 denotes that the ideal correlation coefficient ranges from −1 to +1. In this case, where Pearson's correlation is employed, the correlation coefficient, i.e. the r value, is 0.36, with a p value of 0.059.

Table 3 Correlation analysis shows relationship between considering stakeholders' interest and serious efforts undertaken

(1) Banks considered the interest of its stakeholders in accordance with the rules laid out within the guidelines set out by the central bank
(2) Serious efforts undertaken within the bank for incorporating corporate governance mechanisms
Pearson correlation between (1) and (2): 0.36; Sig (two-tailed): 0.059; N = 29 (diagonal correlations: 1.00)


Table 4 Correlation analysis shows relationship between whistle-blower policy and trust levels

(1) Whistle-blower policy is strong enough to protect whistle-blowers from reporting unethical activities
(2) Trust levels placed within the corporate governance process of the bank is extremely high
Pearson correlation between (1) and (2): 0.67; Sig (two-tailed): 0.000; N = 29 (diagonal correlations: 1.00)

This means that a moderate positive correlation prevails, indicating the importance given to corporate governance by banks in the UAE due to the positive relationship between the factors addressed here, though it can be enhanced further. The findings give a solid illustration of its eminence and justify why it is binding on banks to stand up for it. Table 4 demonstrates that the ideal correlation coefficient ranges from −1 to +1. Pearson's correlation is used here instead of Spearman's rank correlation. The correlation coefficient, i.e. the r value, is 0.67 and the p value is 0.000, which shows a correlation that is noteworthy when ascertaining the priority of corporate governance over other routines in the banks. The relationship echoed here can be an example of how decorum can be upheld. Nonetheless, there is leeway for the banks to further improve and not ostracize this liaison, as it can open the floodgates for banks to arrest any slide and uplift themselves without being agonized by setbacks. Table 5 explains that the optimal correlation coefficient is from −1 to +1. The table shows the r value, which is 0.40, and the p value, which is 0.030. This lays bare the moderate positive correlation that is predominant here. It is encouraging that positivity exists between the groups that can be matched with each other, but there is still much to do for a perfect affinity to arise among the variables examined here when charting out the lionization of corporate governance within the UAE banks. The tests conducted throughout the research study, with the benefit of primary data, portray the positivity of corporate governance in the UAE banking system. The positive relationship between proper training of employees and ethical policies for the prevention of any contravening practices can be an example of how corporate governance can be a great asset for the banks in the times to come.


Table 5 Correlation analysis shows relationship between employees being trained and ethical policies not being contravened

(1) Employees are trained on corporate governance within the bank
(2) Ethical policies are not contravened by anyone in the bank
Pearson correlation between (1) and (2): 0.40; Sig (two-tailed): 0.030; N = 29 (diagonal correlations: 1.00)

From the viewpoint of employees' trust in the whistle-blowing policy of the banks, protection is assured to them for reporting suspicious activities across banks. Other aspects, like the timely convening of annual general meetings, decisions made with stakeholders' concerns in mind, and the honesty and accountability exhibited by the banks in their behaviour as a whole, also support this [15]. The discussion captures the crux of the necessity of corporate governance in the UAE and can epitomize how the momentum created from prioritizing corporate governance need not be erased from the system as a whole.
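As a hedged illustration of how the correlation figures in Tables 3, 4 and 5 can be reproduced, the sketch below computes a Pearson correlation coefficient and its two-tailed p value with SciPy; the two Likert-scale response arrays of 29 values are invented placeholders, not the survey's actual data.

import numpy as np
from scipy import stats

# Hypothetical Likert-scale responses (1-5) from 29 respondents.
training = np.array([4, 5, 4, 3, 4, 5, 4, 4, 5, 3, 4, 4, 5, 4, 4,
                     3, 5, 4, 4, 4, 5, 4, 3, 4, 5, 4, 4, 5, 4])
no_contravention = np.array([4, 5, 4, 4, 4, 5, 3, 4, 5, 3, 4, 5, 5, 4, 4,
                             3, 4, 4, 5, 4, 5, 4, 4, 4, 5, 4, 3, 5, 4])

# Pearson's r and its two-tailed significance.
r, p = stats.pearsonr(training, no_contravention)
print(f"r = {r:.2f}, two-tailed p = {p:.3f}, N = {len(training)}")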

5 Conclusion

The summary of the research denotes that positive trends emerged throughout this stage. This reflects the peculiarity prevalent here, which makes it a challenging subject to delve into. The results have proved conclusive on the premise that any infringement that has taken place has been dealt with consistently, honouring the commitment of the banks to rein in deceptions in the endorsement of corporate governance. The findings of the study further prove the credence of corporate governance, mirrored through legal requirements being followed as well as the maintenance of books of accounts, which is a true testament to the fact that corporate governance is a must for banks worldwide, and more so in the UAE. The assessments made in the study fairly quantify how much distance remains for its widespread acceptance among the banks. The optimism shown through the responses is a marker that the banks must still do a lot in this regard; until banks cooperate, the proportion of banks consenting to ethics remains marginal. This study helps gauge how the public interprets the banks' hard work in promoting the culture of corporate governance and ethical practices within respective banks and within the industry. Artificial intelligence brings massive disruption in financial services as an increasing number of banks improve existing processes by introducing innovations under the aegis of AI-powered technologies [16].


Since all the banks have a role in the promotion of good practices, they are guided and regulated in maintaining stringent corporate governance by understanding its enormity in terms of the way banking transpires, keeping in view the prevailing local traditions. With the detailed analysis in this study, corporate governance can be the clincher for any bank, more so within the UAE, to be guaranteed survival in the future.

References

1. Ashok S, Baskaran K (2019) Audit and accounting procedures in organizations. Int J Recent Technol Eng 8(4):8759–8768. https://doi.org/10.35940/ijrte.D9165.118419
2. Amidu M, Wolfe S (2013) Does bank competition and diversification lead to greater stability? Evidence from emerging markets. Rev Develop Finance 3(3):152–166
3. Rashid A, Khalid M (2018) An assessment of bank capital effects on bank-risk-taking in Pakistan. Pak J Appl Econ 28(2):213–234
4. Maudos J (2017) Income structure, profitability and risk in the European banking sector: the impact of the crisis. Res Int Bus Finance 39(A):85–101
5. Doumpos M, Gaganis C, Pasiouras F (2016) Bank diversification and overall financial strength: international evidence. Financ Mark Inst Instrum 25(3):169–213
6. Foos D, Norden L, Weber M (2010) Loan growth and riskiness of banks. J Bank Finance 34(12):2929–2940
7. Huang LW, Chen YK (2006) Does bank performance benefit from non-traditional activities? A case of non-interest incomes in Taiwan commercial banks. Asian J Manage Humanity Sci 1(3):359–378
8. Boubaker S, Nguyen DK (2014) Corporate governance in emerging markets. Springer
9. Ashraf B, Arshad S, Hu Y (2016) Capital regulation and bank risk-taking behavior: evidence from Pakistan. Int J Financ Stud 4(3):16
10. De Jonghe O (2010) Back to the basics in banking? A microanalysis of banking system stability. J Financ Intermed 19(3):387–417
11. Nisar S, Peng K, Wang S, Ashraf B (2018) The impact of revenue diversification on bank profitability and stability: empirical evidence from South Asian countries. Int J Financ Stud 6(2):40
12. DeYoung R, Rice T (2004) Noninterest income and financial performance at US commercial banks. Financ Rev 39(1):101–127
13. Baskaran K (2019) The impact of digital transformation in Singapore e-Tail market. Int J Innovative Technol Exploring Eng 8(11):2320–2324. https://doi.org/10.35940/ijitee.I8046.0981119
14. Baskaran K, Vanithamani MR (2014) E-customers attitude towards E-store information and design quality in India. Appl Res Sci Eng Manage World Appl Sci J 31:51–56. https://doi.org/10.5829/idosi.wasj.2014.31.arsem.555
15. Baskaran K (2011) Success of retail in India: the customer experience management scenario. Int J Electron Market Retail 4(2/3)
16. Mehrotra A (2019) Artificial intelligence in financial services—need to blend automation with human touch. In: 2019 International conference on automation, computational and technology management. https://doi.org/10.1109/ICACTM.2019.8776741

Ensuring Privacy of Data and Mined Results of Data Possessor in Collaborative ARM D. Dhinakaran and P. M. Joe Prathap

Abstract The usage of data mining (DM) techniques has rapidly increased in the recent era. Most organizations utilize DM for forecasting their goals and for predicting various possible solutions to their problems. While DM provides various benefits to society, it also has some downsides, such as risks to privacy and data security in collaborative mining. Privacy leaks eventually occur during the communication and aggregation of data. In the recent era, various approaches and methods for data privacy have been developed to protect individuals' data and collaborative DM results, but they incur loss of information and undesirable effects on the utility of data; as a result, the success of DM is downgraded. In this paper, we propose an effectual approach based on the Fisher–Yates shuffle algorithm for privacy-preserving (PP) association rule mining (ARM). With our approach, medical institutions can steadily discover a global verdict model from their local verdict models without the aid of the cloud, and the sensitive medical data of each institution is well protected. Hence, associations among delicate diseases like coronavirus and their symptoms, treatment, and remedies help in foreseeing the disease at an early stage. Our target is to derive association rules in a dispersed environment with reasonably reduced communication time and computation costs, preserving the privacy of participants and giving precise results. Keywords Data mining (DM) · Privacy and data security · Collaborative mining · Association rule mining (ARM) · Dispersed environment

D. Dhinakaran (B) Information and Communication Engineering, Anna University, Chennai, India e-mail: [email protected] P. M. Joe Prathap Department of Information Technology, RMD Engineering College, Kavaraipettai, Tiruvallur, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_34



1 Introduction

Data mining is a swiftly escalating field in computer science that can find valuable and interesting patterns hidden in the enormous volumes of information stored in various data sources [1]. Typical DM requires all data to be placed in main memory for executing the mining process. For a modest DM task, a single desktop computer is sufficient to accomplish the goal. For major DM tasks, the data are typically hefty and cannot fit in main memory; a common solution is to rely on parallel processing or shared mining to sample and aggregate data from diverse sources. Two major issues faced by any data possessor are privacy and data sharing. Recent challenges of DM are scalability, information sharing, and data privacy. Information sharing is a clear goal for all data owners, but data exchange and transmission alone do not eliminate privacy concerns. Prevalent approaches to defending privacy restrict access to the records, for example by adding authentication such as certification to data entries, or by replacing identities with pseudonyms or special codes so that secret information cannot be located in an individual record. DM has to provide an information exchange and fusion mechanism to ensure that all data possessors can work together to attain a universally optimized goal. Developers gain substantial professional value from prediction accuracy that is even slightly better than random guessing. In the semi-honest model, the data possessor properly follows the protocol specification yet tries to acquire additional information by analyzing the transcript of communication received during the execution. Secure mutual computation is a simple approach in which multiple data owners participate without sharing their data; instead, they run existing DM tools on their own data and aggregate the results to get the global output, so no one can learn anything beyond its own input and the results. This paper focuses on a category of ARM, specifically the mining of association rules in dispersed databases. Privacy concerns arise because there is no abundance of authentic third parties: third parties are frequently honest-but-curious, keen to get further facts about users, and possibly exposed to exploitation. Since we do not outsource the data possessors' data to a third party, we avoid the transmission delay. In our approach, data are not transmitted; instead, only the results are transmitted between the DPs. Preprocessing, such as adding fictitious transactions to a DP's original data before starting the mining process, rather than encrypting individuals' data to ensure privacy via cryptography, yields a better computational process. Our approach supports scalability in the communication by adding or removing DPs in the communication network at the respective iteration of the mining process.


2 Related Work

Quite a lot of approaches have been proposed for the PPDM problem. Anonymization, obfuscation, cryptographic techniques, and secure multiparty computation are the keys to PPDM. These solutions have a few limitations: the accuracy of data analysis on altered data is reduced and suffers from heavy information loss, original data values cannot be reconstructed, efficiency is low, and the utility of the primary data is diminished. Liang et al. [1] devised privacy solutions to defend sensitive information for each category of user. They developed an approximate algorithm which can efficiently and precisely find frequent itemsets in a large uncertain database. The authors offer incremental mining algorithms, which allow probabilistic frequent itemset (PFI) results to be refreshed. This diminishes the need to re-execute the entire mining algorithm on the new database, which is frequently far more expensive and superfluous. They observe how an existing algorithm that extracts exact itemsets, as well as a novel approximate algorithm, can sustain incremental mining. Wu et al. [2] offered two algorithms, UP-Growth and UP-Growth+, for extracting high-utility itemsets with a set of effective strategies for pruning candidate itemsets. The knowledge of high-utility itemsets is stored in a tree-based structure known as the UP-Tree, which allows candidate itemsets to be generated efficiently with just two database scans. The outcomes of UP-Growth and UP-Growth+ are compared against other algorithms on numerous categories of both real and synthetic datasets. Zhu et al. [3] developed PP-ARM built on the hybrid partial hiding (HPH) algorithm, where HPH is a data perturbation algorithm: the authentic dataset is constrained and distorted by dissimilar arbitrary parameters, and the system generates frequent items based on HPH. This technique has the potential to reduce the proficiency and effectiveness of the primary data. Wenlang et al. [4] provided a mining model based on the distributed database and a correspondingly effective mining algorithm. Via association rules of market basket analysis, every database file is incorporated to acquire the mining result and craft an auxiliary mining step; rules which do not meet the requirements are transferred back to each distributed site to craft a more precise mining practice, thus avoiding regular network communication. This algorithm can diminish the recurrent communication burden, and it has a distinct asset in parallel arithmetic computing, asynchronous action, and assorted mining. Saygın et al. [5] considered the problem of structuring PP algorithms for ARM. They introduce new metrics to contemplate how security issues can be tackled in the general skeleton of association rule mining, provide a skeleton for association rules while the input contains unknown values, and propose a pioneering technique for mining rules from a dataset with unknown values. Lin et al. [6] addressed the issue of data privacy for several parties on outsourced cloud data.


They proposed a system for PP-ARM in which data are transmitted from several sites in a twin-cloud architecture, where the data owner and miners have different encryption keys that are kept hidden from everyone, including the cloud server. They develop a set of cryptographic blocks for PP-ARM based on the BCP cryptosystem. Their approach is based on a set of sophisticated two-party reliable computation algorithms, achieving reasonable computation cost and an advanced level of privacy. Yi et al. [7] considered a setting where a user encrypts its data and places the data in the cloud. To find association rules from the data, the user outsources the job to semi-honest servers, which collaborate to perform ARM on the encrypted data in the cloud and return encrypted association rules to the user. Their work was built on the distributed ElGamal cryptosystem to achieve confidentiality of items, the database, and transactions, respectively. To diminish the prospect that all servers are compromised, the user can utilize servers from diverse cloud providers. This approach incurs higher computation and communication costs. Li et al. [8] devised solutions for multiple data owners to outsource their databases so as to efficiently and securely share their data without sacrificing data privacy. Their solutions leak less information about the raw data than most existing approaches. To ensure that supports/confidences are not revealed, they proposed a well-organized symmetric homomorphic encryption scheme and a secure comparison scheme; to generate association rules, they proposed a cloud-aided frequent itemset mining solution. In this paper, we come up with an effectual approach for PP-ARM. Our approach discovers association rules in a dispersed environment with reasonably reduced communication time and computation costs, preserves the privacy of participants, and gives accurate results.

3 Levels of Process

3.1 Intra/Data Level

For collaborative data mining, each data possessor performs preprocessing steps such as data cleaning, data reduction, and transformation on the resident data sources it holds, to transform the raw data into a suitable and well-organized format for mining purposes. The data possessor calculates the frequent itemsets and association rule candidates on the resident data sources by applying ARM and passes the results to the next data possessor to acquire the global result.


3.2 Inter/Pattern Level

A data possessor collects frequent itemsets and association rule candidates from the previous data possessor, calculates the resident frequent itemsets and association rule candidates by consolidating the received information with its resident data source, and passes the resident results to the subsequent data possessor assigned as per the communication path. At the final level of communication, the master competitor discovers the global frequent itemsets and association rule candidates [9, 10].
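The consolidation step at the pattern level amounts to adding each itemset's locally observed support count to the running total received from the previous data possessor. The following is a minimal sketch of this step, under the assumption that each party represents its intermediate results as a mapping from itemsets to counts; the function and variable names are illustrative, not taken from the paper.

from collections import Counter

def consolidate(received: Counter, local: Counter) -> Counter:
    """Add local support counts to the running totals received so far."""
    merged = Counter(received)
    merged.update(local)  # per-itemset addition of counts
    return merged

# Hypothetical intermediate results from two parties; itemsets are
# modeled as frozensets so they can serve as dictionary keys.
received = Counter({frozenset({"fever", "cough"}): 14, frozenset({"fever"}): 30})
local = Counter({frozenset({"fever", "cough"}): 9, frozenset({"cough"}): 12})

print(consolidate(received, local))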

4 Communication Between the Data Possessors

To collaborate with each data possessor (DP), as an initial stage of communication, a list of N − 1 tokens is generated based on the N data possessors. Tokens are generated by the participant who wishes to initiate the mining process, as shown in Fig. 1. Tokens (1 ≤ i < N) are generated and assigned randomly to all participants in the communication using the Fisher–Yates shuffle (FYS) algorithm, and the assigned token is unique for each data possessor. If participant (i) wishes to initiate the mining process, then (i) is assigned as the master competitor, and token 1 is assigned to the first participant in a well-shuffled list of data possessors. The tokens define the communication path from 1 to N. Through this pattern of communication, each data possessor performs the resident mining process and shares its output (frequent itemsets and association rule candidates) [11]. Each data possessor knows the transmission path through the tokens assigned to all participants in the transmission. Due to the random assignment of tokens to the data possessors, the communication path is created randomly, since the communication path is directly proportional to the tokens assigned. The proposed communication method avoids collusion, as the communication path varies for each iteration. Because the communication path varies at each initialization of the mining process, there is less possibility that two or more data possessors can mutually communicate and try to infer the data of other data possessors.

Fig. 1 Assigning tokens



4.1 Fisher–Yates Shuffle Algorithm

To generate a random permutation of tokens based on the N data possessors (1 ≤ i < N), the FYS algorithm is used. It produces a random number in O(1) time. The algorithm starts from the last data possessor and swaps it with a randomly selected data possessor from the whole list; the list of data possessors from 1 to N − 2 (size reduced by 1) is then considered, and the process repeats until the first data possessor is reached. An important property of the FYS algorithm is that each data possessor has an equal probability, 1/N, of being chosen as the last data possessor in the list. At the end of the shuffling, we get a well-shuffled list of data possessors, and tokens are assigned in sequence to the shuffled list. For example, say DP1, DP2, DP3, and DP4 are the four data possessors involved in the mining process. If DP2 wishes to initiate the mining process, then DP2 is assigned as the master, and the remaining three data possessors are given as input to the FYS algorithm. If the output of the FYS algorithm is DP4, DP1, DP3, then token 1 is assigned to DP4, token 2 to DP1, and token 3 to DP3.

Algorithm Fisher–Yates (L):
Input: A list, L, of N − 1 data possessors, indexed from 1 to N − 1
Output: A permutation of L so that all permutations are equally likely
For d = N − 1 down to 2 do
  Let j ← random(1, d) // j is a random integer in [1, d]
  Swap L[d] and L[j] // swap L[d] with itself, if j = d
return L

Iterate through all of the tokens that have not yet been shuffled, starting at the end of the array and working backwards. All swaps occur in place, and once a token has been "shuffled" into position, it does not move again; this limits the possible outcomes to exactly the number of permutations that exist.

const fisherYatesShuffle = (Res) => {
  for (var a = Res.length - 1; a > 0; a--) {
    // pick a random index from the not-yet-shuffled prefix [0, a]
    const SIndex = Math.floor(Math.random() * (a + 1))
    // swap the current element with the randomly chosen one
    const CCard = Res[a]
    Res[a] = Res[SIndex]
    Res[SIndex] = CCard
  }
  return Res
}


5 Mining Process

To find the association rules, we adopt the collaborative DM process, where each data possessor performs a resident mining process to obtain the global mining result. In collaborative DM, some algorithms rely on a trusted arbitrator to discover the global association rules, where each data possessor sends its data to a third party to preserve privacy between the data possessors, as shown in Fig. 2. Finding reliable arbitrators is not an easy task these days [12, 13]. We adopt a model called semi-honest, where the arbitrator is not involved in finding the global association rules; only the data possessors are involved and collaborate to mine frequent itemsets and association rule candidates. Any participant involved in the collaborative DM process can initiate the ARM to extract valuable information for its business needs. The participant who initiates the task acts as the master competitor, generates the tokens for sequencing the participants for communication, assigns parameters like the minimum support count and minimum confidence, and sends this information to the other participants. The master competitor initiates the task of association rule mining from its own data and sends its resident mining output to the subsequent participant, which continues the mining practice to acquire the global result. There is more chance of revealing the master competitor's original data to the second participant, who knows the resident mining process output of the master competitor.

Fig. 2 Mining process


For example, the master competitor obtains f1(p, q) from its resident data and the next data possessor obtains f2(p, q) from its resident data; we often denote the snooping functionality by (a, b) → (f1(p, q), f2(p, q)). To avoid this scenario, the master competitor should insert fictitious transactions into its original data before starting the resident mining process. Due to the insertion of fictitious transactions, the global results obtained from DP-N by the master competitor may be degraded; to get accurate global results, the master competitor has to remove the effect of the fictitious transactions from the global results received from DP-N.
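To make the fictitious-transaction idea concrete, here is a minimal sketch, under our own simplifying assumptions, of how a master competitor might pad its transaction database before mining and later subtract the padding's contribution from aggregated support counts; the helper names and the way the padding is generated and removed are illustrative, not prescribed by the paper.

import random
from collections import Counter

ITEMS = ["fever", "cough", "fatigue", "headache"]

def count_supports(transactions):
    """Count, for each single item, how many transactions contain it."""
    counts = Counter()
    for t in transactions:
        counts.update(set(t))
    return counts

real = [{"fever", "cough"}, {"fever"}, {"cough", "fatigue"}]

# Master pads its data with random fictitious transactions before mining.
fake = [set(random.sample(ITEMS, k=2)) for _ in range(3)]
padded_counts = count_supports(real + fake)

# ... padded counts travel along the communication path and come back
# aggregated; the master alone knows the fake transactions, so only it
# can subtract their contribution to recover the true counts.
fake_counts = count_supports(fake)
true_counts = padded_counts - fake_counts  # Counter subtraction
print(true_counts)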

5.1 Mining

Each data possessor calculates the frequent itemsets and association rule candidates using the Apriori algorithm on the resident data sources and passes the results to the next data possessor to acquire the global result. The master possessor inserts fictitious transactions into its original data before starting the resident mining process by using the optimal k-anonymity method [14].

To calculate support:

Local support count (ls) = Frequency(x, y)/N, where Frequency(x, y) = Σ_{t=1}^{n} [t ∈ V(x, y)]   (1)

where x, y are the items, t indexes the transactions, n is the number of transactions, and V(x, y) is the set of indices of transactions containing x, y.

Global Support(X) = Σ_{p=2}^{n} ( ls_p(X) + Gs_{p−1}(X) )   (2)

where p is the possessor, n is the total number of possessors, ls_p is the local support count of possessor p, Gs_{p−1} is the consolidated support count received from the preceding data possessor, and X is the itemset.

To calculate confidence:

Local Confidence (lc) = Frequency(X, Y)/Frequency(X)   (3)

Frequent itemset generation: Supp(X) ≥ T_s, where T_s is a threshold quantified by the data possessor. Association rule candidates: Conf(X ⇒ Y) ≥ T_c, where T_c is a threshold quantified by the data possessor.
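The following is a minimal sketch of how Eqs. (1)–(3) translate into code for one data possessor's resident mining step, assuming a transaction database represented as sets of items; the data and thresholds are placeholders, and the full Apriori candidate-generation loop is omitted for brevity.

from itertools import combinations

transactions = [  # placeholder resident database
    {"fever", "cough", "fatigue"},
    {"fever", "cough"},
    {"cough", "headache"},
    {"fever", "fatigue"},
]
N = len(transactions)

def frequency(itemset):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def local_support(itemset):          # Eq. (1)
    return frequency(itemset) / N

def local_confidence(X, Y):          # Eq. (3): conf(X => Y)
    return frequency(X | Y) / frequency(X)

T_s, T_c = 0.5, 0.6                  # thresholds chosen by the possessor

# Frequent 2-itemsets: Supp(X) >= T_s
items = set().union(*transactions)
frequent = [set(pair) for pair in combinations(sorted(items), 2)
            if local_support(set(pair)) >= T_s]

# Rule candidates X => Y with Conf(X => Y) >= T_c from each frequent pair
for pair in frequent:
    a, b = sorted(pair)
    for X, Y in (({a}, {b}), ({b}, {a})):
        conf = local_confidence(X, Y)
        if conf >= T_c:
            print(f"{X} => {Y}: supp={local_support(X | Y):.2f}, conf={conf:.2f}")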


6 Security Analysis

In this part, we examine the security properties of the proposed solution, concentrating on how our solution can protect a data possessor's data from the mediator, the cloud, and other data possessors.

6.1 Security Under Third-Party/Cloud Attacks

Because we choose the semi-honest paradigm, only data owners are involved in defining the association rules, so the arbitrator and the cloud will be unaware of them. Only the data possessors interact and work together with each other to mine frequent itemsets and association rule candidates [15]. Hence, the mined results and the data possessors' data are not known to any intruders or to the cloud.

6.2 Security Under Data Possessors' Attacks

We adopted a collaborative DM process, where each data possessor performs a resident mining task to obtain the global mining result; only the master competitor initiates the mining task and discovers the global frequent itemsets and association rule candidates. The master competitor can be any one of the data possessors. Each data possessor calculates the frequent itemsets and association rule candidates on its resident data sources and passes the results to the next data possessor to attain the global result [16]. On receiving the global results from the previous data possessor, one is unable to infer the data of any data possessor, except for the results of the first data possessor, since the results are amalgamations of all the data possessors' data. To avert revealing the master competitor's data, the master competitor inserts fictitious transactions into its original data before starting the resident mining process. Hence, there is no chance for any data possessor to infer another's data.

7 Application of Proposed Approach: Medical Management

Associations among delicate diseases like coronavirus and their symptoms, treatment, and remedies help in foreseeing the disease at an early stage. An early forecast of these ailments decreases the death percentage and saves more human lives. The association rule mining technique is very beneficial in mining patterns between a disease and patients' symptoms with specific accuracy.


The accuracy of medical domain inference can be enhanced by accumulating the medical data stored at various locations by various medical data possessors [17]. Numerous researchers have come up with secure mining of association rules in horizontally or vertically partitioned data based on cryptographic functions. A medical data possessor will add fictitious transactions to its indigenous data to mitigate frequency analysis attacks and, as preprocessing, encrypt the indigenous data together with the counterfeit data [18]. The encrypted data are outsourced to the cloud for mining the desired medical information, enabling doctors to make accurate decisions to treat patients. These practices have been followed in most hospitals by doctors to improve their decisions. Cryptographic solutions provide high privacy for individual medical data possessors, but their communication and computational cost is extreme because of their message broadcasting techniques. Our work aims to find associations between diseases and patients' symptoms that enable doctors to make medical conclusions. In today's modern world, the medical domain has grown due to the digitalization of hospitals, and the data are valuable resources for medical research. Hospitals share medical data and collaborate to draw conclusions about diseases using DM; hence, from the mined results, conclusions about ailments can be drawn by doctors [19]. To collaborate with each medical data possessor (MDP), as an initial stage of communication, a list of N − 1 tokens is generated based on the N medical data possessors using the FYS algorithm. Any MDP involved in the collaborative DM process can initiate the ARM for extracting medical information. It acts as the master, generates the tokens for sequencing the MDPs for communication, assigns parameters like the minimum support count and minimum confidence, and sends this information to the other MDPs. The master initiates the task of association rule mining from its medical data and sends its resident mining output to the succeeding participant to continue the mining process and obtain the global result. To ensure privacy and data security during the communication and aggregation of data, our approach guarantees that there is no information or data loss, with no side effects on data utility, and the benefits of DM remain intact.

8 Performance Evaluation

In this section, we evaluate the communication complexity, computational complexity, and data utility of our ARM and frequent itemset mining solutions. In the assessment, we choose one of the solutions of [13] as the baseline.

8.1 Computation Cost Analysis

The computation cost of our solution is much lower compared to the sanitized dataset, cryptographic hash function, and outsourced ARM approaches [5]. We obtained datasets from https://data.world/datasets/health generated for medical analysis. These datasets are utilized to perform the experimentation and estimate the computation and transmission cost of our algorithm. We have assessed the performance of our approach against the sanitized dataset, cryptographic hash function, and outsourced ARM [20]. The experimental settings are given in Table 1, and Fig. 3 illustrates the computation cost of our approach with different volumes of the dataset (Table 2).

Table 1 Experimental settings
CPU: Intel i5-2410M (dual-core 2.3 GHz)
Memory: 8 GB
Software: Windows 7 64 bits and NetBeans
Data: data.world/datasets/health
Data bit length: < 72 bits

Table 2 Period analysis
Association rules | Period
1000 | 133
2000 | 266
3000 | 394

In Fig. 3, varying the number of transactions (t) and the number of attributes (a) to find the frequent itemsets shows that the time increases linearly with t and a. The figure also illustrates the period for finding the association rules from the frequent itemsets; we can see that the period increases as the number of association rules increases.

Fig. 3 Computation cost analysis


Fig. 4 Communication cost analysis

8.2 Communication Cost Analysis

For a complete communication analysis of the process, we first analyze the transmission cost of the intra/data level and then the transmission cost of the inter/pattern-level ARM.

Communication Cost = Σ_{p=1}^{n} ls_p(X) + Transmission cost   (4)

In Fig. 4, by varying the number of transactions (n) and the number of attributes (m), we evaluated the transmission cost of the sanitized dataset and the outsourced data for the given user. We can see that the time for optimizing encryption rises linearly with n and m. For example, when m = 20, the encryption time increases from 0.26 to 0.34 s when n is varied from 5000 to 10,000.

8.3 Data Utility

For determining data utility, we compared the number of frequent itemsets generated from the original dataset, the sanitized dataset, the cryptographic hash function, and the outsourced dataset [21]. We recorded the number of frequent itemsets generated from the original dataset and then from a sanitized dataset produced using the optimal k-anonymity method, the cryptographic hash function, and the outsourced dataset.


Fig. 5 Data utility analysis

Data utility = Amount of data utilized / Total number of data   (5)

As shown in Fig. 5, our experiments and analysis show that the number of frequent itemsets generated from the sanitized dataset, shown as the data utility rate (%), is less than the number generated from the original dataset. As a result, anyone mining the sanitized dataset will not achieve the desired results in terms of frequent itemsets and strong association rules [22]. Our intention is thus satisfied, and we are successful in maintaining the data utility and the privacy of the association rules.

9 Conclusion

In this paper, we presented the significance of data mining in the healthcare domain for improving medical research. Privacy issues during collaborative data mining for medical research have been discussed. To solve these, we proposed an effectual approach for PP-ARM on medical data, and we discussed the theoretical and practical analyses of the proposed algorithm. Our approach does not disclose information about the raw data of the data possessor, in contrast to other current solutions. Further, the proposed practice can also be applied to other applications (e.g., the connection between heart disease and the food habits of patients). In the future, it can also be extended to more than n participants in the collaboration without sacrificing data utility and performance, with reasonably reduced transformation time and computation costs.


References

1. Liang W et al (2012) Efficient mining of FIS on large databases. IEEE Trans 24:2170–2183
2. Wu C-W et al (2013) Efficient algorithms for mining high data utility itemsets from transactional databases. IEEE Trans 25(8):1772–1786
3. Zhu JM, Zhang N, Li ZY (2013) A new privacy preserving ARM algorithm based on hybrid partial hiding strategy. Cybern Inform Technol 13:41–50
4. Wenlang C et al (2011) Research of the mining algorithm based on distributed database. In: IEEE 2011 8th international conference on FSKD, pp 1252–1256
5. Saygın Y, Verykios VS, Elmagarmid AK (2002) Privacy preserving association rule mining. In: Proceedings of the 12th international workshop on RIDE'02
6. Liu L, Chen R, Liu X (2018) Privacy-preserving ARM on outsourced cloud data from multiple parties. In: Conference proceedings at Singapore Management University
7. Yi X, Rao F-Y, Bertino E (2015) Privacy-preserving ARM in cloud computing. In: ASIA CCS '15
8. Li L, Lu R, Choo KR, Datta A, Shao J (2016) Privacy-preserving outsourced ARM on vertically partitioned databases. IEEE Trans 11(8):1847–1861
9. Dhinakaran D, Joe Prathap PM (2018) Recommendation scheme for research studies via graph based article ranking. Int Adv Res J Sci Eng Technol 5(3):45–51. https://doi.org/10.17148/IARJSET.2018.5310
10. Dhinakaran D, Joe Prathap PM (2020) A brief study of privacy-preserving practices (PPP) in data mining. TEST Eng Manage 82:7611–7622
11. Rakesh A, Ramakrishnan S (2000) Privacy-preserving data mining. ACM 29(2):439–450
12. Selvi M, Joe Prathap PM (2019) Analysis and classification of secure data aggregation in wireless sensor networks. Int J Eng Adv Technol (IJEAT) 8(4):1404–1407
13. Jogi Priya PM, Joe Prathap PM (2017) Security boosted online auctions using group cryptography. IJAER 12(6):6257–6266
14. Lindell Y, Pinkas B (2009) Secure multiparty computation for PPDM. J Priv Confidentiality 1(1):5–27
15. Vaidya J, Clifton C (2004) Privacy preserving naïve Bayes classifier for vertically partitioned data. In: SIAM international conference on DM. ISBN: 978-0-89871-568-2
16. Lindell Y, Pinkas B (2000) Privacy preserving data mining. In: CRYPTO. Springer, pp 36–54
17. Manoharan S (2020) Geospatial and social media analytics for emotion analysis of theme park visitors using text mining and GIS. J Inform Technol 2(02):100–107
18. Aruna Jasmine J, Richard Jimreeves JS, Dhinakaran D (2021) A traceability set up using digitalization of data and accessibility. In: 3rd International conference on intelligent sustainable systems (ICISS), IEEE Xplore, pp 907–910. https://doi.org/10.1109/ICISS49785.2020.9315938
19. Anand JV (2020) A methodology of atmospheric deterioration forecasting and evaluation through data mining and business intelligence. J Ubiquitous Comput Commun Technol (UCCT) 2(02):79–87
20. Aggarwal CC, Philip SY (2008) A general survey of privacy-preserving data mining models and algorithms. In: Privacy-preserving data mining. Springer, Boston, pp 11–52
21. Shah A, Gulati R (2016) PPDM: techniques classification and implications—a survey. Int J Comput Appl 137(12):40–46
22. Lindell Y, Pinkas B (2000) Privacy preserving data mining. In: CRYPTO, vol 1880, pp 36–54

Implementation of Load Demand Prediction Model for a Domestic Load Center Using Different Machine Learning Algorithms—A Comparison M. Pratapa Raju and A. Jaya Laxmi

Abstract To comply with advanced smart grid operations such as Artificial Intelligence (AI) based Distributed Generation (DG) integration and load schedules, learning the future load and supply availability is inevitable. Specifically, the use of Big Data analytics and prediction is crucial, as they have changed the paradigm of conventional grid operations. Over the last two decades, research on load demand (PT) forecasting has been on a high pedestal, and more than a dozen Machine Learning (ML) algorithms have been reported in the literature. However, feature/predictor selection has always been a critical call in any ML-based prediction, and an effective comparison and choice between the numerous ML algorithms has always been a research challenge. To address these challenges, this article presents the load forecasting of a domestic load center using Feed Forward Artificial Neural Networks (FF-ANN) and nineteen different ML algorithms trained on a combination of weather and time stamp features/predictors. The ML-algorithm-driven MATLAB-SIMULINK prediction model designed and developed can predict the load demand for any given date if the weather parameters are fed to it. In addition, an extensive comparison between the different ML algorithms in terms of Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Square Error (MSE), R², training time in seconds and prediction speed in Obs/s paves the way for researchers in selecting the right ML algorithm for load forecasting problems concerning domestic load centers. Among all the ML algorithms trained and tested, the Rational Quadratic Gaussian Process Regression (RQ-GPR) algorithm is witnessed to have higher accuracy and lower RMSE. A licensed MATLAB 2018b installation with the Statistics and ML Toolbox is used for the whole implementation.

Keywords Load forecasting · Machine learning algorithms · Gaussian process regression · Support vector regression · Tree regression · Ensembled algorithms · Artificial neural networks

M. Pratapa Raju (B) · A. Jaya Laxmi Department of Electrical and Electronics Engineering, Jawaharlal Nehru Technological University Hyderabad, Hyderabad, India A. Jaya Laxmi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_35


1 Introduction The world has been witnessing an exponential rise in energy demand throughout the previous decade. The consumption, production and estimation of energy directly indicate the development of a country. Energy demand forecasting helps in accurate management of available resources and raw material. Any country which estimates its future energy need would always be ready for unforeseen circumstances. Energy demand estimation further equips a country to be sustainable with regard to industrial manufacturing, agriculture, the economy and quality of life. Among all the load sections of the electrical grid, domestic loading is the most prominent. Hence, prediction of domestic load demand will lead to the healthy operation of the electrical grid. The work presented in this article contributes to the same purpose by implementing numerous ML algorithms with an extensive comparison between them.

2 Related Work According to Taylor et al. in [1, 2], short-term Load demand is predicted using statistical methods with historical data as the input. The former employs ARMA, EXS and the latter EXS, PCA. In [3], short-term forecasting of Load demand is done using the historical load data, NWPs and weather data, employing the statistical ARMA model by Taylor et al. In [4], a statistical EXS model is implemented using the historical load data on a short term/medium term for predicting the Load demand by Gould et al. In [5, 6], short-term demand forecasting is done using the historical load data, where the statistical and AI methods used are Kalman filtering and ANN, respectively, by Al-Hamadi et al. and Villalba et al. In both [7, 8], the Load demand is predicted by Taylor et al. using statistical methods such as ARIMA, AR, Exponential Smoothing (EXS) and PCA on the historical load data, where the former does very short-term forecasting and the latter does short-term forecasting. In [9–11], short-term Load demand forecasting is done by Wang et al., Zheng et al., and Badri et al., making use of the historical load data; the AI methods used are E-Insensitive Support Vector Regression (E-SVR), SVM and ANN, and Fuzzy Logic Systems (FLS), respectively. In [12–14], short-term Load demand forecasting is done by Ho et al., Galarniotis et al., and Hong et al., using the AI methods ES, ELM, ANN and GPR. All three use the historical load data for forecasting, whereas [12, 14] also use weather data and temperature, respectively. In [15–17], hybrid methods are used for short-term Load demand forecasting by Shu and Luonan, Zhang and Dong, and Song et al., using the historical load data. The models employed are Self-Organized Maps Space Vector Machine (SOM-SVM), ANN-Wavelet and FLS, respectively. The gray box, white box and black box are the three categories into which energy prediction models can be put [18], and the black box algorithms can be divided into subsections like Auto Regressive Algorithms (ARAs) [19]. An ARA has been


proposed by Touretzky and Patil for forecasting the amount of energy needed in buildings [20]. Ferracuti et al. studied different complicated algorithms for predicting the use of energy for every hour in a district [21]. Several studies for estimating and managing the energy, both for the short and long term, were conducted [22–24]. Iteratively re-weighted least squares were used to gauge model parameters by He [25] and, since the power data is a time series, stochastic time series models have been used. Shilpa and Sheshadri [26] designed an autoregressive moving average model for predicting the electrical patterns of Karnataka. A novel hybrid autoregressive moving average method for estimating electricity prices was developed by Dash and Dash [27]. Support vector machine was put into use for a model designed by Yu et al. [28] for forecast day membership. Random forest was used by Lahouar and Slama [29] to connect different features like load profiles, customer behaviors and special holidays. Wavelet Recurrent Neural Networks and neuro-wavelet based prediction of energy and loads have been carried out in [30, 31]. Algorithms like Kalman filters, autoregressive moving average and multiple regression methods, along with ANN, SVM, genetic models and fuzzy logic, have been used for forecasting the energy required [32–34]. Artificial intelligence models including ANN expert systems were used for estimating the load [35], and fuzzy inference methodologies/processes [36, 37], genetic code algorithms [38], evolutionary computation methods [39] and Support Vector Regression (SVR) [40] are some of the AI methods employed. Machine Learning methods like SVM [41], multiple regression [42, 43] and methods based on neural networks [44, 45] have been used for energy demand estimation. Solar radiation, time, population, temperature, humidity and electricity price per kilowatt hour are part of the energy load as seen in [46]. The greater the amount of data fed for learning, the more accurately the machine learning algorithm predicts the outcome [47]. Deep learning finds its applications in medical devices and automated driving as well [48]. Industrial fabrication applications would benefit from employing supervised machine learning as they tend to supply a lot of labeled data [49]. However, this research article emphasizes the investigation of numerous ML algorithms for Load demand prediction of a domestic load center using weather, date and time stamp Big Data. A MATLAB-SIMULINK based prediction model driven by ML algorithms is presented which can be used to predict Load demand for any given weather, time and date stamps.

3 ML Algorithms Used Fundamentally, ML algorithms are classified into categories such as supervised, unsupervised and reinforcement algorithms. Supervised Learning: In this, a target/dependent variable to be predicted in the given data set of predictors (independent variables) is fed to the algorithm, using which the ML algorithm generates a function that maps the inputs to the desired outputs. Examples: Linear Regression (LR), GPR, SVM and Ensemble Algorithms. Unsupervised Learning: In this, the target


or outcome variable is not given to train the ML algorithms. Mostly, it is used for clustering the population into various clusters. Reinforcement Learning: In this, ML algorithms are trained to make specific decisions, and the machine continuously trains itself by trial and error.

3.1 Artificial Neural Networks A prominent branch of statistical machine learning, called Neural Networks (NN), is used in several prediction tasks. The nonlinearities of data in various sectors can be modeled efficiently using ANNs with utmost accuracy [50–54]. The ANN model's level of strength can be inferred from the relationship between its input and output variables [54]. Ferrero Bermejo et al. studied all the features of ANN which make it prominent in giving solutions for complex tasks [55].

3.2 Regression Based ML Algorithms Regression is a unique way of modeling a response/dependent value based on independent predictors. It is mostly used for the prediction and estimation of cause-and-effect relationships between variables. In this work, four regression algorithms are considered for the implementation. To begin with, LR is considered [56]; it is the first supervised machine learning algorithm used in modeling and regression. Simple LR is a method used to model and analyze the correlation between dependent and independent variables; the dependency between the variables is presented in Fig. 1. The simplest form of it is presented in Eq. 1. Fig. 1 Regression based ML algorithm


$$y = wx + b \tag{1}$$

$$y = k_1 x_1 + k_2 x_2 + k_3 x_3 + \cdots + k_n x_n + b \tag{2}$$

where y is the dependent variable, x (or x1, x2, …, xn) the independent variable(s), w and k1, …, kn the weights, and b the bias. Multi Variable Linear Regression (MVLR) is about establishing the relationship between many independent variables and one dependent variable; Eq. (2) describes the basic model with multiple variables. Subsequently, in order to suppress the effect of outliers and accommodate the maximum number of data points in the regression model, a Robust Linear Regression (RLR) model is employed. Iteratively reweighted least squares are used to handle non-equal error variances: the weights assigned are inversely proportional to the residual size, so the large residual of an outlier is assigned a smaller weight; every iteration produces new residuals, and the corresponding weights are updated. To account for interaction between the variables, interaction effects are considered in LR, so Interactional Linear Regression (ILR) is considered for the implementation. In addition, Step Wise Linear Regression (SWLR) is also used in the said regression implementation. The Backward Elimination (BE) and Forward Selection (FS) methods are combined to form the popular SWLR; FS and BE are used together, and a variable is added only when it yields a residual sum of squares smaller than the previous value. The algorithm stops when no further variable can be added at the FS stage.
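Under the setup described here (weather and time stamp predictors, one load response), these four variants can be fitted in a few lines with MATLAB's Statistics and ML Toolbox; a minimal sketch, in which the variable names X and y are assumptions rather than the paper's own code:

```matlab
% Sketch (assumed variables): X is an N-by-8 matrix of the weather/time-stamp
% predictors described in Sect. 4, y the N-by-1 load demand response.
lr   = fitlm(X, y);                      % multi-variable linear regression (MVLR)
ilr  = fitlm(X, y, 'interactions');      % ILR: model with pairwise interaction terms
rlr  = fitlm(X, y, 'RobustOpts', 'on');  % RLR: iteratively reweighted least squares
swlr = stepwiselm(X, y);                 % SWLR: forward selection + backward elimination
yhat = predict(ilr, X);                  % predictions from any fitted model
```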

3.3 Regression Tree Algorithms Regression trees were presented by Breiman et al. in 1984 along with classification trees. A decision tree allows one to extract rules that clarify the relationship between input and output variables, and it is classified into two types: classification and regression trees. The first deals with a qualitative output variable and the second handles a quantitative one. Here, the regression tree is considered because Short-Term Load Forecasting (STLF) is a quantitative problem. The regression tree consists of split and terminal nodes. The input data is compared with the split conditions, and it proceeds to the left or the right-side node; this process is repeated until the input data reaches a terminal node. The split conditions from the root to the terminal nodes make up rules that clarify the relationship between input and output variables, and the data classified into the same terminal node are similar to each other. A decision tree with binary splits is used for regression. A trained regression tree object can forecast responses for fresh data with the predict method; the object contains the data used for training, so it can also compute resubstitution predictions [57]. By considering different minimum leaf sizes, different regression tree ML algorithms such as Fine Tree (FT), Medium Tree (MT) and Coarse Tree (CT) can be trained to predict the Load demand.
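A minimal sketch of the three tree variants; the MinLeafSize presets (4, 12 and 36) are assumed from MATLAB's Regression Learner defaults, since the paper does not list the exact sizes:

```matlab
% Fine/Medium/Coarse regression trees differ only in the minimum leaf size.
ft = fitrtree(X, y, 'MinLeafSize', 4);   % Fine Tree (FT)
mt = fitrtree(X, y, 'MinLeafSize', 12);  % Medium Tree (MT)
ct = fitrtree(X, y, 'MinLeafSize', 36);  % Coarse Tree (CT)
yhat = predict(ft, X);                   % resubstitution predictions
```

A smaller leaf size lets the tree grow deeper and fit finer structure, which matches the FT algorithm's higher accuracy reported later.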


Fig. 2 SVM for regression: regression line and support vector

3.4 SVM for Regression The 'Support Vector Machine' (SVM) is a powerful supervised machine learning algorithm that can be used to solve classification and regression problems. It is mostly used in classification rather than regression. In this algorithm, each data item is plotted as a point in n-dimensional space (where 'n' is the number of features one has), with the value of each feature being the value of a particular coordinate. Then, classification is performed by finding the hyper-plane that separates the classes very well. SVM for the regression problem, named 'Support Vector Regression' (SVR), was presented by Chen et al. [58]. In 2004, Smola et al. presented a complete tutorial on SVR [59]. The survey on time series prediction using SVM provided by Sapankevych et al. in 2009 widens the scope of SVR [60]. The focal point is to estimate a function having a deviation 'ε' from the anticipated targets with reference to the entire training data; Fig. 2 presents the SVR with regression line and support vector indications. A function anticipating the input–output pairs within the defined envelope should be articulated. The predictors are mapped onto a higher dimensional feature space using a function f (x) produced by a kernel function; this kernel function is formulated as the product of two vectors over the pattern of the training samples. In this article, six different SVM ML algorithms, namely Linear SVM (LSVM), Quadratic SVM (QSVM), Cubic SVM (CSVM), Fine Gaussian SVM (FGSVM), Medium Gaussian SVM (MGSVM) and Coarse Gaussian SVM (CGSVM), are trained and tested using different kernel functions to predict Load demand.
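A sketch of how these SVR variants could be trained with fitrsvm; the kernel-scale heuristics for the fine/medium/coarse Gaussian kernels (sqrt(P)/4, sqrt(P) and 4·sqrt(P) for P predictors) follow the Regression Learner convention and are assumptions here:

```matlab
P = size(X, 2);  % number of predictors (8 in this study)
lsvm  = fitrsvm(X, y, 'KernelFunction', 'linear', 'Standardize', true);
qsvm  = fitrsvm(X, y, 'KernelFunction', 'polynomial', 'PolynomialOrder', 2, ...
                'Standardize', true);                            % Quadratic SVM
csvm  = fitrsvm(X, y, 'KernelFunction', 'polynomial', 'PolynomialOrder', 3, ...
                'Standardize', true);                            % Cubic SVM
fgsvm = fitrsvm(X, y, 'KernelFunction', 'gaussian', ...
                'KernelScale', sqrt(P)/4, 'Standardize', true);  % Fine Gaussian
mgsvm = fitrsvm(X, y, 'KernelFunction', 'gaussian', ...
                'KernelScale', sqrt(P),   'Standardize', true);  % Medium Gaussian
cgsvm = fitrsvm(X, y, 'KernelFunction', 'gaussian', ...
                'KernelScale', 4*sqrt(P), 'Standardize', true);  % Coarse Gaussian
```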

3.5 Ensemble Algorithms A survey presented by Soares et al. reveals the effectiveness of ensemble ML algorithms in solving regression problems [61]. Ren et al. introduced state-of-the-art ensemble methods in 2015, with an emphasis on forecasting wind and solar power [62]. Subsampling from the training set, handling the input features and handling the output variable are the three methods for manipulating


data. Subsampling can be divided into two types: bagging and boosting; both have been successfully applied to regression algorithms, with bagging being the more common method [63–66].
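A minimal sketch of the two ensemble variants compared later, assuming MATLAB's fitrensemble with boosted (LSBoost) and bagged trees:

```matlab
ebot = fitrensemble(X, y, 'Method', 'LSBoost');  % Ensemble Boosted Trees (EBoT)
ebt  = fitrensemble(X, y, 'Method', 'Bag');      % Ensemble Bagged Trees (EBT)
yhat = predict(ebt, X);                          % bagged-tree predictions
```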

3.6 Gaussian Process Regression Algorithm A least-squares approach could be used to fit a straight line if the underlying function f (x) is expected to be linear and some assumptions about the input data can be made (linear regression). If f (x) is suspected to be quadratic, cubic or even polynomial, model selection rules can be used to narrow down the possibilities. A finer solution than this is Gaussian Process Regression (GPR). Rather than saying that f (x) is related to a particular model (e.g., f (x) = mx + c), a Gaussian process describes f (x) in a more oblique yet rigorous way by allowing the data to 'speak' for themselves. While GPR is still supervised learning, the training data is used in a more subtle way [67]. Probabilistic wind speed forecasts, for example, are produced using GPR. GPR is a probabilistic method that is both principled and practical, making it useful for interpreting model predictions. To deal with high dimensions and small samples in complex nonlinear problems, it has excellent adaptability and generality. GPR is simple to implement, self-adaptive for superior parameter estimation, and versatile enough to make nonparametric inferences as compared to Neural Networks and SVM. 'Squared exponential', 'matern52', 'exponential' and 'rational quadratic' are some of the kernel functions available.
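A sketch of training the four GPR variants over the kernel functions just named, using fitrgp; X and y are again the assumed predictor matrix and load response:

```matlab
kernels = {'squaredexponential', 'matern52', 'exponential', 'rationalquadratic'};
gpr = cell(1, numel(kernels));
for k = 1:numel(kernels)
    gpr{k} = fitrgp(X, y, 'KernelFunction', kernels{k}, 'Standardize', true);
end
rqgpr = gpr{4};  % Rational Quadratic GPR, later chosen for the SIMULINK model
```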

4 Proposed Load Demand Prediction Model Using Machine Learning Algorithms Building prediction models with ML algorithms is carried out in steps, and details of the hierarchy followed are given in Fig. 3. The ML algorithms are trained with the load data sets of a domestic load center located in Birmingham, Alabama, USA [68]. As per the flow diagram given in Fig. 3: firstly, Big Data of the load consumption such as 'cooling', 'heating', 'interior lighting', 'exterior lighting' and 'water heater' loads is collected for one year. In addition, weather variables such as 'temperature', 'dew point temperature', 'humidity' and 'pressure' of the Birmingham location are also acquired [69]. Time stamp Big Data such as 'date', 'month', 'day' and 'time in hours' is also acquired to train the ML algorithms. Secondly, the load, weather and time stamp data sets undergo a process of cleansing and analytics. Specifically, load and weather data samples recorded at different time stamps are aligned to a common time stamp by data analytic tools, and NaN values are replaced with moving averages. Detailed envelopes of both load and weather variables with 8760 samples


Fig. 3 Steps involved in training and developing ML based prediction model

Fig. 4 8760 samples of load data of Birmingham load center

for the year 2013 are described in Figs. 4 and 5, respectively. Load, weather and time stamp Big Data acquired for training ML algorithms are first investigated using heat map in order to analyze the relation between various predictors and the response variable. Heat maps help in understanding correlation between all predictor variables as well. The heat map developed using Python is presented in Fig. 6. From the heat map, it can be inferred that the response variable PT has positive correlation with predictor ‘pressure’, whereas predictors ‘temperature’, ‘dew point temperature’ and ‘humidity’ have positive correlation between them. Furthermore, it can be observed that response variable PT has negative correlation with predictors ‘temperature’,


Fig. 5 8760 samples of weather data of Birmingham

Fig. 6 Heat map between predictors and response variables

‘dew point temperature’ and ‘month’, whereas ‘pressure’, ‘temperature’ and ‘dew point temperature’ have a negative correlation between them. In this implementation, Big Data of weather and time stamp parameters, namely day, temperature, dew point, humidity, pressure, month, date and time in hours (eight variables in total), is considered as the set of predictor variables, whereas the total Load demand PT (total power consumption: the sum of all five individual load variables presented in Fig. 4) is considered as the response variable. First of all, Load demand prediction using ANN is carried out; as mentioned before, a Feed Forward ANN comprising 10 hidden layers is trained with the eight predictors and one response variable. A MATLAB script is designed to invoke the data and to train and test the ANN. The regression plot and error histogram presented in Fig. 7 show that the neural net is trained with a target-output correlation of 0.868.
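A hypothetical MATLAB sketch of this preparation and training step; the file name and column names are illustrative assumptions, not the paper's actual script:

```matlab
% Hypothetical data-preparation and ANN training sketch.
T = readtable('birmingham_2013.csv');   % 8760 hourly records (assumed file)
T = fillmissing(T, 'movmean', 24);      % replace NaNs with a 24-h moving average
X = T{:, {'Day','Temp','Dew','Humidity','Pressure','Month','Date','Hour'}};
y = T.PT;                               % total load demand (response)

net = feedforwardnet(10);   % the paper reports "10 hidden layers"; a single
                            % hidden layer of 10 neurons is one common reading
net = train(net, X', y');   % Deep Learning Toolbox expects samples as columns
yhatANN = net(X')';         % ANN load demand predictions
```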


Fig. 7 Regression and error histogram plots: load prediction using ANN

Subsequently, nineteen different ML algorithms are trained: four Regression algorithms, three Regression Tree algorithms, six SVM algorithms, two Ensemble algorithms and four GPR algorithms. A MATLAB script is designed and developed to invoke the Big Data, segregate it into predictor and response variables, build the ML command codes, and train and test each ML algorithm. The response and regression plots resulting from training and testing are presented in Fig. 8a–u. These figures show the true and predicted response (PT) against the record number (samples); the blue and orange dots in the response plots represent the true and predicted Load demand PT, respectively. Among the nineteen ML algorithms trained and tested, the LR, ILR, RLR and SWLR algorithms do not perform well, as witnessed in Fig. 8a–d, where the true (blue) and predicted (orange) responses deviate considerably. The three Regression Tree algorithms FT, MT and CT show better training results than the first four Regression algorithms; the plots of true and predicted responses in Fig. 8e–g show that the Regression Tree algorithms are superior to all the fundamental Regression algorithms, with the FT algorithm being the most accurate among them. Six SVM algorithms are trained for regression, and among them FGSVM is the best fit, as clearly witnessed in Fig. 8h–m. Of the two Ensemble algorithms trained, EBT outperforms EBoT, as seen in Fig. 8n, o; compared to FGSVM, however, EBT is quite inferior. Of the four GPR algorithms trained, RQ-GPR shows the best training results, and its superiority over all the others can be witnessed from Fig. 8p–u. Thus, using the best fit RQ-GPR ML algorithm, a Load demand prediction model is developed in the MATLAB-SIMULINK environment; it can be simulated for any given date, time stamp and weather variables to predict the Load demand PT. The description of the SIMULINK model developed is presented in Fig. 9.
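A condensed sketch of how such a training-and-comparison run could look for a subset of the models; the hold-out split is an assumption for illustration, as the paper does not state its exact validation scheme:

```matlab
% Hold-out split assumed for illustration.
c   = cvpartition(numel(y), 'HoldOut', 0.25);
Xtr = X(training(c), :);  ytr = y(training(c));
Xte = X(test(c), :);      yte = y(test(c));

names    = {'LR', 'FT', 'FGSVM', 'EBT', 'RQ-GPR'};
trainers = {@() fitlm(Xtr, ytr), ...
            @() fitrtree(Xtr, ytr, 'MinLeafSize', 4), ...
            @() fitrsvm(Xtr, ytr, 'KernelFunction', 'gaussian', ...
                        'KernelScale', sqrt(8)/4, 'Standardize', true), ...
            @() fitrensemble(Xtr, ytr, 'Method', 'Bag'), ...
            @() fitrgp(Xtr, ytr, 'KernelFunction', 'rationalquadratic', ...
                       'Standardize', true)};

for i = 1:numel(trainers)
    tic; mdl = trainers{i}(); tTrain = toc;           % training time (s)
    rmse = sqrt(mean((yte - predict(mdl, Xte)).^2));  % hold-out RMSE
    fprintf('%-7s RMSE = %.4f  (trained in %.2f s)\n', names{i}, rmse, tTrain);
end
```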



Fig. 8 a Response plot_LR, b response plot ILR, c response plot_RLR, d response plot SWLR, e response plot_FT, f response plot_MT, g response plot_CT, h response plot_LSVM, i response plot_QSVM, j response plot_CSVM, k response plot_FGSVM, l response plot_MGSVM, m response plot_CGSVM, n response plot_EBoT, o Res. plot_EBT, p response plot_SEGPR, q response plot_MGPR, r response plot_EGPR, s response plot_RQ-GPR, t regression plot_RQ-GPR, u residual plot_RQ-GPR


Fig. 9 ML algorithm training for load demand (PT ) prediction

5 Results and Discussion Performance parameters such as RMSE, R2, MSE, MAE, prediction speed and training time concerning all nineteen ML algorithms trained and tested are calculated and presented in Table 1. It is witnessed that, among all ML algorithms trained and tested, the RQ-GPR ML algorithm exhibits the best fit regression, with an R2 value of 0.98 and RMSE of 0.27046. Further, it is noted that the FT, MT, CT, MGSVM, FGSVM, EBT, SE-GPR and RQ-GPR ML algorithms resulted in RMSE and MSE values of the order less than '1' compared to the other ML algorithms. Among all, the SE-GPR and RQ-GPR ML algorithms are observed to have the lowest RMSE of 0.288 and 0.270, respectively. With reference to prediction speed, the MGPR and RQ-GPR ML algorithms are the slowest predictors, with fewer than 5000 Obs/sec compared to the other ML algorithms; RQ-GPR has the lowest prediction speed of 3400 Obs/sec. However, the ANN is inferior in furnishing promising results with the time stamp and weather predictors (Table 1); it is witnessed to have an MSE of 0.868, which is still better than the LR, ILR, RLR, SWLR, CT, LSVM, QSVM, CSVM, CGSVM, EBoT, MGPR and EGPR ML algorithms. Concerning training time, the LR, ILR, RLR, FT, MT and CT ML algorithms are observed to train in under 10 s; among them, CT is the fastest, with a training time of just 5.032 s. It is also observed that the RQ-GPR algorithm is the slowest to train, with a training time of 371.78 s, followed by EGPR with 267.88 s. Despite being slow to train, RQ-GPR is the most accurate ML algorithm. Eventually, it is evident that the RQ-GPR ML algorithm stands superior, with very low RMSE, MSE and MAE and a higher R2 compared to the other ML algorithms. The superiority is displayed graphically in Fig. 8s–u and in Figs. 10, 11, 12 and 13. To evaluate the functionality of the RQ-GPR driven prediction model (SIMULINK model) built, test input variables (historical data) of random dates in different seasons of the year 2013 are used to facilitate effective comparison.
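These performance parameters follow directly from the test residuals; a short sketch, in which mdl, Xte and yte are assumed to be a fitted model and held-out data from the previous sketch:

```matlab
% Reported metrics for one fitted model on held-out data (assumed variables).
yhat = predict(mdl, Xte);
e    = yte - yhat;
MSE  = mean(e.^2);
RMSE = sqrt(MSE);
MAE  = mean(abs(e));
R2   = 1 - sum(e.^2) / sum((yte - mean(yte)).^2);  % coefficient of determination

tic; predict(mdl, Xte); tPred = toc;
predSpeed = numel(yte) / tPred;                    % prediction speed in Obs/sec
```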


Table 1 Performance parameters of ML algorithms trained for load demand prediction

S. No. | ML algorithm | RMSE    | R2    | MSE      | MAE     | Prediction speed (Obs/sec) | Training time (s)
------ | ------------ | ------- | ----- | -------- | ------- | -------------------------- | -----------------
1      | FF-ANN       | —       | 0.7   | 0.868    | —       | —                          | 21.73
2      | LR           | 1.4363  | 0.3   | 2.063    | 1.0463  | 78,000                     | 8.4
3      | ILR          | 1.3216  | 0.41  | 1.7467   | 0.8988  | 64,000                     | 8.594
4      | RLR          | 1.6263  | 0.1   | 2.6447   | 0.97029 | 82,000                     | 8.819
5      | SWLR         | 1.305   | 0.41  | 1.7436   | 0.89891 | 67,000                     | 74.6
6      | FT           | 0.74707 | 0.83  | 0.55811  | 0.41264 | 10,000                     | 6.0186
7      | MT           | 0.8203  | 0.8   | 0.67371  | 0.47905 | 93,000                     | 5.26257
8      | CT           | 0.94488 | 0.73  | 0.89279  | 0.57742 | 490,000                    | 5.032
9      | LSVM         | 1.5621  | 0.17  | 2.4401   | 0.96425 | 33,000                     | 13.904
10     | QSVM         | 1.1994  | 0.51  | 1.4386   | 0.72553 | 29,000                     | 9.555
11     | CSVM         | 1.0666  | 0.61  | 1.1377   | 0.64121 | 27,000                     | 19.215
12     | FGSVM        | 0.59308 | 0.88  | 0.3174   | 0.29731 | 16,000                     | 33.685
13     | MGSVM        | 0.92101 | 0.71  | 0.84826  | 0.53641 | 15,000                     | 21.992
14     | CGSVM        | 1.3574  | 0.37  | 1.8425   | 0.79208 | 14,000                     | 16.684
15     | EBoT         | 1.1396  | 0.61  | 1.2988   | 0.76071 | 59,000                     | 22.466
16     | EBT          | 0.84012 | 0.79  | 0.70581  | 0.52279 | 28,000                     | 26.14
17     | SE-GPR       | 0.28865 | 0.97  | 0.83321  | 0.16591 | 6600                       | 191.23
18     | MGPR         | 4.0455  | -3.92 | 16.366   | 1.9128  | 4200                       | 209.07
19     | EGPR         | 2.2898  | -0.58 | 5.2434   | 1.1559  | 6600                       | 267.88
20     | RQ-GPR       | 0.27046 | 0.98  | 0.073149 | 0.15165 | 3400                       | 371.78

Predictors (all rows): Big Data of weather and date parameters of the Birmingham load center — day, temperature, dew point, humidity, pressure, month, date and time in hours. Response variable: PT (total power consumption/load demand).

Fig. 10 RMSE comparison plot for load demand prediction (bar chart of the per-algorithm RMSE values listed in Table 1)

Fig. 11 MSE comparison plot for load demand prediction (bar chart of the per-algorithm MSE values listed in Table 1)

Fig. 12 MAE comparison plot for load demand prediction (bar chart of the per-algorithm MAE values listed in Table 1)

Fig. 13 Squared-R comparison plot: load demand prediction (bar chart of the per-algorithm R2 values listed in Table 1)


The testing is carried out with test inputs from different hours of the day. The results concerning Load demand in the spring, summer, fall and winter seasons are presented in Tables 2, 3, 4 and 5, respectively, and the same are plotted in Fig. 14a–d with predicted and actual Load demand indications.

6 Conclusion Load demand prediction of a domestic load center using Big Data with FF-ANN and ML algorithms is investigated, and a thorough comparison is presented in terms of MSE, RMSE, MAE, training time and prediction speed. It is observed that, among all ML algorithms trained and tested, the RQ-GPR ML algorithm stands out as the most accurate in predicting Load demand, with RMSE, MSE, MAE and R2 of 0.270, 0.073, 0.151 and 0.98, respectively. The RQ-GPR based MATLAB-SIMULINK model built predicts the Load demand of the domestic load center for any given weather and date stamp variables. The test results presented for all four seasons witness the ability of the prediction model. This prediction model can readily be used for future learning purposes in Smart Grid operations such as AI based DG Integration, smart optimal Load scheduling, etc.

Table 2 Load demand prediction test results for the input variables from spring season (Spring: 3/1/2013)

Time     | Day | Temp in °C | Dew in °C | Humidity in % | Pressure in Hg | Month | Date | Time in hrs | Actual PT in kW | Predicted PT in kW
-------- | --- | ---------- | --------- | ------------- | -------------- | ----- | ---- | ----------- | --------------- | ------------------
6:00 AM  | 6   | 17.2       | 15.6      | 0.9           | 1014.7         | 3     | 1    | 6           | 2.59            | 2.599
11:00 AM | 6   | 20.6       | 6.7       | 0.4           | 1015.3         | 3     | 1    | 11          | 3.36            | 3.0119
6:00 PM  | 6   | 22.2       | 16.1      | 0.68          | 1011.3         | 3     | 1    | 18          | 2.98            | 2.8306
11:00 PM | 6   | 21.1       | 18.3      | 0.84          | 1010.6         | 3     | 1    | 23          | 2.93            | 2.9722

Table 3 Load demand prediction test results for the input variables from summer season (Summer: 6/15/2013)

Time     | Day | Temp in °C | Dew in °C | Humidity in % | Pressure in Hg | Month | Date | Time in hrs | Actual PT in kW | Predicted PT in kW
-------- | --- | ---------- | --------- | ------------- | -------------- | ----- | ---- | ----------- | --------------- | ------------------
6:00 AM  | 6   | 21.1       | 18.3      | 0.84          | 1019.6         | 6     | 15   | 6           | 0.58            | 0.6235
11:00 AM | 6   | 27.8       | 18.3      | 0.56          | 1020.1         | 6     | 15   | 11          | 1.03            | 0.9009
6:00 PM  | 6   | 30         | 17.8      | 0.48          | 1018.1         | 6     | 15   | 18          | 1.78            | 1.6791
11:00 PM | 6   | 25.6       | 19.4      | 0.68          | 1020.2         | 6     | 15   | 23          | 0.87            | 1.7666

Table 4 Load demand prediction test results for the input variables from fall season (Fall: 9/14/2013)

Time     | Day | Temp in °C | Dew in °C | Humidity in % | Pressure in Hg | Month | Date | Time in hrs | Actual PT in kW | Predicted PT in kW
-------- | --- | ---------- | --------- | ------------- | -------------- | ----- | ---- | ----------- | --------------- | ------------------
6:00 AM  | 1   | 17.8       | 14.4      | 0.8           | 1022.9         | 9     | 14   | 6           | 0.59            | 0.4531
11:00 AM | 1   | 24.4       | 16.7      | 0.62          | 1024           | 9     | 14   | 11          | 0.75            | 0.7147
6:00 PM  | 1   | 27.2       | 15.6      | 0.46          | 1021.8         | 9     | 14   | 18          | 1.39            | 1.1651
11:00 PM | 1   | 22.8       | 16.7      | 0.68          | 1022.4         | 9     | 14   | 23          | 0.87            | 0.6618

Table 5 Load demand prediction test results for the input variables from winter season (Winter: 12/17/2013)

Time     | Day | Temp in °C | Dew in °C | Humidity in % | Pressure in Hg | Month | Date | Time in hrs | Actual PT in kW | Predicted PT in kW
-------- | --- | ---------- | --------- | ------------- | -------------- | ----- | ---- | ----------- | --------------- | ------------------
6:00 AM  | 4   | 14.4       | 12.8      | 0.9           | 1009.1         | 12    | 17   | 6           | 7.37            | 7.774
11:00 AM | 4   | 15         | 13.9      | 0.93          | 1008.5         | 12    | 17   | 11          | 6.4             | 6.4329
6:00 PM  | 4   | 16.7       | 12.8      | 0.78          | 1006.1         | 12    | 17   | 15          | 2.98            | 3.4791
11:00 PM | 4   | 14.4       | 12.8      | 0.9           | 1007.8         | 12    | 17   | 20          | 5.9             | 6.1648

Fig. 14 a–d Actual and predicted load demand PT in different seasons: a spring 2013, b summer 2013, c winter 2013, d fall 2013 (bar plots of the actual and predicted PT values listed in Tables 2–5)

References 1. Taylor JW (2010) Triple seasonal methods for short-term electricity demand forecasting. Eur J Oper Res 204(1):139–152 2. Taylor JW, De Menezes LM, McSharry PE (2006) A comparison of univariate methods for forecasting electricity demand up to a day ahead. Int J Forecast 22(1):1–16 3. Taylor JW, Buizza R (2003) Using weather ensemble predictions in electricity demand forecasting. Int J Forecast 19(1):57–70 4. Gould PG, Koehler AB, Ord JK, Snyder RD, Hyndman RJ, Vahid-Araghi F (2008) Forecasting time series with multiple seasonal patterns. Euro J Oper Res 191(1):207–222 5. Al-Hamadi HM, Soliman SA (2008) Short-term electric load forecasting based on Kalman filtering algorithm with moving window weather and load model. Electr Power Syst Res 68(1):47–59 6. Taylor JW, Mcsharry PE (2007) Short-term load forecasting methods: an evaluation based on European data. IEEE Trans Power Syst 22(4):2213–2219 7. Taylor JW (2008) An evaluation of methods for very short-term load forecasting using minute-by-minute British data. Int J Forecast 24(4):645–658 8. Villalba SA, Alvarez C (2000) Hybrid demand model for load estimation and short term load forecasting in distribution electric systems. IEEE Trans Power Deliv 15(2):764–769 9. Wang J, Zhu W, Zhang W, Sun D (2009) A trend fixed on firstly and seasonal adjustment model combined with the 1-SVR for short-term forecasting of electricity demand. Energy Policy 37(11):4901–4909 10. Zheng Y, Zhu L, Zou X (2011) Short-term load forecasting based on Gaussian wavelet SVM. Energy Procedia 12:387–393


11. Badri A, Ameli Z, Motie BA (2012) Application of artificial neural networks and fuzzy logic methods for short term load forecasting. Energy Procedia 14:1883–1888 12. Ho KL, Hsu YY, Chen CF, Lee TE, Liang CC, Lai TS et al (1990) Short term load forecasting of Taiwan power system using a knowledge-based expert system. IEEE Trans Power Syst 5(4):1214–1221 13. Galarniotis AI, Tsakoumis AC, Fessas P, Vladov SS, Mladenov VM (2003) Using Elman and FIR neural networks for short term electric load forecasting. In: The proceedings of international symposium on signals, circuits and systems, Iasi, Romania, vol 2. pp 433–436 14. Hong T, Pinson P, Fan S (2014) Global energy forecasting competition 2012. Int J Forecast 30(2):357–363 15. Shu F, Luonan C (2006) Short-term load forecasting based on an adaptive hybrid method. IEEE Trans Power Syst 21(1):392–401 16. Zhang BL, Dong ZY (2001) An adaptive neural-wavelet model for short term load forecasting. Electr Power Syst Res 59(2):121–129 17. Song KB, Young SB, Hong DH, Jang G (2005) Short-term load forecasting for the holidays using fuzzy linear regression method. IEEE Trans Power Syst 20(1):96–101 18. Amara F, Agbossou K (2015) Comparison and simulation of building thermal models for effective energy management. Smart Grid Renew Energy 6:95–112 19. Bourdeau M, Zhai XQ, Nefzaoui E, Guo X, Chatellier P (2019) Modeling and forecasting building energy consumption: a review of data-driven techniques. Sustain Cities Soc 48:101533 20. Touretzky CR, Patil R (2015) Building-level power demand forecasting framework using building specific inputs: development and applications. Appl Energy 147:466–477 21. Ferracuti F, Fonti A, Ciabattoni L, Pizzuti S, Comodi G Data-driven models for short-term thermal behaviour prediction in real buildings research article. Appl Energy 204:1375–1387 22. Ahmad T, Chen H, Guo Y, Wang JA (2018) Comprehensive overview of the data-driven and large scale based approaches for forecasting of building energy demand: a review. Energy Build 165:301–320 23. Ahmad T, Chen H, Shair J (2018) Water source heat pump energy demand prognosticate using disparate data-mining based approaches. Energy 152:788–803 24. Ahmad T, Chen H, Huang R, Guo Y, Wang J, Shair J, Akram HMA, Mohsan SAH, Kazim M (2018) Supervised based machine learning models for short, medium and long-term energy prediction in distinct building environment. Energy 158:17–32 25. He W (2017) Load forecasting via deep neural networks. Procedia Comput Sci 122:308–314 26. Shilpa GN, Sheshadri GS (2017) Short-term load forecasting using ARIMA model for Karnataka state electrical load. Int J Eng Res 13:75–79 27. Dash SK, Dash PK (2019) Short-term mixed electricity demand and price forecasting using adaptive autoregressive moving average and functional link neural network. J Mod Power Syst Clean Energy 7:1241–1255 28. Yu X, Bu G, Peng B, Zhang C, Yang X, Wu J, Zou Z (2018) Support vector machine based on clustering algorithm for interruptible load forecasting. IOP Conf Series Mater Sci Eng 2019:533 29. Lahouar A, Slama JBH (2015) Day-ahead load forecast using random forest and expert input selection. Energy Convers Manag 103:1040–1051 30. Bonanno F, Capizzi G, Sciuto GL, Napoli C, Pappalardo G, Tramontana E (2014) A novel cloud-distributed toolbox for optimal energy dispatch management from renewables in ˙IGSs by using WRNN predictors and GPU parallel solutions. In: Internationa symposium on power electronics, electrical drives, automation and motion, Ischia, Italy, pp 1077–1084 31. 
Bonanno F, Capizzi G, Sciuto GL (2013) A neuro wavelet-based approach for short-term load forecasting in integrated generation systems. In: International conference on clean electrical power (ICCEP), Alghero, Italy, pp 772–776 32. Baz WE, Tzscheutschler P (2015) Short-term smart learning electrical load prediction algorithm for home energy management systems. Appl Energy 147:10–19 33. Zúñiga K, Castilla I, Aguilar R (2014) Using fuzzy logic to model the behavior of residential electrical utility customers. Appl Energy 115:384–393


34. Gaur M, Majumdar A (2016) One-day-ahead load forecasting using nonlinear Kalman filtering algorithms, special section on: current research topics in power, nuclear and fuel energy, SPCRTPNFE 2016. In: International conference on recent trends in engineering, science and technology, Hyderabad, India 35. Gheydi M, Nouri A, Ghadimi N (2016) Planning in microgrids with conservation of voltage reduction. IEEE Syst J 12:2782–2790 36. Ghadimi N, Akbarimajd A, Shayeghi H, Abedinia O (2019) Application of a new hybrid forecast engine with feature selection algorithm in a power system. Int J Ambient Energy 40:494–503 37. Laouafi A, Mordjaoui M, Boukelia TE (2018) An adaptive neuro-fuzzy inference system-based approach for daily load curve prediction. J Energy Syst 2:115–126 38. Sharifi S, Sedaghat M, Farhadi P, Ghadimi N, Taheri B (2017) Environmental economic dispatch using improved artificial bee colony algorithm. Evolv Syst 8:233–242 39. Sakurai D, Fukuyama Y, Iizaka T, Matsui T (2019) Daily peak load forecasting by artificial neural network using differential evolutionary particle swarm optimization considering outliers. IFAC-PapersOnLine 52:389–394 40. Gollou AR, Ghadimi NA (2017) New feature selection and hybrid forecast engine for day-ahead price forecasting of electricity markets. J Intell Fuzzy Syst Prepr 32:4031–4045 41. Lu H, Azimi M, Iseley T (2019) Short-term load forecasting of urban gas using a hybrid model based on improved fruit fly optimization algorithm and support vector machine. Energy Rep 5:666–677 42. Samuel IA, Adetiba E, Odigwe IA, Felly-Njoku FC (2019) A comparative study of regression analysis and artificial neural network methods for medium-term load forecasting. Indian J Sci Tech 10 43. Cheepati KR, Prasad TN (2016) Performance comparison of short term load forecasting techniques. Int J Grid Distrib Comput 9:287–302 44. Tian C, Ma J, Zhang C, Zhan P (2018) A deep neural network model for short-term load forecast based on long short-term memory network and convolutional neural network. Energies 11(12):3493 45. Bozkurt OO, Biricik G, Taysi ZC (2017) Artificial neural network and SARIMA based models for power load forecasting in Turkish electricity market. PLoS ONE 12:e0175915 46. Kavaklioglu K, Ceylan H, Ozturk HK, Canyurt OE (2009) Modeling and prediction of Turkey’s electricity consumption using artificial neural networks. Energy Convers Manag 50:2719–2727 47. Bouktif S, Fiaz A, Ouni A, Serhani MA (2018) Optimal deep learning LSTM model for electric load forecasting using feature selection and genetic algorithm: comparison with machine learning approaches. Energies 11:1636 48. Faes L, Wagner SK, Fu DJ, Liu X, Korot E, Ledsam JR, Back T, Chopra R, Pontikos N, Kern C (2019) Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study. Lancet Digit Health 1:e232–e242 49. Wuest T, Weimer D, Irgens C, Thoben KD (2016) Machine learning in manufacturing: advantages, challenges, and applications. Prod Manuf Res 4:23–45 50. Bâra A, Oprea SV Electricity consumption and generation forecasting with artificial neural networks. In: Advanced applications for artificial neural networks. https://doi.org/10.5772/int echopen.71239 51. Li K, Hu C, Liu G, Xue W (2015) Building’s electricity consumption prediction using optimized artificial neural networks and principal component analysis. Energy Build 108:106–113 52. Yuce B, Mourshed M, Rezgui YA (2017) Smart forecasting approach to district energy management. 
Energies 10:1073 53. Kumar S, Mishra S, Gupta S (2016) Short term load forecasting using ANN and multiple linear regression. In: 2nd International conference on computational intelligence and communication technology (CICT), Ghaziabad, India, pp 184–186 54. Raza MQ, Khosravi A (2015) A review on artificial intelligence based load demand forecasting techniques for smart grid and buildings. Renew Sustain Energy Rev 50:1352–1372


55. Ferrero Bermejo J, Gomez Fernandez JF, Olivencia Polo F, Crespo Márquez A (2019) A review of the use of artificial neural network models for energy and reliability prediction. A study of the solar PV, hydraulic and wind energy sources. Appl Sci 9:1844 56. Moghram I, Rahman S (1989) Analysis and evaluation of five short-term load forecasting techniques. IEEE Trans Power Syst 4(4):1484–1491 57. Breiman L (2017) Classification and regression trees. CRC Press. ISBN 13: 9781138469525 58. Chen B-J, Chang M-W, Lin C-J (2004) Load forecasting using support vector machines: a study on EUNITE competition 2001. IEEE Trans Power Syst 19(4):1821–1830 59. Smola AJ, Scholkopf B (2004) A tutorial on support vector regression. Stat Comput 14(3):199– 222 60. Sapankevych NI, Sankar R (2009) Time series prediction using support vector machines: a survey. IEEE Comput Intell Mag 4(2):24–38 61. Mendes-Moreira J, Soares C, Jorge AM, de Sousa JF (2012) Ensemble approaches for regression: a survey. ACM Comput Surv 45(1):1–10 62. Ren Y, Suganthan PN, Srikanth N (2015) Ensemble methods for wind and solar power forecasting: a state-of the-art review. Renew Sustain Energy Rev 50:82–91 63. Breiman L (1996) Bagging predictors. Mach Learn 26:123–140 64. Domingos P (1997) Why does bagging work? A Bayesian account and its implications. In: The proceedings of third international conference on knowledge discovery and data mining, pp 155–158 65. Schapire R (1990) The strength of weak learnability. Mach Learn 5:197–227 66. Freund Y, Schapire R (1996) Experiments with a new boosting algorithm. In: Proceedings of the international conference on machine learning, Murray Hill, NJ, pp 148–156 67. Hu J, Wang J (2015) Short-term wind speed prediction using empirical wavelet transform and Gaussian process regression. Energy 93(2):1456–1466 68. https://openei.org/doe-opendata/dataset/ 69. https://www.wunderground.com/history/daily/us/al/birmingham/KBHM/date/

COVID Emergency Handlers to Invoke Needful Services Dynamically with Contextual Data S. Subbulakshmi, H. Vishnu Narayanan, R. N. Adarsh, Fawaz Faizi, and A. K. Arun

Abstract The current COVID pandemic situation has created an urgency in the development of an intelligent mobile application to handle emergencies in the most appropriate and efficient manner. We have created a life-saving system with a contextual information gathering module, which provides needful services dynamically with limited details, i.e., at the click of a button. With installation of the application and registration, the basic details of the user are stored in cloud storage. The application is designed to handle both COVID and normal emergencies. On invocation, contextual details like the user's location, nature of illness, type of service to be provided, hospital details and ambulance details are realized by the appropriate modules. By considering the nature of illness, the department and a respective hospital in near proximity are realized. The system tracks the nearest ambulance service with the necessary facilities to handle the emergency situation, directs it to the user's location, and then to the hospital identified. A simulated Android application is created with Firebase cloud storage, JSON, GPS, GSM and Google Maps. It serves critical needs with appropriate facilities in shorter periods, a valuable service for handling COVID in a graceful manner. Keywords COVID · Mobile app · Emergency · Ambulance service · Contextual data · Location tracking · GPS · Cloud storage · Google Maps

1 Introduction Upward trend in population growth has resulted in India's population count around 1366 million. With the outbreak of the COVID pandemic in 2019, highly populated India faced a most challenging period in mid-2020 with a rapid increase in nationwide COVID infections, which almost crossed 10 million individuals. Statistics depict that 73% of COVID-19 deaths in India are of people with co-morbidities. It is highly essential to handle such people with utmost care as they are considered to be at higher risk when compared with others. Infections among such people are likely to create emergency S. Subbulakshmi (B) · H. V. Narayanan · R. N. Adarsh · F. Faizi · A. K. Arun Department of Computer Science and Applications, Amrita Vishwa Vidyapeetham, Amritapuri, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_36


situations which, when not handled properly, may even lead to untoward incidents. Researchers are coming out with many applications and proposals to safeguard both the medical industry and patients. The current scenario shows that healthy people, i.e., people without any other health issues, are able to recover from COVID infection after a stipulated period of time on administration of basic drugs prescribed by a physician. Infection becomes complicated among people with multiple health issues, and such infections should be handled intelligently. In this paper, we propose mobile-based COVID emergency handlers mainly focused on providing emergency ambulance services, which are most likely to be used by patients with co-morbidities. This system focuses on providing needful services based on contextual information gathered in a dynamic environment when the user invokes the application. For registration in the application, the user has to provide some basic details like personal, health and contact information, which are stored in cloud storage. This information will be realized during emergency situations to provide needful services [1]. The same application also handles normal emergency services, which are so common in real-world scenarios. With the increase in population density and changing lifestyles, there is a steep increase in the number of vehicles used by commuters. Almost 53 k vehicles are registered per day in our country. The increase in the number of vehicles with fast pacing lifestyles has resulted in an increase of accidents. Moreover, lifestyle changes have drastically changed food habits by replacing healthy foods with fast junk food, which burdens its consumers with health complications. The underlying methodology used for both emergency services implements the following set of functionalities: registration, context-based information gathering, identification of required services, locating the most appropriate ambulance service and, finally, a module to render the needful service in a graceful manner that provides maximum satisfaction to the end users. The registration module demands that the user provide basic details, which are stored and used in emergency situations. The contextual information gathering module realizes the relevant details of the user and identifies the services to be rendered, by retrieving the data stored in cloud storage and by gathering information at runtime from the location where the system was invoked. It uses the Global Positioning System (GPS), a satellite-based radio navigation system, in the above module. GPS is also used in the next two modules: the first is to find the location details of the nearest ambulance with the needful services by considering the demands of the user, and the second is to show the available hospital in nearest proximity to the user's location. The emergency service rendering module directs the ambulance to the user's location and then back to the hospital identified, and it also provides an ambulance tracking facility. Here, the system connects with GPS technology to track both driver and user locations and to provide the correct and shortest direction to the required area, i.e., to the hospital identified with relevant data. In all situations, the system is able to provide service as per the need of the user. It is able to manage the time the vehicle takes to reach the affected area by using GPS technology, which gives proper direction.
Global navigation satellite system (GNSS) provides geographical location and time-related information to a receiver


anywhere on the Earth, and it provides the minute details, which include all hospitals and quarantine centers [2]. Thus, when an emergency situation occurs, the system is invoked with a simple click of the required button, which performs a series of tasks in the background and directs the most apt ambulance service to the user's location in a minimal period of time. In case of a COVID emergency, an ambulance with proper facilities is selected and patients are directed to hospitals where COVID patients are treated and quarantined. In normal cases, the best hospitals are identified based on the requirements of the victim. Finally, the system continues its service till the victim is safely taken to the hospital, so that emergency treatment is provided on time to save the life of the victim. The system is effectively created as a mobile application which can be easily installed and used by the common public. First, a set of different categories of ambulances in a particular locality and the users are registered in the app. When a user invokes the emergency service, an available ambulance in near proximity is identified based on the location details gathered; the system is able to select the appropriate ambulance for a normal or COVID emergency. Google Maps displayed in the driver's app shows the shortest route to the user's location and the hospital, while in the user's mobile the map helps to spot the location of the ambulance in transit. This document contains: Sect. 2 Related Work, which includes details of the major research articles that form the basis for our work; Sect. 3 Proposed System Design, which elaborates the methodology with its various components and constraints; Sect. 4 System Implementation Results, which illustrates implementation details of the system with the relevant technologies and application program interfaces and also depicts the input and output interfaces of the application; and Sect. 5 Conclusion and Future Work, which concludes our work with notes for future enhancements.

2 Related Work COVID 19 is a pandemic disease spread over whole world as a rapid fire and has knocked at the doors of all countries. This has led to increase in the number of rapid emergencies. GPS technology is widely used in tracking of vehicles handling those patients to hospital as elaborated by Sarkar [3] in his work. It illustrates hospital management with respect to ambulance service. It monitors movements of ambulance once it is assigned to a patient by hospital management. It mainly focuses to provide tracking facility. Arduino Uno a microprocessor is implemented in vehicles for tracking using GPS technology to get more accurate location of ambulance. It does not provide dynamic services of identifying needful service. Handling patients infected with COVID should be organized with utmost care, as they are main carriers of the deadly coronavirus. It is a proved strategy based on various statistical analyses done by many researchers in varied domains. Research work in paper [4] specifies vital measures to be followed in mode of travel, medical interventions, and hygienic procedure. It insists the use of air ambulance, providing crew members in the vehicle with proper training and PPE kit. It helps crew members

472

S. Subbulakshmi et al.

to render required services to the patients and to avoid being infected by the virus. But, this is not a practical solution, as all COVID patients could not be air lifted as it is not economically feasible. In order to find a feasible solution, we try to adopt this safety protocol in our application, where road transport is used for transporting COVID patients, which includes ambulance with PPE kit other required facilities instead of using ordinary ambulance. This is mainly designed to safeguard health workers who are in direct contact with the patients while transporting them to hospital or quarantine centers. Mobile ambulance system [5] is created to provide valuable service to critical needs. It renders two major services with buttons panic and normal. Panic button is used to handle critical needs, and normal button is used to avail ambulance in order to take the patients to hospital for normal checkup or anything similar to it. It allows registration of users and drivers and renders ambulance services to the needy. Panic button helps patient to get treatment in the apt hospital by shifting patient at right time. Normal case they book for an ambulance service in a required time slot to take the patients to a specific hospital. The system fails to authenticate users who registers in the application. Authentication is required in order to provide a better service. It is essential for both users and drivers who renders service, in order to avoid fraudulent activities. Since, it is mobile app which is easily available, there are ample chances for stakeholders to simply register as ambulance service providers or users. When such people invokes services, it leads to wastage of resources as ambulance is directed to that location. If they have registered as driver, there are chances for that ambulance to be selected as service provider to handle emergency call. Then, app fails to provide trustful service as he is a fake user; sometimes, it may even lead to loss of life. To overcome this, we include authentication facilities with mobile OTP verification process. Users are registered only if it is successful. It will restrict fake users as they are aware that their registration is mapped with valid mobile number. Application of firebase in android app development [6] is a web-based application used to create system with best quality. It enables use of a real-time database with ample query facilities instead of using a traditional ordinary database. RDBMS cannot handle unstructured data, and this is possible in Firebase by using a tree structure concept of JavaScript Object Notation. Information is stored here in the form of tree structures. Our mobile app ought to use real-time cloud system for storing actual details of users, drivers, and transactions. It is used to store verification id, and an essential part of this app is used to track ambulance availability. It also helps to implement database and authentication using mobile OTP [7]. With reference to literature survey done, an efficient COVID emergency handler is created to handle patients infected with virus intelligently by invoking needful services based on severity level which is gathered dynamically with minimal data provided by users.

COVID Emergency Handlers to Invoke Needful …

473

3 Proposed System Design COVID emergency handler is a mobile app created using Android Studio application developed with the intent to minimize work to be done by user in order to get emergency services. It uses Firebase platform for storing and retrieving essential details pertaining to user and emergency situation. It also activates required authentication service while registering details of user in cloud storage [8]. GPS and Google Maps are useful to track both ambulance driver and user’s locations. It helps the driver to select the best suited hospital. It also helps the driver to take the shortest path to reach the required destination. As shown in Fig. 1, there are three main modules. Once mobile application is installed, the user has to complete the registration process. At the time of installation, user has to allow location access facility of the mobile to this mobile app. It is considered to be mandatory, as location of user and drivers should be identified to handle emergencies. Second service invocation module is activated with user’s request in the form of button click of either COVID or normal emergency. Finally, render emergency service module provides required service to user based on their requirements.

3.1 Registration

3.1.1 User Registration

User details are collected in the registration module. On installation of the application, the user has to register with essential details such as blood group, emergency address, and age, along with name, username and password, and then click the register button provided. The phone number is verified immediately as part of the registration process using the Firebase authentication service [9].

Fig. 1 System architecture COVID emergency handler


It is implemented using PhoneAuthProvider and its corresponding functions for sending and verifying the data. On proper validation, user details are stored in cloud storage for further reference.

3.1.2 Ambulance Driver Registration

Drivers who provide ambulance services are also supposed to register in this mobile application. Either they can register on their own, or their registration can be completed on their behalf by the company providing the ambulance service. The main purpose of this module is to map drivers' mobile numbers to their respective ambulances, so that they can be contacted in case of emergency needs. It collects driver details, which include name, mobile number, the registration number of the ambulance to be used, and the type of ambulance with reference to the facilities available: the equipment attached, the number of people available to provide assistance, the category of assistance available, and whether it is equipped with the needed facilities to handle COVID patients. Once all information is provided, this module verifies the mobile number using the authentication service, as in user registration. After this, the driver's license details are collected, which can be used for checking credibility, as the license number can be used to retrieve essential data such as nationality, address, and driving history. The module also allows updates to the status of vehicles used by drivers when the facilities provided are upgraded or modified, and it stores the ambulance and driver details in cloud storage.
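The paper stores these details as a JSON tree in Firebase but does not give the exact schema. A minimal sketch of what such a driver record might look like, with every field name and value an illustrative assumption, is shown below:

```python
# Illustrative driver record as it might be stored in the Firebase JSON tree.
# All field names and sample values are assumptions; the paper does not
# specify the exact schema.
driver_record = {
    "name": "A. Driver",                    # hypothetical sample value
    "mobile": "+91-9XXXXXXXXX",             # verified via OTP at registration
    "license_no": "KA-XX-XXXXXXX",          # used for the credibility check
    "ambulance": {
        "registration_no": "KA-01-AB-1234",
        "type": "COVID Patient Transport",  # one of the five types listed in Sect. 4
        "covid_ready": True,                # PPE kit and related facilities present
        "assistants": 2,                    # people available to provide assistance
    },
    "status": "available",                  # flips to "onservice" when booked
}
```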

3.1.3 Authentication

Authentication and validation of the mobile numbers of users and drivers are performed at the time of registration, through an OTP sent to the respective individuals. It is implemented to avoid fraudulent users. The OTP is automatically checked and verified, confirming a user's validity through his or her mobile phone; it can be used more effectively and easily than other procedures, as it confirms both the trustworthiness and the validity of users and drivers. If validation is successful, the registration process is complete and the data is stored in JSON format. The Firebase platform provided by Google helps save data in JSON file format [10] as a tree, which reduces the complex steps of writing code and statements for inserting information into cloud storage.

3.2 Service Invocation

When an emergency situation occurs, patients should be given critical care treatment within a very short duration; otherwise, it may lead to unbearable circumstances. Generally,


tracking the nearest ambulance and shifting the patient to an appropriate hospital is a tedious task. This mobile application makes the job easier: since the app is already installed with basic details, the whole process is automated with just one click. The application is invoked by clicking the normal or COVID emergency button. Dynamic automation [11] of the process for handling the relevant emergency situation includes the following operations.

3.2.1 Location Tracking

For location tracking, GPS technology is used to identify both the user's and the ambulances' locations. This module is invoked whenever the emergency button is clicked, irrespective of the type of emergency. The LocationManager and LocationListener objects of GPS technology [12] are used, respectively, to check whether GPS is enabled on the mobile and to get the geographical coordinates of the user's or driver's mobile. Locating the user's exact area is essential so that it can be used in emergencies and to direct drivers to the user's location. Drivers' locations are also recognized in order to identify nearby available drivers who can be directed to the user's location.

3.2.2 Gathering Contextual Data

This important module is invoked to dynamically collect the relevant details kept in cloud storage. Data collected from users and drivers are stored in the cloud with the help of the Firebase framework, a software development tool for storing data pertaining to any mobile application; it provides a set of APIs to manipulate the cloud data stored in JSON format from any mobile application. User data such as mobile numbers, health conditions, relatives' details, and blood group stored at registration time can be retrieved automatically and used in the appropriate emergency service modules [13]. The collected data of the user, such as location, health issues, and mobile number, are provided to the ambulance driver, so that the driver knows about the patient and can be directed to the best hospital for the relevant critical care treatment.

3.2.3 Identify Ambulance with Needed Service

The main task of identifying the best ambulance is performed in this module. After identifying the user's location, the task of finding the nearest ambulance is invoked: with the help of GPS and GSM, the locations of all ambulances available in the proximity of the user are assessed. First, the user's location details are passed to the server using GSM technology [14]. Using the user's location, ambulance services available in that area are assessed with the relevant GPS API calls. Next, based on the type of emergency, the available ambulance services are filtered.


In the case of a COVID emergency, only ambulances that include the essential features to handle COVID patients (the necessary kits and facilities, and staff providing the necessary services) are selected from the available set. In the case of a normal emergency, no filtering happens and all available services are considered for further processing. Sometimes the user specifies a type of ambulance, which is also honored by filtering on that ambulance type. Then, the distance between each available ambulance and the user's location is calculated, and the ambulance with the shortest path is selected. Confirmation of that ambulance service is made when the driver sends the user a message with the vehicle number and mobile number. Thus, the main task of identifying an ambulance is accomplished.
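The filtering and nearest-ambulance selection described above can be summarized in a short sketch. The following Python fragment is a hedged illustration, not the authors' implementation: it assumes an in-memory list of ambulance records, uses a standard haversine helper for distance, and borrows the 5 km search-radius expansion described later in Sect. 4:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def select_ambulance(user_loc, ambulances, required_type, step_km=5, max_km=50):
    """Filter by type and availability, then widen the radius in 5 km steps."""
    candidates = [a for a in ambulances
                  if a["type"] == required_type and a["status"] == "available"]
    radius = step_km
    while radius <= max_km:          # max_km is an assumed cut-off
        in_range = [a for a in candidates
                    if haversine_km(*user_loc, a["lat"], a["lon"]) <= radius]
        if in_range:
            # nearest candidate inside the current search ring
            return min(in_range,
                       key=lambda a: haversine_km(*user_loc, a["lat"], a["lon"]))
        radius += step_km
    return None
```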

3.3 Providing Emergency Services

After location tracking and ambulance selection, the task of providing this valuable service is invoked. Some tasks in this service are general, while others are addressed based on the type of emergency.

3.3.1 Direction to User Location

As already mentioned, the user's location is fetched using the GPS enabled on the user's phone, and directions to the user's location are provided to the ambulance driver using the Google Maps API. The whole process is straightforward, as the relevant functions are invoked through Google Maps, which provides an optimal service: it strives to show the best path, reaching the destination in minimum time using optimization algorithms. Our Android mobile app realizes these functionalities by calling the appropriate APIs, directing the ambulance to the user's location. The app also lets the user visualize the location of the ambulance, so that its status can be tracked. This functionality is common regardless of the type of emergency.

3.3.2 Identifying the Hospital/Quarantine Center

Once the ambulance reaches the user's location, the task of identifying the nearest hospital or quarantine center starts. The selection of the destination is based on the type of emergency: for a normal emergency the nearest hospital is selected, while for a COVID emergency the nearest hospital that treats COVID patients, or simply a quarantine center, is selected based on severity [15]. Based on the user details collected in the contextual information gathering module,


appropriate hospitals with the required facilities are identified. This information is passed on to the next module.

3.3.3 Directing to Nearest Hospital/Center in the Shortest Path

Once the appropriate hospital is identified, the ambulance is directed to the destination hospital with the help of the Google Maps API, which indicates the optimal path to the driver. Drivers are shown the direction to the hospital and the best route, considering factors such as timing and the severity of traffic. Finally, the ambulance is able to drop the patient at the emergency critical care unit within a minimal period, paving the way to avoid any untoward situations. The COVID emergency handler has some constraints. Confidentiality of the user, driver, and transaction data stored in the cloud is not considered; the app checks only the authenticity, validity, and reliability of users and drivers. It also ignores how critical the victim's condition is while selecting an available ambulance service, and when emergency calls are made by more than one person at the same time and location, it adopts only a first-come-first-served methodology in the selection process.

4 System Implementation Results

The whole system starts with installation of the COVID emergency handler app and registration; in emergencies, the service can also be used without registration. The purpose of registration is to enable the app to dynamically retrieve more specific contextual data about the user in order to handle the situation more intelligently [16]. Registration of both users and drivers is done by acquiring details through the registration forms shown in Fig. 2a and b. Name, age, username, and password are provided by both users and drivers, and the mobile number of each is used for authentication via a verification code. Some information is role-specific: blood group and health data relate to users, while vehicle number and ambulance type relate to drivers. The acquired details are sent to the Firebase database, which uses a JSON tree structure. The ambulance type plays a main role in handling COVID and normal emergencies; the types are patient transport facility, basic life support, advanced life support, COVID patient transport facility, and advanced life support for COVID patients. Ambulances used for COVID patients follow all protocols for handling those patients in the best possible manner, for both the patients and the health workers handling them. One more field set during driver registration is the status field, which indicates the availability of the ambulance; it is initially set to available when the ambulance is registered to provide service. Verification of users and drivers is done by enabling the phone authentication settings in Firebase. This enables creation of an instance of PhoneAuthProvider, which can be


Fig. 2 a User registration, b driver registration, c forwarding OTP

used to invoke the relevant APIs during the authentication process. Using PhoneAuthProvider.verifyPhoneNumber(), a request is passed to Firebase for verification of the provided mobile number. While processing the request, Firebase sends an OTP code as an SMS message to that mobile number. The PhoneAuthCredential() function is then invoked, which tries to capture the OTP received on that mobile. On capturing the OTP, the onCodeSent() function is automatically invoked, as shown in Fig. 2c, and a verification-successful response is delivered by the onVerificationCompleted() function. If PhoneAuthCredential() is unable to capture the OTP on that mobile, an invalid-response message is sent by invoking the onVerificationFailed() method as a Firebase exception. This app includes two buttons, COVID emergency and emergency, shown in Fig. 3a. Emergencies arise when incidents like accidents or critical illness happen; moreover, the current pandemic has led to a huge number of emergencies. When people with multiple comorbidities are infected with coronavirus, they might be affected severely due to lower immunity, leading to acute illness and complications that ultimately create critical situations. To tackle these situations, an ambulance should be called immediately to take the patient to the nearest hospital, which can be done by just clicking the emergency button of this app. If COVID emergency is clicked, the app asks whether the request is for transporting the patient or for handling emergency needs; the ambulance filter condition is set to COVID Patient Transport facility for transporting COVID patients and to Advanced Life Support for COVID patients for emergency needs. In the case of a normal emergency, the app asks the user to select the required type of ambulance and click the submit button; by default, it is set to Basic Life Support, and if the user makes a selection, the filter condition is set accordingly. Only the setup of the ambulance filter condition differs; all other


Fig. 3 a Service interface, b identify user location, c identify ambu. location

operations are common for both types of emergencies and are explained in detail as follows. First, location details are gathered by the getLatitude() and getLongitude() functions of the Location object; the Location object instance is created with the getLastKnownLocation() method of the LocationManager object, with GPS_PROVIDER as the provider name. This is shown in Fig. 3b, where the pickup location refers to the user's location. The user's location details are then passed to the contextual data gathering module, which accesses the Firebase data and retrieves the other contextual data about the user. It queries cloud storage to find available ambulances of a particular type, with the filter condition already set in the initial stage. The resultant ambulance set is filtered further to locate ambulances in proximity to the user, for which both the user's and the ambulance drivers' location details are needed [17]. The user's location is realized when the service is invoked. To get the locations of the drivers mapped to their respective ambulances, the following procedure is implemented. The Firebase database keeps track of ambulance drivers' locations by updating them frequently, as shown in Fig. 3c; Firebase stores each driver's current location with the driver id in JSON tree format, and a DatabaseReference is used to write the location details. GeoFire, an open-source library for Android that allows storing and querying a set of keys based on their geographic location, is used to set the location of drivers in the Reference by passing the latitude and longitude retrieved from the driver's mobile. Figure 3c shows the driver's current location, which is passed to the real-time database; the database is thereby kept updated with drivers' location details.


With the above details, an ambulance in proximity to the user is identified, starting with a distance of 5 km and progressing in multiples of 5 until an apt ambulance is identified. Only the set of ambulances already filtered by ambulance type [18], and with status available, is considered. When an ambulance is identified to render service, its status is set to onservice, indicating that it is already booked; that ambulance driver is then instructed to proceed to the user location provided by Firebase, which is marked as the user pickup location. If more than one user makes an emergency call at the same time in the same locality, a first-come-first-served method is adopted in rendering the ambulance service.

The current location of the driver is tracked, updated, and displayed with the relevant APIs of the Location object, and a RoutingBuilder object is used to display the shortest route from the driver to the user location, as shown in Fig. 4a. The driver is able to view it as a map, implemented with a series of Google Maps APIs. First, onMapReady() is used to check whether Google Maps is ready. GoogleApiClient.Builder is the builder class used to create the GoogleApiClient object, and buildGoogleApiClient() is used to initialize Google Play services in order to access Google Maps services; the addApi() function keeps track of the APIs requested by the mobile app. The connection between the app and the Google Maps server is monitored with an instance of the GoogleApiClient.ConnectionCallbacks class, whose onConnected() and onConnectionSuspended() functions report the connection status. GoogleApiClient.OnConnectionFailedListener provides callback functions to track connection failures between client and server; when such a failure is identified, onConnectionFailed() is called to take the necessary actions.

Fig. 4 a Route to user location, b user tracking ambulance, c Route to hospital/center


With a proper connection, the required map is displayed on the app screen, and addMarker() is used to display specific text over a location on the map; in our app, it marks the current locations of the driver and the user. moveCamera(CameraUpdate) repositions the camera according to the parameters passed for the camera update. Thus, Google Maps is used to display various map facilities in this mobile app. The user is also able to track the movement of the ambulance to their location, shown in Fig. 4b, which is implemented in the same manner discussed above. Once the ambulance reaches the user location, the hospital selection module is invoked; contextual data related to the user's health condition play the major role here. If the user is a COVID patient, hospitals capable of handling COVID cases are selected; otherwise, a hospital is selected based on the requirements. For COVID patients who want only a transport facility, the ambulance driver is provided with the details of the nearest quarantine center. The module thus ascertains a hospital or center in proximity to the user's location, and finally the driver is directed to the destination along the shortest route, shown in Fig. 4c on a Google Map; the implementation of directions from the user location to the hospital follows the same procedure described earlier. This app thereby handles emergencies and relieves the user of the great tension of finding an appropriate vehicle and hospital and reaching the destination on time, so that timely treatment can save patients from major complications. Since it is a mobile app, it can be distributed through the Play Store without much setup, paving the way for easy access to this needful service by end users.

5 Conclusion and Future Enhancements

COVID emergency handlers are able to provide valuable services to the needy who face unexpected medical issues. The current pandemic has created a challenging situation in which many people with a history of different ailments are infected by coronavirus, leading to an increase in emergency cases where patients have to be taken to appropriate hospitals. This mobile app can be invoked to provide valuable service after an initial registration process. It collects basic details of users and drivers, stores them in cloud storage as JSON files, and manipulates them using the Firebase framework. Emergencies are handled with ease by a dynamic contextual information gathering module, which gathers the most relevant data about the user and invokes modules to identify appropriate ambulance services in proximity with sufficient facilities using GPS and GSM technologies [18]. The rendering service module directs the ambulance to the user location and then from the user location to the hospital, guiding drivers to the destination through an optimal path using Google Maps APIs. The app handles normal emergencies as well. In the future, confidentiality of cloud storage, the criticality level of emergencies, and semantics-based hospital selection could be included in the system to provide a more efficient service.


References

1. Sakriya MZBM, Samual J (2016) Ambulance response emergency application. Int J Inf Syst Eng 4(1)
2. Ahir D, Bharade S, Botre P, Nagane S, Shah M (2018) Intelligent traffic control system for smart ambulance. Int Res J Eng Technol (IRJET) 05(06)
3. Sarkar S (2016) Ambulance assistance for emergency services using GPS navigation. Int Res J Eng Technol (IRJET) 03(09)
4. Hilbert-Carius P, Braun J, Abu-Zidan F et al (2020) Pre-hospital care and interfacility transport of 385 COVID-19 emergency patients: an air ambulance perspective. Scand J Trauma Resusc Emerg Med 28:94
5. Devigayathri P, Amritha Varshini R, Pooja M, Subbulakshmi S (2020) Mobile ambulance management application for critical needs. In: 2020 Fourth international conference on computing methodologies and communication (ICCMC), Erode, India, pp 319–323. https://doi.org/10.1109/ICCMC48092.2020.ICCMC-00060
6. Khawas C, Shah P (2016) Application of firebase in android app development: a study. Int J Comput Appl (0975–8887) 179(46)
7. Jisha RC, Jyothindranath A, Kumary LS (2017) IoT based school bus tracking and arrival time prediction. In: 2017 International conference on advances in computing, communications and informatics (ICACCI)
8. Hameed SA, Nirabi A, Habaebi MH, Haddad A (2019) Application of mobile cloud computing in emergency health care. Bull Electr Eng Inform 8(3)
9. Karale B, Wasnik N, Singh M, Jawase R, Bondade A, Chopade A (2018) Survey paper for intelligent traffic control system for ambulance. Int J Trend Res Dev (IJTRD) 5(1)
10. Khan A, Bibi F, Dilshad M, Ahmed S, Ullah Z, Ali H (2018) Accident detection and smart rescue system using android smartphone with real-time location tracking. Int J Adv Comput Sci Appl (IJACSA) 9(6)
11. Binu PK, Viswaraj VS (2016) Android based application for efficient carpooling with user tracking facility. In: Second international symposium on emerging topics in computing and communications
12. Ramasami S, Gowri Shankar E, Moulishankar R, Sriramprasad D, Sudharsan Narayanan P (2018) Advanced ambulance emergency services using GPS navigation. Int J Eng Res Technol (IJERT), ISSN 2278-0181, ETEDM-2018 conference proceedings
13. Isong B, Dladlu N, Magogodi T (2016) Mobile-based medical emergency ambulance scheduling system. Published online November 2016 in MECS
14. Aziz K, Tarapiah S, Ismail SH, Atalla S (2016) Smart real-time healthcare monitoring and tracking system using GSM/GPS technologies. In: 2016 3rd MEC international conference on big data and smart city
15. Shashikant KS, Bajrang GA, Baban LN, Suresh JM (2015) Android based mobile smart tracking system. Int J Latest Trends Eng Technol (IJLTET) 5(1)
16. Fathima S, Suzaifa, Guroob AH, Basthikodi M (2019) An efficient application model of smart ambulance support (108) services. Int J Innov Technol Explor Eng (IJITEE) 8(6S4), ISSN 2278-3075
17. Mounika M, Selvi C, Rajamani K, Malathi J (2018) Emergency tracking and localisation using android mobile phones. Int J Pure Appl Math 119(10):43–52
18. Ghule S, Shinde G, Chile S, Kamble S, Palwe R (2019) Ambulance service app. IJARIIE 5(2), ISSN(O) 2395-4396

Deep Neural Models for Key-Phrase Indexing

Saurabh Sharma, Vishal Gupta, and Mamta Juneja

Abstract The association of key-phrases allows a more efficient search for an article of interest, since key-phrases indicate an article's main idea and make it easy to align the researcher's interest with that article. These key-phrases are often reported by the authors themselves, yet most documents available on the Web still have no key-phrases assigned to them, and the manual association of key-phrases is not feasible considering the large amount of data currently available. To bring intelligence to key-phrase extraction, this work proposes the use of deep learning models, particularly a concatenated model approach. The work is devoted to studying and developing methods for key-phrase extraction based on word embeddings (GloVe) and Recurrent Neural Networks (RNN). Three different RNN-based models are proposed: simple Bi-LSTM model-I, concatenated Bi-LSTM model-II, and concatenated Bi-LSTM model-III. From various aspects, the proposed models show consistent performance over existing approaches on three publicly available and widely accepted datasets, viz. Inspec, SemEval2010 and DUC.

Keywords Key-phrase extraction · Indexing · Deep learning · Bi-LSTM · Word embedding

1 Introduction

Nowadays, information is the currency for all areas of knowledge, and consequently the volume of data generated daily is colossal [1]. Many claim that we are in the so-called "Information Age", where this mass of data also needs to be stored, classified,


selected, and transformed into new information. In particular, the amount of information stored in databases increases every minute, exceeding the human capacity to interpret it, which becomes a problem for IT specialists. An example occurs in the research process when one wants to locate documents in a digital library: such a process is not simple, as there are usually many documents available, each with thousands of words, not all of which are relevant to the search carried out. Besides, when several documents are selected, another difficulty presents itself: how to order them coherently? As the volume of online documents increases rapidly with the development of Internet media, there is an increasing need for techniques to obtain useful information.

Key-phrases [2] are the most concise form of expression, and appropriate key-phrases imply what exactly is in the document. Key-phrases can be used as a leading property in the text mining field, since they support text summarization, information retrieval, and Natural Language Processing (NLP) related tasks [3]. Most textual content is not available with key-phrases, and key-phrases designated by third parties are less reliable, so automatic key-phrase extraction from a given document is an active research area [2]. Most key-phrase extraction systems select key-phrases through a two-step process [4]: they first select candidate key-phrases from a given document, and then evaluate where and how often the candidates appear; based on the importance of the candidates, the key-phrases are extracted. There is therefore a demand for approaches that can automatically discover the key-phrases that best describe or represent a particular document's subject and associate those phrases with it.

Deep learning has emerged as a powerful technique for Automatic Key-phrase Extraction (AKE); [5-8] are a few popular deep learning-based AKE schemes, and most of them rely on a combination of Bi-LSTM and CRF [5, 9, 10]. Key-phrase extraction schemes are generally hindered by ambiguous context information in the text: for short or ambiguous text, a system may be unable to discriminate among different meanings of a word, or may suggest wrong key-phrases due to lack of information. The research community hypothesizes that additional information from the given text can help alleviate these limitations. One of the most prominent methodologies for key-phrase extraction is based on word representation learning; the experimental evidence accumulated over the last 20 years indicates that key-phrase extraction systems based on word (sentence, document) embeddings produce results superior to those obtained with other, more elaborate text representations. Increasing the efficiency of key-phrase extraction is pressing in the modern era of NLP, since many documents are available on the Internet without meta-data. Many problems remain unresolved on the way to creating key-phrase extraction systems, yet the development of such systems is highly relevant, as they bring us closer to solving the problem of selecting relevant information. We emphasize that, to increase the efficiency of a key-phrase extraction system, suitable features and extra knowledge must be applied together in an integrated and intelligent manner. We propose to use deep learning models in this work to bring intelligence to


key-phrase extraction. This work is devoted to studying and developing methods for key-phrase extraction based on Recurrent Neural Networks (RNN). The developed approach is innovative and original for the following reasons:

1. We propose systematic frameworks to demonstrate the improvement obtained by using word embeddings to train Bi-LSTM models for key-phrase extraction.
2. We propose three RNN-based models: simple Bi-LSTM model-I, concatenated Bi-LSTM model-II, and concatenated Bi-LSTM model-III.
3. From various aspects, the proposed models show consistent performance over existing approaches on three publicly available and widely accepted datasets, viz. Inspec, SemEval2010 and DUC.

The rest of the paper is organized as follows. Section 2 discusses state-of-the-art key-phrase extraction techniques. Section 3 describes the background of the proposed models. The experimental results in Sect. 4 present the effectiveness of the proposed models. The final section presents the conclusion and future work.

2 Related Research

This section discusses the existing research on, and the limitations of, key-phrase extraction systems. In most key-phrase extraction studies, the final list of key-phrases is largely selected through candidate key-phrase features [2, 3, 11, 12] such as position, POS tagging, neighborhood information, overlap, phrase connectedness, and frequency of occurrence in a document or in the whole dataset. Many key-phrase extraction systems are based on data mining and machine learning approaches, executed primarily in a supervised or unsupervised manner [1, 2]. Some researchers consider key-phrase extraction a binary classification task (a candidate either is or is not a key-phrase) and learn classification models from training data; these supervised methods require training data to be labeled manually, which is a lengthy process. Supervised learning methods [13, 14] process the candidate key-phrases into attribute form, adopting methods like Naive Bayes, support vector machines, conditional random fields, and other binary classifiers. In unsupervised methods [15, 16], key-phrase extraction is treated as a ranking problem, in which graph-based techniques have recently been used to build a word graph from each document.

Litvak et al. proposed the DegExt approach [17], which considers a graph-based representation of text: a connectedness property ranks the nodes against each other, and the top-ranked nodes are extracted as key-phrases. The graph is scanned iteratively and all the selected key-phrases are marked; after that, sequences of adjacent key-phrases sharing the same edge labels are combined into key-phrases. Wan et al. proposed a novel key-phrase extraction approach called ExpandRank [18], based on expanding the given document with a small set of neighbor documents. This approach uses a graph-based ranking algorithm to incorporate word relationships in the given document and its neighbor documents; the candidate phrases in the


document are evaluated based on the scores of the words present in the phrases. Another graph-theoretic idea, HG-Rank [19], proposed by Bellaachia et al., models the social and temporal aspects of a document: the temporal and social attributes of documents are modeled as hyper-edge weights, and discriminative term weights are modeled as vertex weights. To measure the temporal and social effect, both kinds of features are tied together in one ranking function, and the final document rank is embedded in the hyper-graph. To rank vertices in a hyper-graph, a random walk ranking approach is used, defined as transitioning between vertices after each discrete time stamp. Duari et al. proposed a novel parameter-less method named sCAKE [20] to capture the contextual relationships among words. It comprises three steps: (1) candidate key-phrases are identified using different text preprocessing methods; (2) a novel graph construction method, termed Context-Aware Text Graphs, connects words that have a close relationship with each other; (3) a word score for these candidates is computed using a novel scoring approach, SC-Score, which exploits the semantic relationship between words without using any linguistic tools.

Recently, deep learning-based attempts have been presented to address key-phrase extraction. In [21], Santosh et al. presented a Document-level Attention mechanism for Key-phrase Extraction (DAKE). The proposed scheme is modeled as the fusion of the local context and the hidden semantic context of a document: Bi-LSTM and CRF are used, respectively, to capture a document's hidden semantics and to predict the label sequence for a sentence, while a gating process verifies the impact of supplementary contextual information when fused with local contextual information. Dhruva Sahrawat et al. [9] consider key-phrase extraction a sequence labeling problem and employ a BiLSTM-CRF architecture to solve it: after tokenization, each token is labeled with one of three class labels, and the Viterbi algorithm is employed by the CRF to locate optimal label sequences. A similar approach is developed in [5]. Yansen Wang et al. proposed a web key-phrase extraction scheme named SMART-KPE [6], employing the recently released multi-modal OpenKP dataset [22]. This dataset includes novel features in the form of the visual properties of words, web page layout, and type; the visual features cover the position, color, font size, and font type of words. Here, the key-phrase extraction task is divided into two steps, feature extraction and important key-phrase selection, and the scheme adopts the idea of BERT2Tag [23] for sequence labeling. The focus of [24] is extracting absent key-phrases: with the help of an encoder-decoder model, the authors developed a generative framework for efficient prediction, incorporating a copying mechanism into the RNN to predict unseen phrases. The experimental outcomes proved the model's effectiveness, as it predicts up to 20% of the unseen key-phrases. DeepUnseen [10], the method proposed by Zahedi et al., also emphasizes automatic extraction of key-phrases absent from the document. The whole mechanism is summarized in two steps. Firstly, a deep clustering network (DCN) based on the k-means algorithm is employed to partition the whole document into clusters. Secondly, a key-phrase estimation scheme is


proposed by training an RNN encoder-decoder model; to implement the encoding-decoding scheme, a GRU is used separately in the encoder and the decoder. The proposed scheme is fast and provides a good set of key-phrases compared to the state-of-the-art. Zhu et al. proposed an improved Bi-LSTM and CRF based approach [7]. In this scheme, the authors take advantage of three different embeddings: word embeddings (GloVe, 300-dim), POS embeddings (25-dim), and dependency embeddings (25-dim). To get the relevant key-phrases, these three embeddings (350-dim in total) are input to the Bi-LSTM architecture, and finally a CRF mechanism is adopted to label the sentence with five class labels.

3 Methodology

In this section, we describe the details of the proposed method. Our methodology is motivated by the achievements of RNN models in NLP. In this work, we suggest different Bi-LSTM based key-phrase extraction approaches for variable-length sentences.

3.1 Preliminaries

3.1.1 Convolutional Neural Network

Convolutional neural networks (CNN) have changed the way problems are solved relative to traditional machine learning techniques by filling in their gaps: these networks eliminate the need for manual feature extraction from raw text. CNN, a specialized deep learning architecture, is incredibly successful in image processing [25], and owing to its fast feature extraction and fast execution time there is a growing trend of using CNNs for NLP tasks; many studies in the literature and in practice have been carried out using this deep learning technique. A CNN consists of many trainable layers, including an input layer, convolutional layers, pooling layers, a softmax layer, a fully connected layer, and an output layer; the number of layers may vary depending on the required structure. Different hyper-parameters are required while training a CNN model, a few of which are kernel sizes, filters, dropout rate, strides, sequence length, optimizer, and epochs.

3.1.2 LSTM and Bi-LSTM

Artificial neural network models, including feed-forward network and multilayer perceptron network, have shown great success in nearly all possible domains. These models are based on context independence as they treat each input sample as independent of the other samples. However, the assumption is not beneficial for machine


translation and other natural language processing tasks, where the previous context plays a significant role in the present task estimation. Recurrent Neural Networks (RNN) [26] solved this problem by maintaining contextual dependencies through loopbacks, and have been successfully applied to several natural language processing tasks, including sentiment analysis, text classification, and machine translation. However, these models are not capable of handling the long-term contextual information of the data, due to the vanishing and exploding gradient problems. The Long Short-Term Memory (LSTM) network model [27], a variant of the RNN model, serves the desired purpose by solving the problem of handling long-term contextual dependencies. The model handles long-term contextual information by replacing the conventional perceptron architecture with memory cells and gates. The cell mechanism involves three gates, namely the forget gate, the input gate, and the output gate, whose working is summarized as follows. For a given timestamp $t$, hidden state $h_t$, and input $x_t$, the decision to throw irrelevant information out of the current LSTM cell state $C_t$ is made by the forget gate $f_t$. Mathematically, it is given as:

$$f_t = \sigma\left(W_f \cdot \left[h_{t-1}, x_t\right] + b_f\right) \tag{1}$$

The input gate decides which information is to be stored in the current cell state $C_t$ and is given as:

$$i_t = \sigma\left(W_i \cdot \left[h_{t-1}, x_t\right] + b_i\right) \tag{2}$$

$$\tilde{C}_t = \tanh\left(W_C \cdot \left[h_{t-1}, x_t\right] + b_C\right) \tag{3}$$

The final step, updating the previous cell state with the new cell state (based on the forget gate and input gate information) and generating the required output, is given as:

$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t \tag{4}$$

$$o_t = \sigma\left(W_o \cdot \left[h_{t-1}, x_t\right] + b_o\right) \tag{5}$$

$$h_t = o_t * \tanh(C_t) \tag{6}$$

Here $W_i$, $W_f$, $W_o$ and $b_i$, $b_f$, $b_o$ represent the weights and bias values for the input gate, forget gate, and output gate, respectively.

Bidirectional LSTMs are an extended version of conventional LSTMs that can improve the performance of the model by also handling future long-term contextual dependencies. Equations (1) to (6) are applied in both the forward and backward directions. Let $\{x_1, x_2, x_3, \ldots, x_n\}$ be the word embedding sequence for a set of tokens. One hidden state vector $\overrightarrow{h}_t$ produces the representation for word $x_t$ by considering the sequence $\{x_1, \ldots, x_{t-1}\}$, and another hidden state vector $\overleftarrow{h}_t$ produces the representation for word $x_t$ by considering the sequence $\{x_{t+1}, \ldots, x_n\}$. Finally, the Bi-LSTM combines these two hidden vectors to get the vector representation of $w_t$:

$$h_t = \left[\overrightarrow{h}_t ; \overleftarrow{h}_t\right] \tag{7}$$

The Bi-LSTM is implemented as below:

$$\overrightarrow{h}_t = H\left(W_{\overrightarrow{h}} \cdot \left[h_{t-1}, x_t\right] + b_{\overrightarrow{h}}\right) \tag{8}$$

$$\overleftarrow{h}_t = H\left(W_{\overleftarrow{h}} \cdot \left[h_{t-1}, x_t\right] + b_{\overleftarrow{h}}\right) \tag{9}$$

$$y_t = W_{\overrightarrow{h}}\,\overrightarrow{h}_t + W_{\overleftarrow{h}}\,\overleftarrow{h}_t + b_y \tag{10}$$
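To make Eqs. (1)-(6) concrete, the following NumPy sketch performs a single LSTM cell step exactly as written above. The parameter layout (a dict of weight matrices and bias vectors) is an assumption made for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step following Eqs. (1)-(6).

    W and b hold the parameters of the forget (f), input (i),
    candidate (c) and output (o) transformations; each W[k] has
    shape (hidden, hidden + input) and each b[k] shape (hidden,).
    """
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])      # Eq. (1), forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])      # Eq. (2), input gate
    c_hat = np.tanh(W["c"] @ z + b["c"])    # Eq. (3), candidate cell state
    c_t = f_t * c_prev + i_t * c_hat        # Eq. (4), cell state update
    o_t = sigmoid(W["o"] @ z + b["o"])      # Eq. (5), output gate
    h_t = o_t * np.tanh(c_t)                # Eq. (6), new hidden state
    return h_t, c_t
```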

h

3.1.3

Word Embedding

A textual document’s representation into a low-dimensional, real-valued dense vector is achieved by word embedding [28]. Concerning each word, we get an individual fixed-size vector. The embedding consists of floating-point values, and similar or related words have close embedding and vice versa. Word2Vec [28], an initial algorithm for word embedding is proposed by Mikolov et al. It works based on the context of words, i.e. similar word meanings imply a similar context. Mikolov suggested two loss functions Continuous Bag of Word (CBoW) and skip-gram. CBoW, learn embedding value from the given context to predict the target word and skip-gram learn embedding from a current word to predict each context independently. Another word embedding algorithm Global Vector for word representation (Glove) [29] is proposed by Pennington. It is an unsupervised learning algorithm used to generate word embedding by creating a word to word coherence matrix and understanding the context it uses matrix factorization method.

3.2 Proposed Models The pipeline of the proposed systems is shown in Fig. 1. The detail about each task is presented in the different sub-sections below.

490

S. Sharma et al.

Fig. 1 The pipeline of the proposed systems

3.2.1

Token Generation and Preprocessing

A two-step mechanism is followed to generate the tokens. Firstly, the text document is split into sentences. Finally, the sentences are tokenized with the help of Punkt NLTK toolkit [30].

3.2.2

Word Embedding Generation

After getting a large set of tokens, the next step is to produce word embeddings. Each token in the sequence is represented as the 300-dim Glove vector. For this vector representation here, we adopt Stanford’s Glove Embeddings. The embedding provided by Stanford is trained on 6 billion words extracted from Gigaword 5 and Wikipedia 2014.

3.2.3

Model Architecture Structures

In this section, we describe the details of the proposed architectures. The three different models employed in this article are based on Bi-LSTM and CNN architectures. Similar kinds of architectures are proposed in the literature for various

Deep Neural Models for Key-Phrase Indexing

491

Fig. 2 The Proposed Simple Bi-LSTM Model–I

NLP tasks, and here we proposed a combined architecture approach for key-phrase extraction. (1)

Simple Bi-LSTM Model–I: Bi-Long Short Term Memory (Bi-LSTM) network model, a variant of the LSTM model, serves the desired purpose of long-term contextual dependencies handling problem by considering backward and forward LSTMs. Our Simple Bi-LSTM model is presented in Fig. 2, and figure depicts the different functions.

The tokenized data is directly fed into the embedding layer to generate respective embeddings. The embedding layer maps the input sequence of size N to a matrix of size N x D. Here D is the dimensions of the embedding. Next, the Bi-LSTM layer is feed with this newly constructed matrix. The output of LSTM layer is forwarded to dense layer and a dropout of 0.25 is applied before and after the dense layer. Finally, the outcome of the dense layer is processed through the softmax layer. The softmax layer has three units as we have 3 different possible outcomes at the output layer (0, 1, 2). (2)

(3) (4)

(5)

(2) Concatenated Bi-LSTM Model-II: The previous section demonstrated the simplest Bi-LSTM based key-phrase extraction approach; in continuation, we propose a concatenated Bi-LSTM model. In the first half, a Bi-LSTM is trained to produce document embeddings: the word embeddings of size N x D are produced by the embedding layer after processing the input sequence, and this newly constructed matrix is fed into the Bi-LSTM layer. The LSTM layer's output is forwarded to a dense layer, with a dropout of 0.25 applied after it, and the outcome of the dense layer is stored as the document embedding.


The working procedure of the second half is very similar to the previous model (the simple Bi-LSTM). The word embeddings and the document embeddings produced in the first half are concatenated and fed into the first Bi-LSTM, which is trained to find key-phrases as a binary classification task. The outcome of the first Bi-LSTM is processed through a second Bi-LSTM, trained to find key-phrases over three different classes, as there are three output units in total. A dropout of 0.3 is applied before and after the second Bi-LSTM. The output of the second Bi-LSTM is forwarded to a dense layer, with a dropout of 0.3 applied after it, and finally the outcome of the dense layer is mapped to the output units via the softmax function (Fig. 3).

(3) Concatenated Bi-LSTM Model-III: In continuation, we propose concatenated Bi-LSTM model-III. In the first half, a CNN model is trained to produce document embeddings. Our CNN model contains three convolutional and max-pooling layers. The first convolutional layer has the following specification: filter_size 128, kernel_size 32, stride 4, and pool_size 2 for the max-pooling operation. The second convolutional layer: filter_size 128, kernel_size 8, stride 2, and pool_size 2. The third convolutional layer: filter_size 60, kernel_size 4, stride 1, and pool_size 2. The convolutional output is forwarded to a flatten layer, and the flattened output is stored as the document embedding.

Next, the word embeddings and the document embeddings produced in the first half are concatenated and fed into the first Bi-LSTM. This Bi-LSTM is trained to find key-phrases as a binary classification task. The outcome of the first Bi-LSTM

Fig. 3 The proposed concatenated Bi-LSTM model-II


Fig. 4 The proposed concatenated Bi-LSTM model-III

is processed through a second Bi-LSTM, trained to find key-phrases over three different classes, as there are three output units in total. A dropout of 0.3 is applied before and after the second Bi-LSTM. The output of the second Bi-LSTM is forwarded to a dense layer, with a dropout of 0.3 applied after it, and finally the outcome of the dense layer is mapped to the output units via the softmax function (Fig. 4).
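A hedged Keras sketch of model-III, combining the stated CNN specification with the concatenation scheme, is given below. The vocabulary size, the sequence length of 500 (the stated kernel sizes and strides require a reasonably long input), the LSTM and dense unit counts, and end-to-end training (the paper trains the two Bi-LSTMs on separate objectives) are all assumptions:

```python
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN, DIM = 20000, 500, 300      # assumed sizes

tokens = layers.Input(shape=(SEQ_LEN,))
emb = layers.Embedding(VOCAB, DIM)(tokens)  # word embeddings, N x D

# First half: CNN document encoder with the three stated conv/max-pool blocks.
x = layers.Conv1D(128, 32, strides=4, activation="relu")(emb)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(128, 8, strides=2, activation="relu")(x)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(60, 4, strides=1, activation="relu")(x)
x = layers.MaxPooling1D(2)(x)
doc_emb = layers.Flatten()(x)               # stored document embedding

# Second half: tile the document embedding and concatenate with word embeddings.
doc_seq = layers.RepeatVector(SEQ_LEN)(doc_emb)
merged = layers.Concatenate()([emb, doc_seq])
h = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(merged)
h = layers.Dropout(0.3)(h)                  # dropout before the second Bi-LSTM
h = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(h)
h = layers.Dropout(0.3)(h)                  # dropout after the second Bi-LSTM
h = layers.Dense(64, activation="relu")(h)
h = layers.Dropout(0.3)(h)                  # dropout after the dense layer
out = layers.Dense(3, activation="softmax")(h)

model = models.Model(tokens, out)
```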

4 Result and Evaluation

For the validity and reliability of the proposed work, we consider three different data collections: DUC [31], Inspec [14], and SemEval [32]. The performance results of the proposed models and the state-of-the-art are depicted in Table 1, where feature-based, graph-based, topic-based, word embedding based, deep learning based, and other methods are used as baselines. From Table 1, we can see that all the proposed models produce better results. Specifically, Table 1 reports the performance measures (precision, recall, and F-score) of all proposed models and baseline techniques for the top 15 key-phrases. Concatenated Bi-LSTM model-III obtains the best precision, recall, and F-score for the SemEval and DUC datasets, whereas concatenated Bi-LSTM model-II produces the top results for the Inspec dataset. Compared to the other best-performing baseline approaches, the simple Bi-LSTM model-I also produces significant key-phrases for all the considered datasets.


Table 1 Performance measure of all proposed models and baseline techniques for top 15 key-phrases

System                            SemEval2010            Inspec                 DUC
                                  P      R      F        P      R      F        P      R      F
Feature based techniques
TF-IDF                            0.221  0.135  0.168    0.147  0.156  0.151    0.173  0.247  0.203
KEA [11]                          0.135  0.129  0.132    0.203  0.236  0.218    0.187  0.212  0.199
RAKE [12]                         0.244  0.142  0.180    0.387  0.407  0.397    0.264  0.289  0.276
Text rank [4]                     0.332  0.195  0.246    0.362  0.401  0.381    0.125  0.148  0.135
LDA [33]                          0.292  0.181  0.223    0.395  0.383  0.389    0.156  0.160  0.158
WA Rank [34]                      –      –      –        0.341  0.389  0.364    0.250  0.314  0.278
SG Rank [35]                      0.384  0.196  0.260    0.397  0.297  0.338    –      –      –
TSAKE [36]                        0.296  0.228  0.258    0.401  0.203  0.269    –      –      –
PP Score [37]                     0.386  0.197  0.261    0.421  0.308  0.355    –      –      –
SIF Rank [38]                     0.448  0.259  0.328    0.387  0.389  0.388    0.248  0.306  0.274
Deep learning based techniques
CopyRNN [24]                      0.378  0.265  0.311    0.416  0.307  0.353    –      –      –
DeepUnseen [10]                   0.332  0.256  0.289    0.378  0.332  0.384    –      –      –
Glocal [8]                        0.383  0.298  0.335    –      –      –        –      –      –
GAT [39]                          0.296  0.230  0.259    –      –      –        –      –      –
GCN [40]                          0.267  0.179  0.214    –      –      –        –      –      –
Proposed models
Simple Bi-LSTM model-I            0.300  0.219  0.254    0.357  0.452  0.399    0.265  0.284  0.274
Concatenated Bi-LSTM model-II     0.228  0.167  0.193    0.394  0.454  0.422    0.273  0.291  0.281
Concatenated Bi-LSTM model-III    0.410  0.300  0.346    0.386  0.444  0.413    0.287  0.307  0.296

The top 3 values of F-score are highlighted in bold

In the second half of the experimentation, it is important to compare the proposed models' accuracy independently. The recall results for the top 5/10/15 predicted key-phrases of all proposed models are demonstrated in Table 2. From Table 2, it can be concluded that all of the proposed models produce relevant (high-recall) sets of key-phrases. Additionally, the proposed models are more efficient across the different collections and more accurate than the deep learning based baseline approaches.
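The R@k values reported in Table 2 can be computed as sketched below; exact-string matching after lowercasing is an assumption, as the paper does not describe its matching criterion:

```python
def recall_at_k(predicted, gold, k):
    """Fraction of gold key-phrases found among the top-k predictions."""
    top_k = {p.lower() for p in predicted[:k]}
    gold_set = {g.lower() for g in gold}
    return len(top_k & gold_set) / len(gold_set) if gold_set else 0.0

def average_recall(all_predicted, all_gold, k):
    """Average R@k over a collection of documents (e.g., k = 5, 10, 15)."""
    scores = [recall_at_k(p, g, k) for p, g in zip(all_predicted, all_gold)]
    return sum(scores) / len(scores)
```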

Table 2 Recall results of the top 5/10/15 predicted key-phrases for all proposed models

System                          SemEval   Inspec   DUC
Simple Bi-LSTM model
  R@5                           0.091     0.236    0.126
  R@10                          0.164     0.376    0.210
  R@15                          0.219     0.452    0.284
Concatenated Bi-LSTM model-I
  R@5                           0.078     0.250    0.133
  R@10                          0.134     0.389    0.203
  R@15                          0.167     0.456    0.281
Concatenated Bi-LSTM model-II
  R@5                           0.152     0.240    0.115
  R@10                          0.242     0.381    0.218
  R@15                          0.300     0.444    0.296

5 Conclusion and Future Work

Key-phrases facilitate the filtering and organization of documents, making it possible to select those that are probably relevant. Considering the problem of automatically extracting the key-phrases associated with a particular document, the precise objective of our work is to propose different automatic key-phrase extraction methods. Unlike the state-of-the-art, which mainly focuses on manual feature extraction and selection, the proposed models jointly apply concepts and ideas from deep learning, word embedding, and other domains to achieve this goal. We proposed three different RNN-based models: simple Bi-LSTM model-I, concatenated Bi-LSTM model-II, and concatenated Bi-LSTM model-III. The experimental outcomes on three datasets prove the efficiency of each proposed model, which consistently improves on the baseline approaches. The major limitation of the proposed scheme is that it is unable to find key-phrases containing stop words, and the proposed models only consider GloVe embeddings (300-dim) for word representation. In the future, we plan to learn different embeddings of different sizes updated with additional knowledge; we also plan to extract key-phrases containing stop words as well as absent key-phrases. It is highly advisable to consider multilingual documents to expand the scope of research in various tasks.

Acknowledgements This research has been supported by the Ministry of Electronics and IT, Government of India, for providing a fellowship under Grant number PhD-MLA/4(61)/2015-16 (Visvesvaraya PhD Scheme for Electronics and IT).


References

1. Hasan KS, Ng V (2014) Automatic keyphrase extraction: a survey of the state of the art. In: 52nd annual meeting of the association for computational linguistics, vol 1. Maryland, pp 1262–1273
2. Chuang J, Manning CD, Heer J (2012) "Without the clutter of unimportant words": descriptive keyphrases for text visualization. ACM Trans Comput-Human Interact 19(3):1–29
3. Sharma S, Gupta V, Juneja M (2020) Diverse feature set based keyphrase extraction and indexing techniques. Multimedia Tools Appl 80(3):4111–4142
4. Mihalcea R, Tarau P (2004) TextRank: bringing order into text. In: Conference on empirical methods in natural language processing, pp 404–411
5. Alzaidy R, Caragea C, Giles CL (2019) Bi-LSTM-CRF sequence labeling for keyphrase extraction from scholarly documents. In: The World Wide Web conference, pp 2551–2557
6. Wang Y, Fan Z, Rose C (2020) Incorporating multimodal information in open-domain web keyphrase extraction. In: Conference on empirical methods in natural language processing (EMNLP), pp 1790–1800
7. Zhu X, Lyu C, Ji D, Liao H, Li F (2020) Deep neural model with self-training for scientific keyphrase extraction. PLoS One 15(5)
8. Prasad A, Kan MY (2019) Glocal: incorporating global information in local convolution for keyphrase extraction. In: Conference of the North American chapter of the association for computational linguistics: human language technologies, vol 1, pp 1837–1846
9. Sahrawat D, Mahata D, Kulkarni M, Zhang H, Gosangi R, Stent A, Sharma A, Kumar Y, Shah RR, Zimmermann R (2019) Keyphrase extraction from scholarly articles as sequence labeling using contextualized embeddings. arXiv preprint arXiv:1910.08840
10. Zahedi AG, Zahedi M, Fateh M (2019) A deep extraction model for an unseen keyphrase detection. Soft Computing, pp 1–10
11. Witten IH, Paynter GW, Frank E, Gutwin C, Nevill-Manning CG (2005) KEA: practical automated keyphrase extraction. In: Design and usability of digital libraries: case studies in the Asia Pacific. IGI Global, pp 129–152
12. Rose S, Engel D, Cramer N, Cowley W (2010) Automatic keyword extraction from individual documents. Text Mining: Appl Theory 1:1–20
13. Gollapalli SD, Caragea C (2014) Extracting keyphrases from research papers using citation networks. In: Proceedings of the AAAI conference on artificial intelligence, vol 28, no 1
14. Hulth A (2003) Improved automatic keyword extraction given more linguistic knowledge. In: Conference on empirical methods in natural language processing, pp 216–223
15. Grineva M, Grinev M, Lizorkin D (2009) Extracting key terms from noisy and multitheme documents. In: 18th international conference on World Wide Web, pp 661–670
16. Liu Z, Li P, Zheng Y, Sun M (2009) Clustering to find exemplar terms for keyphrase extraction. In: Conference on empirical methods in natural language processing, pp 257–266
17. Litvak M, Last M, Aizenman H, Gobits I, Kandel A (2011) DegExt: a language-independent graph-based keyphrase extractor. In: Advances in intelligent web mastering 3. Springer, Berlin, Heidelberg, pp 121–130
18. Wan X, Xiao J (2008) Single document keyphrase extraction using neighborhood knowledge. In: AAAI, vol 8, pp 855–860
19. Bellaachia A, Al-Dhelaan M (2014) HG-Rank: a hypergraph-based keyphrase extraction for short documents in dynamic genre. In: MSM, pp 42–49
20. Duari S, Bhatnagar V (2019) sCAKE: semantic connectivity aware keyword extraction. Inf Sci 477:100–117
21. Santosh TYSS, Sanyal DK, Bhowmick PK, Das PP (2020) DAKE: document-level attention for keyphrase extraction. In: European conference on information retrieval. Springer, pp 392–401
22. Xiong L, Hu C, Xiong C, Campos D, Overwijk A (2019) Open domain web keyphrase extraction beyond language modeling. arXiv preprint arXiv:1911.02671
23. Sun S, Xiong C, Liu Z, Liu Z, Bao J (2020) Joint keyphrase chunking and salience ranking with BERT. arXiv preprint arXiv:2004.13639


24. Meng R, Zhao S, Han S, He D, Brusilovsky P, Chi Y (2017) Deep keyphrase generation. arXiv preprint arXiv:1704.06879
25. Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang J (2016) Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging 35(5):1299–1312
26. Bashar A (2019) Survey on evolving deep learning neural network architectures. J Artif Intell 1(02):73–82
27. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
28. Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781
29. Pennington J, Socher R, Manning CD (2014) GloVe: global vectors for word representation. In: Conference on empirical methods in natural language processing (EMNLP), pp 1532–1543
30. Kiss T, Strunk J (2006) Unsupervised multilingual sentence boundary detection. Comput Linguist 32(4):485–525
31. Wan X, Xiao J (2008) Single document keyphrase extraction using neighborhood knowledge. In: AAAI conference on artificial intelligence, vol 8, pp 855–860
32. Kim SN, Medelyan O, Kan MY, Baldwin T (2010) SemEval-2010 task 5: automatic keyphrase extraction from scientific articles. In: 5th international workshop on semantic evaluation, pp 21–26
33. Liu Z, Huang W, Zheng Y, Sun M (2010) Automatic keyphrase extraction via topic decomposition. In: Conference on empirical methods in natural language processing, pp 366–376
34. Wang R, Liu W, McDonald C (2014) Corpus-independent generic keyphrase extraction using word embedding vectors. In: Software engineering research conference, vol 39, pp 1–8
35. Danesh S, Sumner T, Martin JH (2015) SGRank: combining statistical and graphical methods to improve the state of the art in unsupervised keyphrase extraction. In: 4th joint conference on lexical and computational semantics, pp 117–126
36. Rafiei-Asl J, Nickabadi A (2017) TSAKE: a topical and structural automatic keyphrase extractor. Appl Soft Comput 58:620–630
37. Yeom H, Ko Y, Seo J (2019) Unsupervised-learning-based keyphrase extraction from a single document by the effective combination of the graph-based model and the modified C-value method. Comput Speech Lang 58:304–318
38. Sun Y, Qiu H, Zheng Y, Wang Z, Zhang C (2020) SIFRank: a new baseline for unsupervised keyphrase extraction based on pre-trained language model. IEEE Access 8:10896–10906
39. Veličković P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y (2017) Graph attention networks. arXiv preprint arXiv:1710.10903
40. Kipf TN, Welling M (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907

A Hybrid Approach to Resolve Data Sparsity and Cold Start Hassle in Recommender Systems

B. Geluvaraj and Meenatchi Sundaram

Abstract Recommender systems (RS) have seen notable growth and are utilized in many different areas. The real aim of an RS is to yield suitable item options to users. Different methodologies are used in their implementation, namely CF, CBF and HRS; the benefits of these approaches are their style, operation and productivity. Cold start (CS) and data sparsity are long-standing problems troubling RS: the CS hassle occurs when the RS cannot recommend items to new users because the rating data is sparse. In this article we look into a hybrid approach to solve these issues, find similarities with computational equations, lay out the steps to resolve the hassles, construct an experimental design of the approach, and compare the hybrid approach with the staple CF algorithm to find out which gives the better performance and solves our research problem. Keywords Recommender systems (RS) · Hybrid recommender systems (HRS) · Content-based filtering (CBF) · Collaborative filtering (CF) · Cold start (CS) · Data sparsity (DS) · Cosine similarity (CosSim) · Mean squared difference (MSD)

1 Introduction

The main motive of all recommendation systems is to provide the most appropriate items to the right user at the right time. Extensive research studies are going on in this field, and many different approaches have been proposed which benefit from different types of data and analysing techniques. There are various issues while designing an appropriate recommendation system, such as scalability, high computation and diversity. The CS problem is split into the new user CS hassle and the new item CS hassle [1]. The new user CS hassle refers to the lack of information about the user's interests, or the few ratings provided by such a user for any particular item in the system.

B. Geluvaraj (B) · M. Sundaram, Garden City University, Bengaluru, India. M. Sundaram e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_38


The pure new user CS hassle refers to the problem when the user provides no rating at all in the system. With the growth of e-commerce platforms, the huge numbers of new users signing up every day, together with less-active users in almost every application, create a serious issue for recommendation systems. Another major issue is the new item CS hassle, which refers to a newly added item in a particular system with very little or no rating provided by users; in this scenario, analysing the item and referring it to the user can be a tedious task. Hybrid approaches, such as combining CF and CBF, have been proposed to overcome the new CS hassles [2].

2 Related Work

Many researchers have worked on the sparsity and CS hassles; for this article, a few recent articles that address these issues were reviewed. Melville addressed the sparsity issue and used a CBF technique to overcome it by converting the sparse rating matrix into a full rating matrix; the advantage is that the matrix is pre-loaded with ratings before being used in the RS, but the same cannot be applied instantly in a real-time process [3]. Cotter and Smith proposed that CF and CBF generate recommendations separately and combine their predictions directly [4]. Salehi and Nakhai compared SVM and naive Bayes to solve the CS and sparsity hassles and concluded that naive Bayes is the better of the two [5]. Sarwar followed a direct approach using CBF and CF, which did not yield the right solution to the hassle [6]. Badaro used a weighted technique with CF and CBF, using the user and item correlations simultaneously, which solved the sparsity issue with good accuracy [7]. Pazzani proposed a hybrid approach where a profile of the user's content is used to find similar users, but it was a failure: if something goes wrong with the content profile, the whole system yields poor recommendations [8]. Urszula Kużelewska used clustering methods to analyse user ratings and addressed the issue of finding similar profile users by using cosine similarity and Euclidean distance [9]. With these different types of techniques, customer data from various firms can be combined with other data to yield better recommendations [10]. Sharif proposed a hybrid approach to solve the CS hassle in which the limited information about an item is handled using item-based recommender systems [11]. Le Hoang Son proposed a hybrid user-based fuzzy CF approach, which uses user-based similarity computed from historical data and fuzzy similarity computed from the user's demographic data [12]. Many researchers have proposed different hybrid approaches to solve the accuracy, CS and sparsity hassles, but it remains difficult on sparse datasets [3, 13, 14]. So, we have constructed a hybrid approach by combining SVD and Random to yield a better result, as SVD has been among the best-known CF algorithms since the beginning of RS research.


3 Research Problem (RP)

CS and DS are categorized into three problems.

RP 1: When fresh users need an item recommendation from the current system, the recommendations will be poor because no history of the user is available to suggest products. Solution: can be solved by using demographic user data.

RP 2: When a fresh item is appended to the inventory, it needs to be recommended to both current and fresh users of the system. Solution: can be solved by watching the history of the items users have purchased and finding similarities with the newly introduced item.

RP 3: When both of the above are solved, the system needs to generate accurate recommendations. Solution: this can be solved using hybrid techniques that combine the benefits of different techniques to generate accurate recommendations.

4 Proposed Methodology

4.1 Novel Hybrid Proposed Methodology for Predicting and Recommending New User Preference

This segment explores the novel resolving method used to amplify the accuracy of the item forecasts recommended to users and to solve the sparsity hassle using the MovieLens dataset. The proposed technique is compared with CF techniques [15]. The main benefit of the novel resolving technique is how it detects relationships between items in sparse data: even though two users do not have any liked items in common, we consider them similar if they share similar preferences for items. We evaluate the similarity between users with the equations given below, group users who like the same items, and apply a clustering approach together with the hybrid algorithm to detect hidden items between users in the system's recommended items [16]. Minimizing data size for data pre-processing: the current history of the users' browsed and rated items is arranged into groups systematically, based on characteristics incorporating tags derived from the genres. Non-familiar choice forecasting for users deploys the hybrid algorithm and SVD on the history data for producing choices based on the dataset used [17]. The resemblance among the likings of users is evaluated against the history data, which is already grouped at the beginning, and against the duration of the movies the user watches, to find identical watchlist patterns among them; the group order with the most favourites is elected. Series of items generated for recommendation: resemblance is evaluated among the item sets, the rated items and the items with tags.


Fig. 1 Novel hybrid proposed methodology for predicting and recommending new user preference

The same items are then tracked down from the rated items in the user's account and, looking into the previously rated data items, the final recommendation list is produced (Fig. 1).

4.2 The Workflow of the Hybrid Algorithm to Solve the CS Hassle

See Fig. 2. Input: algorithms, dataset, weights, similarity metrics. Output: accuracy (precision, recall, F-measure). Method: hybrid algorithm workflow.

1. Def_init(): takes the list of algorithms from the class AlgoBase.
2. Self.Weights(): the associated list of weights assigned to each algorithm when producing the final rating estimate.
3. Sim_option(): finds the similarity between the user and item using Pearson similarity and mean squared difference.
4. Def_movielensedata(): loads the dataset and the rankings of the users and items; the evaluator splits the data into train and test sets.
5. Def_fit(): iterates through each algorithm in the list, training each one with our training dataset.
6. Evaluator addalgorithm(): calls the SVD and hybrid algorithms (the hybrid being a combination of SVD and Random).
7. SumScore and SumWeights: accumulators for the weighted scores and weights, initialized to zero.
8. Def_Estimate(): by calling this function on each algorithm, the results are combined into a weighted-average estimate for each user and item.
9. Return(): the values of precision, recall and F-measure.

A minimal sketch of this workflow is given below.
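The following is an illustrative sketch of such a weighted hybrid in Python with SurpriseLib (the `surprise` package the authors mention). Only `AlgoBase`, `Dataset`, `SVD` and `NormalPredictor` are real Surprise names; the class name, accumulator variables and the 0.8/0.2 weights are our own placeholders, not the authors' exact code or values.

```python
from surprise import AlgoBase, Dataset, SVD, NormalPredictor

class WeightedHybrid(AlgoBase):
    """Combine several Surprise algorithms with fixed weights
    (a sketch of the workflow above, not the authors' exact code)."""

    def __init__(self, algorithms, weights):
        AlgoBase.__init__(self)
        self.algorithms = algorithms
        self.weights = weights            # one weight per algorithm

    def fit(self, trainset):
        AlgoBase.fit(self, trainset)
        for algo in self.algorithms:      # train every component model
            algo.fit(trainset)
        return self

    def estimate(self, u, i):
        sum_score, sum_weight = 0.0, 0.0  # the SumScore / SumWeights accumulators
        for algo, w in zip(self.algorithms, self.weights):
            sum_score += w * algo.estimate(u, i)
            sum_weight += w
        return sum_score / sum_weight     # weighted-average rating estimate

data = Dataset.load_builtin('ml-100k')             # MovieLens 100K
trainset = data.build_full_trainset()
hybrid = WeightedHybrid([SVD(), NormalPredictor()], [0.8, 0.2])
hybrid.fit(trainset)
print(hybrid.predict(uid='196', iid='302').est)    # predicted rating
```

`NormalPredictor` is Surprise's random (normal-distribution) predictor, matching the paper's "SVD and Random" combination.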


Fig. 2 Workflow of hybrid algorithm

4.3 Finding Similarity Metrics Between the User and Item

4.3.1 Item-Based Pearson Similarity

$$\mathrm{CosSim}(x, y) = \frac{\sum_i (x_i - \bar{I})(y_i - \bar{I})}{\sqrt{\sum_i (x_i - \bar{I})^2}\,\sqrt{\sum_i (y_i - \bar{I})^2}}$$

In this similarity metric we look into the difference between ratings and the average rating from all users for a given item. We substitute x with $x_i$, the items rated by the active user, subtracting $\bar{I}$, the average rating of the item in question over all users, and substitute y with $y_i$, the items rated by the other user in the system. Pearson similarity can be read as measuring the similarity between people by how much they diverge from the average person's behaviour [18].
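As a concrete reading of the formula, the following is a small illustrative helper (our own, not from the paper); `ratings_x`/`ratings_y` are hypothetical per-user rating dicts and `item_means` holds each item's mean rating over all users.

```python
import math

def pearson_item_adjusted(ratings_x, ratings_y, item_means):
    """CosSim of Sect. 4.3.1: compare two users' ratings on their common
    items after subtracting each item's mean rating over all users."""
    common = set(ratings_x) & set(ratings_y)   # items both users rated
    dx = [ratings_x[i] - item_means[i] for i in common]
    dy = [ratings_y[i] - item_means[i] for i in common]
    num = sum(a * b for a, b in zip(dx, dy))
    den = math.sqrt(sum(a * a for a in dx)) * math.sqrt(sum(b * b for b in dy))
    return num / den if den else 0.0           # 0 when a user never deviates
```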

4.3.2 Mean Squared Difference (MSD)

$$\mathrm{MSD}(x, y) = \frac{\sum_{i \in I_{xy}} (x_i - y_i)^2}{|I_{xy}|}$$

$$\mathrm{MSDSim}(x, y) = \frac{1}{\mathrm{MSD}(x, y) + 1}$$

MSD is an additional method to find similarity, in which we take all of the items that two users have in common in their ratings and compute the mean squared difference of how each user rated each item. Breaking down the equation for the similarity between users x and y: on top of the fraction we sum, over every item i that users x and y have both rated, the squared difference between the two users' ratings, and then divide by the number of items the users have in common, $|I_{xy}|$. This gives a metric of how different users x and y are; but we want to measure how similar they are, not how different, so we invert the MSD by taking its reciprocal. The 1 in the denominator is added to avoid dividing by 0 [19].
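A minimal sketch of this computation, assuming the same dict-of-ratings representation as above (an illustration, not the paper's code):

```python
def msd_similarity(ratings_x, ratings_y):
    """MSD similarity between two users' rating dicts {item_id: rating};
    returns 0 when the users share no rated items."""
    common = set(ratings_x) & set(ratings_y)   # items both users rated
    if not common:
        return 0.0
    msd = sum((ratings_x[i] - ratings_y[i]) ** 2 for i in common) / len(common)
    return 1.0 / (msd + 1.0)                   # invert: more similar -> closer to 1

print(msd_similarity({'a': 5, 'b': 3}, {'a': 4, 'b': 3}))  # 0.666...
```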

4.4 Illustrating Accuracy Metrics

First, we discuss the four counts used to derive the performance of the RS.

1. True positive (TP): the number of occurrences that were +ve and rightly graded as +ve.
2. False negative (FN): the number of occurrences that were +ve and wrongly graded as −ve.
3. True negative (TN): the number of occurrences that were −ve and rightly graded as −ve.
4. False positive (FP): the number of occurrences that were −ve and wrongly graded as +ve.

Let us calculate the values of precision, recall, F-measure and accuracy using the above four counts.

1. Precision (PR): the ratio of the number of items rightly graded similar (TP) to the total number of items graded similar (TP + FP).

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

2. Recall (RE): the ratio of the number of items rightly graded similar (TP) to the number of items actually similar (TP + FN).

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

3. F-measure: consolidates precision and recall into a single calculation that accumulates both properties.

$$F\text{-measure} = \frac{2 \cdot PR \cdot RE}{PR + RE}$$

4. Accuracy (ACC): the number of correct predictions over the output size.

$$ACC = \frac{TP + TN}{TP + TN + FP + FN}$$

A small helper computing these measures from raw counts is sketched below.
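This is a generic illustration of the four formulas, not the paper's evaluation code:

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, recall, F-measure and accuracy from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f_measure, accuracy

# Example: 97 TP, 3 FP, 60 TN, 40 FN
print(classification_metrics(97, 3, 60, 40))
```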

More emphasis is placed on the items which are rated in a client profile, in order to fetch similarities and train models that capture user preferences accurately. Groups of datasets are created by regulating the grades, such as (grade 0: all items), (grade 1: item tags) and (grade 3: movie duration); this helps the proposed approach by interpreting the favourable number of groups [13]. We categorize data into groups of sparsity grades between (0.2–0.5), (0.5–0.7) and (0.7–1.0).

$$\mathrm{Sparsity\ Evaluation} = 1 - \frac{nH}{nUsers \times nItems} \tag{4}$$

The symbol nH is the total number of times items were visited by users, nUsers is the number of users present, and nItems is the number of items present [20]. A one-line helper for Eq. (4) is given below.
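A minimal sketch of Eq. (4), assuming the denominator is the product of users and items (the extraction of the formula is ambiguous, but this reading matches the density definition in Sect. 4.5):

```python
def sparsity_evaluation(n_hits, n_users, n_items):
    """Eq. (4): sparsity grade of a rating/visit matrix."""
    return 1 - n_hits / (n_users * n_items)

print(sparsity_evaluation(100_000, 1000, 1700))  # ~0.94 for MovieLens 100K
```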

4.5 MovieLens 100K Dataset

This is the most widely used dataset in RS research. It comprises roughly 1000 users, 1700 items and 100,000 ratings. In this experiment we used 80% of the data as the training set and 20% as the test set; the density of the dataset is measured as the number of ratings divided by the product of users and items, multiplied by 100 to get a percentage value (Table 1).

5 Step-by-Step Implementation and Experimental Setup

We used 16 GB RAM, an i7 9th-generation processor and the Windows 10 operating system. We used the Anaconda platform to run the Python code in Jupyter, and on it SurpriseLib, a Python library to build and test RS.

Step 1: Setting up the environment and adding SurpriseLib to the Anaconda platform, which is widely used in the area of RS. SurpriseLib is constructed around the framework of forecasting the rating of each movie for each user and offering back the top predictions as recommendations, and it is essential for estimating accuracy [21].


Table 1 MovieLens dataset statistical analysis

Attributes | Total | Training set | Test set
Number of users | 1000 | 800 | 200
Number of items | 1700 | 1360 | 340
Number of ratings | 100,000 | 80,000 | 20,000
Density | 5.88% | 4.704 | 1.176
Average number of rated items for every user | 110 | 88 | 22
Maximum number of rated items for every user | 835 | 668 | 167
Minimum number of rated items for every user | 25 | 20 | 5
Maximum number of users rating an item | 615 | 492 | 123
Minimum number of users rating an item | 1 | 1 | 1

Step 2: A hybrid recommender algorithm is created, and the SVD algorithm is imported from SurpriseLib, so that the variables associated with each instance can be accessed. Going forward, each call receives a user ID and an item ID: when the SurpriseLib framework calls estimate, it is asking to forecast a rating for the user and item passed in. These user and item IDs are inner IDs, that is, IDs used internally, and must be mapped back to the original user and item IDs in the source data [22].

Step 3: For the hybrid recommender algorithm and the SVD algorithm to calculate accuracy using performance measures such as precision, recall and F-measure, a new class called EvaluatedAlgorithm is generated. It contains an algorithm from SurpriseLib but establishes a new role, Evaluate, that runs all of the metrics in RecommenderMetrics on that algorithm. This class makes it easy to measure accuracy, sparsity and cold start.

Step 4: The RecommenderMetrics class's functions let us slice the training data into train and test splits in various ways [23]. That is what the EvaluationData class is for: it takes in a dataset.

Step 5: Defining the grades of sparsity between the ranges; finding the actual positives and negatives and the predicted positives and negatives; calculating the precision score, recall score and F-measure; accuracy is calculated overall, and the SVD and hybrid recommender algorithms are compared. The Evaluator class takes in a raw dataset, say from MovieLens [24], and the first thing it does is create an EvaluatedDataset object from it that it uses internally. Then AddAlgorithm is called for each algorithm we want to compare; this creates an EvaluatedAlgorithm under the hood within the Evaluator. A condensed sketch of this split-and-score flow is given below.
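The Evaluator, EvaluatedAlgorithm and RecommenderMetrics classes described above are the authors' own wrappers; the sketch below only reproduces the 80/20 split-and-score loop they automate, using plain Surprise utilities (the WeightedHybrid sketch from Sect. 4.2 can be dropped into the tuple alongside SVD).

```python
from surprise import Dataset, SVD, NormalPredictor, accuracy
from surprise.model_selection import train_test_split

data = Dataset.load_builtin('ml-100k')
trainset, testset = train_test_split(data, test_size=0.20)  # 80/20 split

for algo in (SVD(), NormalPredictor()):
    algo.fit(trainset)                     # train on the 80% split
    predictions = algo.test(testset)       # score the held-out 20%
    print(type(algo).__name__, 'RMSE:', accuracy.rmse(predictions, verbose=False))
```

We print RMSE only because Surprise ships it out of the box; precision, recall and F-measure at a rating threshold can be derived from `predictions` with a helper like the one in Sect. 4.4.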

6 Results

See Figs. 3, 4 and 5, which plot the precision, recall, F-measure and accuracy (y-axis, 0–1) of the SVD algorithm against the proposed hybrid algorithm.

Fig. 3 Sparsity grade (0.2–0.5)

Fig. 4 Sparsity grade (0.5–0.7)

Fig. 5 Sparsity grade (0.7–1.0)


Table 2 Experimental results in tabular form

Sparsity grade (0.2–0.5)
Algorithm | Precision | Recall | F-measure
SVD algorithm | 0.55 | 0.72 | 0.65
Proposed hybrid algorithm | 0.97 | 0.65 | 0.78

Sparsity grade (0.5–0.7)
Algorithm | Precision | Recall | F-measure
SVD algorithm | 0.55 | 0.60 | 0.57
Proposed hybrid algorithm | 0.95 | 0.65 | 0.76

Sparsity grade (0.7–1.0)
Algorithm | Precision | Recall | F-measure
SVD algorithm | 0.57 | 0.51 | 0.56
Proposed hybrid algorithm | 0.92 | 0.61 | 0.78

6.1 Experimental Results

See Table 2.

6.2 Discussions

As per the simulation results shown in Figs. 3, 4 and 5, for every grade of sparsity the precision, recall and F-measure values of the hybrid algorithm are better than those of the standard SVD algorithm, the most widely adopted CF technique. Furthermore, even with the increasing level of sparsity in the dataset, the hybrid algorithm has sustained its accuracy level while recommending items to users, which solves both the CS and sparsity hassles.

6.3 Advantages of Proposed Work

1. Based on the users' choices of items, and, for items that are not rated, the item labels and the number of times an item is visited by the user are used in the hybrid algorithm, which shows the algorithm's potential.
2. When sparsity increases, the algorithm's potential controls the accuracy, which is measured for minimum errors.
3. The number of items the user visits is used with the hybrid algorithm to find the concealed connections among users.
4. The cold start hassle is solved by recommending to new users the items rated by old users.

7 Conclusion

RS play a prominent role in developing businesses. The growing data size from social media and the knowledge about user activities aid us in recommending items to users more accurately. In this article we used the MovieLens dataset, which is a consolidation of direct and indirect data rated by users. The proposed approach is constructed so that it can be tested on different sparsity grades. The new hybrid algorithm improves precision, recall and F-measure, as shown in the results, whereas the SVD algorithm it is compared with finds it difficult to locate neighbouring data when the given input data is sparse. The proposed hybrid algorithm resolves these issues: it recommends items that are new entries in the inventory and helps fresh users by recommending the top items rated by other users. This is how the CS and DS hassles can be solved.

References

1. Berg Rvd, Kipf TN, Welling M (2018) Graph convolutional matrix completion. In: KDD
2. Chen C, Zhang M, Liu Y, Ma S (2018) Neural attentional rating regression with review-level explanations. In: WWW, pp 1583–1592
3. Melville P, Mooney RJ, Nagarajan R (2009) Content-boosted collaborative filtering
4. Cotter P, Smith B (2000) PTV: intelligent personalised TV guides. In: AAAI/IAAI, pp 957–964
5. Salehi M, Nakhai Kamalabadi I (2013) A hybrid recommendation approach based on attributes of products using genetic algorithm and naive Bayes classifier. Int J Bus Inf Syst 13:381–399
6. Badaro G, Hajj H, El-Hajj W, Nachman L (2013) A hybrid approach with collaborative filtering for recommender systems. In: Wireless communications and mobile computing conference (IWCMC), 2013 9th international, IEEE, pp 349–354
7. Pazzani M (1999) A framework for collaborative, content-based and demographic filtering. Department of Information and Computer Science, University of California, Irvine, CA, pp 92697
8. Kużelewska U (2011) Advantages of information granulation in clustering algorithms. In: Agents and artificial intelligence. Springer, Berlin, pp 131–145
9. Mathew SK (2012) Adoption of business intelligence systems in Indian fashion retail. Int J Bus Inf Syst 9:261–277
10. Sharif MA, Raghavan VV (2014) A large-scale, hybrid approach for recommending pages based on previous user click pattern and content. In: Foundations of intelligent systems. Springer, Berlin, pp 103–112
11. Son LH (2014) HU-FCF: a hybrid user-based fuzzy collaborative filtering method in recommender systems. Expert Syst Appl 41:6861–6870
12. Burke R (2007) Hybrid web recommender systems. In: The adaptive web. Springer, Berlin, pp 377–408
13. Ghazanfar MA, Prugel-Bennett A (2010) A scalable, accurate hybrid recommender system. In: Knowledge discovery and data mining, WKDD'10, third international conference on, IEEE, pp 94–98
14. Fu W, Peng Z, Wang S, Xu Y, Li J (2019) Deeply fusing reviews and contents for cold start users in cross-domain recommendation systems. In: AAAI, pp 94–101
15. Hu L, Jian S, Cao L, Gu Z, Chen Q, Amirbekyan A (2019) HERS: modeling influential contexts with heterogeneous relations for sparse and cold-start recommendation. In: AAAI, pp 3830–3837
16. Kipf TN, Welling M (2017) Semi-supervised classification with graph convolutional networks. In: ICLR
17. Li X, She J (2017) Collaborative variational autoencoder for recommender systems. In: KDD, pp 305–314
18. Monti F, Bronstein MM, Bresson X (2017) Geometric matrix completion with recurrent multi-graph neural networks. In: NIPS, pp 3700–3710
19. Sachdeva N, Manco G, Ritacco E, Pudi V (2019) Sequential variational autoencoders for collaborative filtering. In: WSDM, pp 600–608
20. Wu L, Sun P, Fu Y, Hong R, Wang X, Wang M (2019) A neural influence diffusion model for social recommendation. In: SIGIR, pp 235–244
21. Wu Q, Zhang H, Gao X, He P, Weng P, Gao H, Chen G (2019) Dual graph attention networks for deep latent representation of multifaceted social effects in recommender systems. In: WWW, pp 2091–2102
22. Xin X, He X, Zhang Y, Zhang Y, Jose J (2019) Relational collaborative filtering: modeling multiple item relations for recommendation. In: SIGIR, pp 125–134
23. Ying R, He R, Chen K, Eksombatchai P, Hamilton WL, Leskovec J (2018) Graph convolutional neural networks for web-scale recommender systems. In: KDD, pp 974–983
24. Zheng L, Noroozi V, Yu PS (2017) Joint deep modeling of users and items using reviews for recommendation. In: WSDM, pp 425–434

RETRACTED CHAPTER: Software Effort Estimation of Teacher Engagement Application

Sucianna Ghadati Rabiha, Harco Leslie Hendric Spits Warnars, Ford Lumban Gaol, and Benfano Soewito

Abstract This study tries to develop an application containing an independent self-diagnostic instrument used to view the profile of teacher involvement in Indonesia, called the Indonesian Teacher Engagement Index (ITEI). We measured the estimated effort of developing a neural network-based ITEI application. The system development is carried out so that the ITEI application can run more dynamically. The method used to measure the software is the use case point (UCP) approach. In estimating the productivity of software projects, the UCP method determines the software size for the use case model used. The steps taken in measuring the effort estimation are the calculation of the unadjusted use case points, then the calculation of the use case points, and finally the calculation of the effective effort. The results show that the project has a small software size, with a UCP value of 52.9592, and the effective effort measurement value obtained was 1,059.184 h.

Keywords Use case point · Effort estimation · Measurement

The original version of this chapter was retracted: The retraction note to this chapter is available at https://doi.org/10.1007/978-981-16-5640-8_57

S. G. Rabiha (B) · H. L. H. S. Warnars · F. L. Gaol · B. Soewito Computer Science Department, Binus Graduate Program—Doctor of Computer Science, Bina Nusantara University, 11480 Jakarta, Indonesia e-mail: [email protected] H. L. H. S. Warnars e-mail: [email protected] F. L. Gaol e-mail: [email protected] B. Soewito e-mail: [email protected] S. G. Rabiha Information Systems Department, Binus Online Learning, Bina Nusantara University, Jakarta, Indonesia

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022, corrected publication 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_39

1 Introduction

The application used by teachers to assist them in carrying out self-diagnostics is the Indonesian Teacher Engagement Index (ITEI) application, which can detect teacher involvement through the filling out of questionnaires. Index value data, or teacher profiles, obtained from the results of diagnosis through the ITEI application can be further developed as input for decision support systems in government with education as the main focus. In supporting the selection of the right strategy to improve teacher performance and involvement, the predictive value of the Indonesian Teacher Engagement Index is very useful for the government in providing supporting information [1]. The output of measurement from the teacher engagement profiler application has been adjusted to the teacher character assessment standards in Indonesia. The hope is that, by knowing the description of the profile, teachers can understand the character they have and can increase the character values that are still lacking, so that they have the right educator character to provide teaching and educate students to the maximum. There are seven categories in the teacher engagement profile classification: the first (1) is disengagement; the second (2) is frustrated; the third (3) is burnout; the fourth (4) is dependent engagement; the fifth (5) is self-interest engagement; the sixth (6) is critical engagement; the seventh (7) is full engagement [2]. The presence of self-diagnostic apps for teachers has become very important, allowing them to perform a self-assessment of their level of engagement quickly through smartphones. Thus, technological developments can also serve as early detection methods for teachers to determine the programs most needed to improve competence and engagement [3]. Along with the improvement of the neural network-based ITEI app's features [1], the developer needs to measure the size of the software in advance. One of the methods used to measure software size is the use case point (UCP) approach; the attributes used in the measurement of this method are length, functionality, complexity and reuse. Use case diagrams are part of the Unified Modeling Language (UML), which serves to describe the activities carried out in the system through case notation. From the diagram we can later see the scenario of the proposed system, and thus how complex the software being developed is, by paying attention to the weight of each use case. Measuring effort estimation using this method can be done by understanding the problem domain first and then estimating the appropriate weighting values. In previous research, the use case point approach was used to measure cost estimates by considering the risks associated with software development, and it was proven that the use case point method can provide relatively accurate estimates [4]. Furthermore, Jovan et al. used the UCP method to measure the size of a project based on functional requirements represented by models and use case scenarios [5]. One of the software effort estimation methodologies is the use case point (UCP) methodology. Gustav Karner proposed UCP as a basic technique for estimating use case effort. In this method, a quantitative weighting factor (WF) is given to each actor and use case according to a classification based on three categories, namely simple, average and complex [6].


Product selection according to budget and user specifications can be completed according to a predetermined target time, which greatly affects the success of software development [7]. The problem of defining effort estimation is one of the main concerns of this research in developing the ITEI application.

2 Literature Review

The process by which a number or symbol is assigned to an entity to describe the entity in a meaningful way is an illustration of software measurement. In order to represent the effort and duration of a project, the size of the software must be measured and translated into numbers [8]. Project performance and current project status can be measured using metrics. In the literature, the metrics proposed as software measures fall into three categories: project metrics, product metrics and process metrics. There are many factors that determine the quality of a product, including efficiency, clarity, complexity, completeness, consistency, flexibility, accuracy and significance, each according to the desired description and size. In the product life cycle, the quality factors can be used at any time [9]. The use case point approach is one of the estimation models developed several years ago; function point analysis and constructive cost modeling form the basis of the UCP model [10]. The UCP method assigns weights to groupings of actors and use cases. Three types of groups are used, namely simple, average and complex. The unadjusted actor weight value represents the calculated sum of the weighted actors, and the unadjusted use case weight is measured in the same way; two coefficients, technical factors and environmental factors, provide an overview of the project conditions, information, and the level of experience required of the development team [11]. In agile projects, the use of use cases is very flexible and appropriate. Use case documentation can undergo a composition change by dividing scenarios and combining them with user stories. The initial requirements that emerge are usually defined by naming a use case and are then documented using a scenario [12]. One of the methods that can provide an overview of the estimated effort required in project development is the UCP method. Estimates are measured by counting the use cases described and assessing the complexity of each use case in the software project. Effort estimation is obtained by multiplying the UCP value by the effort rate value. The use case point method is quite significant in determining software effort estimation [13].

3 Methods

In 1993, Gustav Karner proposed the UCP method for object-oriented application measurement as an extension of function point analysis [14]. The use case model can be used to determine the estimated productivity of the software project and to determine the estimated effort through the use case point method. The use of UCP can assist a software developer in making reliable estimates early in the development cycle. The stages in making the effort calculation are shown in the following image (Fig. 1).

Fig. 1 Use case point methods

3.1 Phase I

In this research, phase I begins by calculating the value of the unadjusted use case points (UUCP). The sum of the unadjusted use case weights (UUCW) and unadjusted actor weights (UAW) produces the UUCP value. This value will later represent the size of the system being developed. To get the UUCP score, we first need to calculate the UUCW score and the UAW score [4]. The calculation of the UUCW value helps us see the complexity of the transactions performed by the system in each business process, while the UAW helps us see the complexity of each actor involved in using the system.

Step 1: Determine the value of UAW. Calculating the unadjusted actor weights (UAW) score is the first step in phase I. This step defines the complexity of the actors, which is divided into three categories, namely simple, average and complex. The simple category has a weight of 1, representing a system that communicates with other systems using an application programming interface (API). Average has a weight of 2, representing a system that communicates with other actors using the TCP/IP protocol. Complex has a weight of 3, representing users who use a user interface to communicate with the system. The following is the formula used:

$$UAW = \sum (\#\mathrm{Actors} \times \mathrm{Weight\ Factor}) \tag{1}$$

Step 2: Determine the value of UUCW. The second step in phase I is calculating the UUCW score. In the UUCW calculation, use case complexity is measured according to the complexity of the transactions for each use case. Transaction complexity is divided into three categories. The first is the simple use case, with a weight of five (5), if the number of transactions or entities in the database is less than 3. The second is the average use case, with a weight of ten (10), if the number of transactions or entities in the database is 4 to 7. The third is the complex use case, with a weight of fifteen (15), if the number of transactions or entities in the database is more than 7. The following is the formula used:

$$UUCW = \sum (\#\mathrm{Use\ Cases} \times \mathrm{Weight\ Factor}) \tag{2}$$

Step 3: Determine the value of UUCP. The third step, which completes phase I, is to add up the UAW and UUCW values using the following formula:

$$UUCP = UAW + UUCW \tag{3}$$

3.2 Phase II

In phase II, there are two steps that must be taken before determining the UCP value, namely calculating the TCF value and the ECF value. The work process of a software project is directly affected by the complexity factors [15]. TCF is a technical factor related to system functionality; these factors consist of parameters related to the non-functional requirements needed in system development. Meanwhile, ECF deals with the assessment of the user's environmental conditions as seen from the experience and motivation required of the software development team. The results of all these variable calculations are used in determining the estimated software development effort.

Step 1: Technical complexity factor (TCF). Calculating the value of the technical complexity factor is the first step in phase II. This stage uses thirteen (13) parameters to measure the technical factors of the system development environment. Each factor already has a predetermined weight, and the weight of each factor is multiplied by its technical factor score. The score given runs from 0 to 5 for every factor, depending on how much influence that factor has: a value of 0 represents a factor with no effect, a value of 3 a factor with an average effect, and a value of 5 a factor with a significant effect. The total technical factor (TF) value is the sum of the products of scores and weights, and the result is used to determine the value of the technical complexity factor (TCF) with the following formula:

$$TCF = 0.6 + (0.01 \times TF) \tag{4}$$

Step 2: Environmental complexity factor (ECF). The second step in phase II is to calculate the environmental factor (EF) value, which directly affects the course of a software development project. The environmental factor consists of 8 parameters: familiarity with the project, application experience, programming experience, analyst capability, motivation, stable requirements, part-time staff, and difficult programming language. The score on each environmental factor is multiplied by its weight. The score runs from 0 to 5 for every factor depending on its influence: 0 represents a factor with no effect, 3 a factor with an average effect, and 5 a factor with a significant effect. The total environmental factor (EF) value is obtained from the sum of the products of scores and weights, and the result is used to determine the value of the environmental complexity factor (ECF) with the following formula:

$$ECF = 1.4 + (-0.03 \times EF) \tag{5}$$

Step 3: Calculate the use case points (UCP). The third step determines the value of the use case point (UCP) score by multiplying the unadjusted use case points, the technical complexity factor, and the environmental complexity factor using the following formula:

$$UCP = UUCP \times TCF \times ECF \tag{6}$$

3.3 Phase III

The productivity required of the project development team is measured at this stage. In the calculation, it is necessary to pay attention to the characteristics of the type of project being developed. In phase III, the step that needs to be done is calculating the effective effort (E) in person-hours (PH), by multiplying the specific Person Hours per UCP (PHperUCP) value by the UCP, using the following calculation (a worked sketch of Eqs. (1)–(7) follows):

$$E = UCP \times PHperUCP \tag{7}$$
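The following is a minimal Python sketch of Eqs. (1)–(7). The actor and use case weight tables are the standard Karner values quoted above; PH_PER_UCP = 20 is an assumed default, since the chapter's exact Schneider and Winters rule is garbled in this copy. Note, though, that the chapter's reported result is consistent with that default: 52.9592 UCP × 20 = 1,059.184 person-hours.

```python
ACTOR_WEIGHTS = {'simple': 1, 'average': 2, 'complex': 3}
USE_CASE_WEIGHTS = {'simple': 5, 'average': 10, 'complex': 15}
PH_PER_UCP = 20  # assumed productivity factor (see note above)

def ucp_effort(actors, use_cases, tf_score, ef_score):
    """actors / use_cases: {'simple': n, 'average': n, 'complex': n} counts;
    tf_score / ef_score: weighted sums of the 13 TCF and 8 ECF parameters."""
    uaw = sum(ACTOR_WEIGHTS[k] * n for k, n in actors.items())         # Eq. (1)
    uucw = sum(USE_CASE_WEIGHTS[k] * n for k, n in use_cases.items())  # Eq. (2)
    uucp = uaw + uucw                                                  # Eq. (3)
    tcf = 0.6 + 0.01 * tf_score                                        # Eq. (4)
    ecf = 1.4 - 0.03 * ef_score                                        # Eq. (5)
    ucp = uucp * tcf * ecf                                             # Eq. (6)
    return ucp, ucp * PH_PER_UCP                                       # Eq. (7)

# Hypothetical example counts, not the ITEI project's actual inputs
ucp, effort_hours = ucp_effort({'complex': 2}, {'simple': 4, 'average': 3},
                               tf_score=30, ef_score=15)
print(f'UCP = {ucp:.4f}, effort = {effort_hours:.2f} person-hours')
```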

The effort estimation value is obtained from the multiplication of the productivity factor and the UCP value; the productivity factor is obtained according to Schneider and Winters' calculations.

Weak visibility means that intruders are detected on congruent roads. Good border coverage guarantees that intruders are detected without violating path constraints (Fig. 3). The procedures BEGIN, ACTIVE, IDEAL and CHECK execute as per the flow charts given below.

Fig. 3 Initial setup of network in simulation


Initially, all sensors are in active mode. Based on the proposed algorithm, a transmitted data packet carries the ID, the location of the node, and the remaining lifetime. The node that sends the network information packet is 'u', and 'v' is known from the configuration of the network. The visibility of the rectangular path is A. The suggested algorithm is a central method with four procedures (Fig. 4).

Fig. 4 Initial procedure


Node 'u' thus sets A(u), executed in the ACTIVE method, which calculates and finds the region R(u). If the process cannot locate the area R, it is implied that the existing sensor network is not adequate to provide entire coverage, so node 'u' sends an alert message about this. When the process detects the area in the network node, it needs to forward an information packet with the node ID, location and lifespan to all other nodes. For any two nodes a, b, if the virtual area differs from the real area, a ∈ R(b) does not immediately imply b ∈ R(a). Then, all nodes in the network would have transmitted information about N and the subset

N_u = {v : u ∈ R(v)}

until they had sent the data and the information packets reached their target. For the set of nodes in N_u − N'_u = {v : u ∈ R(v) and v ∈ R(u)}, all nodes in the network of 'u' respond with an acknowledgment packet (Fig. 5). When the BEGIN method is executed, all the nodes must perform the ACTIVE method, as shown in Fig. 6. This method determines whether to live in the ideal state or not. The node u is optimal if its area is covered without u by the active nodes v. If the ideal state is taken by two mutually covering nodes simultaneously, it causes damage. For all u with A(u) ≠ ∅, u retains a set A(u) per node. As time passes, nodes tend to reach the optimal state without knowing it; putting nodes into A(u) expresses this. From step 1 in the BEGIN procedure, node u searches all A(u) nodes, including u itself, to find a coverage problem in the ideal state. If there is no coverage problem, it visits the N_u network nodes and sends query packets about entering the ideal state. When asked by node u, v sends the information 'not needed' to get back to the ideal state.

Fig. 5 BEGIN procedure

Fig. 6 Simulation of broadcast information from node to neighbour nodes

This occurs only if the nodes in A(u) do not enter the protected area. In step 3, following the ideal state, 'u' changes its state to ideal if a 'not needed' information packet has been obtained from a node which is active in this region. If node 'u' is in an optimal or active condition, it informs all N_u nodes concerning its decision, so that they know its state. When there is a need to remain in good shape, it is expected to fail earlier, before time 'T', before the first active sensor node in the network (Figs. 7 and 8). While the appropriate node is operational, the CHECK procedure, seen in Fig. 9, decides whether a node failure makes it operational or optimum. In stage 1, all records of node u are deleted from the table and the status is changed; instead, a request packet is sent to the other N_u network nodes to find out whether the region needs to be covered. According to step 2, a node v ∈ N_u replies to node 'u' as to whether it wants the node to stay active in the same state or to change. If it is in the ideal condition, certain nodes in the network change the status when 'v' answers with a packet containing its node ID, durability and position, marked 'not needed' or 'necessary'; this allows node u to manage the active node records correctly in the network. Step 3 indicates that, on receiving a 'not required' packet, the node can return to the optimal state; if there is no response from 'v', 'u' goes to the active state in the case of a network failure node. In step 4, the ideal-state nodes become active for a given period, review the information, and receive acceptance to remain in the ideal state or the active state. u notifies all participating N_u nodes of its choice, and each v then updates its record (Fig. 10).
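The four procedures can be read as a per-node state machine. Below is a highly simplified, single-process Python sketch of that idea; it is our own illustration, not the authors' NS2 implementation, and it models coverage as sets of region IDs rather than exchanged packets. The CHECK behaviour appears as the wake-up branch.

```python
from enum import Enum

class State(Enum):
    BEGIN = 0
    ACTIVE = 1
    IDEAL = 2
    CHECK = 3  # modelled here as the wake-up test in step()

class Node:
    def __init__(self, nid, region):
        self.nid, self.region, self.state = nid, region, State.BEGIN

    def covered_without(self, nodes):
        """True if every region id this node covers is also covered by
        some other ACTIVE node, the condition for entering IDEAL."""
        others = set()
        for n in nodes:
            if n.nid != self.nid and n.state == State.ACTIVE:
                others |= n.region
        return self.region <= others

    def step(self, nodes):
        if self.state == State.BEGIN:
            self.state = State.ACTIVE                      # all nodes start active
        elif self.state == State.ACTIVE and self.covered_without(nodes):
            self.state = State.IDEAL                       # redundant: save energy
        elif self.state == State.IDEAL and not self.covered_without(nodes):
            self.state = State.ACTIVE                      # coverage hole: wake up

nodes = [Node(0, {1, 2}), Node(1, {1, 2}), Node(2, {2, 3})]
for _ in range(2):
    for n in nodes:
        n.step(nodes)
print([n.state.name for n in nodes])   # one redundant node goes IDEAL
```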


Fig. 7 “Active” procedure

Fig. 8 Node with “Active” and “Ideal” states

5 Evaluation of the Performance

The proposed method is simulated in NS2 with 100 nodes placed randomly. The area of the network is 500 × 500 sq. m. The propagation range of all the nodes is 200 m, with a channel capacity of 1 Mb/second.

Fig. 9 'CHECK' procedure

Fig. 10 Execution of the 'CHECK' procedure

Table 1 Simulation setup

Parameter | Value
Size of the network | 500 × 500 sq. m
Total sensor nodes used | 100
Range of radio propagation | 200 m
Capacity | 1 Mbit/s
Packet size | 1000 bytes
Simulation duration | 100 s

The total simulation time is 100 s. The simulation setup is shown in Table 1. The simulation was executed over a travel time of 0–100 s. The lifetime is identified, and the enhancement achieved by the proposed algorithm with the global barrier coverage method is processed. To compute lifetime, the suggested algorithm is compared with the RIS algorithm [5, 6]. The simulated network ranges from 50 to 100 nodes. Initially, the lifetime under both methods is 100%; after each time interval, the residual lifetime decreases. By the end of the simulation, the residual lifetime of the existing algorithm has fallen to 55.91%, while that of the proposed algorithm is 90.7%. Similarly, over the entire time interval the residual lifetime of the proposed algorithm is higher; this is shown in Table 2 and represented graphically in Fig. 11. During setup, the energy consumption of the existing algorithm is 0 joules and that of the proposed algorithm is 2.51 joules, because the proposed setup needs some energy for the nodes to provide their information to the base station to maintain a table. Later, energy is consumed gradually in the network. By the end of the simulation, the existing algorithm has consumed 44.16 joules of energy, while the proposed algorithm

Table 2 Lifetime and time interval comparison

Time interval | % of lifetime, existing global barrier coverage | % of lifetime, proposed algorithm
0 | 100 | 100
10 | 93.1 | 98.11
20 | 85.3 | 96.12
30 | 78.78 | 94.92
40 | 76.10 | 93.85
50 | 72.12 | 93.51
60 | 68.25 | 93.50
70 | 62.96 | 93.37
80 | 59.92 | 90.91
90 | 55.91 | 90.7


Fig. 11 Remaining lifetime

Table 3 Comparison of energy consumption

Time interval | Energy consumption (J), existing algorithm | Energy consumption (J), proposed algorithm
0 | 0 | 2.51
10 | 7.3 | 4.2
20 | 16.1 | 4.69
30 | 21.15 | 5.82
40 | 23.91 | 6.62
50 | 27.92 | 6.69
60 | 31.85 | 7.69
70 | 37.21 | 8.61
80 | 40.17 | 9.3
90 | 44.16 | 10.31

consumes only 10.31 joules. Similarly, at every time interval the energy consumption of the proposed algorithm is lower, as described in Table 3 and shown in Fig. 12.

Fig. 12 Energy consumption

6 Conclusion

In many applications over the last decade, the role of wireless sensor networks cannot be underestimated. Over the years there have been proposals to implement energy management systems aimed at extending the lifetime of the sensor nodes and the overall network, but the amount of energy needed by the sensors remains a challenge. In this article, a novel scheduling algorithm is simulated that keeps the network alive much longer when fully implemented. Not all the nodes in the network are active at all times, only a few; the energy is therefore distributed evenly, which leads to lower energy consumption and increases the lifetime. In simulation, the new algorithm is six times better than the current algorithm. Our work may have opened up several research questions by enabling the development of coverage algorithms. The limitation of the proposed work is that the results are obtained from a simulation model; small differences may occur in a real-time working model.

References

1. Jayaweera S (2006) Virtual MIMO-based cooperative communication for energy-constrained wireless sensor networks. IEEE Trans Wireless Commun 5(5):984–989
2. Hong Y-W, Scaglione A (2006) Energy-efficient broadcasting with cooperative transmissions in wireless sensor networks. IEEE Trans Wireless Commun 5(10):2844–2855
3. Nagarajan M, Karthikeyan S (2012) A new approach to increase the life time and efficiency of wireless sensor network. In: IEEE international conference on pattern recognition, informatics and medical engineering (PRIME), pp 231–235
4. Deng Y, Hu Y (2010) A load balance clustering algorithm for heterogeneous wireless sensor networks. In: E-Product E-Service and E-Entertainment (ICEEE), international conference on, November, IEEE, pp 1–4
5. Valera C, Soh W-S, Tan H-P (2013) Energy-neutral scheduling and forwarding in environmentally-powered wireless sensor networks. Ad Hoc Netw 11(3):1202–1220
6. Slijepcevic S, Potkonjak M (2001) Power efficient organization of wireless sensor networks. In: Proceedings 2001 IEEE international conference on communications, vol 2, pp 472–476
7. Ezhilarasi M, Krishnaveni V (2019) An evolutionary multipath energy-efficient routing protocol (EMEER) for network lifetime enhancement in wireless sensor networks. Soft Comput. https://doi.org/10.1007/s00500-019-03928-1
8. Itoh H, Yong Y-K (2000) An analysis of frequency of a quartz crystal tuning fork by Sezawa's approximation and Winkler's foundation of the supporting Elinvar alloy wire. In: Proceedings IEEE/EIA international frequency control symposium exhibition, June, pp 420–424
9. Poornimha J, Senthil Kumar AV (2019) An enhanced design of AODV protocol to increase the energy consumption in the MANET. Int J Res 8(6):71–77. ISSN 2236-6124
10. Kacimi R, Dhaou R, Beylot AL (2013) Load balancing techniques for lifetime maximizing in wireless sensor networks. Ad Hoc Netw 11(8):2172–2186
11. Yuvaraja M, Sabrigiriraj M (2015) Lifetime enhancement in wireless sensor networks with fuzzy logic using SBGA algorithm. ARPN J Eng Appl Sci 3126–3132
12. AlShawi IS, Yan L, Pan W, Luo B (2012) Lifetime enhancement in wireless sensor networks using fuzzy approach and A-star algorithm. IEEE Sens J 12(10):3010–3018
13. Kalaiselvi P, Priya B (2017) Lifetime enhancement of wireless sensor networks through energy efficient load balancing algorithm. Int J Future Innov Sci Eng Res (IJFISER) 1(IV):12–22
14. Chang JH, Tassiulas L (2004) Maximum lifetime routing in wireless sensor networks. IEEE/ACM Trans Netw 12(4):609–619
15. Carle J, Simplot-Ryl D (2004) Energy-efficient area monitoring for sensor networks. Computer 37(2):40–46
16. Pantazis NA, Nikolidakis SA, Vergados DD (2013) Energy-efficient routing protocols in wireless sensor networks: a survey. IEEE Commun Surveys Tutor 15(2):551–591
17. Munusamy N, Srinivasan K (2017) Various node deployment strategies in wireless sensor network. IPASJ Int J Comput Sci (IIJCS) 5(8):039–044. ISSN 2321-5992
18. Ezhilarasi M, Krishnaveni V (2021) A survey on wireless sensor network: energy and lifetime perspective. Taga J 14:3099–3113. ISSN 1748-0345
19. Mugen P et al (2014) Heterogeneous cloud radio access networks: a new perspective for enhancing spectral and energy efficiencies. IEEE Wireless Commun 21(6):126–135
20. Sahoo A, Chilukuri S (2010) DGRAM: a delay guaranteed routing and MAC protocol for wireless sensor networks. IEEE Trans Mob Comput 9(10):1407–1423
21. Michael B et al (2006) X-MAC: a short preamble MAC protocol for duty-cycled wireless sensor networks. In: Proceedings of the 4th international conference on embedded networked sensor systems
22. Alper K et al (2014) Effects of transmit-based and receive-based slot allocation strategies on energy efficiency in WSN MACs. Ad Hoc Netw 13:404–413
23. Mary C, Nair TR (2014) Priority based bandwidth allocation in wireless sensor networks. arXiv preprint arXiv:1412.8107
24. Mugunthan SR (2020) Novel cluster rotating and routing strategy for software defined wireless sensor networks. J ISMAC 2(02):140–146
25. Haoxiang W, Smys S (2020) Soft computing strategies for optimized route selection in wireless sensor network. J Soft Comput Paradigm (JSCP) 2(01):1–12

Energy Dissipation Analysis in Micro/Nanobeam Cantilever Resonators Applying Non-classical Theory

R. Resmi, V. Suresh Babu, and M. R. Baiju

Abstract The development of low-cost, energy-efficient cellular communication systems is essential due to the ubiquitous use of mobile technology. To overcome the scarcity of energy resources, low-loss component design in transceivers is an emerging trend, and many research works are focused on it. RF transceivers can be designed with MEMS resonators and switches, which have the advantages of energy efficiency and reduced fabrication costs. Thermoelastic damping is a major energy dissipation mechanism which limits the maximum attainable quality factor (QTED) of structures at microscales, an important design parameter. When devices are scaled down, non-classical elasticity theories like the Modified Couple Stress Theory are essential to accurately model micro/nanoscale resonators. By including a material length scale parameter (l), the size effects are incorporated in the analysis, and the quality factor is found to be enhanced by increasing l. In this paper, the conventional thermoelastic damping analysis is modified by applying the Modified Couple Stress Theory, and the impacts of the length scale parameter on energy dissipation and QTED are analyzed for different structural materials. The maximum QTED was attained for beams with polySi as the structural material at the highest material length scale, and the lowest was for SiC. Vibrating cantilever micro/nanobeams with properly selected material length scale parameters in higher modes provide large QTED values, which can be utilized for designing low-loss MEMS components in RF transceivers.

Keywords Micro/nanocantilever beam resonators · Size effects · Energy dissipation · Thermoelastic damping limited quality factor · Modified couple stress theory · Material length scale parameter

R. Resmi (B) University of Kerala, LBS Institute of Technology for Women, Poojappura, India. V. Suresh Babu APJ Abdul Kalam Technological University, Government Engineering College, Wayanad, Kerala 670644, India. M. R. Baiju University of Kerala, Kerala Public Service Commission, Thiruvananthapuram 695004, Kerala, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_41


1 Introduction

Microelectromechanical (MEMS)/nanoelectromechanical (NEMS) devices are ubiquitous nowadays, with noteworthy commercial interest owing to high-impact applications in communication and related fields [1]. The size of microdevices is between 1 mm and 1 μm, whereas nanodevices have a length scale below 1 μm (1–100 nm) [2, 3]. MEMS-based structures are utilized for implementing filters and switches in transceivers, with the surplus advantages of low cost due to batch fabrication, compatibility with existing Si technology in integrated circuit manufacturing, and integration with microelectronics [4]. The major breakthrough in wireless communication technology entailed the necessity of using energy-efficient components in cellular networks. The high operating cost and lack of energy-efficient resources in the macro world paved the way for exploiting MEMS/NEMS devices such as resonators and switches in the design of transceivers for mobile communication systems [5]. The transceiver is the most energy-consuming component of wireless nodes. High-performance radio frequency (RF) MEMS/NEMS structures can provide enhanced functionality and high efficiency to the systems in which they are integrated. MEMS/NEMS-based RF switches are used for band selection and switching at the antenna or within different RF paths of mobile networks [6]. MEMS/NEMS resonators can supersede conventional resonant elements, even though some challenges remain, such as the damping mechanisms which lower the quality factor [7]. The quality factor (QF) is an important performance metric of the resonators used in communication systems, as it characterizes the energy loss due to the various energy dissipation mechanisms [1]. Most systems exhibit several energy loss mechanisms, and thermoelastic damping is a decisive one which deteriorates the QF of the system and thereby affects sensitivity, resolution and reliability [8]. The maximum achievable QF in the resonator is limited by thermoelastic damping and is denoted by QTED. The existence of TED as a prominent energy loss mechanism in homogeneous, isotropic, Euler–Bernoulli micro-beams was identified by Zener in 1937 [9]. An exact closed-form expression for TED in slender beams was derived by Lifshitz and Roukes [10]. Based on classical continuum theories, the analysis of TED in micro/nanobeams is insufficient to predict the size-dependent mechanical behavior, due to the lack of a material length scale parameter. Higher-order continuum theories can be used to estimate the size dependencies at micron/submicron levels [11, 12]. Modified strain gradient theory is another non-classical theory; it involves three material length scale parameters and much computational complexity, so it is rarely used [13, 14]. In the Modified Couple Stress Theory (MCST), size effects are included, but only a single material length scale parameter (l) is involved to investigate the mechanical behavior of microstructures with less energy dissipation [15, 16]. In our study, five different structural materials were selected for the vibrating beams based on their mechanical, thermal and optical characteristics. The material length scale parameter (l) of the modified couple stress theory is not a constant for a particular material but varies as the size of the structure changes. The main advantage of using MCST is its reduced complexity and enhanced computational efficiency, due to the presence of a single parameter. In our study, a cantilever beam-based structure [4] with flexural mode vibrations was characterized by bending of the structure along its length (L). The cantilever beam was clamped at one end and free at the other, with the maximum deflection obtained at the tip of the beam farthest from the clamped end. For the development of microstructures with less energy dissipation, the size effects were also included by properly selecting length scale parameters with different values. The QTED of the device can be improved to a great extent by increasing the length scale parameter. This work aims to mitigate thermoelastic damping energy loss by applying the modified couple stress theory through optimizing the structural material and the internal length scale (l) parameter. This manuscript is organized as follows: higher-order theories with length scale parameters based on variational principles are analyzed. Section 2 presents the equations for QTED, derived from the equations of motion and coupled thermoelasticity for both the classical and non-classical (MCST) theories [11]. In Sect. 3, numerical analyses demonstrate the effect of the length scale parameter, using MCST, on the energy dissipation of beams with different structural materials. The thermal and mechanical properties of the structural materials affect the quality factor of the resonator [17]. The impact of the size effect with the length scale parameter on energy dissipation and quality factor was numerically simulated using MATLAB 2015 for a cantilever beam under plane stress conditions. Section 4 includes concluding remarks.

2 Expression for Thermoelastic Damping Limited Quality Factor To derive the expression for the thermoelastic damping limited quality factor applying the Modified Couple Stress Theory (MCST), the total strain energy was derived in terms of its stress components for the plane stress condition. The governing equations of motion were obtained from the variational principles of the total energy after applying MCST, and thus the impact of size effects was also included. The expression for the thermoelastic damping limited quality factor was derived from the equations of motion and the coupled heat conduction equations [11].

2.1 Stress Field in a Vibrating Cantilever Microbeam According to the Euler–Bernoulli model of a beam, the displacements in the x and z directions are given by Rezazadeh et al. [11],


The longitudinal displacement $u_x(x, z, t)$ in terms of the transversal displacement $u_z(x, t)$ is

$$u_x = -z\,u_{z,x} \tag{1}$$

During vibrations of the beam, the temperature and elastic fields couple, and as a result deformations with both thermal and mechanical components arise [11]. The elastic properties, such as Young's modulus ($E$) and Poisson's ratio ($v$), and the thermal properties, such as the thermal expansion coefficient ($\alpha$), relate the mechanical and thermal strain components of the beam. Considering both the thermal and mechanical components of strain, the total strain field is

$$\varepsilon_{ij} = \frac{1+v}{E}\,\sigma_{ij} + \left(\alpha\vartheta - \frac{v}{E}\,\sigma_{kk}\right)\delta_{ij} \tag{2}$$

where $\sigma_{ij}$ and $\delta_{ij}$ are the stress tensor and the Kronecker delta function, respectively, $\alpha$ is the thermal expansion coefficient, $E$ is Young's modulus, and $v$ is Poisson's ratio.

Plane Stress Condition. The displacement field of the beam according to the Euler–Bernoulli theory is given by

$$u_x = -z\,u_{z,x} \tag{3}$$

The nonzero strain and stress tensor components in terms of the displacement field are

$$\varepsilon_{xx} = u_{x,x} = -z\,u_{z,xx}, \qquad \varepsilon_{yy} = \varepsilon_{zz} = v\,z\,u_{z,xx} + \alpha(1+v)\vartheta, \qquad \sigma_{xx} = -E\left(z\,u_{z,xx} - \alpha\vartheta\right) \tag{4}$$

2.2 Thermoelastic Damping Limited Quality Factor by Applying MCST Considering thermoelastic damping, the isothermal value of $s$ in the complex frequency expression is

$$s_{iso} = \pm i\,\omega_{iso} \tag{5}$$

where

$$\omega_{iso} = \left(\frac{a_n}{L}\right)^{2}\sqrt{\frac{(EI)_{eq}}{\rho A}} \tag{6}$$

where $a_n$ is a boundary condition constant and $n$ represents the vibrating mode number:

$$a_n = 3.52,\ 4.694,\ 7.855,\ \ldots \quad \text{for a cantilever beam, for } n = 1, 2, 3, \ldots \tag{7}$$

The value of $a_n$ considered for the analysis was 7.855, since the higher (third) vibrating mode was chosen. According to the complex frequency approach, the inverse of the QF due to TED can be obtained through

$$Q^{-1} = 2\left|\frac{\Re(s)}{\Im(s)}\right| \tag{8}$$

The equation for the quality factor in classical form is

$$Q_{CT}^{-1} = \frac{\Delta R}{1+\Delta}\left[\frac{6}{K^{2}} - \frac{6}{K^{3}}\,\frac{\sinh K + \sin K}{\cosh K + \cos K}\right] \tag{9}$$

where $\Delta R = \dfrac{E\alpha^{2} T_{0}}{\rho c_{v}}$ is the relaxation strength and $\Delta = 2\Delta R\,\dfrac{1+v}{1-2v}$, where $v$ is the Poisson's ratio. Under the plane stress condition,

$$Q_{MCST}^{-1} = \frac{\Delta R}{\lambda(1+\Delta)}\left[\frac{6}{K^{2}} - \frac{6}{K^{3}}\,\frac{\sinh K + \sin K}{\cosh K + \cos K}\right] \tag{10}$$

in which

$$\lambda = \frac{(EI)_{eq}}{EI}, \qquad (EI)_{eq} = \frac{EI}{1-v^{2}} + \mu A l^{2}, \qquad K = h\sqrt{\frac{(1+\Delta)\,\omega_{iso}}{2D}}, \qquad \omega_{iso} = \left(\frac{a_n}{L}\right)^{2}\sqrt{\frac{(EI)_{eq}}{\rho A}}$$

($\lambda$ represents the rigidity ratio, $(EI)_{eq}$ is the equivalent stiffness, $\mu$ denotes the shear modulus of the vibrating beam, and $D$ is the thermal diffusivity).
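As a quick numerical illustration of Eqs. (6)–(10), the following is a minimal Python sketch (the paper's own simulations used MATLAB 2015). The silicon material constants used here and the reading of $D$ as the thermal diffusivity $\kappa/(\rho c_v)$ are representative assumptions for the sketch, not values fixed by the paper.

```python
import numpy as np

# Representative single-crystal Si properties (textbook values, assumed for illustration).
E, v, rho = 169e9, 0.22, 2330.0           # Young's modulus (Pa), Poisson ratio, density (kg/m^3)
alpha, cv, kappa = 2.6e-6, 713.0, 148.0   # CTE (1/K), specific heat (J/kg K), conductivity (W/m K)
T0, L, W = 298.0, 200e-6, 10e-6           # temperature (K), beam length and width (m)
an, l = 7.855, 1e-6                       # third-mode constant, material length scale (m)
mu = E / (2.0 * (1.0 + v))                # shear modulus

def q_inv_mcst(h):
    """Energy dissipation Q^-1 of Eq. (10) for beam thickness h under plane stress."""
    I, A = W * h**3 / 12.0, W * h                     # second moment of area, cross-section
    EI_eq = E * I / (1.0 - v**2) + mu * A * l**2      # equivalent stiffness with the MCST term
    lam = EI_eq / (E * I)                             # rigidity ratio lambda
    dR = E * alpha**2 * T0 / (rho * cv)               # relaxation strength Delta_R
    d = 2.0 * dR * (1.0 + v) / (1.0 - 2.0 * v)        # Delta
    w_iso = (an / L)**2 * np.sqrt(EI_eq / (rho * A))  # isothermal frequency, Eq. (6)
    D = kappa / (rho * cv)                            # thermal diffusivity (assumed meaning of D)
    K = h * np.sqrt((1.0 + d) * w_iso / (2.0 * D))
    bracket = 6.0 / K**2 - (6.0 / K**3) * (np.sinh(K) + np.sin(K)) / (np.cosh(K) + np.cos(K))
    return dR / (lam * (1.0 + d)) * bracket

for h in (0.5e-6, 1e-6, 2e-6, 5e-6):
    print(f"h = {h * 1e6:4.1f} um  ->  Q^-1 = {q_inv_mcst(h):.3e}")
```

Setting l = 0 removes the couple-stress contribution, which corresponds to the classical-theory (CT) curves shown in the figures below.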


3 Results and Discussions Most applications of MEMS/NEMS resonators utilize the mechanical and thermal properties of MEMS materials. For validating the analytical expression for QTED, micro/nanocantilever beam structures of length L = 200 μm and width W = 10 μm, vibrating in the third mode and using five different structural materials, were examined at a temperature of T0 = 298 K. The five materials were selected based on their unique advantages, such as easy availability (Si, polySi) and unique mechanical (diamond), thermal (SiC), and optical (GaAs) characteristics. Numerical simulations of the analytical expressions were done with MATLAB 2015. In this work, size-dependent energy dissipation analyses of microcantilever beams applying nonclassical elasticity theory, with five different structural materials, were performed to investigate the quality factor. The Modified Couple Stress Theory (MCST) was used for the analysis since it contains only one length scale parameter. An accurate analysis of energy dissipation was performed by incorporating a material length scale parameter (l), and the impact was studied by assigning different values to it. Figures 1, 2 and 3 show the variation of energy dissipation of cantilever beams with thickness, using the five structural materials, for l = 0.2, 0.5, and 1 μm, respectively. The energy dissipation analysis for a cantilever beam with the same dimensions and environmental conditions was also done based on classical


Fig. 1 Variation of energy dissipation (Q −1 ) with thickness for a cantilever microbeam resonator (L = 200 μm, width W = 10 μm) under plane stress condition; mode (m, n) with m = 3 and n = 3; l = 0.2 μm, operating temperature, T 0 = 298 K; using five different structural materials a polySi, b diamond, c Si, d GaAs, and e SiC


Fig. 2 Variation of energy dissipation (Q −1 ) with thickness for a cantilever microbeam resonator (L = 200 μm, width W = 10 μm) under plane stress condition; mode (m, n) with m = 3 and n = 3; l = 0.5 μm, operating temperature, T 0 = 298 K using five different structural materials a polySi, b diamond, c Si, d GaAs, and e SiC

theory, i.e., l = 0. In Fig. 1, the difference between the energy dissipations based on the classical and nonclassical theories is very small. From Figs. 2 and 3, as the value of l increases, the discrepancy between the two theories also increases and the energy loss diminishes. The minimum energy dissipation was obtained for l = 1 μm, and this finding applies to all five structural materials. The energy dissipation of the beam resonator with different structural materials and l = 0.5 μm is illustrated in Fig. 2. As the material length scale parameter increases, the energy loss decreases, as depicted in Fig. 3. The thermoelastic damping (TED) limited energy dissipation of all five materials is given in Table 1. When a material length scale (l) was included in the analysis, QTED was also enhanced, as shown in Table 1. The single material length scale parameter in MCST was found to be sufficient for capturing the size effect and the associated deformation behavior of the vibrating beams. The energy dissipation was found to be highest for SiC (8.73E-05) even with material length scale parameter l = 1 μm. The energy dissipation was found to be both material and size dependent. The order in which the energy dissipation declines by material is SiC > GaAs > Si > diamond > polySi. The lowest energy loss was attained for cantilever beam resonators using polySi (3.90E-05) as the structural material with the highest length scale parameter. When a material length scale (l) is included in the analysis, QTED increases, as shown in Table 2. As l increases, QTED also increases and reaches QTEDMAX when l becomes 1 μm. The maximum percentage change (183.99%) in QTED is



Fig. 3 Variation of energy dissipation (Q−1) with thickness for a cantilever microbeam resonator (L = 200 μm, width W = 10 μm) under plane stress condition; mode (m, n) with m = 3 and n = 3; l = 1 μm; operating temperature, T0 = 298 K; using five different structural materials a polySi, b diamond, c Si, d GaAs, and e SiC

Table 1 Energy dissipation of cantilever microbeam resonators (L = 200 μm, width W = 10 μm); mode (m, n) with m = 3 and n = 3; operating temperature, T0 = 298 K; using five different structural materials

| l (μm) | PolySi | Diamond | Si | GaAs | SiC |
|---|---|---|---|---|---|
| 0 | 7.07E-05 | 9.51E-05 | 8.28E-05 | 1.43E-04 | 1.84E-04 |
| 0.2 | 6.87E-05 | 9.24E-05 | 8.16E-05 | 1.40E-04 | 1.77E-04 |
| 0.5 | 5.94E-05 | 7.99E-05 | 7.55E-05 | 1.25E-04 | 1.47E-04 |
| 1 | 3.90E-05 | 5.25E-05 | 5.89E-05 | 9.05E-05 | 8.73E-05 |

Table 2 Thermoelastic damping limited quality factor of cantilever microbeam resonators (L = 200 μm, width W = 10 μm); mode (m, n) with m = 3 and n = 3; operating temperature, T0 = 298 K; using five different structural materials

| l (μm) | PolySi | Diamond | Si | GaAs | SiC |
|---|---|---|---|---|---|
| 0 | 14,139.27183 | 10,510.27379 | 12,071.46306 | 7006.7265 | 5436.85098 |
| 0.2 | 14,561.12761 | 10,828.25308 | 12,258.5074 | 7159.2211 | 5653.55043 |
| 0.5 | 16,828.21756 | 12,518.15132 | 13,252.58094 | 7973.8458 | 6818.49175 |
| 1 | 25,627.88314 | 19,041.81583 | 16,965.25516 | 11,052.777 | 11,448.7212 |


obtained between l = 0 μm and l = 1 μm for a clamped–clamped beam under the plane stress condition vibrating in the third mode. The order of materials in which the maximum percentage difference due to the size effect is obtained is SiC > diamond > polySi > GaAs > Si. The minimum percentage difference (0.42%) in QTED is obtained between l = 0.2 μm and l = 0.5 μm. The thermoelastic damping limited quality factor (QTED) is a measure of the thermoelastic-damping-related energy loss of the structures and is inversely related to TED. Table 2 shows the QTED of all five structural materials. As l increases, QTED also increases and reaches QTEDMAX when l becomes 1 μm. The maximum QTED (25,627.88) is attained for micro/nanobeams with polySi as the structural material, and the lowest for SiC. To enhance QTED, the impact of the material length scale parameter was also investigated, as in Table 2. The maximum percentage change (110.77%) in QTED is obtained between l = 0 μm and l = 1 μm for a SiC-based cantilever beam under the plane stress condition vibrating in the third mode.

4 Conclusion To design high-performance, energy-efficient transceivers in mobile communication systems, MEMS/NEMS-based components are essential. In this paper, a size-dependent energy dissipation analysis in micro/nanocantilever beams applying the nonclassical elasticity theory (MCST) was performed to analyze the quality factor. According to this work, the maximum QTED was attained for beams with polySi as the structural material, and the lowest for SiC. The order in which the energy dissipation declines by material is SiC > GaAs > Si > diamond > polySi. To further diminish the energy loss, material length scale parameters were introduced; they enhance the performance of the structures by reducing losses, and QTED was found to be enhanced. The impact of the material length scale parameter was investigated by incorporating different values and was verified to be very effective according to our simulation results. Thin cantilever beams with a properly selected structural material and higher length scale parameters, vibrating in higher modes, provide smaller energy dissipation and help engineers design microbeam components with high quality factors. The betterment of mobile communication systems with low energy dissipation can be explored with various other structural materials in vibrating beams having different length scale parameters.

References

1. Vengallatore S (2005) Analysis of thermoelastic damping in laminated composite micromechanical beam resonators. J Micromech Microeng 2398–2404
2. Nirmal D (2019) High performance flexible nanoparticles based organic electronics. J Electron Info 1(1):99–106


3. Raj JS, Vijitha Ananthi J (2019) Vision intensification using augmented reality with metasurface application. J Inf Technol 1(02):87–95
4. Yang J, Ono T, Esashi M (2002) Energy dissipation in submicrometer thick single-crystal silicon cantilevers. J Microelectromech Syst 11(6):775–783
5. Ogbebor JO, Imoize AL, Atayero AA (2020) Energy efficient design techniques in next-generation wireless communication networks: emerging trends and future directions. Wireless Commun Mobile Comput 7235362:19. https://doi.org/10.1155/2020/7235362
6. Arathy US, Resmi R (2015) Analysis of pull-in voltage of MEMS switches based on material properties and structural parameters. In: 2015 International conference on control, instrumentation, communication and computational technologies (ICCICCT), Kumaracoil, pp 57–61. https://doi.org/10.1109/ICCICCT.2015.7475249
7. Abdolvand R, Bahreyni B, Lee JE-Y, Nabki F (2016) Micromachined resonators: a review. Micromachines 7:160. https://doi.org/10.3390/mi7090160
8. Florina S, Pustan M, Cristian D, Birleanu C, Mihai S (2020) Analysis of the thermoelastic damping effect in electrostatically actuated MEMS resonators. Mathematics 8:1124. https://doi.org/10.3390/math8071124
9. Zener C (1937) Internal friction in solids I. Theory of internal friction in reeds. Phys Rev 52(3):230
10. Lifshitz R, Roukes ML (2000) Thermoelastic damping in micro- and nanomechanical systems. Phys Rev B 61(8):5600
11. Rezazadeh G, Vahdat AS, Tayefeh-Rezaei S et al (2012) Thermoelastic damping in a microbeam resonator using modified couple stress theory. Acta Mech 223:1137–1152. https://doi.org/10.1007/s00707-012-0622-3
12. Parayil DV, Kulkarni SS, Pawaskar DN (2015) Analytical and numerical solutions for thick beams with thermoelastic damping. Int J Mech Sci 94:10–19
13. Yang F, Chong A, Lam DCC, Tong P (2002) Couple stress based strain gradient theory for elasticity. Int J Solids Struct 39(10):2731–2743
14. Kandaz M, Dal H (2018) A comparative study of modified strain gradient theory and modified couple stress theory for gold microbeams. Springer Nature, pp 1418–1436
15. Vahid B, Mohsen A (2019) Size-dependent analysis of thermoelastic damping in electrically actuated microbeams. Mech Adv Mater Struct 1–11. https://doi.org/10.1080/15376494.2019.1614700
16. Shaat M, Mohamed S (2014) Nonlinear-electrostatic analysis of micro-actuated beams based on couple stress and surface elasticity theories. Int J Mech Sci 84:208–217
17. Resmi R, Baiju MR, Suresh Babu V (2019) Thermoelastic damping dependent quality factor analysis of rectangular plates applying modified coupled stress theory. In: AIP conference proceedings, vol 2166, p 020029. https://doi.org/10.1063/1.5131616

An Integrated Technique for Security of Cellular 5G-IoT Network Healthcare Architecture Manoj Verma, Jitendra Sheetlani, Vishnu Mishra, and Megha Mishra

Abstract Security is possibly the most significant networking concern. A breach is not only potentially detrimental in terms of monetary penalties but also creates other, more pressing problems concerning customer loyalty, social trust, and personal protection. Networking today is focused on IoT and wireless; their performance and real value originate from the development of services on top of the IoT devices linked to them, including remote and unattended devices. As IoT and wireless networks add more data and more devices in more locations, they may introduce additional security issues in the 5G age. Every industry can be transformed with cellular IoT. Networking is a prime requirement for every sector, and four IoT segments for multipurpose use can be served by one 5G network: massive IoT, broadband IoT, critical IoT, and industrial automation IoT. This research offers a straightforward roadmap for development and addresses the cases of 5G IoT in a cost-effective manner, from the simplest to the most complex. It also offers knowledge that serves as evidence for the future. This overview introduces IoT and wireless communications to enhance node security. Preliminary definitions of IoT and its security questions are presented in this report. Keywords Security · Privacy · Security threats · Architecture of wireless-enabled IoT · IOT 5G

M. Verma Computer Science and Engineering, Sri Satya Sai University of Technology and Medical Sciences, Madhya Pradesh, Sehore 466001, India J. Sheetlani Department of Computer Science and Engineering, Sri Satya Sai University of Technology and Medical Sciences, Madhya Pradesh, Sehore 466001, India V. Mishra (B) · M. Mishra Department of Science and Engineering, Shri Shankaracharya Technical Campus, CSVTU University, Bhilai, Chhattisgarh 491003, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_42


1 Introduction The Internet has become universal today and has reached almost every corner of the globe, impacting human life in unimaginable ways. However, the journey is endless: networking is entering an age of much more ubiquitous connectivity, where the Web will be connected to a very large range of appliances known as the "Internet of Things" (IoT). Cellular networks connect things to things and things to people across borders. The benefits of cellular IoT have been enjoyed by many industries, such as the consumer electronics, automobile, and rail sectors. In 2020, there were over one billion cellular IoT connections, and Ericsson expects over five billion connections by 2025 [1]. With 5G on the market, almost every company is exploring the value of cellular connectivity for businesses that are fundamentally changing. In some regions, governments are promoting IoT adoption through direct and indirect incentives to encourage sustainability, creativity, and growth. The term has been interpreted in several different ways by numerous writers. Peña-López et al. [1] characterize the Internet of Things by the capability to query the state of an object and, if possible, to alter that state. In popular parlance, the Internet of Things refers to a new kind of world where almost all gadgets and appliances are connected to a network, and the networks are used to accomplish difficult tasks that require a high level of intelligence. The Internet of Things is described by Vermesan et al. [2] as simply an interface between the physical and digital worlds: using a wealth of sensors and actuators, the digital world communicates with the real world. Data storage and processing can be performed at the edge of the network itself or on a remote server. If some data preprocessing is necessary, it is normally performed on either the sensor or a nearby device. The new computing concept Edge-of-Things (EoT) is a paradigm that brings computing power closer to the devices (e.g., IoT gateways). The EoT layer sits much closer to the IoT devices and plays the role of a middle computing layer between cloud computing and the IoT devices [3]. The EoT layer is useful not only for simple transmission features; within a local smart community domain, it can also perform smart decision-making and real-time analytical services. Besides, healthcare data are stored in cloud storage for more global data processing via the EoT layer. Figure 1 demonstrates the architecture of our smart healthcare surveillance secure EoT system. Such a system makes it possible to identify and treat diseases at an early stage, which potentially minimizes the harm caused by diseases and prolongs many lives. Machine learning methods, such as clustering-based algorithms with the potential to detect irregular patterns, are commonly used to identify patients and classify bio-signal data according to specific health conditions. This paper explores IoT-based research patterns and also uncovers different problems that need to be tackled to transform devices and technologies, as follows:

1. Classification of current IoT-based studies for the healthcare network into three patterns and presentation of a review of each one.


Fig. 1 IoT security layer devices architecture for 5G

2. Providing an extensive survey of utilities and applications focused on IoT.
3. Highlighting different attempts to adopt healthcare products and prototypes compliant with the IoT.
4. Providing detailed insights into the protection and privacy concerns of IoT healthcare solutions and proposing a security model.
5. Discussing key innovations that can reshape IoT-based healthcare technologies.
6. Highlighting different policies and initiatives that can help researchers and policymakers realistically incorporate IoT innovation into healthcare technologies.
7. Presenting challenges and open problems that need to be solved to realize healthcare technologies focused on IoT.

2 5G IOT Security Device and Component The IoT is rapidly growing, and the network industry will become sensor-based in the next few years. It is expected that 5G IoT devices and applications will deal with vital private information such as personal data. Such smart devices can also be connected anytime, anywhere to global information networks. The domain of IoT devices can therefore become a priority target for attackers in the next generations. Although there is no clear demonstration of the incorporation of these devices into other cloud networks, many other portable IoT devices are available; it is only a matter of time before IoT functions become embedded in them. The number of 5G-era applications and facilities is increasing worldwide. 5G devices can be categorized into four types:

i. Stationary devices—devices used at a specific physical location (e.g., a local IoT device)
ii. Monitoring devices—devices that control and monitor items (e.g., sensors, smart artificial intelligence, etc.)
iii. Embedded devices—gadgets that can be embedded inside the body (e.g., pacemakers)
iv. Wearable devices—devices endorsed by specialists (e.g., personal devices)

5G and IoT network implementations focus on supporting real-time network intelligence, tracking, and edge analysis. As telecommunication carriers plan their fifth-generation (5G) network implementations, this subject is time-critical. The promise of faster speeds, lower latency, and greater throughput has arrived to provide enhanced digital experiences and to further facilitate the continuous growth of communication and interactions with the Internet of Things (IoT).

3 Healthcare Security 5G IOT Protocol Massive IoT based on Cat-M or NB-IoT and broadband IoT based on LTE are both supported by today's 4G networks. Massive IoT is growing thanks to Cat-M and NB-IoT connectivity on 5G-enabled networks, and broadband IoT is improving thanks to the introduction of 5G radio and core networks. Critical IoT will be enabled by 5G networks, which will have powerful, ultra-reliable, and/or ultra-low-latency capabilities for time-critical communications. The four types of IoT networking will coexist. Some devices, such as an autonomous vehicle with complex specifications [4], may require multiple IoT communication segments to complete one or more use cases.

3.1 Protocol for Restricted Application The constrained application protocol (CoAP), designed by the Internet Engineering Task Force (IETF) [7], is a synchronous request/response protocol. It was modeled on HTTP and supports a subset of HTTP methods, but CoAP communicates over UDP. Unlike the multicast-free TCP [5, 6], UDP-based application layer protocols reduce bandwidth requirements and enable both multicast and unicast. CoAP's aim is to support resource-constrained users like phones, tablets, laptops, and low-power devices [8].


3.1.1 Message Queuing Telemetry Transport (MQTT)

MQTT works on top of TCP [7]. It is an asynchronous publish/subscribe mechanism that reduces both the device's bandwidth and its computation requirements; MQTT is designed to satisfy low-bandwidth and battery-consumption constraints. Facebook's messenger uses the MQTT protocol. The MQTT protocol reuses security features of various other protocols. MQTT may achieve lower delays, while CoAP achieves lower packet losses and greater reliability by offering the option of using quality of service (QoS).
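As a concrete illustration of the publish/subscribe pattern described above, the following minimal Python sketch uses the paho-mqtt client (version 1.x API, an assumed third-party package rather than anything prescribed by this paper); the broker address and topic name are placeholders.

```python
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.org", "hospital/ward1/pulse"  # hypothetical broker/topic

def on_connect(client, userdata, flags, rc):
    # Subscribe once the session is up; rc == 0 means a successful connection.
    client.subscribe(TOPIC, qos=1)

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.tls_set()                        # MQTT has no crypto of its own; TLS secures the channel
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 8883)            # 8883 is the conventional MQTT-over-TLS port
client.publish(TOPIC, "72 bpm", qos=1)  # QoS 1: at-least-once delivery
client.loop_forever()
```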

3.1.2 Extensible Messaging and Presence Protocol (XMPP)

The XMPP protocol has TLS or SSL support built into its core specifications [9]. The absence of QoS alternatives makes it impractical for M2M communications; despite this inadequacy, XMPP has recently regained significance as an acceptable protocol for the IoT [10].

3.1.3 Bluetooth-Based Communication Layer Protocol

Bluetooth is a short-range connectivity technology whose robustness, low power, and minimum price make it perfect for a broad variety of devices, ranging from mobile phones and computers to medical devices and home entertainment products [11]. The latest version of Bluetooth is Bluetooth Low Energy (BLE), or Bluetooth 4.x. BLE extends the regular Bluetooth 4.0 edition for ultra-low-energy operation through the general practice of long low-power inactive periods, which is intrinsically hard for a frequency-hopping technology. It is also significant to note that BLE's power consumption is reported to be 20 times lower than earlier versions; because of this very low energy consumption, the maximum data rate for BLE is 100 kbps, which is much lower than classic Bluetooth with EDR mode [12].

3.1.4 Advanced Message Queuing Protocol (AMQP)

In the financial sector, the advanced message queuing protocol (AMQP) is often used; JPMorgan sends one billion messages a day using AMQP [13, 14]. Operating over the TCP protocol, AMQP has fundamental reliability features and offers an asynchronous publish/subscribe messaging system. Research shows that as the bandwidth increases, the performance rate increases, and AMQP can send a greater number of messages per second compared to its rivals. This ensures continuity with message delivery guarantees. Protection in AMQP is managed


using the TLS or SSL feature. Within IoT system environments, everything from different network providers and producers is required to be connected to the Internet; this wide association raises concerns for the IoT network environment [15].
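For illustration, a minimal publisher using the pika 1.x client for AMQP 0-9-1 (an assumed package and broker setup, e.g., RabbitMQ with TLS on localhost, not something specified by the paper) might look like this:

```python
import ssl
import pika

# Assumed local RabbitMQ broker with TLS on port 5671; all names are placeholders.
params = pika.ConnectionParameters(
    host="localhost",
    port=5671,
    ssl_options=pika.SSLOptions(ssl.create_default_context()),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="ehealth.readings", durable=True)  # queue survives broker restarts
channel.basic_publish(
    exchange="",                        # default exchange routes by queue name
    routing_key="ehealth.readings",
    body=b'{"patient": 42, "spo2": 97}',
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
connection.close()
```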

3.1.5 LTE RACH Protocol

The key constituents of LTE RACH are time–frequency resources, called RA slots, which are allotted for the transmission of access requests. Each RA slot occupies 1.08 MHz, which matches the bandwidth of six physical resource blocks, while the duration of each RA slot in the time domain depends on the access requests. Two separate RA procedures exist in LTE; the first is fully contention based, and the second is contention free [16]. The contention-based RA mechanism, which is the key issue addressed in this article, consists of a four-message handshake between the UE and the eNodeB. Message one is the random access preamble broadcast [17, 18]. The preamble is the LTE access request; at any time, it carries the details that need to be submitted on the broadcast to the workstations. Currently, 64 orthogonal preambles are explicitly reserved for random access. These preambles are transmitted, according to the system configuration, using six resource blocks (RBs) on one or more subframes [14, 19]. Message two is the random access response (RAR): the eNodeB broadcasts this message for any successfully decoded preamble, and it contains the uplink resource grants to be used by the UE for the next message. Message three is the identification of the workstation: the UE transmits its identity in this message. Message four is the contention resolution: this message is broadcast by the eNodeB as a response to message three. A system that does not receive message four declares a failure to resolve the contention and schedules a new access attempt. In the extended access barring (EAB) technique, devices are grouped into a definite number of access classes (ACs) [20, 21].
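The contention step can be illustrated with a tiny Monte Carlo sketch (our own illustration, not part of the paper): each of N devices independently picks one of the 64 orthogonal preambles, and a device's message one succeeds only if its preamble was chosen by no one else.

```python
import random

def rach_success_rate(n_devices, n_preambles=64, trials=10_000):
    """Fraction of devices whose randomly chosen preamble is unique (message 1 succeeds)."""
    ok = 0
    for _ in range(trials):
        picks = [random.randrange(n_preambles) for _ in range(n_devices)]
        counts = {}
        for p in picks:
            counts[p] = counts.get(p, 0) + 1
        ok += sum(1 for p in picks if counts[p] == 1)   # unique preamble -> no collision
    return ok / (trials * n_devices)

for n in (8, 32, 64, 128):
    print(f"{n:4d} devices -> success rate {rach_success_rate(n):.3f}")
```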

4 Healthcare IOT Key Management Process Algorithm In the IoT, there are several technical challenges; the connectivity specifications for devices dedicated to IoT services differ from those of human-based communication, which is designed and implemented to meet broadband application requirements. The main distinctions are:

i. The size of the messages used for communication is normally short for IoT devices, as communication is primarily for event reporting.
ii. The number of network devices is projected to be at least one order of magnitude greater than in broadband networks.

iii. Standardized cellular networks are considered a key technology for providing wide-ranging coverage and protection, allowing roaming and mobility, and operating in licensed bands, which makes them more capable of ensuring stable and deterministic communication.
iv. There is an interdisciplinary objective to provide a holistic IoT perspective, in which disciplines borrow ideas and techniques from each other to study phenomena of the healthcare network: a synthesis of ideas from different disciplines in a problem-oriented partnership (Fig. 2).

4.1 Generation and Distribution Process This segment addresses the processes of key generation and distribution (see Algorithm 1). According to this algorithm, if there is no {A, B} key pair, it checks whether or not a relation between A and B exists. If a relationship exists in the CTA, then sender A does not need to generate K, because it can reuse the key K stored in its global table. If the relation does not exist, however, it implies a new connection, and a new mutual key K needs to be created. By applying a key conversion function, the sender generates the key K and updates its global table, and finally it produces a hash key k. The algorithm then tests the reverse relation from B to A. If the relation exists, the CTBA and LTB need to be updated with the new key. K and k will be stored directly in CTB and LTB if the link is not present. This establishes a stable key pairing, and a secure message is sent from sender node A to receiver node B by calling Algorithm 2.



Fig. 2 Security algorithm block diagram



Algorithm 1: SIoT algorithm (A, B, m)
Input: a sender node A, a receiver node B, and the message m
Output: the algorithm attempts to send the message m from sender A to receiver B; if the procedure completes successfully it returns true, otherwise false.
/* A and B are described as disconnected when the algorithm fails. */
if the key pair {A, B} → K does not exist then
    if CTA(B) does not exist then
        /* A generates the new mutual key K */
        K = GenerateKey()
        /* A adds K to its CTA and GTA together with a random value v */
        addCT(B, K); addGT(K, v)
        /* produce the hash key k using the conversion function */
        k = f(A, K, v)
    end if
    if CTB(A) exists then
        /* B receives k and v from A and updates the current CTB and LTB entries if they match the stored hash */
        RefreshCT(A, K); RefreshLT(K, k)
    else
        /* B acquires k from A and adds it to LTB and CTB */
        addCT(A, K); addLT(K, k)
    end if
end if
/* A stable registration exists, so find K in CTA or CTB and finally send the message m */
SendMessage(A)
STOP
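A runnable Python sketch of the same flow is given below. The dict-based table layout (GT/CT/LT) and the choice of HMAC-SHA256 as the conversion function f are our assumptions for illustration, not details fixed by the paper; the final MAC stands in for the secure send of Algorithm 2.

```python
import os, hmac, hashlib

GT: dict = {}     # sender's global table: (A, B) -> (K, v)
CT: dict = {}     # per-node cache tables: CT[node][peer] -> mutual key K
LT: dict = {}     # per-node local tables: LT[node][peer] -> hash key k

def f(node_id: str, K: bytes, v: bytes) -> bytes:
    """Conversion function producing the hash key k (HMAC-SHA256 assumed)."""
    return hmac.new(K, node_id.encode() + v, hashlib.sha256).digest()

def siot_send(A: str, B: str, m: bytes) -> bool:
    ct_a = CT.setdefault(A, {})
    ct_b = CT.setdefault(B, {})
    lt_b = LT.setdefault(B, {})
    if B not in ct_a:                    # no existing relation: create a new mutual key
        K = os.urandom(32)               # GenerateKey()
        v = os.urandom(16)               # random value stored alongside K
        ct_a[B] = K
        GT[(A, B)] = (K, v)
        k = f(A, K, v)
        # Refresh (if the reverse relation existed) or add B's entries; in this
        # dict-based sketch both cases collapse to plain assignment.
        ct_b[A] = K
        lt_b[A] = k
    K = ct_a[B]                          # stable registration: reuse K
    tag = hmac.new(K, m, hashlib.sha256).hexdigest()  # stand-in for the secure send
    print(f"{A} -> {B}: {m!r} (auth tag {tag[:16]}...)")
    return True

siot_send("A", "B", b"patient vitals")
siot_send("A", "B", b"second message reuses K")
```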

4.2 Cloud Distributed Approach in the 5G Era There are two levels in the distributed approach: the first stage is Level 1, the "edge processing stage," and the next stage is Level 2, the "cloud (global) processing stage." In the first phase, edge IoT devices (e.g., IoT gateways) evaluate the sensed bio-signal data using the chosen clustering-based techniques within their particular ranges. The results of this phase are recorded as input to detect any abnormal changes for data owners and healthcare providers. The result is then forwarded to cloud computing for more global processing. The second phase integrates the aggregate findings from the separate IoT edge devices in two steps: the normalization step and the conciliation step. The workflow of the proposed distributed approach phases is shown in Fig. 4. Edge (local) processing stage: for a limited collection of data within each IoT system, clustering-based techniques like KMC and FCMC are carried out. The outlines of the KMC and FCMC techniques are described as follows (a runnable k-means sketch is given after the steps below):

1. Let (x1) … (xn) be the encrypted data objects.
2. Select the algorithm's (c1) … (ck) cluster centroids.
3. Measure the Euclidean distance (dij) between each data object (pi) and each cluster centroid (cj).
4. Assign each data object (pi) to the cluster centroid (cj) with the shortest Euclidean distance.
5. Recalculate the cluster centroids (c1) … (ck) by taking the average of the data objects assigned to each cluster centroid (cj), and repeat steps 3, 4, and 5 until the centroids converge.

The basic pseudocode of the FH-KMC method is shown in Algorithm 1.
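A minimal plaintext k-means sketch of these steps follows (our illustration only; the paper's actual pipeline operates over encrypted data with FHE, which this sketch does not attempt):

```python
import random

def kmc(points, k, iters=100):
    """Plain k-means over 1-D bio-signal samples (a list of numbers)."""
    centroids = random.sample(points, k)              # step 2: initial centroids
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                              # steps 3-4: assign to nearest centroid
            j = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[j].append(p)
        new = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:                          # step 5: centroids converged
            break
        centroids = new
    return centroids, clusters

readings = [72, 75, 71, 140, 138, 142, 90, 88]        # hypothetical heart-rate samples
centroids, clusters = kmc(readings, k=2)
print(centroids, clusters)
```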

4.3 Algorithm for Fuzzy C-Means Clustering (FCMC) The fuzzy c-means clustering algorithm (FCMC) is an unsupervised soft clustering method [22], and it is one of the most popular fuzzy clustering techniques because it can retain much more information than hard clustering methods, particularly in the healthcare sector. The membership value is the degree to which each data object in the FCMC algorithm belongs to each cluster centroid. Applying FCMC analysis tasks together with FHE here preserves data privacy while still maintaining a stable environment, similar to the KMC algorithms.

Algorithm 2: Fuzzy C-Means Clustering (FCMC)
Inputs: encrypted data objects (x1) … (xn)
Outputs: encrypted cluster centroids (c1) … (ck)
1: Initialization: select a random set of cluster centroids (c1) … (ck) from the given data objects
2: while ||U_{k+1} − U_k|| > δ do
3:   for all data objects 1 … n do
4:     for all cluster centroids 1 … k do
5:       determine the distance between the cluster centroid and the data object
6:       measure the membership value
7:     end for
8:   end for
9:   for all cluster centroids 1 … k do
10:    update each cluster centroid based on the data objects allocated to it
11:  end for
12: end while

The key functions of FCMC are demonstrated in the steps below (a short numerical sketch follows the steps).

1. Let (x1) … (xn) be the encrypted data objects.
2. Select the algorithm's (c1) … (ck) cluster centroids.
3. Measure the Euclidean distance (dij) between each data object (pi) and each cluster centroid (cj). The fuzzy membership value (μik) is determined, indicating the degree to which each data point (pi) belongs to each cluster centroid (cj).


4. Revise the cluster centroids (c1) … (ck) using the data objects' fuzzy membership values.
5. Repeat steps 3 and 4 until the centroids converge, i.e., until ||U_{k+1} − U_k|| < δ, where U is the (n × c) fuzzy membership matrix containing the membership values between the data points and the cluster centroids, and δ is the predetermined termination criterion value.

Algorithm 2 demonstrates the FCMC algorithm's pseudocode, which is simple.
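The following minimal NumPy sketch implements the standard fuzzy c-means updates on plaintext data (our illustration with the usual fuzzifier m = 2; the paper's encrypted/FHE setting is not reproduced):

```python
import numpy as np

def fcmc(X, k, m=2.0, eps=1e-5, iters=100):
    """Fuzzy c-means on 1-D data X (shape (n,)); returns centroids and memberships."""
    rng = np.random.default_rng(0)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per data point
    p = 2.0 / (m - 1.0)
    c = None
    for _ in range(iters):
        Um = U ** m
        c = (Um.T @ X) / Um.sum(axis=0)        # update centroids from weighted averages
        d = np.abs(X[:, None] - c[None, :]) + 1e-12        # distances d_ij
        U_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        if np.linalg.norm(U_new - U) < eps:    # termination ||U_{k+1} - U_k|| < delta
            U = U_new
            break
        U = U_new
    return c, U

X = np.array([72, 75, 71, 140, 138, 142, 90, 88], dtype=float)
centroids, U = fcmc(X, k=2)
print(centroids)
```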

5 MATLAB Simulation This section deals with the MATLAB simulation outcomes of our proposed SCEO algorithm and of the optimization algorithms based on the achievable secrecy rate. In this investigation, the coefficients of the entire broadcast channel are distributed according to a Weibull distribution with a scale parameter of two and unit mean square. The value of the attenuation factor for the self-interference transmission, r, is set to 0.8; minimal alteration of this preset self-interference value is required. The broadcast power constraints were set to 20 dB unless some other value is specified.
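To make the simulation setup concrete, the short Monte Carlo sketch below samples Weibull-distributed channel amplitudes (shape 2, normalized to unit mean square) and estimates the average achievable secrecy rate of a basic wiretap link. This is purely our illustration of the channel model: it does not implement SCEO, TSCS, or the bisection method, and the full-duplex self-interference modeling of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
P = 10 ** (20 / 10)                       # 20 dB transmit power constraint

def channel_gains(n, shape=2.0):
    """|h|^2 for Weibull channel amplitudes, normalized to unit mean square."""
    h = rng.weibull(shape, n)
    return (h / np.sqrt(np.mean(h ** 2))) ** 2

g_d = channel_gains(100_000)              # legitimate (destination) link
g_e = channel_gains(100_000)              # eavesdropper link
c_s = np.maximum(np.log2(1 + P * g_d) - np.log2(1 + P * g_e), 0.0)
print(f"average secrecy rate ~ {c_s.mean():.3f} bit/s/Hz")
```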

5.1 Privacy Potential Realized As shown in Figs. 3 and 4, a transmission efficiency assessment was carried out for three-terminal broadcasts under the influence of power and rate limitations. The research compares three distinct transmission circumstances in terms of the secrecy capacity of their broadcast.

Fig. 3 Realized privacy potential under power and speed limitation


Fig. 4 Performance under various power restrictions of the algorithms

Fig. 5 Algorithm efficiency for different speed limitation

In Figs. 4 and 5, our proposed SCEO technique, which is designed to solve the optimization problem mentioned in (17), was compared with the bisection method in [23] and the two-stage cooperative jamming scheme (TSCS) in [24]. A total of 60 broadcast transmitters were finalized for the comparative testing. The rate restriction is approximately 0.8, which is the optimum value for the optimal attainable capacity (OAA) preferred in Fig. 4, while the power limit (ps = pd = p) for Fig. 5 is set to 20 dB. Our proposed SCEO approach, as shown in Fig. 4, is able to obtain near-optimal values, mainly when the power constraints are minimal. In Fig. 5, the comparison of the three different techniques is given for different rate restriction variations. The proposed method performs much better than the two previously mentioned methods, except under extreme rate limitations. Furthermore, for the TSCS and bisection methods, convergence may be difficult as the rate limit becomes more severe; however, our algorithm is approximately flawless and gives the desired result within the time constraint.


5.2 TSCS and Bisection Comparison The SCEO comparison is fundamentally about minimizing the optimization problem already mentioned in (17), against the bisection method [22] and the two-stage cooperative jamming scheme (TSCS) presented in [24]. For the research experiment, 60 broadcast receivers were chosen for the comparative testing. For the rate limitation, the approximate value taken is 0.8, related to the OAA shown in Fig. 4, while the power limit (ps = pd = p) for Fig. 5 is set to 20 dB. In Fig. 4, our proposed method is sufficient to achieve the most favorable result, mainly when the power constraints are minimal. The comparison of the three different methods in Fig. 5 is given with variations of the rate restriction for a value of p of 20 dB. Under extreme rate limitations, this method is found to perform much more accurately than the previous ones, as can also be observed in Figs. 4 and 5. Furthermore, for the TSCS and bisection methods, convergence may be difficult for various rate restrictions and becomes more time-consuming, but this approach gives the result more accurately under the time constraint.

6 Conclusion This paper covers the cases of 5G IoT in a cost-effective manner. It also offers practical knowledge that serves as evidence for the future, from the simplest cases to the most complex. The realized privacy potential under power and rate limitations, with different rate restriction variations, was compared across the three techniques. The proposed method performs much better than the two previously mentioned methods, except under extreme rate limitations. In terms of performance under various power restrictions, the algorithms (SCEO, bisection, and TSCS) performed well for 5G IoT in a cost-effective manner for the network healthcare architecture.

References

1. Peña-López I (2005) ITU internet report 2005: the internet of things
2. Vermesan O, Friess P, Guillemin P et al (2011) Internet of things strategic research roadmap. IoT Glob Technol Soc Trends 1:9–52
3. Driving transformation in the automotive and road transport ecosystem with 5G. Ericsson Technology Review, 2019
4. Gope P, Hwang T (2016) BSN-care: a secure IoT-based modern healthcare system using body sensor network. IEEE Sens J 16(5):1368–1376
5. Zhou Q, Chan C (2018) Secrecy capacity under limited discussion rate for minimally connected hypergraphical sources. In: Proceedings of the 2018 IEEE international symposium on information theory (ISIT), Vail, CO, USA, 17–22 June 2018, pp 2664–2668
6. Shang X, Yin H, Wang Y, Li M, Wang Y (2020) Secrecy performance analysis of wireless powered sensor networks under saturation nonlinear energy harvesting and activation threshold. Sensors 20:1632


7. Arvind S, Narayanan VA (2019) An overview of security in CoAP: attack and analysis. In: Proceedings of the 5th international conference on advanced computing communication systems (ICACCS), Coimbatore, India, 15–16 Mar 2019, pp 655–660
8. Boudko S, Abie H (2019) Adaptive cybersecurity framework for healthcare Internet of Things. In: Proceedings of the 2019 13th international symposium on medical information and communication technology (ISMICT), 8–10 May 2019, pp 1–6
9. Yu W, Chorti A, Musavian L, Poor HV, Ni Q (2019) Effective secrecy rate for a downlink NOMA network. IEEE Trans Wirel Commun 18:5673–5690
10. Verba N, Chao KM, James A, Goldsmith D, Fei X, Stan SD (2017) Platform as a service gateway for the Fog of Things. Adv Eng Inform 33:243–257
11. Luo et al (2018) Privacyprotector: privacy-protected patient data collection in IoT-based healthcare systems. IEEE Commun Mag 56(2):163–168
12. Lai X, Zou W, Xie D, Li X, Fan L (2017) DF relaying networks with randomly distributed interferers. IEEE Access 5:18909–18917; Fan L, Lei X, Yang N, Duong TQ, Karagiannidis GK (2016) Secure multiple amplify-and-forward relaying with co-channel interference. IEEE J Sel Topics Signal Process 10(8):1494–1505
13. Koutli M, Theologou N, Tryferidis A, Tzovaras D, Kagkini A, Zandes D et al (2019) Secure IoT e-health applications using VICINITY framework and GDPR guidelines. In: Proceedings of the 15th international conference on distributed computing in sensor systems (DCOSS), 29–31 May 2019, Santorini Island, Greece. IEEE, pp 263–270
14. Vishwakarma R, Jain AK (2019) A survey of DDoS attacking techniques and defence mechanisms in the IoT network. Telecommun Syst
15. Zheng B, Wen M, Wang C-X, Wang X, Chen F, Tang J, Ji F (2018) Secure NOMA based two-way relay networks using artificial noise and full duplex. IEEE J Sel Areas Commun 36:1426–1440
16. Lohachab A, Karambir B (2018) Critical analysis of DDoS—an emerging security threat over IoT networks. Commun Inf Netw 3:57–78
17. Salahuddin MA, Al-Fuqaha A, Guizani M, Shuaib K, Sallabi F (2017) Softwarization of Internet of Things infrastructure for secure and smart healthcare. Computer 50(7):74–79
18. Daud M, Khan Q, Saleem Y (2018) A study of key technologies for IoT and associated security challenges. In: Proceedings of the 2017 international symposium on wireless systems and networks (ISWSN), 19–22 Nov 2017, Lahore, Pakistan. IEEE, pp 1–6
19. Cherian M, Chatterjee M (2019) Survey of security threats in IoT and emerging countermeasures. In: Thampi SM, Madria S, Wang G, Rawat DB, Alcaraz Calero JM (eds) Security in computing and communications. Springer, Singapore, pp 591–604
20. Wazid M et al (2018) Design of secure user authenticated key management protocol for generic IoT network. IEEE Internet Things J 5(1):269–282
21. Iwendi C, Zhang Z, Du X (2018) ACO based key management routing mechanism for WSN security and data collection. In: Proceedings of the 2018 IEEE international conference on industrial technology (ICIT), Lyon, France, 20–22 February 2018, pp 1935–1939
22. Yang Y, Zheng X, Guo W, Liu X, Chang V (2018) Privacy-preserving fusion of IoT and big data for e-health. Future Gener Comput Syst 86:1437–1455
23. Zhou J, Cao Z, Dong X, Vasilakos A (2017) Security and privacy for cloud-based IoT: challenges. IEEE Commun Mag 55:26–33
24. Shen J et al (2018) Cloud-aided lightweight certificateless authentication protocol with anonymity for wireless body area networks. J Netw Comput Appl 106:117–123. https://doi.org/10.1016/j.jnca.2018.01.003
25. Fan YJ, Yin YH, Xu LD, Zeng Y, Wu F (2014) IoT-based smart rehabilitation system. IEEE Trans Industr Inf 10(2):1568–1577
26. Heo J, Kim J-J, Jeongyeup P, Saewoong B (2018) Mitigating stealthy jamming attacks in low-power and lossy wireless networks. J Commun Netw 20:219–230
27. Hua Y (2018) Advanced properties of full-duplex radio for securing wireless network. IEEE Trans Sig Process 67:120–135


28. He D, Ye R, Chan S, Guizani M, Xu Y (2018) Privacy in the Internet of Things for smart healthcare. IEEE Commun Mag 56(4):38–44
29. Safavi S, Meer AM, Melanie EKJ, Shukur Z (2018) Cyber vulnerabilities on smart healthcare, review and solutions. In: Proceedings of the 2018 cyber resilience conference, 28 Jan 2019, Putrajaya, Malaysia. IEEE, pp 1–5
30. Ahmed A, Latif R, Latif S, Abbas H, Khan FA (2018) Malicious insiders attack in IoT based multi-cloud e-healthcare environment: a systematic literature review. Multimed Tools Appl 77(17):21947–21965
31. Lai X, Zou W, Xie D, Li X, Fan L (2017) DF relaying networks with randomly distributed interferers. IEEE Access 5:18909–18917
32. Yang Y, Zheng X, Guo W, Liu X, Chang V (2019) Privacy-preserving smart IoT-based healthcare big data storage and self-adaptive access control system. Inf Sci 479:567–592
33. Liu X, Deng R, Choo KR, Yang Y, Pang H (2018) Privacy-preserving outsourced calculation toolkit in the cloud. IEEE Trans Dependable Secur Comput 1
34. Li J, Zhang Y, Chen X, Xiang Y (2018) Secure attribute-based data sharing for resource-limited users in cloud computing. Comput Security 72:1–12
35. Lin Q, Li J, Huang Z, Chen W, Shen J (2018) A short linearly homomorphic proxy signature scheme. IEEE Access 6:12966–12972
36. Huang Z, Liu S, Mao X, Chen K, Li J (2017) Insight of the protection for data security under selective opening attacks. Inf Sci 412–413:223–241
37. Wang T et al Sustainable and efficient data collection from WSNs to cloud. IEEE Trans Sustain Comput, to be published. https://doi.org/10.1109/TSUSC.2017.2690301
38. Kanagavelu R, Aung KMM (2019) A survey on SDN based security in Internet of Things. In: Arai K, Kapoor S, Bhatia R (eds) Advances in information and communication networks. Springer, Cham, Switzerland, pp 563–577

Predictive Model for COVID-19 Using Deep Learning Hardev Goyal, Devahsish Attri, Gagan Aggarwal, and Aruna Bhatt

Abstract The Coronavirus has now taken more than 2.4 million lives and infected more than 100.2 million people. The spread of Coronavirus has had an adverse effect on global health and the economy, and the pandemic puts healthcare systems worldwide under immense pressure. With advancements in Machine Learning and, in particular, Artificial Intelligence, early detection of Covid-19 can assist in rapid recovery and help to relieve strain from healthcare systems. Early results indicate that there are abnormalities in the chest X-rays of patients infected with Coronavirus. In this review paper, an extensive and exhaustive guide to identifying the COVID virus from chest X-ray samples in an effective and cheap manner is presented. It highlights different CNN architectures and gradient class activation as the main approaches to analyze and detect the infection. A significant amount of training and validation data/images is required for training neural networks such as Convolutional Neural Networks (CNNs) to make accurate predictions on test data. Generative Adversarial Networks (GANs), especially ACGANs and CycleGANs, were used to create new images for the training dataset, which helped generalize the classification model. The paper exhibits the application of different Convolutional Neural Network architectures for transfer learning, including Inception and ResNet50. The paper then presents the combination of GAN and deep learning models for precise identification of COVID-19 infection. With so much research going on to detect COVID-19, this paper will help researchers and doctors in the future. Keywords ACGANs · Resnet50 · Deep learning · Convolutional neural network · Transfer learning · Generative adversarial network · CYCLEGANs · Gradient class activation · Inception

H. Goyal (B) · D. Attri · G. Aggarwal · A. Bhatt Department of Computer Science, Delhi Technological University, Delhi, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_43


1 Introduction The COVID-19 epidemic has become a severe health crisis. "As of now the number of people infected with Coronavirus is approximately 111 million, with 2.46 million deaths all over the world and 62.6 million cases in which people have been cured successfully" [1]. The main symptoms of Coronavirus are respiratory difficulty, headache, pain, fever, cough, and loss of smell and taste. The patient may also suffer from pneumonia in critical cases. The infection can also lead to a serious acute respiratory condition, septic shock, multiple-organ failure, and ultimately, loss of life. Studies have shown that men (58%) were more prone to the virus than women (42%). Developed or not, a large number of countries have faced a breakdown of their healthcare framework due to high simultaneous demand for intensive care units (ICU). Virus detection methods take less and less time as new methods are established in various countries worldwide. The test results allow the specialists to isolate and medicate infected patients in a convenient and agreed manner [2, 3]. The two ways to detect the infection are by the existence of the virus or of antibodies that might have been produced in reaction to the infection. Viral existence tests are used to diagnose individual cases and allow public health authorities to trace and control outbreaks. Antibody tests, instead, indicate whether someone once had the disease; for diagnosing current infections they are less useful, because antibodies may not develop for weeks after contamination [4]. They are used to determine the widespread presence of infections, which helps predict the fatality rate of the infection. X-ray is an imaging procedure used to investigate fractures, dislocations of bones, or even chest infections, and X-rays have been in dynamic use for a long time [5]. It provides a fast and effective way of examining the lungs and can help detect COVID-19 infections. CT scans or MRIs give a detailed analysis of the lungs, but they are very expensive and out of reach of common people, whereas X-rays are cheap; with the help of neural networks and fast computing, they can be used in remote locations where the healthcare infrastructure is not adequate and can also be helpful for people who cannot afford the current tests [6]. The easy transportation of portable X-ray devices is an added advantage for detecting Coronavirus using X-rays. This paper will be a guide for doctors and researchers to classify chest X-ray samples between Coronavirus patients and healthy people. This paper is divided into four broad sections:

– Section 2 talks about various technologies used in the research.
– Section 3 describes the existing methodology, the different datasets used by researchers, and the different feature extraction techniques and classification steps.
– Section 4 discusses the results of various models and presents the final conclusion.
– Section 5 concludes the report and briefly discusses how this work can be used in the future.


2 Background

2.1 Deep Learning Deep Learning has been very prevalent over the past few years, and it finds applications in a wide range of domains such as speech, computer vision, and NLP. Most state-of-the-art systems in these areas, even from companies like Facebook and Google, use deep learning as the underlying solution [7, 8]. Deep learning uses representation learning through neural networks; it learns from representative examples instead of using task-specific algorithms. For example, you need to prepare a database containing many different cat photos if you want to create a model that recognizes cats by species [2]. Today, DL is rapidly becoming a vital technology in image/video classification. The hidden deep layers of a deep learning model map the given data to the required labels, exploring hidden patterns in complex data. Their use in medical X-ray recognition can help in Coronavirus detection.

2.2 Convolution Neural Network Vision is a huge part of human life. There are three central parts to a CNN:

– Convolution: used for the extraction of features from a given image, or to select the most relevant features from the previous layer.
– Non-linearity: allows the network to deal with non-linear data.
– Pooling: down-samples your image's spatial resolution so that the necessary features can be magnified.

A CNN powerfully uses adjacent pixel information to first down-sample the image by convolution and then uses classification layer(s) to give the final prediction. After filters are applied to a set of features, the most important features are highlighted, which can then be used to make further predictions. A filter is basically a matrix used to magnify the most important features [9]. Convolutional Neural Networks have achieved remarkable success in medical image/video classification and identification. Convolutional neural networks (ConvNets or CNNs) are the key family of neural networks for image classification, and the CNN is one of the most readily accessible and realistic approaches to the diagnosis of Coronavirus from X-rays; several reviews are being carried out to highlight recent contributions to the identification of Coronavirus. Given sufficient data, CNNs have achieved state-of-the-art efficiency on medical images. Training on labeled data is crucial to fine-tune their millions of parameters; because of these many parameters, convolutional networks can easily overfit or underfit data with few (or too-similar) samples, so the degree of generalization is highly dependent on the size of the labeled dataset.


Fig. 1 Convolutional neural networks

Small datasets with a restricted number of samples are the most significant challenge in medical imaging. Figure 1 explains a CNN in an elegant way, with an example taken from the MNIST dataset. First, the image is passed through four CNN layers with kernel size 5 × 5, with max pooling applied alongside them. In the end, once the model has learned all the features of the given image, it tries to predict which digit is in the image, i.e., from 0 to 9, so we have 10 classes here. Even if it initially fails to identify the correct number, we train the model for a few epochs, after which it performs well. Reviewing the current models highlights a trend: even though deep learning and CNNs are successful in the domain of computer vision, the accuracy of COVID detection using X-rays is still low because of the limited number of training datasets. A large dataset is required to train an efficient and accurate model [10].
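For reference, a minimal PyTorch sketch of a CNN in this spirit (5 × 5 kernels with max pooling and ten output classes, as in the MNIST example of Fig. 1) is given below; the number of blocks and the layer widths are our own illustrative choices, not the architecture of any surveyed model.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy classifier in the spirit of Fig. 1: conv(5x5) + max-pool blocks, 10 classes."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)   # 28x28 input -> 7x7 feature maps

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 1, 28, 28))   # batch of four grayscale 28x28 images
print(logits.shape)                          # torch.Size([4, 10])
```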

2.3 Generative Adversarial Networks

The aim of a GAN is to learn to produce data that is indistinguishable from the data used for training. The training process is called adversarial because one player generates a losing scenario for the other. Yet even while one player performs at its worst, the other benefits, because it is learning to perform well


on the inputs provided by the other network. Generative adversarial networks are mostly intended to solve the task of generative modeling [10]. The idea behind generative modeling is that we have a huge dataset of training examples, usually large multidimensional ones. The particular approach the GAN takes to generative modeling is to have two different models playing a game against each other. One of these agents is the generator network, which tries to generate data, and the other is a discriminator network, which examines data and determines whether it is real or fake. The goal of the generator is to fool the discriminator. As the players compete to win, they get so good at their jobs that eventually the generator is capable of producing realistic images, indistinguishable from the images in the training dataset. In the first half of the training process, we take a random set of images from the training dataset and call this set X. The discriminator, the first player, is represented by D; it is a neural network, i.e., a differentiable function whose parameters define the shape of the function. We apply the discriminator D to the set X, and the aim of D is to make D(x) as close as possible to unity, i.e., 1 [4, 5]. In the second half of the training process, we sample some random noise Z from a prior distribution over the latent variables of our generative model. Z is a source of randomness that allows the generator to output many different images instead of only one realistic image. After we sample the input noise Z, we apply the generator function, which is similar in kind to the discriminator: a differentiable function controlled by a set of parameters, usually a deep neural network. Applying G to the input noise Z yields a sample X drawn from the model. We then apply the discriminator to this fake example pulled from the generator. The discriminator tries to make its output D(G(Z)) close to 0: earlier, with real data, we wanted D(X) near 1, and now the discriminator wants D(G(Z)) near 0 to flag the input as fake. At the same time, the generator aims to make D(G(Z)) close to 1, thus proving the discriminator wrong [2, 3]. The current dataset does not contain a sufficient number of images to train an efficient and accurate model, because of the scarcity of medical images. To obtain a dataset of significant size, the Auxiliary Classifier Generative Adversarial Network (ACGAN), an extension of the GAN, predicts the label of an image instead of receiving the label as an input. It is capable of producing high-quality images that are indistinguishable from real chest X-ray images [11, 12] (Fig. 2). A GAN has two main components:

– The generative part takes N-dimensional uniform random variables (noise) as input and generates fake images. The generator captures the probability P(X), where X is the input.
– The discriminative part is a simple classifier that evaluates the generated images and separates them from the real ones. The discriminator models the conditional probability P(Y | X), where X is the input variable and Y is the label.


Fig. 2 Generative adversarial network model

In GANs, the input for one network is produced by another network, so the quality of the input depends on the performance of that network. We can think of the generator and discriminator as a counterfeiter and the police. The police would like to allow people with real money to spend it without being punished, but they would also like to capture counterfeit money, remove it from circulation, and punish the counterfeiter. Simultaneously, the counterfeiter wants to fool the police and successfully spend their money; if the counterfeiters are not very good, they will get caught. Over time, the police learn to be better at catching counterfeit money, and the counterfeiters learn to produce it better [6, 11]. Examining this case with game theory, we discover that if both the police and the counterfeiter, or in other words, both the discriminator and the generator, have infinite capacity, then this game's Nash equilibrium corresponds to the generator producing perfect samples from the same distribution as the training data. The counterfeiters, in other words, make counterfeit cash that is indistinguishable from real money. At that point, the discriminator, i.e., the police, cannot distinguish between the two data sources and says that each input is half likely to be real and half likely to be fake. We can formally describe the learning process using the minimax game [12, 13].
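The text invokes the minimax game without stating it; for reference, the standard GAN objective from the literature reads:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

The discriminator D maximizes this value (pushing D(x) toward 1 and D(G(z)) toward 0), while the generator G minimizes it, exactly as described above.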

3 Existing Methodology

Waheed et al. [1] used CNNs and ACGANs for COVID-19 detection in their analysis. They combined COVID X-ray images produced using an ACGAN with the original images so as to make the dataset large and generalized, and then used it with their proposed CNN architecture. The contributions of that study were:

– An ACGAN was used for the first time to produce synthetic COVID images in the form of X-rays.
– For COVID-19 detection, [1] developed a CNN-based model.
– ACGANs were used to generate a training dataset for different CNN models such as ResNet, Xception, Inception, and VGG for better detection of coronavirus.


Fig. 3 Sample of images

Elene et al. [2] used a CNN along with heat activation and CyclicGANs for detecting this infection; they also used MobileNet with a support vector machine. Pamuk et al. used a mixture of pneumonia and coronavirus datasets, which resulted in an accuracy of 98.7%. M. Talo et al. in their paper used 3 classes for the model, pneumonia, COVID, and normal, with more than 1127 images combined, which resulted in an accuracy of 98%.

3.1 Data Generation

Building a good database of medical images such as X-rays requires the participation of radiologists and researchers; it is expensive and involves a repetitive process. COVID-19 is very new, and there has not been much testing with X-rays, so it is difficult to collect adequate chest X-ray (CXR) image data. We suggest alleviating these disadvantages by using synthetic data augmentation. To create the dataset, most researchers obtained images from these available datasets:

– the IEEE X-ray dataset for COVID chest,
– the COVID-19 X-ray database, and
– the COVID-19 initiative on X-ray COVID data (Fig. 3).

As these databases are open for contribution, they were the best choice for building the X-ray dataset; they are also freely accessible to anyone who wants to use them. The obtained photos are combined, and exact duplicates are deleted using an image hashing method, which generates a hash value based on the contents of an image and thereby identifies an input image uniquely (a small sketch of this step follows). The dataset contains more than 460 COVID chest images and 1266 normal CXRs. Sadly, the number of COVID CXRs is much smaller than required to train an accurate and useful learning model. Therefore, a GAN model was created that can be used to generate new COVID-CXR and normal-CXR images.
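The image-hashing deduplication step can be sketched in a few lines of Python using the third-party imagehash package; the hash variant (perceptual hashing) and the directory layout are assumptions, since the paper does not specify them.

```python
# Sketch: drop duplicate X-ray images via content-based hashing.
# Assumes `pip install pillow imagehash`; the phash choice is an assumption.
from pathlib import Path
from PIL import Image
import imagehash

seen = set()
unique_files = []
for path in sorted(Path("xray_dataset").glob("*.png")):   # hypothetical layout
    h = str(imagehash.phash(Image.open(path)))             # hash of image contents
    if h not in seen:                                      # same content, same hash
        seen.add(h)
        unique_files.append(path)
print(f"kept {len(unique_files)} unique images")
```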

3.1.1 Data Pre-processing

Rezvy [5] in their paper resized every image to 224 × 224. Images with smaller dimensions were upscaled, and those larger than 224 were cropped. In addition, intensity normalization based on appearance and color was applied. The size 224 × 224 was chosen because it neither loses too much information nor inflates the training time, which is crucial.
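A sketch of that resizing rule (upscale small images, crop larger ones to 224 × 224) might look as follows; the use of OpenCV and of center cropping are assumptions.

```python
# Sketch of the 224x224 preprocessing described above.
import cv2
import numpy as np

def to_224(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    if h < 224 or w < 224:
        # small images: upscale to the target size
        return cv2.resize(img, (224, 224), interpolation=cv2.INTER_CUBIC)
    # larger images: center-crop to 224x224
    top, left = (h - 224) // 2, (w - 224) // 2
    return img[top:top + 224, left:left + 224]
```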

3.2 Auxiliary Classifier Generative Adversarial Network

The ACGAN was used by Waheed et al. [1] to build their CovidGAN. It is difficult for plain GANs to produce high-resolution images from unbalanced data. This GAN was chosen because, to generate better image quality, the model focuses on external class knowledge rather than only on the specific images. To produce an image for a specific class, a random vector of points together with the label of that class is given to the generator. The discriminator receives an image and a class label and determines whether the image is real or fake. The main advantage of an ACGAN is that its discriminator tries to output the image's class label instead of receiving it as an input, unlike most other conditional GANs. This helps control overfitting and assists the production of high-resolution images. In addition to producing better images, the model learns features of the image that are independent of the class label. The generator uses the class label c and random noise to produce a sample, and the discriminator D produces a probability distribution over sources and over class labels. D maximizes L_S + L_C, and G maximizes L_C − L_S:

$$L_C = E[\log P(C = c \mid X_{real})] + E[\log P(C = c \mid X_{fake})] \quad (1)$$

$$L_S = E[\log P(S = real \mid X_{real})] + E[\log P(S = fake \mid X_{fake})] \quad (2)$$

3.3 Cyclic GANs

Zebin et al. [5] used Cyclic GANs for image augmentation. As the dataset was unbalanced, CycleGANs were helpful in creating new data: they use two generators and two discriminators and learn characteristics from one image sample set, which are then used to create new data. A kind of cycle is formed: an image generated by the first generator G1 acts as input to the second generator G2, whose output resembles the original image.
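The "cycle" described above corresponds to the cycle-consistency loss standard in the CycleGAN literature (not printed in the original); with generators G1: X → Y and G2: Y → X it reads:

$$L_{cyc}(G_1, G_2) = \mathbb{E}_{x}\big[\lVert G_2(G_1(x)) - x \rVert_1\big] + \mathbb{E}_{y}\big[\lVert G_1(G_2(y)) - y \rVert_1\big]$$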


Fig. 4 Methodology

3.4 Model Architecture

Most of the previous researchers have used VGG16, ResNet50, InceptionV3, and Xception pre-trained models backed by a fully connected layer. They replaced the pre-trained model's final classifier with their own classification layer of two classes (COVID-19 positive and COVID-19 negative samples), with ReLU and ELU activations to supply the final output (Fig. 4). All these models have already achieved high accuracy on the standard ImageNet dataset. The ResNet50 model consists of 48 convolution layers and takes 224 × 224 images. It uses Conv2D, batch normalization, and max pooling as a combined layer, applied extensively; at the end, global average pooling is used, followed by a softmax that outputs COVID or non-COVID instead of labeling into the ImageNet classes. The models were trained on 327 COVID images and 601 normal X-ray images. Batch normalization is used in reasonable amounts, which helps reduce over-fitting. The RMSProp optimizer is used in the case of the InceptionV3 model. The Xception model is 71 layers deep and uses depthwise separable convolution, with convolution size d × d × 1 instead of d × d × c, where d is the filter size and c is the number of channels. During training and testing, binary cross-entropy was used as the loss function because the output is binary, COVID or non-COVID. The loss is calculated as:

$$\text{Loss} = y_i\,(-\log(\hat{y}_i)) + (1 - y_i)\,(-\log(1 - \hat{y}_i)) \quad (3)$$


where $\hat{y}_i$ is the model's predicted value for the ith sample and $y_i$ is the corresponding true label. Binary cross-entropy was used because we have to classify between COVID positive and COVID negative. SGD (stochastic gradient descent) was used as the optimizer, with the learning rate kept as low as 0.0005. It updates the parameters from the derivative of the loss computed on small batches of the input, which takes significantly less training time than classic gradient descent; the path taken to reach the minimum is usually noisier but efficient. The batch size was also kept low, at 16. Each of the four models was trained for around 200 epochs and achieved high accuracy.
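A minimal Keras sketch of this transfer-learning setup is given below; the optimizer, learning rate, and batch size follow the text, while the frozen backbone and the sigmoid head (equivalent to a two-way softmax under binary cross-entropy) are assumptions.

```python
# Sketch: pre-trained ResNet50 backbone with a COVID / non-COVID head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(weights="imagenet",
                                      include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                          # assumption: backbone kept frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),            # as described for ResNet50
    layers.Dense(1, activation="sigmoid"),      # COVID vs non-COVID
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0005),
              loss="binary_crossentropy",       # Eq. (3)
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=200)  # batch size 16 in the text
```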

4 Result

Results obtained from training the deep learning models Xception, VGG16, ResNet50, and Inception V3 were compared based on their accuracy, recall value, and F1-score in order to choose the most efficient model for the available dataset. First, each model is trained with the available dataset; the accuracy score per epoch is shown in Fig. 6. ResNet50 exhibits very unstable accuracy on the testing data compared to the training data. The Xception model performs very well on the training set but poorly on the testing set. Only VGG16 and Inception V3 perform well on both training and testing data, with Inception V3 showing better performance than VGG16. To quantify and evaluate the CNN models trained with synthetic data augmentation, recall (or sensitivity), F1-score, and specificity are used. Precision is the model's ability to label a negative sample as negative and a positive sample as positive; in other words, it asks, "What proportion of positive identifications was actually correct?". Recall is the model's ability to identify all actual positives correctly; in other words, it asks, "What proportion of actual positives was identified correctly?". The F1-score is the weighted (harmonic) average of precision and recall. The formulas of the measures are:

$$\text{sensitivity} = \text{recall} = \frac{TP}{TP + FN} \quad (4)$$

$$\text{precision} = \frac{TP}{TP + FP} \quad (5)$$

$$\text{F1-score} = 2 \cdot \frac{\text{recall} \cdot \text{precision}}{\text{recall} + \text{precision}} \quad (6)$$

$$\text{specificity} = \frac{TN}{TN + FP} \quad (7)$$
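These four measures can be computed from the raw confusion counts with a small helper (a hypothetical function, not from the paper; the numbers in the usage line are illustrative only):

```python
# Sketch: Eqs. (4)-(7) from raw confusion counts.
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    recall = tp / (tp + fn)                              # sensitivity, Eq. (4)
    precision = tp / (tp + fp)                           # Eq. (5)
    f1 = 2 * recall * precision / (recall + precision)   # Eq. (6)
    specificity = tn / (tn + fp)                         # Eq. (7)
    return {"recall": recall, "precision": precision,
            "f1": f1, "specificity": specificity}

print(metrics(tp=90, fp=5, tn=95, fn=10))                # illustrative numbers
```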


Fig. 5 Comparison of different CNN models

Fig. 6 Training and testing for different CNN models


Fig. 7 Detailed results for different CNN Models with and without using GANs

According to our requirement, we need a very high recall value, even at the cost of a lower precision value. For example, it is necessary to identify every COVID-positive patient even at the cost of identifying a few negative cases as positive; i.e., a person having COVID should never be misidentified. Figure 5 compares the various models based on their mean precision and mean recall values. From the table given in Fig. 7, we can see that Inception V3 outperformed the other models, showing a precision of 0.95, better than the other models. In Fig. 6, we can also see that Inception V3 had better training behavior, unlike models such as ResNet50, which showed large deviations throughout training. Inception V3 was very precise, and its recall and F1-score were also the best.


5 Conclusion and Future Scope

Timely recognition of patients with COVID-19 is vital for choosing the correct treatment and for preventing the widespread transmission of the virus. The results of the above research show that an effective model can be trained using existing methodologies with slight modifications and a sufficient number of images in the dataset; deep learning is essential to achieving such a result. The proposed method does not yet have any clinical study to support its efficiency and reliability, so right now it cannot replace a medical diagnosis by an expert medical professional. Therefore, a more thorough investigation and a model trained on a comparatively larger dataset are required. Under such scenarios, the work shows a high probability of a precise, automated, quick, and affordable method of diagnosis. For future work, the number of images of both classes can be increased by adding more X-ray images of people already tested positive for COVID, and also by adding other diseases that affect the lungs in a similar way to COVID-19, thus making the approach more efficient and generic. This will allow doctors and medical professionals throughout the globe to carry out more extensive testing without the existing testing techniques, which are slow and can also act as hotspots for the spread of the virus. Furthermore, the proposed approach can be compared with techniques based on fine-tuning, and many other models can be trained and tested from scratch.

References

1. Waheed A, Goyal M, Gupta D, Khanna A, Al-Turjman F, Pinheiro P (2020) CovidGAN: data augmentation using auxiliary classifier GAN for improved COVID-19 detection. IEEE Access
2. Ohata EF, Bezerra GM, das Chagas JVS, Lira Neto AV, Albuquerque AB, de Albuquerque VHC (2020) Automatic detection of COVID-19 infection using chest X-ray images through transfer learning. IEEE 2020; Zebin T, Rezvy S (2020) COVID-19 detection and disease progression visualization: deep learning on chest X-rays for classification and coarse localization. SpringerLink
3. Xiango J et al (2020) Towards an artificial intelligence framework for data-driven prediction of coronavirus clinical severity. CMC
4. Mertyuz I, Mertyuz T, Tasar B, Yakut O (2020) COVID-19 disease diagnosis from radiology data with deep learning algorithms. IEEE
5. Mikołajczyk A, Grochowski M (2018) Data augmentation for improving deep learning in image classification problem. In: 2018 International Interdisciplinary PhD Workshop (IIPhDW), pp 117–122
6. Beers A, Brown JM, Chang K, Campbell JP, Ostmo S, Chiang MF, Kalpathy-Cramer J (2018) High-resolution medical image synthesis using progressively grown generative adversarial networks. ArXiv, vol. abs/1805.03144
7. Wu F et al (2020) A new coronavirus associated with human respiratory disease in China. Nature 579(7898):265–269
8. Hooda P, Akshi K, Dabas V (2018) Text classification algorithms for mining unstructured data: a SWOT analysis. IJIT
9. Narin A, Kaya C, Pamuk Z (2020) Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849


10. Tang Z (2020) Adaptive group testing models for infection detection of COVID-19. IEEE 2020
11. Dhaya R (2020) Deep net model for detection of COVID-19 using radiographs based on ROC analysis. J Innov Image Process (JIIP) 2(03):135–140
12. Manoharan S (2020) Improved version of graph-cut algorithm for CT images of lung cancer with clinical property condition. J Artif Intell 2(04):201–206
13. Li C, Dong D, Li L, Gong W. Classification of severe and critical COVID-19 using deep learning and radiomics. IEEE J Biomed Health Informatics 24(12)

Automatic White Blood Cell Detection Depending on Color Features Based on Red and (A) in the LAB Space Tahseen Falih Mahdi, Hazim G. Daway, and Jamela Jouda

Abstract Automatic detection of white blood cells remains an unresolved issue in medical imaging. Researchers from the fields of computer vision and medicine have participated in the analysis of WBC images. A new algorithm is proposed in this paper to detect white blood cells; it is based on the binary conversion of the red component and the A component in the LAB space after determining suitable threshold values, and on eliminating unwanted small areas using the median filter. In experiments on white blood cell images, the proposed algorithm was compared with several other detection algorithms. The quality was determined based on an accurate comparison between the manually determined white blood cell areas and those diagnosed automatically by the proposed algorithm. The diagnostic accuracy of WBC detection is measured through accuracy, sensitivity, and specificity; the proposed algorithm achieved a high differential accuracy of 98.5%. Keywords WBCs detection · Leukocytes · Image processing · Binary image · Basic color space · LAB transform

1 Introduction

White blood cells (WBCs), one of the cellular elements of our blood, play an essential role in our immune system. They fight foreign substances that enter our bodies and protect us from infection. WBCs are of various kinds: neutrophils, lymphocytes, eosinophils, monocytes, and basophils. Each of them has a different defense function
T. F. Mahdi (B) · H. G. Daway Physics Department, Science College, University of Mustansiriya, Mustansiriya, Iraq e-mail: [email protected] H. G. Daway e-mail: [email protected] J. Jouda Biological Department, Science College, University of Mustansiriya, Mustansiriya, Iraq e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_44


to protect us. Some of them are involved in detecting invading microbes; other phagocytes, or antibodies, are produced to kill them. Color conversion can be exploited in many digital image processing applications such as image enhancement [1–4], and image detection plays an important role in bioengineering applications [5, 6]. Naxos et al. [7] discussed a methodology for fully automatic classification of acute lymphocytic leukemia from microscope images of blood films; their method targeted the identification of individual cells in peripheral blood. Scotti [8] suggested a method for improving microscope images by removing unwanted microscope-system artifacts, with robust cell diameter estimation, fully automatic segmentation, and a strategy for robust delineation of white blood cells. This approach makes it easy to extract white-cell features for subsequent automatic diagnosis of blood diseases (such as acute leukemia). Dorini et al. [9] presented a method for WBC segmentation into cytoplasm and nucleus. That is especially important for differential counting, which allows many diseases to be diagnosed; they used simple morphological operators and explored the scale-space characteristics of a toggle operator to increase the precision of segmentation. A huge number of images were applied to the proposed system, showing favorable results for various cell appearances and image qualities. Frameworks that incorporate image processing procedures can provide quantitative assessment and thereby improve decisions. Along these lines, an automatic framework based on image processing strategies can support hematologists and speed up the procedure; the use of image processing procedures has grown quickly in recent years. Rezatofighi et al. [10] introduced an image recognition method to identify five classes of white blood cells in the blood. The Gram–Schmidt orthogonalization process is utilized for nucleus segmentation, which can be considered a color-dependent technique applied to the segmented areas. Several characteristics are extracted by the sequential forward selection (SFS) algorithm and fed to two classifiers, an artificial neural network (ANN) and a support vector machine (SVM), which were then compared on the most discriminatory features. The results showed that the proposed methods are precise and quick enough to be used in hematological labs. Putzu and Di Ruberto [11] presented an automatic method for identifying WBCs from microscopic photographs. The proposed method recognizes white blood cells, from which the cytoplasm and nucleus are subsequently extracted; the findings indicate that it can classify the white blood cells present in an image in a robust manner. Joshi et al. [12] suggested an automatic Otsu-threshold blood cell segmentation approach in addition to image enhancement and WBC segmentation arithmetic. To differentiate blast cells from regular lymphocyte cells, the KNN classifier was used for the study of leukemia. The framework was tested on 108 images available in the public image dataset; this technique offers 93% precision. Punitha et al. [13] examined microscope images of malaria infection and reviewed image processing studies aimed at automatic diagnosis or screening of thin blood film


smears. Smoothing and segmentation techniques are used to detect malaria parasites, obtained via gradient edge detection, in peripheral blood samples stained with Giemsa. The method is robust in the sense that exceptional conditions do not affect it, and high sensitivity, specificity, negative prediction, and positive prediction values are obtained; red blood cell extraction achieves reliable efficiency and correct classification of infected cells. Putzu et al. [14] introduced an automated approach to detect and classify WBCs using microscopic imaging that distinguishes the entire leukocyte and the nucleus from the cytoplasm. This approach is appropriate for analyzing each cell component in depth using various features from each cell part; their method correctly identified 245 of 267 white cells (92% accuracy). Patel and Mishra [15] proposed an automated approach to the diagnosis of leukemia; whereas the manual process relies on experts examining the microscopic image, the accuracy of their proposed system is 93.57%, and it also counts the infection percentage in the blood image. Hoang and Tran [16] proposed a technique focused on image processing to automate the pipe corrosion detection task. Image texture, including image color statistics, a gray-level co-occurrence matrix, and a gray-level run length, is used to extract the features of the pipe surface. George et al. [17] presented an image segmentation strategy based on K-means clustering. The suggested solution used clustering to assign the dominant colors in medical tissue images for segmentation purposes; the initialization stage is the choice of the color model used for segmentation, and the image-matching results for localization are evaluated using the mean absolute error (MAE) criterion. Muntasa and Yusuf [18] introduced color-dependent hybrid modeling to classify and separate acute lymphoblastic leukemia; the proposed model reached 95.38% accuracy.

2 Suggested Method

In this study, a white blood cell detection algorithm based on the LAB color space is proposed, in accordance with the Munsell color system. The LAB color coordinate system is designed to offer an easy color measure, and the CIELAB space was designed to be perceptually uniform: a system is perceptually uniform if a minor change to a component value is roughly equally perceptible across the range of that value [19]. In CIELAB, the lightness on the L-axis changes from 0 (black) to 100 (white), while the other two coordinates a and b represent greenness-redness and blueness-yellowness, respectively; specimens for which a = b = 0 are achromatic. The L-axis therefore describes the achromatic grayscale from black to white. The three coordinates L, a, and b are determined from the tristimulus values X, Y, and Z as follows [20]:


$$L = \begin{cases} 116\left(\dfrac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \dfrac{Y}{Y_n} > 0.008856 \\[6pt] 903.3\,\dfrac{Y}{Y_n} & \text{if } \dfrac{Y}{Y_n} \le 0.008856 \end{cases} \quad (1)$$

$$a = 500\left[f\left(\frac{X}{X_n}\right) - f\left(\frac{Y}{Y_n}\right)\right] \quad (2)$$

$$b = 200\left[f\left(\frac{Y}{Y_n}\right) - f\left(\frac{Z}{Z_n}\right)\right] \quad (3)$$

where Xn, Yn, and Zn represent a specified white object color stimulus, and the function f is defined as

$$f(x) = \begin{cases} x^{1/3} & \text{if } x > 0.008856 \\[4pt] 7.787x + \dfrac{16}{116} & \text{if } x \le 0.008856 \end{cases} \quad (4)$$

Yellowness-blueness is defined by the b-coordinate; the a and b coordinates have a scale of roughly [−100, 100] [11]. The proposed method characterizes WBCs through the binary transformation of the red component and the A component of the LAB space, within a color gamut defined by certain threshold limits. The necessary operations are then applied to detect the WBC region according to the threshold values. In the first step, the algorithm converts the image to binary depending on threshold values on the red component and the A component of the LAB color space [15, 16]. By analyzing the histograms of the color components, we note that the best WBC detection lies within the range [th1, th2] in the red component and within the range [th3, th4] in the A component:

$$bi(x, y) = \begin{cases} 1 & \text{if } th_1 < R(x, y) < th_2 \text{ and } th_3 < A(x, y) < th_4 \\ 0 & \text{otherwise} \end{cases} \quad (5)$$

The best threshold values are th1 = 40, th2 = 160, th3 = 50, and th4 = 18; this is shown in Fig. 1a, b, which shows the actual image converted to binary based on the threshold values. To remove unwanted areas and suppress distortion from small regions, we apply a median filter to the binary image with a 9 × 9 window, chosen for an image size of about 480 × 640; if the image size is larger, a larger window size can be used:

$$bw_m(x, y) = \text{medianfilter}(bw(x, y)) \quad (6)$$

Figure 1c shows the binary image after elimination of unwanted areas with the median filter.


Fig. 1 Stages of detection of white blood cells

where i indexes the detected regions. Regions with small areas (smaller than T = 800 pixels) are neglected according to Eq. (7):

$$\text{if } B_i(x, y) > T \text{ then } (x_w = x,\; y_w = y) \text{ is the coordinate of a WBC; otherwise the region is discarded} \quad (7)$$

The coordinates (x_w, y_w) are plotted on the original image to obtain the white cell region, which can be marked or framed. Figure 2 shows the steps of the suggested algorithm; a Python sketch of the same pipeline follows.
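A sketch of Eqs. (5)–(7) in Python with OpenCV is given below; the authors implemented the algorithm in MATLAB (R2018a), so the library, the handling of OpenCV's shifted A channel, and the use of connected components are assumptions, while the thresholds, window size, and area limit follow the text.

```python
# Sketch of the WBC detection pipeline (Eqs. 5-7); the paper used MATLAB.
import cv2
import numpy as np

def detect_wbc(bgr: np.ndarray, th1=40, th2=160, th3=50, th4=18, area_T=800):
    r = bgr[:, :, 2]                                    # red component (BGR order)
    a = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)[:, :, 1]   # A component of LAB
    # Eq. (5): binary mask from the two channel ranges. Note: OpenCV stores
    # A shifted by +128, so the published thresholds may need re-tuning.
    mask = ((r > th1) & (r < th2) &
            (a > min(th3, th4)) & (a < max(th3, th4))).astype(np.uint8) * 255
    mask = cv2.medianBlur(mask, 9)                      # Eq. (6): 9x9 median filter
    # Eq. (7): keep only connected regions larger than T pixels.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > area_T]
```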

3 Quality Assessment

The quality was determined based on the accuracy of the comparison between the manually determined white blood cell regions and the regions detected automatically by the proposed algorithm. The diagnostic accuracy of WBC detection is calculated from the accuracy (Acc), sensitivity (Se), and specificity (Sp), as given in Eqs. (8)–(10):

Fig. 2 Block diagram for the suggested method

$$Acc = \frac{TP + TN}{TP + TN + FP + FN} \quad (8)$$

$$Sp = \frac{TN}{TN + FP} \quad (9)$$

$$Se = \frac{TP}{TP + FN} \quad (10)$$

Figure 1 shows the original image in (a), the binary image in (b), the median filter applied to the binary image in (c), and the final detection of WBCs in (d). TP is True Positive, TN is True Negative, FP is False Positive, and FN is False Negative. TP: a WBC region that is detected as a WBC. TN: a non-WBC region identified as non-WBC.


FP: a non-WBC area that has been identified as a white blood cell. FN: a white blood cell zone that is recognized as non-WBC.

4 Results and Discussion

We propose a new algorithm depending on color features, using the LAB color space and the red component. The algorithm was applied to 2592 × 1944 pixel blood smear microscopy images. A sample of 30 collected images, captured by a microscope camera under the same illuminance level, was used for the evaluation. The suggested algorithm was implemented in MATLAB (R2018a). The images were taken from stained blood samples from the Al-Mustansiriya University Hematology Center and the Baghdad Medical City Hematology Center. These pictures are shown in Figs. 3 and 4, which include samples of detected white blood cells. The algorithm achieved Acc = 0.98552, Sp = 0.990238, and Se = 0.99012, as shown in Table 1, which are excellent values.

Fig. 3 Microscope medical image used in detecting WBCs


Fig. 4 Some images of white blood cells after being detected by using the proposed algorithm

Table 1 Accuracy, sensitivity, and specificity

Average     SE        SP         ACC
0.988626    0.99012   0.990238   0.98552

5 Conclusion

In this paper, a new algorithm is proposed for the automatic detection of white blood cells based on color features. Analyzing the results, high accuracy was obtained: Acc = 0.98552, Sp = 0.990238, and Se = 0.99012. This indicates the great success of the proposed algorithm in distinguishing white blood cells captured by optical microscopy. In future studies, other discrimination methods can be proposed, depending on color characteristics and using other color spaces.


Acknowledgements I would like to thank Mustansiriyah University, Hematology Center, and Baghdad Medical City. I would like to thank everyone who helped me complete this research in the department of physics.

References

1. Daway HG, Daway EG (2019) Underwater image enhancement using colour restoration based on YCbCr colour model. In: IOP conference series: materials science and engineering. IOP Publishing, p 12125
2. Karam GS, Abood ZM, Kareem HH, Dowy HG (2018) Blurred image restoration with unknown point spread function. Al-Mustansiriyah J Sci 29
3. Mirza NA, Kareem HH, Daway HG (2019) Low lightness enhancement using nonlinear filter based on power function. J Theor Appl Inf Technol 96:61–70
4. Daway HG, Al-Alawy IT, Hassan SF (2019) Reconstruction the illumination pattern of the optical microscope to improve image fidelity obtained with the CR-39 detector. In: AIP conference proceedings. AIP Publishing LLC, p 30006
5. Sathish SM (2020) Improved version of graph-cut algorithm for CT images of lung cancer with clinical property condition. J Artif Intell Capsul Netw 2:201–206. https://doi.org/10.36548/jaicn.2020.4.002
6. Jacob IJ (2019) Capsule network based biometric recognition system. J Artif Intell 1:83–94
7. Naxos G, Scotti F (2005) CIMSA 2005—IEEE international conference on computational intelligence for measurement systems and applications: automatic morphological analysis for acute leukemia identification in peripheral blood microscope images, pp 20–22
8. Scotti F (2006) Robust segmentation and measurements techniques of white cells in blood microscope images. Conf Rec IEEE Instrum Meas Technol Conf 43–48. https://doi.org/10.1109/IMTC.2006.235499
9. Dorini LB, Minetto R, Leite NJ (2007) White blood cell segmentation using morphological operators and scale-space analysis. In: Proceedings of SIBGRAPI 2007—20th Brazilian symposium on computer graphics and image processing, pp 294–301. https://doi.org/10.1109/SIBGRAPI.2007.33
10. Rezatofighi SH, Soltanian-Zadeh H (2011) Automatic recognition of five types of white blood cells in peripheral blood. Comput Med Imaging Graph 35:333–343. https://doi.org/10.1016/j.compmedimag.2011.01.003
11. Levkowitz H (1997) Color theory and modeling for computer graphics, visualization, and multimedia applications. Springer
12. Joshi MD, Karode AH, Suralkar SR (2013) White blood cells segmentation and classification to detect acute leukemia. Int J Emerg Trends Technol Comput Sci 2:147–151
13. Punitha S, Logeshwari P, Sivaranjani P, Priyanka S (2017) Detection of malarial parasite in blood using image processing. SSRN Electron J 2:124–126. https://doi.org/10.2139/ssrn.2942420
14. Putzu L, Caocci G, Di Ruberto C (2014) Leucocyte classification for leukaemia detection using image processing techniques. Artif Intell Med 62:179–191. https://doi.org/10.1016/j.artmed.2014.09.002
15. Patel N, Mishra A (2015) Automated leukaemia detection using microscopic images. Procedia Comput Sci 58:635–642. https://doi.org/10.1016/j.procs.2015.08.082
16. Hoang ND, Tran VD (2019) Image processing-based detection of pipe corrosion using texture analysis and metaheuristic-optimized machine learning approach. Comput Intell Neurosci 2019. https://doi.org/10.1155/2019/8097213
17. George LE, Rada HM, Abdul-haleem MG (2019) Anemia blood cell localization using modified K-means algorithm. 11:9–21


18. Muntasa A, Yusuf M (2020) Color-based hybrid modeling to classify the acute lymphoblastic leukemia. Int J Intell Eng Syst 13:408–422. https://doi.org/10.22266/IJIES2020.0831.36
19. Sangwine SJ, Horne RE (1998) The colour image processing handbook. International Thomson

Facial Expression and Genre-Based Musical Classification Using Deep Learning S. Gunasekaran, V. Balamurugan, and R. Aiswarya

Abstract Facial expression is one of the natural ways to express emotions, and it plays a vital role in extracting human emotion. Manually setting up playlists from an oversized collection of songs is an intensive task that requires considerable time. Various algorithms have recently been proposed for automating the song playlist. Most existing algorithms are slow in performance and fall short of customer expectations, and some music players cost more because of their use of hardware such as sensors. This paper proposes a deep learning approach for classifying human expressions and playing music tracks based on the detected expression, thus saving the time and labor of manually creating the playlist. It also aims to improve the usability of the music system in terms of the user's preferences. In this project, we propose a system that contains a music analysis, a robust approach capable of classifying an audio stream into different genres, and an image analysis, where facial expression recognition is employed. The music analysis divides audio files into different genres using a convolutional neural network. The first step of the image analysis is face detection using Haar features, and the second step is emotion recognition using the saved convolutional neural network model. Keywords Music information retrieval (MIR) · Image processing · Audio processing · Deep learning

1 Introduction

Music, aside from entertainment, has a very important role in our life, as it is an alternative way of expressing one's feelings and mental state. Usually, people tend to play music that reveals their emotions. Other than words to communicate one's
S. Gunasekaran (B) · R. Aiswarya Department of CSE, Ahalia School of Engineering and Technology, Palakkad, Kerala, India V. Balamurugan Department of ECE, Ahalia School of Engineering and Technology, Palakkad, Kerala, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_45


thoughts, music can be an alternative factor. The ability to interpret one's emotions is a challenging skill. The face is accountable not just for communicating thoughts but also emotions, and the way emotions are communicated via faces is similar and comparable across all persons. In today's world, everyone depends on computers to perform most of their tasks, which raises the expectation that applications be responsive to users and to other applications. A media player is a software application or hardware device that can handle multimedia files such as movies, short videos, and music of different formats, with media control icons to play, pause, or stop and to control volume; different media players can have different objectives and features. The proposed system is a smart media player that extracts the facial expression from camera input and, using a convolutional neural network, generates a customized playlist of songs suited to the individual's emotional characteristics. The proposed system additionally provides playlist generation by genre classification, performed using deep learning. The proposed system therefore aims to provide people with appropriate music through facial recognition, saving the time otherwise required to navigate the files and scroll through a never-ending list of songs. The main concern with playlist generation by scanning is that it is extremely user-specific: some users do not wish to hear specific songs in a particular mood. The various algorithms that exist today lack this flexibility, since they work with a pre-defined set of songs chosen by the developer. This can be frustrating for the user, since the same song is played until the user explicitly navigates through the directory and makes changes there. The proposed system has the capability of allowing users to set the list of songs to play when they are in a particular emotion; the user can add or delete songs from the respective playlist via the graphical user interface. Additionally, the playlist information module (reporting system) mainly displays all kinds of information required by the user about the playlist and the songs it contains. Since the proposed system is a music player, it may be used against varied backgrounds and with many people nearby. This is a concern when using the face as an emotion detector, as it can detect other faces and feed them into the model [1]. To reduce complexity, a simplified approach has been proposed that detects only the nearest face for emotion recognition [2]. The reason the system does not use face recognition is that it would be user-specific and would increase the complexity and response time of the system [3, 4]. The main concern with playlist generation by music genre is that the various genres are not sharply defined, and this requires the extraction of relevant features that differentiate one genre from another. The task of extracting musical information has been a topic of discussion in various researches under the music information retrieval (MIR) field. Recently, many algorithms describe various signal features such as the short-time Fourier transform (STFT), mel-frequency cepstral coefficients (MFCCs), and linear predictive coding (LPC). These algorithms explore genre


classification using several machine learning algorithms such as naïve Bayes and support vector machines [2]. Software such as Spotify and Pandora uses genre classification to aid recommendation systems and playlist generation. In this project, we propose a convolutional neural network (CNN) that looks for similarity in features [5]. For feature extraction, mel spectrograms are used, which can differentiate one genre from another. The proposed project aims to categorize music into different genres and to classify facial expressions from camera input in real time.

2 Related Works

Emotion is very important for understanding human nature and can be identified from facial features or facial expressions [6, 7]. To convey emotions, nonverbal indicators such as hand motions, facial gestures, and voice tone are used; when it comes to transmitting human emotion, the face performs best. An emotion-based music player [8] proposed by Nikhil Zaware, Tejas Rajgure, Amey Bhadang, and D. D. Sapkal is a music player developed in Java. It had basic music player features such as play/pause and forward/rewind. The face is captured at a pre-set interval, and the emotion is classified from the detected face; using the detected emotion, it then plays songs pre-set in the directory. Image processing was implemented using OpenCV. Emo player, an emotion-based music player proposed by Rahul Hirve, Shrigurudev Jagdale, Rushabh Banthia, Hilesh Kalal, and K. R. Pathak [9], had functionality similar to the above work. They used the JAFFE dataset. The face is detected from the webcam image using the Viola–Jones algorithm; their work also involved landmark detection capable of locating 68 landmark points. The training data were given as input to an SVM model, and a music player was created in Python with a GUI built in wxPython. An emotion recognition system proposed by Mase [10] uses facial muscle movements for emotion detection; a k-nearest neighbor classifier was used, capable of detecting four basic emotions: happy, angry, sad, and surprise. Chaudhari et al. [11] proposed a media player capable of playing music according to human gestures. The proposed system provides an interface to capture human gestures: the face is captured by webcam and detected by the Viola–Jones algorithm, then irrelevant features are eliminated using Canny edge detection with a threshold value. An expression-based music player [12] proposed by Prof. Jayshree Jha, Akshay Mangaonkar, Nipun Jambaulikar, and Prathamesh Kolhatkar is an Android application in which the songs were preprocessed and set into different playlists; playlist generation was based on the extracted features, and the Viola–Jones face detection algorithm was implemented with OpenCV.


Mohammadpour et al. [13] proposed facial emotion recognition using deep convolutional networks. The extended Cohn–Kanade dataset was used, which includes the expressions neutral, happy, sad, surprise, fear, anger, disgust, and contempt. Face recognition based on convolutional neural networks and support vector machines [14], proposed by Shanshan Guo, Shiyu Chen, and Yanjie Li, used the CNN for feature extraction and the SVM for classification. Some existing players allow users to manually enter the mood they want to hear, after which the software recommends a song list; Stereomood is one such application that lets the user manually select a mood from the available options. The various algorithms for emotion-based playlist generation that exist today work with a pre-defined set of songs chosen by the developer [5, 15]. This can be frustrating for the user, since the same songs are played until the user explicitly navigates through the directory and changes them.

3 Dataset

Two datasets are used in the proposed system: one for facial expression detection and the other for genre classification.

3.1 FER2013 Dataset for Facial Expression Detection

The FER2013 dataset contains around 35,887 well-structured 48 × 48 pixel grayscale images. The dataset covers seven distinct emotions, coded as follows: 0 means angry, 1 means disgust, 2 means fear, 3 means happy, 4 means sad, 5 means surprise, and 6 means neutral. The entire dataset is contained in .csv files with columns such as "emotion" and "pixels". Numeric codes ranging from 0 to 6 in the emotion column reflect the various emotions, and there are 2304 pixel values in the "pixels" column (Table 1).

Table 1 The FER2013 dataset

No | Emotion | Pixels                                           | Usage
0  | 0       | 70 80 82 72 58 58 60 63 54 58 60 48 89 115 121…  | Training
1  | 0       | 151 150 147 155 148 133 111 140 170 174 182 15…  | Training
2  | 2       | 231 212 156 164 174 138 161 173 182 200 106 38…  | Training
3  | 4       | 24 32 36 30 32 23 19 20 30 41 21 22 32 34 21 1…  | Training
4  | 6       | 4 0 0 0 0 0 0 0 0 0 0 0 3 15 23 28 48 50 58 84…  | Training


3.2 Kaggle GTZAN Dataset for Genre Classification

The GTZAN database was the first publicly accessible genre recognition database. It contains 1000 audio tracks, each 30 s long, sampled at 22050 Hz. The GTZAN dataset is grouped into 10 popular music genres: blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, and rock. Each genre has a total of 100 songs.

4 Proposed System

The proposed system consists of two major sections: audio processing and image processing. The audio processing part accepts any audio file as input and classifies it by musical genre. The image processing part accepts an image frame of the face from the webcam and identifies the facial expression. The overall system design is shown in Fig. 1. The proposed system offers two modes: mood-based music play and genre-based music play. The mood-based mode plays the songs corresponding to the detected facial expression; for better accuracy, facial expression detection runs for 10 s, and the most frequently detected expression is taken as the final one (Fig. 2). The genre-based mode takes all wave files in the directory and classifies them into various genres; a playlist is then generated based on the genre classification.

Fig. 1 Proposed system design


Fig. 2 Use case diagram of proposed system

The proposed system has the capability of allowing users to set the list of songs to play when they are in a particular emotion. Additionally, the user can create playlists and add songs to them.

4.1 Method

The method comprises the two pipelines introduced above: audio processing, which classifies any input audio file by musical genre, and image processing, which identifies the facial expression in webcam frames.

4.1.1 Basic Idea of Musical Genre Classification

This paper implements a CNN-based deep learning architecture for audio processing in which a representation of the audio (the mel spectrogram) is used to extract features and classify music clips into ten different music genres: blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, and rock. The inbuilt functions for retrieving music information are provided by a Python package named LibROSA, which is used for music and audio processing. The concept of genre classification is shown in Fig. 3.


Fig. 3 Concept of genre classification

4.1.2 Data Preprocessing for Genre Classification

The input to genre classification is any WAV file in the directory. For each WAV file in the directory, the algorithm splits the input signal into multiple segments, using a 0.05 s window with an overlap of 0.5 s.

4.1.3 Feature Extraction for Genre Classification

Each of the split signals is converted to a mel spectrogram using the inbuilt function in LibROSA. The array of converted spectrograms is then used for CNN classification. The following line of code (1) generates a mel spectrogram:

melspec = librosa.feature.melspectrogram(y=x, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=128)   (1)

Fig. 4 Mel spectrogram for blues genre and classical genre


Fig. 5 Mel spectrogram for country genre and disco genre

Fig. 6 Mel spectrogram for the hip-hop genre and jazz genre

Fig. 7 Mel spectrogram for pop genre and rock genre

The mel spectrograms plotted for each genre are shown in Figs. 4, 5, 6, and 7. Each spectrogram has 128 mel bins and 129 time windows, making the data two-dimensional: 128 × 129.
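Sections 4.1.2 and 4.1.3 together amount to the following short pipeline; the 3 s window with 50% overlap and the FFT parameters are assumptions chosen only to yield inputs of roughly the 128 × 129 shape reported in the text.

```python
# Sketch: load a WAV file, cut it into overlapping windows, and convert
# each window to a mel spectrogram (window/FFT settings are assumptions).
import librosa
import numpy as np

y, sr = librosa.load("song.wav", sr=22050)       # hypothetical file name
win = 3 * sr                                     # assumed window length (3 s)
hop = win // 2                                   # assumed 50% overlap
slices = librosa.util.frame(y, frame_length=win, hop_length=hop).T

specs = np.array([
    librosa.feature.melspectrogram(y=s, sr=sr, n_fft=1024,
                                   hop_length=512, n_mels=128)
    for s in slices
])
print(specs.shape)                               # (n_slices, 128, n_frames)
```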

4.1.4 Building the CNN Model for Genre Classification

An input layer, an output layer, and one or more hidden layers compose the CNN model. Feature extraction is performed by the hidden layers: the initial hidden layers extract only low-level features like edges or lines, while the deeper layers look for high-level features. Convolution and pooling layers are used alternately as the hidden layers; after them, the classifier part of the CNN carries out the remaining work. For music genre classification, the training set consists of mel spectrograms for each song. The data after preprocessing had a shape of 128 × 129, representing 128 mels with 129 windows; these data needed to be reshaped into 128 × 129 × 1 to represent the single channel. The input layer was constructed with 128 × 129 neurons. We built five hidden block layers in this proposed CNN architecture. Each block layer contains a 3 × 3 convolution kernel with the ReLU activation function, followed by a max pooling layer. The result is then flattened into a 1D array and fed into a dense layer followed by the output layer. The output layer uses the softmax function and consists of 10 neurons, one per genre. To prevent overfitting, a dropout with a 25% drop rate is included. After that, the model is compiled using the Adam optimizer and the categorical cross-entropy loss function. Next, the model is fit with a fixed batch size (128), number of epochs (150), and validation data, and finally the model weights are saved in HDF5 format. The CNN model summary is shown in Fig. 8. The model returns the probability that the input belongs to each genre, and the genre with maximum probability is selected as the predicted label.

Fig. 8 Model summary
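A short Keras sketch of this genre model follows; the block structure, dropout rate, optimizer, loss, and training settings are taken from the text, while the filter counts and dense-layer size are illustrative assumptions (dropout is placed before the output layer, its usual position).

```python
# Sketch of the five-block genre CNN described above.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(128, 129, 1))])
for filters in [16, 32, 64, 128, 256]:             # five blocks (assumed widths)
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D())
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))    # dense layer (assumed size)
model.add(layers.Dropout(0.25))                    # 25% drop rate, as in the text
model.add(layers.Dense(10, activation="softmax"))  # one neuron per genre

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=128, epochs=150,
#           validation_data=(x_val, y_val))
# model.save_weights("genre_cnn.h5")               # HDF5, as in the text
```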

4.1.5 Basic Idea of Image Processing

The user's face is scanned using camera input in real time. The frame read by the application is preprocessed so that it becomes suitable for feature extraction. The input to a CNN is a tensor of three dimensions: height, width, and number of channels; for an RGB image as shown in Fig. 9, the image dimension is 5 × 5 × 3. An unprocessed RGB image has drawbacks here: it does not emphasize the uneven features of a face, and the texture of the face cannot be distinguished, so the RGB image is converted to grayscale. The open-source library OpenCV provides the inbuilt function cvtColor to convert an RGB image to grayscale; it takes an input image in one color space and converts it to another specified color space. Figure 10 shows how the grayscale image emphasizes the brightness. The grayscale image is then used to detect faces using the OpenCV Haar cascade. As Fig. 10 shows, the detected face region is cropped and resized to 48 × 48 pixels, which is fed into the CNN architecture (Fig. 11); a sketch of this preprocessing is given after the figure captions below.

Fig. 9 Input vector of an RGB image

Fig. 10 Preprocessing of camera input

Fig. 11 Concept of image classification
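A minimal OpenCV sketch of the preprocessing in Figs. 9–11 is given below; the cascade file is the stock frontal-face model shipped with OpenCV, and the largest-face heuristic is an assumption based on the nearest-face discussion in the introduction.

```python
# Sketch: grayscale conversion, Haar-cascade face detection, 48x48 crop.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_roi(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # RGB image -> grayscale
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    # keep the largest (i.e., nearest) face, as discussed in the introduction
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # CNN input size
```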


In this proposed CNN architecture, we created four hidden block layers. Each block layer contains two consecutive 3 × 3 convolution kernels with the ReLU activation function, followed by a max pooling layer. The use of two consecutive convolutions in one layer is called grouping; this can aid better feature extraction, because the output of one convolution layer is fed into the next, letting the second convolution layer extract higher-level features than the first. The result is then flattened into a 1D array and fed into a dense layer followed by the output layer. The output layer uses the softmax function and consists of seven neurons, one per emotion.
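The emotion network can be sketched in the same style; the four grouped blocks with two consecutive 3 × 3 convolutions follow the text, while the filter counts and dense-layer size are assumptions.

```python
# Sketch of the four grouped conv blocks of the emotion CNN.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(48, 48, 1))])
for filters in [32, 64, 128, 256]:                 # four blocks (assumed widths)
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D())
model.add(layers.Flatten())
model.add(layers.Dense(256, activation="relu"))    # dense layer (assumed size)
model.add(layers.Dense(7, activation="softmax"))   # one neuron per emotion
```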

4.1.6 User Interface

The proposed system is a music player, and so it requires a user-friendly, easy-to-use graphical user interface. The GUI has been designed using the Python Flask package. Flask is employed to make web applications using an HTML front end with Python back-end processing. HTML elements and JavaScript are used to load audio and control volume. Python scripts are written to load web pages as well as to run algorithms behind a button click; they also provide the functionality for the user playlist settings feature.
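The wiring between the Flask back end and the HTML/JavaScript front end can be sketched as follows; route names, the template, and the detect_emotion() helper are all hypothetical.

```python
# Minimal Flask sketch of the player UI wiring (names are hypothetical).
from flask import Flask, render_template, jsonify

app = Flask(__name__)

def detect_emotion() -> str:
    return "happy"            # stub; the real version runs the CNN on a frame

@app.route("/")
def player():
    return render_template("player.html")    # HTML/JS front end (hypothetical)

@app.route("/emotion")
def emotion():
    return jsonify({"emotion": detect_emotion()})   # front end picks the playlist

if __name__ == "__main__":
    app.run(debug=True)
```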

4.1.7 Evaluation

Genre classification using the convolutional neural network was found to have an accuracy of 88.5% when 20% of the data from each genre was used for testing. The confusion matrix obtained for the saved CNN model is plotted in Fig. 12.

Fig. 12 Confusion matrix for genre classification and precision and recall


Table 2 Concept of confusion matrix

                        Predicted label
Actual label            Class-1           Class-2
Class-1                 True positive     False negative
Class-2                 False positive    True negative

Additionally, precision and recall for every genre are calculated from the confusion matrix (Table 2). The confusion matrix is a technique for describing the performance of a model, specifically a classification algorithm: it shows the number of correct and incorrect predictions for each class and gives an idea of how good the model is. The classification accuracy is calculated as the number of correct predictions divided by the total number of predictions. The classification report showing precision and recall for each class is given in Fig. 12. The precision of each class is calculated as the diagonal value divided by the sum of all values in the corresponding column of the confusion matrix; the recall of each class is the diagonal value divided by the sum of all values in the corresponding row; and the probability of correct classification for each class is the diagonal value of that class divided by the sum of all values of the confusion matrix. From the confusion matrix (Fig. 12), some of the incorrect predictions are "blues" predicted as "classical", "country" predicted as "blues" and "rock", and "hip-hop" predicted as "metal", with the "rock" genre predicted incorrectly the most (Fig. 13). The facial emotion classification using the convolutional neural network was found to have an accuracy of 71.45% on test data from the dataset. Figure 14 shows the confusion matrix obtained for the saved CNN model; additionally, precision and recall for each emotion are calculated from the confusion matrix. The incorrect predictions are mainly due to ambiguity in the training dataset, as Fig. 16 illustrates: Fig. 16(a) is labeled as "surprise", (b) as "fear", (c) as "surprise", and (d) as "fear", which could confuse and mislead the classification (Fig. 15).

Fig. 13 Classification accuracy bar plot for each genre
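The per-class computation described above can be checked with a few lines of NumPy; the 3 × 3 matrix is illustrative only (the paper's matrices are 10 × 10 for genres and 7 × 7 for emotions).

```python
# Sketch: per-class precision/recall from a confusion matrix (rows = actual).
import numpy as np

cm = np.array([[50,  3,  2],
               [ 4, 45,  6],
               [ 1,  5, 49]])                # illustrative counts only

precision = np.diag(cm) / cm.sum(axis=0)     # diagonal / column sums
recall = np.diag(cm) / cm.sum(axis=1)        # diagonal / row sums
accuracy = np.trace(cm) / cm.sum()           # correct / total predictions
print(precision, recall, accuracy)
```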


Fig. 14 Confusion matrix of facial emotion classification and precision and recall

Fig. 15 Classification accuracy bar plot for each emotion

5 Summary of the Work

This paper proposes a deep learning approach for facial emotion recognition and genre classification, which could be applied to automate the playlist generation process. The method suggested in this paper would be very useful for users who listen to songs based on their mood and emotional state. It helps reduce the complexity of manually curating playlists from an oversized collection of songs, an intensive task that requires considerable time. The proposed music player analyzes facial video, identifies expressed movements in terms of basic emotions, and then plays music based on these emotional responses. The main advantages of this proposed work are automation and independence from the user and environment. The system proposed in this paper can be built on a user-friendly


Fig. 16 Ambiguity in training data

platform of Android Studio and OpenCV, with the Python language used for implementing the algorithms. This project does not necessitate the purchase of any costly software, which makes it cost-effective for both developers and consumers. The proposed system will be a useful application for any music listener with a camera and an Internet connection. The system's potential scope is to develop and include a framework useful in music therapy treatment: in the medical community, the proposed scheme could be tweaked to treat people suffering from anxiety, acute depression, and trauma. Additionally, in the future, the proposed system aims to avoid unreliable results caused by inadequate camera resolution and poor ambient lighting. The music system can additionally be controlled using hand gestures along with facial gestures.


Post-Quantum Cryptography: A Solution to Quantum Computing on Security Approaches Purvi H. Tandel and Jitendra V. Nasriwala

Abstract Ever since the initial idea of quantum computing, the quest for processing data faster has never slowed down. The invention of Shor's and Grover's algorithms has motivated researchers to develop a successful quantum computer that can outperform any classical system available today. Due to their working principle and the different properties of qubits, many tasks that are difficult for current systems can be done efficiently using quantum computers. This revolution in computing will breach existing security approaches and lead us to find alternatives that withstand quantum attacks. Post-quantum cryptography is the most promising way to secure our existing digitized world, as it is not based on the discrete logarithm and integer factorization problems; it is based on other mathematical problems that are believed to be hard to break in polynomial time. Therefore, tremendous research has been done in the last decade to design stable, efficient, and secure post-quantum cryptographic approaches. Hash-based, code-based, and lattice-based approaches are well understood, and many of them have been standardized for practical implementations. Keywords Quantum computing · Post-quantum cryptography · Artificial intelligence · Cryptography · Shor's algorithm

1 Introduction Humans’ every invention thrives for improving the quality of human lives. Throughout the centuries, all kinds of developments have made human life better or easier. One of such new age inventions is quantum computers. Quantum computing is one of the top ten inventions of the twenty-first century. In 1980, Yuri Manin proposed P. H. Tandel (B) Department of Information Technology, C G Patel Institute of Technology, Uka Tarsadia University, Bardoli, India e-mail: [email protected] J. V. Nasrıwala Babu Madhav Institute of Information Technology, Uka Tarsadia University, Bardoli, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_46


the idea of quantum computing [1], and in 1981, Richard Feynman presented a logical model of a quantum computer at the conference on physics and computation [2]. Ever since the invention of programmable classical computers in the twentieth century, their working principle has been the transistor: classical computers are devices with transistor switches that process information as either 0 or 1 (binary representation). Groups of these transistors arranged in special circuits, called logic gates, perform calculations and take decisions, and larger numbers of transistors are used to improve a computer's processing power. As per Moore's law, the number of transistors on a microchip doubles every two years, while the cost of computing is halved. In 2019, the 56-core Xeon Platinum 9282 processor, with 8 billion transistors, became commercially available from Intel. Increasing the number of transistors on a small single board to improve processing power will eventually become infeasible, driving researchers toward a solution that works on a different principle than classical computers.

1.1 Quantum Computing

Quantum computers are the perfect blend of physics, engineering, and computer science. Quantum computing works on the elementary principle of quantum physics that a system can be in multiple states simultaneously [3, 4]. This phenomenon, called superposition, exists before measurement; measurement collapses the system into a definite state. Superposition and entanglement are the elementary principles behind the working of quantum computers [5]. In classical computers, the basic unit of information is one bit (either 0 or 1), whereas in quantum computers the basic unit of information is a quantum bit (qubit), described by 0 and 1 simultaneously. This basic principle of quantum computing provides new possibilities for the effective processing of databases and for solving problems that are difficult for a classical computer. The two basis states of a qubit are denoted by the Dirac notations |0⟩ and |1⟩. A single qubit is represented by the equation

|ψ⟩ = α|0⟩ + β|1⟩

where α and β are complex numbers such that |α|² + |β|² = 1. Superposition is the special property of a qubit that its state can be 0 and 1 at the same time before actual measurement. Figure 1 shows the quantum state representation using the Bloch sphere: state |0⟩ is located at the south pole, and |1⟩ at the north pole. Here, the z-axis is known as the longitudinal axis, while the x-axis and y-axis are known as the transverse axes [6]. To perform operations, different quantum gates rotate the qubit state by some angle about a relevant axis.
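As a small numerical illustration, the amplitudes of a single-qubit state and the normalization condition can be checked with NumPy (the example amplitudes are arbitrary):

    import numpy as np

    # |psi> = alpha|0> + beta|1> stored as a length-2 complex vector
    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
    psi = np.array([alpha, beta])

    assert np.isclose(np.vdot(psi, psi).real, 1.0)  # |alpha|^2 + |beta|^2 = 1
    print(np.abs(psi) ** 2)   # measurement probabilities for |0> and |1>: [0.5 0.5]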


Fig. 1 Bloch sphere representation of the quantum state |ψ⟩ = α|0⟩ + β|1⟩ [6]

To understand the advantage of working with qubits, let us consider the task of finding the correct combination of 2 bits to access or crack a system on both classical and quantum computers (Table 1). A classical computer with a 2-bit data system has 4 possibilities: 00, 01, 10, and 11. This means a 2-bit classical computer can evaluate at most one of these four possible options at a time, while a 2-qubit quantum computer can evaluate all four in a single operation at the same time. A classical 2-bit register contains information about only one state, whereas a 2-qubit quantum register contains information about 2² = 4 states simultaneously [7]. In general, an n-qubit quantum computer can analyze 2ⁿ parallel states in a single operation; a classical computer has to repeat the operation 2ⁿ times (see the sketch after Table 1). Thus, quantum computers have tremendous processing power compared to a classical system and can solve problems or process data faster: one day, even a 1000-qubit computer may be able to represent more states than there are atoms in the universe. If the research community designs a stable quantum computer, many problems that classical computers are incapable of may be solved in the future.

Table 1 Illustration of classical and quantum systems for finding the correct combination of 2-bit data

                                          | Classical 2-bit system          | Quantum 2-bit system
Possibility per bit at a time             | 0 or 1                          | 0 and 1 both simultaneously
Operations needed to find the correct     | 00, 01, 10, 11:                 | |00⟩, |01⟩, |10⟩, |11⟩: a single operation
combination of 2 bits                     | 4 separate operations required  | evaluates all possible options in parallel
Storage capacity of 2-bit system          | A single value                  | All numbers from 0 to 3
per operation                             | (either 0, 1, 2, or 3)          |
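The following NumPy sketch (a plain state-vector simulation, not a real quantum device) makes this 2ⁿ bookkeeping explicit: applying a Hadamard gate to each of n qubits yields a register whose 2ⁿ amplitudes are all populated at once.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    ket0 = np.array([1.0, 0.0])                    # |0>

    n = 2
    state = np.array([1.0])
    for _ in range(n):                  # put every qubit into superposition
        state = np.kron(state, H @ ket0)

    print(len(state))                   # 2**n = 4 amplitudes tracked together
    print(np.abs(state) ** 2)           # |00>, |01>, |10>, |11> each with prob. 0.25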


1.2 Benefits of Quantum Computing

Quantum computing exploits quantum mechanical effects to process information, which drastically improves performance compared to classical computing concepts [8]. Quantum computing can provide faster solutions due to the parallelism discussed earlier; this is one of its biggest advantages, as many solutions that are hard for classical systems can be derived faster and more efficiently. Quantum computers are able to address problems like integer factorization, faster searching, chemical formula testing, optimization, and efficient prediction, and researchers are working to find many more such quantum algorithms. Most successful quantum algorithms use the quantum Fourier transform as a core concept because it has low hardware requirements [9]: the number of gates needed to compute it on n qubits is far smaller than on a classical computer. This is the main reason for faster integer factorization using Shor's algorithm [10]. Another such algorithm is Grover's search algorithm, which provides fast search solutions and is also built on the quantum Fourier transform [11]. Quantum computers also consume less power than existing classical computers and supercomputers, due to the principle of quantum tunneling [2]: the D-Wave 2000Q consumes 25 kW, whereas an average supercomputer takes 2500 kW, almost 100 times more than this currently designed quantum computer. Based on the author's study [2], quantum computers could perform computational tasks with smaller energy requirements than existing classical computers.

1.3 Limitations of Current Quantum Computers

Quantum computers face implementation as well as environmental challenges, and there are plenty of limitations when it comes to practical implementation. One of the main disadvantages is decoherence: coherence loss is usually caused by vibrations, temperature fluctuations, electromagnetic waves, and other interactions with the outside environment, which destroy the working properties of the quantum computer [12]. Another important issue is quantum error correction. Quantum error correction schemes are available, but they consume a large number of qubits for error correction, leaving few qubits for actual computation [13]. Current quantum computers also have environmental limitations that make them difficult to develop. They are very expensive, due to their specific hardware design and the constraints under which they operate; the latest D-Wave 2000Q costs around $15 million [2]. A quantum computer requires a cooling environment of −273 °C, about 180 times colder than interstellar space [14], and very low pressure,


almost 10 billion times lower than atmospheric pressure. Quantum computers also have a comparatively large physical footprint: the D-Wave 2000Q system measures approximately 10 ft × 7 ft × 10 ft in length, width, and height [14]. These are a few of the limitations on which researchers are working around the clock to make quantum computers commercial as well as portable.

2 Applications of Quantum Computing in Various Domains

In the banking sector, many trading algorithms may outperform currently available approaches when run on quantum computers. Every bank, government sector, and high-revenue company is concerned about risks on its digitized platforms; risk identification and management tasks that process huge amounts of data can be performed optimally using quantum computers. Fraud detection using machine learning concepts can learn such frauds in a shorter time than currently available solutions. Ever since quantum computers' successful implementation, companies like Google, NASA, USRA, IBM, Intel, and Microsoft have been working on areas such as computer vision, drug discovery, robotics, navigation, predictive modeling, searching unstructured databases, pattern matching, cryptography, molecule simulation, quantum chemistry, and many more. Advances in machine learning, big data, and security will outperform existing high-performance computing in many sectors by using quantum computers.

3 Impact of Quantum Computing on Cryptographic Approaches

Every Web service, banking sector, e-government sector, and even the latest cryptocurrencies depend on public-key cryptography, where the public-private key pair plays a vital role in keeping the current digitized world secure: the private key stays with the user, while the public key is open to all other users in the cryptosystem. All currently available public-key algorithms are based on discrete logarithms or prime factorization, and their strength depends on the key length. Larger-bit versions of all such cryptographic primitives currently keep our digitized world safe, and as processing capabilities in the computing world increase, primitives with higher bit versions come into use. But how far the bit lengths of these primitives can keep being increased is an open question, which has motivated many researchers to work on cryptographic solutions beyond classical public-key cryptosystems.


Table 2 Overview of existing cryptographic approaches and their security level after quantum attacks

Cryptographic approach | Key/hash size | Current security level (in bits) | Quantum algorithm used to break approach | Post-quantum security level (in bits)
AES-128   | 128       | 128 | Grover's algorithm (brute-force to find applied key) | 64
AES-256   | 256       | 256 | Grover's algorithm (brute-force to find applied key) | 128
RSA-2048  | 2048      | 112 | Shor's algorithm (integer factorization)             | Broken
RSA-15360 | 15,360    | 256 | Shor's algorithm (integer factorization)             | Broken
ECDSA     | 256       | 128 | Shor's algorithm (discrete logarithms)               | Broken
ECDH      | 256       | 128 | Shor's algorithm (discrete logarithms)               | Broken
SHA-256   | 256       | 256 | Grover's algorithm (collision attack)                | 128
SHA3-256  | 256       | 256 | Grover's algorithm (collision attack)                | 128
SHAKE-256 | Arbitrary | 256 | Grover's algorithm (collision attack)                | 256

Especially when considering quantum computers, the big breakthrough that turned quantum mechanics into a credible threat to cryptography came with Peter Shor's and Lov Grover's quantum algorithms. In 1996, Lov Grover invented Grover's algorithm, which gives a square-root speedup on search problems using quantum computers [14]. This improves brute-force attacks that break ciphers by checking every possible key: because the search takes only square-root time, the exponent of the attack's time complexity is halved. In 1994, Peter Shor invented a quantum algorithm for more efficient factoring; Shor's algorithm solves integer factorization and discrete logarithms in polynomial time on a quantum computer [15]. Table 2 summarizes the impact of quantum attacks on various cryptographic approaches. It can be observed that only AES and hash functions with sufficiently large security parameters can withstand quantum computer attacks, while all public-key cryptographic primitives will be broken using Shor's algorithm. This motivates many researchers to find other cryptographic primitives able to withstand quantum computing capabilities. Many researchers are working in the quantum cryptography area, which enables secure communication in the quantum computing paradigm; a lot of research was carried out on the quantum Internet, quantum communications, and quantum cryptography in the last decade [16]. But quantum cryptography has limitations concerning special communication channels and their environmental requirements, and this technology will take time to standardize its protocols and methods. Another promising way to secure the current digitized world is post-quantum cryptography, in which methods rely on mathematical hardness. Whether quantum computers become real enough for commercialization or not, post-quantum cryptography can protect our communications, and PQ methods can be implemented on existing infrastructure. The post-quantum cryptography family includes

Post-Quantum Cryptography: A Solution to Quantum …

611

hash-based, code-based, lattice-based, multivariate, and supersingular elliptic curve isogeny cryptography.
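A back-of-the-envelope check of the Grover rows in Table 2 (illustrative arithmetic only): the square-root speedup halves the effective bit security of a symmetric key.

    import math

    # Grover's search needs on the order of sqrt(2**k) oracle queries for a
    # k-bit key, so the post-quantum security level is roughly k/2 bits.
    for k in (128, 256):
        queries = math.isqrt(2 ** k)          # ~2**(k/2)
        print(f"AES-{k}: ~2^{int(math.log2(queries))} quantum queries")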

4 Survey on Post-quantum Cryptographic Methods

The different families of post-quantum schemes vary heavily in their resource requirements. Post-quantum approaches generally require larger public keys and larger signature/message sizes than classical approaches; in return, they are secure against both classical and quantum attacks.

4.1 Hash-Based Cryptography

Hash-based approaches rely on the security of hash functions, under the assumption that current hash functions are collision-resistant and pre-image-resistant. Their security against quantum attacks is well studied and understood. Approaches in this category use tree structures such as Merkle constructions, hash functions, and one-time/few-time signatures to provide strong security. Hash-based approaches are relatively fast in signing and verifying and are inherently forward secure. They are also advantageous because they rely on hash functions like SHA3 or BLAKE2, which can easily be replaced with new secure hash functions should the current ones be found vulnerable. Hash-based schemes are either stateful or stateless; Fig. 2 illustrates both kinds. Stateful hash-based algorithms must maintain a record (state) of previously used one-time keypairs to avoid using the same keys again in the future [17], and they can produce only a limited number of signatures, which is a drawback. Stateless approaches, on the other side, do not maintain the used-key state and instead use few-time signature schemes; these can sign many messages, resulting in larger trees and signature sizes. Stateless approaches therefore have larger signature sizes than stateful ones. The number of signatures can be increased, but this also increases the signature size, which is the greatest challenge of this post-quantum cryptography family.
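To make the one-time-signature building block concrete, below is a minimal Lamport-style sketch using only Python's standard library. It illustrates the primitive that Merkle trees aggregate, not the XMSS or SPHINCS constructions themselves, and each keypair must sign at most one message.

    import hashlib
    import os

    def H(data):
        return hashlib.sha256(data).digest()

    def keygen(bits=256):
        # Private key: two random preimages per message-digest bit;
        # public key: their hashes.
        sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def digest_bits(msg, n):
        d = H(msg)
        return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

    def sign(msg, sk):
        # Reveal one preimage per bit of the message digest.
        return [pair[bit] for pair, bit in zip(sk, digest_bits(msg, len(sk)))]

    def verify(msg, sig, pk):
        return all(H(s) == pair[bit]
                   for s, pair, bit in zip(sig, pk, digest_bits(msg, len(pk))))

    sk, pk = keygen()
    sig = sign(b"hello", sk)
    print(verify(b"hello", sig, pk))   # True
    print(verify(b"hellO", sig, pk))   # False: tampered message fails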

4.2 Code-Based Cryptography

Code-based approaches rely on the hardness of decoding error-correcting codes [19]. In 1978, McEliece proposed a cryptosystem based on the hardness of decoding a generic linear code. McEliece's scheme describes how to compute the public generator matrix G_pub and how the holder of the private key can decode messages efficiently; McEliece uses Goppa codes for key generation [20].


Fig. 2 a Stateful Merkle signature scheme using one-time signatures, where public keys are hashed and concatenated up to the root node; to verify a signature, the authentication path is followed for validation. b Stateless signature scheme, where the upper layers of the tree use a stateful one-time signature scheme to sign the root of the tree beneath them and the lower layers use a few-time signature scheme to sign the messages [18]

Table 3 illustrates encryption in both the McEliece cryptosystem and the Niederreiter cryptosystem, an efficient variant of McEliece, and summarizes the difference between the two [20, 21]. Niederreiter proposed his cryptosystem to reduce the key size by encoding the original message as an error. A general linear code cannot be decoded in polynomial time, which is the biggest advantage of this family. On the other side, most code-based primitives have very large key sizes; many researchers have proposed solutions to shrink the public key, but reducing its size is still a challenge for this family.

Table 3 Illustration of the encryption function for both McEliece and Niederreiter cryptosystems

                    | McEliece cryptosystem                              | Niederreiter cryptosystem
General purpose     | Adds an intentional error to protect the original  | Encodes the original message as an error
                    | message from eavesdroppers                         | to reduce public key sizes
Encryption function | c = m·G_pub ⊕ e, where c is the codeword, m is     | s = H_pub·e^T, where s is the syndrome, e^T is
                    | the original message, e is the secret error        | the original message encoded as a bit string e
                    | vector, and G_pub is the public generator matrix   | of weight t, and H_pub is the parity check matrix
Merit/demerit       | Larger key sizes                                   | Faster and smaller key sizes
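As a toy illustration of the McEliece-style encryption c = m·G ⊕ e (deliberately insecure: a public [7,4] Hamming code stands in for the scrambled Goppa code of the real scheme):

    import numpy as np

    # Generator matrix of the [7,4] Hamming code; arithmetic is modulo 2.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    m = np.array([1, 0, 1, 1])            # 4-bit message
    e = np.zeros(7, dtype=int)
    e[2] = 1                              # secret error vector of weight t = 1

    c = (m @ G + e) % 2                   # ciphertext: codeword plus intentional error
    print(c)
    # A receiver who knows the code's structure corrects the single error and
    # recovers m; an attacker faces generic linear-code decoding.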


Fig. 3 Lattice-based encryption example in two-dimensional lattice [22]

4.3 Lattice-Based Cryptography

Lattice-based approaches rely on the concept that it is computationally hard to find the shortest vector in a high-dimensional lattice, and their security is proven under worst-case hardness. To understand the concept, Fig. 3 represents lattice-based encryption in a two-dimensional lattice space. For key generation, schemes consider the two-dimensional lattice basis {s0, s1} as the private key and {p0, p1} as the public key. For encryption, the sender maps the message to a point m in the lattice using the public scrambled basis and adds an error vector to obtain a point c that is closer to m than to any other lattice point [22]. To decrypt the message, the receiver uses {s0, s1} to recover the original message. As the attacker does not have the well-formed secret basis {s0, s1}, it is computationally hard for the attacker to recompute a well-formed basis; this is known as the shortest vector problem (SVP). Some lattice-based schemes are based on learning with errors (LWE), which contributes strong security but faces large key sizes in practical applications. NTRU-based schemes are efficient to implement, but their security proofs are still under study. A lot of application-oriented research has been carried out, such as attribute-based encryption, code obfuscation, and homomorphic encryption, although these schemes are less trusted compared to code-based approaches.
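A minimal Regev-style LWE sketch for encrypting a single bit, with toy parameters chosen only so that decryption provably succeeds (assumed values, far too small for real security):

    import numpy as np

    rng = np.random.default_rng(0)
    q, n, N = 4093, 16, 64                  # modulus, secret dimension, samples

    s = rng.integers(0, q, n)               # private key
    A = rng.integers(0, q, (N, n))          # public random matrix
    e = rng.integers(-1, 2, N)              # small errors
    b = (A @ s + e) % q                     # public key: (A, b)

    def encrypt(bit):
        # Sum a random subset of public samples; embed the bit at q//2.
        idx = rng.random(N) < 0.5
        return A[idx].sum(axis=0) % q, (b[idx].sum() + bit * (q // 2)) % q

    def decrypt(a_sum, b_sum):
        v = (b_sum - a_sum @ s) % q         # small error, plus q//2 if bit = 1
        return int(min(v, q - v) > q // 4)

    for bit in (0, 1):
        print(bit, decrypt(*encrypt(bit)))  # recovers 0 and 1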

4.4 Multivariate Cryptography

Multivariate cryptography bases asymmetric cryptographic approaches on multivariate polynomials over a finite field. Solving a system of random


multivariate polynomial equations is considered to be NP-complete. Several multivariate cryptosystems have been proposed in past decades, of which many have been broken. One of the biggest advantages of this family is its small signature sizes and simple arithmetic operations. However, approaches in this family are still considered too immature to withstand quantum attacks.

4.5 Supersingular Elliptic Curve Isogeny Cryptography

Supersingular elliptic curve isogeny cryptography computes sequences of isogenies of elliptic curves: the idea of isogenies is to operate on different elliptic curves rather than calculating points on one curve. In 2006, Rostovtsev and Stolbunov introduced a public-key cryptosystem based on isogenies. A major drawback of this approach is that encryption and decryption are time-consuming, and attacks on and the security of these approaches are still under study. They do have advantages, such as small public keys and no expected decryption errors. This variant is quite new, and researchers are working hard to make it both efficient and secure.

Hash-based, code-based, and lattice-based families are more mature and stable compared to the other PQ approaches discussed; these families rely on mathematical problems that are hard to solve even if the attacker has access to a quantum computer. Table 4 compares the post-quantum signature schemes by key size, signature size, signing and verification time, and performance on constrained devices, and briefs the open challenges in each category. In the future, the majority of applications will use constrained devices to automate our lives; to maintain security and privacy in those small or large IoT networks, we need strong cryptographic primitives that will not be vulnerable in the future. Constrained devices fall into two categories: memory-constrained and performance-constrained devices. From the comparative analysis, it can be concluded that PQ schemes that take a long time for encryption and decryption, or signing and verifying, are not suitable for performance-constrained devices; in particular, the new supersingular elliptic curve isogeny category is not suitable for them. PQ schemes that generate comparatively large public keys are not suitable for memory-constrained devices; due to their large key sizes, code-based cryptography and multivariate cryptography are unsuitable for implementation on such devices. Researchers are working hard to make PQ approaches more efficient, although a few approaches like hash-based, code-based, and lattice-based are mature enough for standardization and deployment.


Table 4 Comparative analysis of post-quantum schemes for digital signatures

Post-quantum signature schemes | Public key sizes (bytes) | Signature size (bytes) | Advantage(s) | Disadvantage(s) | Open challenge(s)
Hash-based: XMSS (stateful) [17], SPHINCS (stateless) [23] | 64 / 1056 | 2500–2820 / 41,000 | Small public key sizes | Much larger signature sizes | Slow algorithms
Code-based: McEliece [24] | 958,482–1,046,739 | 187–194 | Fast encryption and decryption | Very large key sizes; not suitable for memory-constrained environments | Unstructured codes suffer from huge public keys
Lattice-based: NTRU-Encrypt [25, 26] | 1495–2062 | 1495–2062 | Secure under worst-case hardness assumption; suitable for constrained environments | Less trusted compared to code-based schemes | Ring-LWE can help to decrease key sizes
Multivariate-based: HFEv [27] | 500,000–1,000,000 | 25–32 | Small signature sizes | Larger public key sizes; not suitable for memory-constrained environments | Use them to build a white-box encryption scheme
Supersingular isogenies: SIDH [28] | 564 | — | Supports perfect forward secrecy | Slowest encryption, decryption, and key generation | Find digital signatures

5 Conclusion

Quantum computers can solve complex problems that are hard for existing systems. Advancement in quantum computing will completely change how difficult problems are solved in many sectors like banking, government, prediction, the health


sector, public service, etc. Shor's and Grover's algorithms will jeopardize all cryptographic primitives relying on discrete logarithms and integer factorization. Post-quantum cryptography has opened the door to replacing the vulnerable cryptographic primitives successfully. To withstand future quantum attacks, not only general-purpose but also constrained devices need to be secured with PQ approaches. Hash-based, lattice-based, and isogeny-based cryptographic approaches are best suited for memory-constrained devices.

References

1. Peter S (2000) Introduction to quantum algorithms. In: AMS proceedings of symposium in applied mathematics, 58. https://doi.org/10.1090/psapm/058/1922896
2. Elsayed N, Maida AS, Bayoumi M (2019) A review of quantum computer energy efficiency. In: IEEE Green Technologies Conference (GreenTech), Lafayette, LA, USA, 1–3. https://doi.org/10.1109/GreenTech.2019.8767125
3. Siddhartha Sankar Biswas (2017) Quantum computers: a review work. Adv Comput Sci Technol 10(5):1471–1478
4. Feynman RP (1982) Simulating physics with computers. Int J Theoretical Phys 467–488. https://doi.org/10.1007/BF02650179
5. Hidary JD (2019) A brief history of quantum computing. In: Quantum computing: an applied approach. Springer, Cham. https://doi.org/10.1007/978-3-030-23922-0_2
6. Krantz P, Kjaergaard M, Yan F, Orlando T, Gustavsson S, Oliver W (2019) A quantum engineer's guide to superconducting qubits. Appl Phys Rev 6:021318. https://doi.org/10.1063/1.5089550
7. Kanamori Y, Yoo S-M, Pan W, Sheldon FT (2006) A short survey on quantum computers. Int J Comput Appl 28. https://doi.org/10.2316/Journal.202.2006.3.202-1700
8. Cincotti G (2009) Prospects on planar quantum computing. J Lightwave Technol 27(24):5755–5766. https://doi.org/10.1109/JLT.2009.2032371
9. Menon PS, Ritwik M (2014) A comprehensive but not complicated survey on quantum computing. IERI Proc 10:144–152. ISSN 2212-6678. https://doi.org/10.1016/j.ieri.2014.09.069
10. Bowden CM, Chen G, Diao Z, Klappenecker A (2002) The universality of the quantum Fourier transform in forming the basis of quantum computing algorithms. J Math Anal Appl 274(1):69–80
11. Grover LK (1996) A fast quantum mechanical algorithm for database search. In: Proceedings of the twenty-eighth annual ACM symposium on Theory of Computing (STOC '96). Association for Computing Machinery, New York, NY, USA, pp 212–219. https://doi.org/10.1145/237814.237866
12. Kasivajhula S (2006) Quantum computing: a survey. In: Proceedings of the 44th annual Southeast regional conference (ACM-SE 44). Association for Computing Machinery, New York, NY, USA, pp 249–253
13. Roffe J (2019) Quantum error correction: an introductory guide. Contemp Phys 60:226–245
14. The D-Wave 2000Q Quantum Computer Technology Overview (Online). Available: https://www.dwavesys.com/sites/default/files/DWave%202000Q%20Tech%20Collateral0117F.pdf
15. Shor PW (1999) Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Rev 41(2):303–332
16. Zhang H, Ji Z, Wang H, Wu W (2019) Survey on quantum information security. China Commun 16(10):1–36. https://doi.org/10.23919/JCC.2019.10.001
17. Buchmann J, Dahmen E, Hulsing A (2011) XMSS—a practical forward secure signature scheme based on minimal security assumptions. In: Yang BY (ed) Post-quantum cryptography, PQCrypto 2011, vol 7071. LNCS. Springer, Berlin, pp 117–129


18. Suhail S, Hussain R, Khan A, Hong CS (2021) On the role of hash-based signatures in quantum-safe Internet of Things: current solutions and future directions. IEEE Internet Things J 8(1):1–17. https://doi.org/10.1109/JIOT.2020.3013019
19. Niederhagen R, Waidner M (2017) Practical post-quantum cryptography. SIT-TR-2017-02
20. Baldi M, Bianchi M, Chiaraluce F et al (2016) Enhanced public key security for the McEliece cryptosystem. J Cryptol 29:1–27. https://doi.org/10.1007/s00145-014-9187-8
21. Véron P (2013) Code based cryptography and steganography. https://doi.org/10.1007/978-3-642-40663-8_5
22. Niederhagen R, Waidner M (2017) Practical post-quantum cryptography. SIT-TR-2017-02
23. Bernstein DJ, Hopwood D, Hulsing A, Lange T, Niederhagen R, Papachristodoulou L, Schneider M, Schwabe P, Wilcox-O'Hearn Z (2015) SPHINCS: practical stateless hash-based signatures. In: Fischlin M, Oswald E (eds) Advances in cryptology, EUROCRYPT 2015, vol 9056. LNCS. Springer, Berlin, pp 368–397
24. Repka M, Zajac P (2014) Overview of the McEliece cryptosystem and its security. Tatra Mountains Mathematical Publications 60. https://doi.org/10.2478/tmmp-2014-0025
25. Hoffstein J, Howgrave-Graham N, Pipher J, Whyte W (2009) Practical lattice-based cryptography: NTRUEncrypt and NTRUSign. In: Nguyen P, Vallée B (eds) The LLL algorithm. Information Security and Cryptography. Springer, Berlin. https://doi.org/10.1007/978-3-642-02295-1_11
26. Güneysu T, Lyubashevsky V, Pöppelmann T (2012) Practical lattice-based cryptography: a signature scheme for embedded systems. In: Prouff E, Schaumont P (eds) Cryptographic hardware and embedded systems—CHES 2012. Lecture Notes in Computer Science, vol 7428. Springer, Berlin. https://doi.org/10.1007/978-3-642-33027-8_31
27. Petzoldt A, Chen M-S, Yang B-Y, Tao C, Ding J (2015) Design principles for HFEv-based multivariate signature schemes. In: Iwata T, Cheon JH (eds) Advances in cryptology—ASIACRYPT 2015, vol 9452. LNCS. Springer, Berlin, pp 311–334
28. Costello C, Longa P, Naehrig M (2016) Efficient algorithms for supersingular isogeny Diffie–Hellman. In: Robshaw M, Katz J (eds) Advances in cryptology—CRYPTO 2016, vol 9814. LNCS. Springer, Berlin, pp 572–601

A Detailed Analysis of the CIDDS-001 and CICIDS-2017 Datasets K. Vamsi Krishna, K. Swathi, P. Rama Koteswara Rao, and B. Basaveswara Rao

Abstract The contributions of this paper are threefold—(i) to provide a detailed analysis of the two benchmark datasets CIDDS-001 and CICIDS-2017, (ii) to evaluate three prominent feature ranking methods and to quantify the closeness factor between the features and the class label through statistical analysis, and (iii) to evaluate the performance of different traditional classifiers on these datasets in a cloud environment. These datasets were generated in cloud environments and contain contemporary attacks. These contributions provide defenders with prior knowledge for building an ideal NIDS and for selecting suitable algorithms for feature learning and classification. Machine learning and dimensionality reduction algorithms are often applied by researchers without knowing which algorithm is suited to obtain good performance; prior knowledge of the dataset structure and the statistical behavior of its features helps in choosing algorithms that reach maximum detection rates with minimum computational time. Experiments are carried out to achieve the above-mentioned contributions; finally, the results are presented and conclusions are drawn. Keywords Network intrusion detection system · Machine learning · CIDDS-001 · CICIDS-2017 · Feature selection

K. Vamsi Krishna (B) Department of CSE, Koneru Laksmaiah Education Foundation, Vaddeswaram, Andhra Pradesh 522502, India K. Swathi Department of CSE, NRI Institute of Technology, Agiripalli, Andhra Pradesh 521212, India P. Rama Koteswara Rao Department of ECE, NRI Institute of Technology, Agiripalli, Andhra Pradesh 521212, India B. Basaveswara Rao Computer Center, Acharya Nagarjuna University, Guntur, Andhra Pradesh 522501, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_47


1 Introduction

With the drastic growth in cloud service utilization, security for the cloud environment has become a major issue. As the usage of applications on computer networks increases, the need for network security also becomes increasingly important. All information and communication systems suffer from lapses in security methods, and it is difficult for manufacturers to close these security gaps for both implementation and financial reasons. IDS plays a vital role in identifying anomalies and attacks in the network [1]. Real-time commercial applications mostly include misuse detection systems, but for research the use of anomaly-based IDS is very important because of its ability to detect new attacks. Machine intelligence methods are used in IDS to complete this task efficiently. Over the past few decades, NIDS has become one of the most significant methodologies for detecting attacks and has evolved into a major tool for providing security against attackers in the cloud environment [2, 3].

According to the work carried out by many researchers, the existing datasets contain potential problems, and the performance of existing anomaly intrusion detection methods is low: the available datasets have not been verified, and no reliable tests have been conducted on them. Approximately eleven datasets have been used by researchers from 1998 till date, and they have many drawbacks: metadata about the feature set is unavailable for some datasets, and some do not contain diverse or recent attack patterns. Building an IDS becomes easier if a detailed analysis of the benchmark datasets is available. Analyzing NIDS on benchmark datasets like KDD CUP 99 and NSL-KDD does not yield the expected results, because these datasets do not cover the latest attack patterns and might not suit the cloud environment. So, the two datasets CIDDS-001 and CICIDS-2017 are adopted for this study. Some studies that analyzed these datasets are listed in the following paragraphs.

Gharib et al. [4] identified 11 criteria for building a dataset: complete network configuration, complete traffic, labeled dataset, complete interaction, complete capture, available protocols, attack diversity, anonymity, heterogeneity, feature set, and metadata. None of the existing IDS datasets covers all 11 criteria; according to researchers at the Canadian Institute for Cybersecurity, most available datasets are out of date and unreliable for evaluation purposes.

In the study carried out by Villacampa [5], various feature selection methods were compared: information gain (IG), correlation-based feature selection, Relief-F, wrapper, and hybrid methods. To evaluate their performance, three popular classifiers, namely decision trees, kNN, and SVM, were used. From the experimental results, it was concluded that the Relief-F method performs best among all tested feature selection methods.


Karegowda et al. [6] proposed two filter-based models for selecting relevant features: gain ratio (GR) and correlation-based feature selection (CFS), where the CFS filter uses a genetic algorithm for the search process. A backpropagation neural network and a radial basis function network were used to evaluate the validity of the feature subsets extracted by both filters. From the experimental results, it was concluded that classification accuracy is higher with the features selected by the CFS filter than with those extracted by the GR filter.

Previous works used the KDD Cup dataset and similar corpora, along with traditional feature selection techniques, and therefore do not address new types of attacks or contemporary network scenarios. Due to the information revolution, several vendors have migrated from traditional network scenarios to the cloud environment, and researchers have generated new benchmark intrusion datasets to address cloud-related attacks. In this connection, there is a need to analyze these datasets for better implementation of machine learning algorithms; this work is carried out to fulfil that need. Most prior studies conducted experiments on outdated datasets and did not address cloud-related attack detection. So, in this paper an attempt is made to give defenders an idea of how to implement the various phases of a NIDS in the cloud environment, using the two datasets CIDDS-001 and CICIDS-2017.

The rest of this paper is organized as follows: Sect. 2 presents a detailed description of the datasets. Section 3 gives a preliminary description of the feature ranking methods, whereas Sect. 4 covers prominent classification approaches. Section 5 describes the experimental setup. Section 6 presents the experimental results and discussions. Finally, Sect. 7 provides conclusions and future work.

2 Description of the Datasets

2.1 CIDDS-001 Dataset

The Coburg Intrusion Detection Dataset (CIDDS-001) is a labeled, unidirectional, flow-based dataset generated by emulating a small business environment in the cloud for the evaluation of NIDS. It consists of real traffic data from an internal server with an OpenStack environment (Web and e-mail servers, etc.) and an external server (file synchronization and Web server). Python scripts emulate normal user behavior on the clients. Table 1 describes the dataset: it contains 16 attributes, of which attributes 1–12 are default NetFlow attributes, whereas attributes 13–16 are additional attributes that describe the attacks.

Table 1 List of features in CIDDS-001 dataset

Sl. no. Feature name
1  Date first seen
2  Duration
3  Proto_type
4  Src_IP_Addr
5  Src_Pt
6  Dst_IP_Addr
7  Dst_Pt
8  Packets
9  Bytes
10 Flows
11 Flags
12 Tos
13 Class
14 Attack type
15 Attacked
16 Attack description

2.2 CICIDS-2017

The MachineLearningCSV dataset, available as open source [7] as part of the CICIDS-2017 dataset from the ISCX Consortium, is used for this study. It consists of eight comma-separated value (CSV) files, each containing the traffic monitored in one of eight different sessions. CICIDS-2017 is a benchmark dataset with more complex features and large volumes of traffic records, where each record holds values for a large set of features. The records are broadly categorized into two types—'benign' for normal traffic and 'attacks' for malicious traffic—and the attack records are further categorized into 14 attack types. The description of the dataset, with its 8 traffic monitoring sessions and record-wise statistical information on the 14 attack types, is given in [8]. All features of the CICIDS-2017 dataset are listed in Table 2.

3 Feature Ranking Models

This section discusses in detail the prominent feature ranking models proposed by earlier researchers: the supervised feature ranking methods IG, GR, and correlation coefficient. These methods are used to identify an optimal threshold value for filtering out irrelevant features.


Table 2 List of features in CICIDS-2017 dataset

S. no. Feature name              S. no. Feature name
1  ACK Flag Count                40 Fwd Packet Length Min
2  PSH Flag Count                41 Init_Win_bytes_backward
3  Fwd IAT Total                 42 Bwd Packets/s
4  Flow Duration                 43 Active Mean
5  Idle Max                      44 Active Max
6  Fwd IAT Max                   45 Total Length of Fwd Packets
7  Flow IAT Max                  46 Subflow Fwd Bytes
8  Idle Mean                     47 Active Min
9  Idle Min                      48 Flow IAT Min
10 FIN Flag Count                49 Fwd IAT Min
11 Bwd IAT Total                 50 Fwd Header Length
12 Bwd Packet Length Mean        51 Bwd Header Length
13 Avg Bwd Segment Size          52 Subflow Fwd Packets
14 Fwd IAT Std                   53 Total Fwd Packets
15 Packet Length Mean            54 Idle Std
16 Bwd IAT Max                   55 Total Backward Packets
17 Packet Length Std             56 Subflow Bwd Packets
18 Average Packet Size           57 Total Length of Bwd Packets
19 Bwd Packet Length Std         58 Subflow Bwd Bytes
20 Flow IAT Std                  59 act_data_pkt_fwd
21 Init_Win_bytes_forward        60 Active Std
22 Destination Port              61 Bwd IAT Min
23 Bwd Packet Length Max         62 Fwd PSH Flags
24 Max Packet Length             63 Fwd Avg Bytes/Bulk
25 Bwd IAT Std                   64 Flow Bytes/s
26 min_seg_size_forward          65 Bwd Avg Bulk Rate
27 URG Flag Count                66 SYN Flag Count
28 Packet Length Variance        67 RST Flag Count
29 Down/Up Ratio                 68 Fwd URG Flags
30 Fwd IAT Mean                  69 Fwd Avg Packets/Bulk
31 Fwd Packets/s                 70 Fwd Avg Bulk Rate
32 Bwd IAT Mean                  71 Flow Packets/s
33 Flow IAT Mean                 72 ECE Flag Count
34 Bwd Packet Length Min         73 CWE Flag Count
35 Fwd Packet Length Std         74 Bwd URG Flags
36 Min Packet Length             75 Bwd PSH Flags
37 Fwd Packet Length Mean        76 Bwd Avg Packets/Bulk
38 Avg Fwd Segment Size          77 Bwd Avg Bytes/Bulk
39 Fwd Packet Length Max         78 Label

3.1 Information Gain (IG)

IG is considered one of the most popular attribute evaluation measures used for feature selection. As per [9, 10], the information gain of each attribute A with respect to the class C is computed using the following formula:

IG(A) = H(C) − H_A(C)    (1)

where H(C) is the entropy of the class C. Entropy is a mathematical function that relates the quality of information from the current variable to the total information available; the entropy of class C is computed using the formula given in Eq. (2):

H(C) = −Σ_i P(V_i) log₂ P(V_i)    (2)

where P(V_i) is the probability of the value V_i among all possible values of C. The function H_A(C) = Σ_{j=1}^{K} P_j H(C_j) gives the entropy of the class C conditioned on attribute A: K is the number of partitions induced by A, P_j is the fraction of samples falling in the jth partition, and H(C_j) is the class entropy within that partition.
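A small pandas/NumPy sketch of Eqs. (1)–(2), using made-up flow records rather than the actual datasets:

    import numpy as np
    import pandas as pd

    def entropy(series):
        # H(C) = -sum_i P(V_i) * log2 P(V_i), Eq. (2)
        p = series.value_counts(normalize=True)
        return float(-(p * np.log2(p)).sum())

    def information_gain(df, attribute, label):
        # IG(A) = H(C) - H_A(C), Eq. (1): H_A(C) is the label entropy
        # weighted over the partitions induced by attribute A.
        h_a = sum(len(g) / len(df) * entropy(g[label])
                  for _, g in df.groupby(attribute))
        return entropy(df[label]) - h_a

    df = pd.DataFrame({"Proto": ["TCP", "TCP", "UDP", "UDP"],
                       "Class": ["attack", "normal", "normal", "normal"]})
    print(information_gain(df, "Proto", "Class"))   # ~0.31 bits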

3.2 Gain Ratio (GR)

The gain ratio is another commonly used entropy-based feature ranking measure. Even though IG yields good results in feature ranking models, it has a limitation: it is not reliable when attributes take many different values. The goal of GR is to compensate for this by normalizing with how evenly the samples are spread over the attribute's values [6, 9]. Equation (3) gives the GR for each attribute A:

GR(A) = IG(A) / H(A)    (3)

where H(A) = −Σ_j P(V_j) log₂ P(V_j). Here, P(V_j) is the probability of value V_j in A, i.e., its count divided by the total number of values of the attribute.
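Continuing the previous sketch (it reuses the entropy, information_gain, and df names defined there), the gain ratio of Eq. (3) only adds a normalization by the attribute's own entropy:

    def gain_ratio(df, attribute, label):
        # GR(A) = IG(A) / H(A), Eq. (3)
        h_attr = entropy(df[attribute])
        return information_gain(df, attribute, label) / h_attr if h_attr else 0.0

    print(gain_ratio(df, "Proto", "Class"))   # IG / 1.0 here, since H(Proto) = 1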


3.3 Correlation Coefficient

Correlation coefficient analysis measures the relationship between two variables. In the feature evaluation process, the correlation between each feature and the class label is evaluated. For nominal features, the correlation is taken as the indicator, and the final correlation of a nominal attribute is calculated as a weighted average. The correlation coefficient between any two variables lies in the range −1 to 1, spreading from a negative correlation to a positive correlation. In this paper, the correlation coefficient is implemented to measure the relationship between each feature and the class label feature. Equation (4) gives the correlation coefficient used in this feature selection procedure [11]:

ρ(X, Y) = Σ_{i=1}^{n} (x_i − X̄)(y_i − Ȳ) / √( Σ_{i=1}^{n} (x_i − X̄)² · Σ_{i=1}^{n} (y_i − Ȳ)² )    (4)

where Y is the class label, X is the ith feature of the dataset, X̄ and Ȳ are the means of the corresponding variables, and x_i is the value of feature X for the ith sample.
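Equation (4) is the ordinary Pearson correlation; with NumPy it is a one-liner, shown here on toy values:

    import numpy as np

    x = np.array([2.0, 4.0, 6.0, 8.0])      # a feature column (toy values)
    y = np.array([0.0, 0.0, 1.0, 1.0])      # class label encoded numerically
    rho = np.corrcoef(x, y)[0, 1]           # Pearson correlation of Eq. (4)
    print(rho)                              # ~0.894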

4 Traditional Classifiers

Several classification techniques suitable for IDS have been implemented by researchers in the past. Most of them fall into four types depending on the nature of the classification: distance-, tree-, parameter-, and probability-based classifiers. In this section, classifiers like kNN, SVM, NB, and J48 are explained.

4.1 k-Nearest Neighbor Classifier

Classification, clustering, and association rule mining are some of the data mining techniques utilized in NIDS. Among the machine learning techniques, k-nearest neighbor is easy to implement; the kNN algorithm is used for both classification and regression. kNN is prominent in NIDS because its detection rate is high, but it still has limitations: since all computation is deferred until prediction time, kNN is called a lazy learning algorithm, and its computational time at query time is very high. kNN is a distance-based machine learning model used to predict the labels of unknown samples; the class label is chosen by majority voting among the nearest neighbors. Euclidean distance is the most commonly used measure in kNN classifiers, given by the following formula.


d(x, y) = √( Σ_{i=1}^{n} (x_i − y_i)² )    (5)

where 'd' is the distance function, 'x' and 'y' are the two samples between which the distance is calculated, and 'n' is the number of variables in each sample. Many works address feature selection and intrusion detection methods using kNN and variations of kNN classifiers [12, 13].
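A compact NumPy sketch of Eq. (5) with majority voting, on made-up two-feature samples:

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x, k=3):
        d = np.sqrt(((X_train - x) ** 2).sum(axis=1))   # Euclidean distance, Eq. (5)
        nearest = np.argsort(d)[:k]                     # indices of k closest samples
        return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote

    X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
    y_train = np.array(["normal", "normal", "attack", "attack"])
    print(knn_predict(X_train, y_train, np.array([0.85, 0.85])))   # "attack"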

4.2 Support Vector Machine (SVM)

SVM is a supervised ML model that can be used as a classification algorithm for two-group classification problems. It takes labeled training data for each category and generates a model; after the model is built, SVM is able to categorize new samples. SVM locates a hyperplane that maximizes the distance from the members of each class to the optimal hyperplane. Let us imagine the data has n records, each consisting of a feature vector 'x' and a label 'y' taking only the values 1 and −1, indicating the class to which x belongs. We want to find the 'maximum margin hyperplane' that divides the group of points X_i for which Y_i = 1 from the group of points for which Y_i = −1. The maximum margin hyperplane is defined so that the distance between the hyperplane and the nearest point X_i from either group is maximized. The maximum margin hyperplane and margins for an SVM trained with samples from two classes are shown in Fig. 1; samples on the margin are called the support vectors. In Fig. 1, this hyperplane is defined as W · x_i − b = 0, where x is a point lying on the hyperplane, the parameter W determines the orientation of the hyperplane in space, and b is the bias, the distance of the hyperplane from the origin.

Fig. 1 Support vectors and hyperplane for linearly separable datasets

For the linearly


separable case, a separating hyperplane can be defined for the two classes as:

W · x_i − b ≥ +1 for all y_i = +1    (6)

W · x_i − b ≤ −1 for all y_i = −1    (7)

These inequalities can be combined into a single one:

y_i (W · x_i − b) − 1 ≥ 0    (8)

4.3 Naïve Bayes Classifier

Naïve Bayes is a classification technique based on Bayes' theorem that assigns a class label to a given sample by calculating posterior probabilities. It is not a single algorithm but a family of algorithms sharing a common principle: every pair of features being classified is independent of each other. An NB classifier is easy to build and is preferred when the dataset is very large; this simplicity makes NB a strong classifier compared with other classification methods. The naive Bayes technique is used in many real-world scenarios such as text classification, spam filtering, and recommender systems. The posterior probability P(c|X) is calculated from the values of P(c), P(X), and P(X|c) using Bayes' theorem, as given in Eq. (9):

P(c|X) = P(X|c) P(c) / P(X)    (9)

Under the independence assumption, P(c|X) ∝ P(X₁|c) × P(X₂|c) × … × P(X_n|c) × P(c).
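A toy posterior computation following Eq. (9), with made-up conditional probabilities for two binary features observed as X₁ = 1 and X₂ = 1:

    p_c  = {"attack": 0.3, "normal": 0.7}   # class priors P(c)
    p_x1 = {"attack": 0.8, "normal": 0.1}   # P(X1 = 1 | c)
    p_x2 = {"attack": 0.6, "normal": 0.4}   # P(X2 = 1 | c)

    scores = {c: p_x1[c] * p_x2[c] * p_c[c] for c in p_c}  # unnormalized posteriors
    total = sum(scores.values())
    print({c: round(s / total, 3) for c, s in scores.items()})
    # attack wins: P(attack | X) ~ 0.837, P(normal | X) ~ 0.163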

4.4 Decision Tree (J48)

C4.5 (J48) is one of the most widely used supervised algorithms in data mining; decision trees are generated using C4.5, which was developed to overcome the drawbacks of the ID3 algorithm. C4.5 is well known for being simple to interpret and understand. It builds decision trees using information gain, with the class labels represented as leaf nodes. The C4.5 algorithm also works on continuous data by discretizing it, and it can represent any Boolean function on discrete attributes using a decision tree.


Fig. 2 Work flow of the experiment: dataset (CIDDS-001/CICIDS-2017) → data preprocessing → feature ranking methods (IG, GR, CC) with comparative analysis → classification approaches (kNN, J48, SVM, Naïve Bayes) with comparative analysis → results and discussion

5 Experimental Setup

All experiments are carried out on a 64-bit Windows 7 operating system with 4 GB RAM and an Intel Core i3 processor, using WEKA 3.7.0 on 10% of the CIDDS-001 and CICIDS-2017 datasets. The datasets are loaded in CSV form and normalized using min–max normalization. These normalized datasets are then fed to the various feature ranking measures and prominent classification techniques discussed in the previous sections. A statistical analysis is carried out on the feature scores of the feature ranking methods to identify which method's features influence the class label; the mean, standard deviation, and coefficient of variation are calculated for both datasets. Rank correlations between the three feature ranking methods are calculated to identify which pair of methods agrees better than the other combinations. The statistical measures are computed using Microsoft Excel. The work flow of the experiment is depicted in Fig. 2.
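The paper runs this pipeline in WEKA; the scikit-learn sketch below only mirrors the same workflow (min–max normalization, then the four classifiers) on synthetic data standing in for the real datasets.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=12, random_state=1)
    X = MinMaxScaler().fit_transform(X)          # min-max normalization
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

    classifiers = {
        "kNN":           KNeighborsClassifier(n_neighbors=5),
        "J48-like C4.5": DecisionTreeClassifier(criterion="entropy"),
        "SVM":           SVC(kernel="linear"),
        "Naive Bayes":   GaussianNB(),
    }
    for name, clf in classifiers.items():
        print(name, round(clf.fit(X_tr, y_tr).score(X_te, y_te), 3))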

6 Results and Discussion

This section discusses the impact of the three feature ranking methods and the four classification methods on the CIDDS-001 and CICIDS-2017 datasets. A statistical analysis is carried out to quantify the closeness factor and to compare the feature ranking methods through rank correlation. The results and discussion are presented separately below for each dataset.


6.1 CIDDS-001
The number of features is small compared to other prominent NIDS benchmark datasets. Even though it contains few features, it covers four types of recent attacks on the cloud. The 13th feature is the class label, and the last three features are descriptions of the class label; these three features are therefore omitted when computing feature ranks. The three feature ranks are evaluated over features 1-12 against the class label (the 13th feature). The three types of feature ranks along with their scores are given in Table 3, with the 12 features rearranged in alphabetical order for easy comparison of scores and ranks across the methods.

The main idea behind this statistical analysis is to determine how strongly each feature influences the class label under the three feature ranking methods. To identify the level of closeness of the features to the class label, features are categorized into three groups based on their feature-ranking values: features with values above μ + σ are treated as highly close; features whose values lie between μ and μ + σ form the average closeness category; and features below μ fall into the minimum closeness category.

The gain ratio of the features ranges from 0.00676 to 0.22003, with a mean of 0.063 and a standard deviation of 0.064. From Table 3, it is observed that the feature Flags is highly close to the class label; DstIP, DstPort, SrcIP, and SrcPort are averagely close; and the remaining seven features are relatively less close. The probability values of these three closeness categories are 1/12, 4/12, and 7/12; combining high and average, it is concluded that 42% of the features are close to the class label.

Table 3 Details of the feature scores and ranks for various feature ranking methods of CIDDS-001 dataset

| Feature name | Gain ratio score | Rank | Information gain score | Rank | Correlation coefficient score | Rank |
| Bytes | 0.03505 | 6 | 0.03248 | 8 | 0.00643 | 11 |
| Date first seen | 0.00676 | 11 | 0.01063 | 10 | 0.03592 | 8 |
| DstIP | 0.07461 | 5 | 0.1757 | 4 | 0.09806 | 3 |
| DstPort | 0.11667 | 3 | 0.31523 | 3 | 0.01774 | 9 |
| Duration | 0.03334 | 7 | 0.04271 | 7 | 0.06395 | 7 |
| Flags | 0.22003 | 1 | 0.31713 | 2 | 0.23715 | 1 |
| Flows | 0 | 12 | 0 | 12 | 0 | 12 |
| Packets | 0.03147 | 8 | 0.04694 | 6 | 0.01371 | 10 |
| Proto | 0.01234 | 10 | 0.00731 | 11 | 0.08014 | 6 |
| SrcIP | 0.07856 | 4 | 0.15808 | 5 | 0.09628 | 4 |
| SrcPort | 0.12222 | 2 | 0.32586 | 1 | 0.08903 | 5 |
| Tos | 0.02689 | 9 | 0.02047 | 9 | 0.11797 | 2 |
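The μ/σ-based closeness grouping described above can be sketched as follows, applied here to the gain-ratio scores of Table 3; the threshold logic follows the text, so treat it as an illustrative reading rather than the authors' exact code:

```python
import numpy as np

# Gain-ratio scores taken from Table 3 (CIDDS-001).
scores = {
    "Bytes": 0.03505, "Date first seen": 0.00676, "DstIP": 0.07461,
    "DstPort": 0.11667, "Duration": 0.03334, "Flags": 0.22003,
    "Flows": 0.0, "Packets": 0.03147, "Proto": 0.01234,
    "SrcIP": 0.07856, "SrcPort": 0.12222, "Tos": 0.02689,
}
vals = np.array(list(scores.values()))
mu = vals.mean()            # ~0.063, matching the text
sigma = vals.std(ddof=1)    # ~0.064 (sample standard deviation)

def closeness(v):
    # high: above mu + sigma; average: [mu, mu + sigma); minimum: below mu
    if v > mu + sigma:
        return "high"
    return "average" if v >= mu else "minimum"

for name, v in scores.items():
    print(f"{name:16s} {closeness(v)}")
# Reproduces the 1/12 high, 4/12 average, 7/12 minimum split in the text.
```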


The information gain of the features ranges from 0.00731 to 0.32586, with mean and standard deviation values of 0.121 and 0.132, respectively. The features SrcPort, Flags, and DstPort are highly close to the class label; DstIP and SrcIP are averagely close; and the remaining features exhibit minimum closeness. The probability values of the three closeness categories are 3/12, 2/12, and 7/12, so again 42% of the features fall into the high and average categories.

The correlation coefficient of the features ranges from 0.00643 to 0.23715, with mean and standard deviation values of 0.071 and 0.066 (Table 4). The feature Flags is highly close to the class label, while Tos, DstIP, SrcIP, SrcPort, and Proto are averagely close. The probability values of these categories are 1/12, 5/12, and 6/12; i.e., 50% of the features fall into the high and average categories.

The following observations are drawn from Table 4 as well as Fig. 3.
• The three methods behave similarly and exhibit an oscillating curve nature.
• The feature Flows scores 0 under all three methods and receives the worst rank (12), so it does not influence the class label.
• The feature Flags obtains the best rank (rank 1) with the highest score for gain ratio and correlation coefficient, and the second-best rank for information gain.
• From Fig. 3, Flags and SrcPort obtain the best ranks for the gain-related methods.
• The feature Tos obtains the second-best rank for the correlation coefficient, while the other two methods rank it poorly; i.e., according to the correlation coefficient, 'type of service' is highly associated with the class label.

Table 4 Details of various statistical measures calculated from the feature scores of the feature ranking methods for the CIDDS-001 dataset

| Statistical measure | Gain ratio | Information gain | Correlation coefficient |
| Mean (μ) | 0.0631 | 0.12104 | 0.07136 |
| Standard deviation (σ) | 0.0640 | 0.13205 | 0.06619 |
| Closeness % | 42% | 42% | 50% |

Rank correlation:

| | Gain ratio | Information gain | Correlation coefficient |
| Gain ratio | 1 | 0.95 | 0.15 |
| Information gain | 0.95 | 1 | 0.24 |
| Correlation coefficient | 0.24 | 0.15 | 1 |
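The rank correlations reported in Table 4 can be reproduced in spirit with Spearman's rho over the rank columns of Table 3. The sketch below uses SciPy, which the paper does not mention, so the tool choice is an assumption; the rank lists are taken from Table 3 in the same (alphabetical) feature order:

```python
from scipy.stats import spearmanr

# Ranks from Table 3, feature order: Bytes, Date first seen, DstIP,
# DstPort, Duration, Flags, Flows, Packets, Proto, SrcIP, SrcPort, Tos.
gain_ratio_ranks = [6, 11, 5, 3, 7, 1, 12, 8, 10, 4, 2, 9]
info_gain_ranks  = [8, 10, 4, 3, 7, 2, 12, 6, 11, 5, 1, 9]

rho, _ = spearmanr(gain_ratio_ranks, info_gain_ranks)
print(round(rho, 2))   # 0.95, consistent with Table 4
```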

[Figure: 'Effect of Feature Ranks': feature rank (0-14) plotted per feature for gain ratio, information gain, and correlation coefficient.]

Fig. 3 Feature ranks for CIDDS-001 dataset

• By observing the rank correlation values, the gain ratio and information gain combination is highly and positively correlated, with a value of 0.95. The remaining rank correlations are much smaller in value but still positive.
• The standard deviation of the gain ratio is the minimum among the three methods; the standard deviations of gain ratio and correlation coefficient are nearly equal, with only a minute difference.

After going through the above observations, it is noted that the gain ratio is the best ranking measure among the three methods. The four classifiers are evaluated, and the values of the three performance metrics are given in Table 5. From Table 5 and Fig. 4, the following observations are drawn.
• The SVM classifier exhibits the least accuracy, whereas J48 and kNN give the highest accuracies with a negligible difference.
• SVM yields the least precision among the four classifiers, while the other three achieve high precision values.
• The recall of the Naïve Bayesian classifier is lower than that of the other three classifiers, which behave almost equally with high recall values.
• The misclassification rate of the SVM classifier is high; for this reason, its recall reaches the highest value (100) while its precision remains the lowest.

Table 5 Effect of various classification approaches on CIDDS-001 dataset

| Classifier | Accuracy | Precision | Recall |
| Naïve Bayesian classifier | 96.44 | 99.85 | 96.37 |
| SVM | 94.53 | 94.53 | 100 |
| J48 | 99.86 | 99.92 | 99.93 |
| kNN | 99.69 | 99.89 | 99.88 |
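For reference, the three reported metrics follow directly from confusion-matrix counts; the sketch below uses hypothetical counts, not the paper's actual confusion matrices:

```python
# tp, fp, fn, tn are hypothetical confusion-matrix counts for one
# classifier, with "attack" treated as the positive class.
def metrics(tp, fp, fn, tn):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # fraction of flagged samples that are attacks
    recall    = tp / (tp + fn)   # fraction of attacks actually detected
    return accuracy, precision, recall

acc, prec, rec = metrics(tp=950, fp=30, fn=10, tn=10)
print(f"accuracy={acc:.2%} precision={prec:.2%} recall={rec:.2%}")
```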


[Figure: 'Effect of classification on CIDDS-001 dataset': accuracy, precision, and recall (%, 90-102) for the SVM, Naïve Bayesian, J48, and kNN classifiers.]

Fig. 4 Effect of various classifiers on CIDDS-001 dataset

• The misclassification rate of benign samples is high for the Naïve Bayesian classifier, which is why its precision values are high while its recall is the lowest of the four classifiers.
• All three performance measures exceed 99.6% for both J48 and kNN; the two classifiers behave almost identically across the three metrics.

6.2 CICIDS-2017
The number of features in this dataset is the highest compared with other prominent datasets such as NSL-KDD and Kyoto 2006+. It contains 14 distinct attack types in the class label, which is the 78th feature of the dataset. Among the 78 features, 10 contain all-zero values for every sample record; these 10 features are therefore omitted when computing the feature rankings, and the feature scores and ranks are calculated for 68 features against the class label. The scores and feature ranks are given in Table 6, with all features rearranged in alphabetical order for easy comparison of the ranking methods (Table 7 summarizes the closeness counts). In gain ratio, SubflowBwd Bytes, Total Length of Bwd Packets, and Init_Win_bytes_backward are highly close features, a total of 50 features are averagely close, and the remaining 15 features are less close to the class label. In information gain, a total of 11 features are highly close to the class label: Init_Win_bytes_forward, AvgBwd Segment Size, Bwd Packet Length Mean, Packet Length Mean, Average Packet Size, Flow IAT Max, Packet Length Std, Bwd Packet Length Max, Packet Length Variance, Flow Duration, and Fwd Packet Length Max; a further 40 features are averagely close, and 17 features are less close to the class label. For the correlation coefficient ranking, a total of 16 features, namely AvgBwd Segment Size, Bwd Packet Length Mean, Bwd Packet Length Std, Bwd Packet Length Max, Fwd IAT Std, Packet Length Std, Idle Max, Idle Mean, Fwd IAT Max, Flow IAT Max, Idle Min, Max Packet Length, Packet Length Mean, Average Packet Size, Packet Length Variance, and Flow IAT Std, are extracted as highly close to the class label.


Table 6 Details of the feature scores and ranks for various feature ranking methods of CICIDS-2017 dataset

| S. no. | Feature name | Gain ratio score | Rank | Information gain score | Rank | Correlation coefficient score | Rank |
| 1 | ACK Flag Count | 0.1042 | 58 | 0.102101 | 57 | 0.3475 | 24 |
| 2 | act_data_pkt_fwd | 0.0483 | 68 | 0.116712 | 56 | 0.00498 | 58 |
| 3 | Active Max | 0.1878 | 25 | 0.407183 | 39 | 0.04304 | 48 |
| 4 | Active Mean | 0.2005 | 20 | 0.412941 | 37 | 0.05902 | 42 |
| 5 | Active Min | 0.191 | 24 | 0.404601 | 40 | 0.05367 | 46 |
| 6 | Active Std | 0.1534 | 37 | 0.059343 | 63 | 0.03457 | 51 |
| 7 | Average Packet Size | 0.1637 | 31 | 0.760737 | 6 | 0.57516 | 14 |
| 8 | AvgBwd Segment Size | 0.2673 | 7 | 0.774866 | 3 | 0.627 | 1 |
| 9 | AvgFwd Segment Size | 0.071 | 63 | 0.431573 | 35 | 0.05853 | 43 |
| 10 | Bwd Header Length | 0.1693 | 27 | 0.61257 | 19 | 0.00446 | 61 |
| 11 | Bwd IAT Max | 0.1543 | 36 | 0.462465 | 33 | 0.25244 | 27 |
| 12 | Bwd IAT Mean | 0.1249 | 45 | 0.403987 | 41 | 0.08804 | 39 |
| 13 | Bwd IAT Min | 0.113 | 52 | 0.121417 | 55 | 0.04105 | 49 |
| 14 | Bwd IAT Std | 0.1429 | 40 | 0.408113 | 38 | 0.27487 | 26 |
| 15 | Bwd IAT Total | 0.137 | 42 | 0.432754 | 34 | 0.10656 | 38 |
| 16 | Bwd Packet Length Max | 0.2492 | 10 | 0.719221 | 9 | 0.61719 | 4 |
| 17 | Bwd Packet Length Mean | 0.2673 | 8 | 0.774866 | 4 | 0.627 | 2 |
| 18 | Bwd Packet Length Min | 0.2586 | 9 | 0.31192 | 50 | 0.37298 | 20 |
| 19 | Bwd Packet Length Std | 0.2324 | 15 | 0.59448 | 21 | 0.61971 | 3 |
| 20 | Bwd Packets/s | 0.1125 | 53 | 0.315706 | 49 | 0.07545 | 41 |
| 21 | Down/Up Ratio | 0.0791 | 62 | 0.090249 | 60 | 0.23824 | 28 |
| 22 | ECE Flag Count | 0.0492 | 66 | 0.000179 | 67 | 0.01192 | 55 |
| 23 | FIN Flag Count | 0.2186 | 17 | 0.101306 | 58 | 0.34988 | 23 |
| 24 | Flow Bytes/s | 0.1066 | 55 | 0.294613 | 51 | 0.03395 | 52 |
| 25 | Flow Duration | 0.1333 | 44 | 0.695174 | 11 | 0.47491 | 18 |
| 26 | Flow IAT Max | 0.1501 | 38 | 0.728697 | 7 | 0.60537 | 10 |
| 27 | Flow IAT Mean | 0.1402 | 41 | 0.647556 | 14 | 0.34746 | 25 |
| 28 | Flow IAT Min | 0.1235 | 48 | 0.190961 | 53 | 0.05488 | 45 |
| 29 | Flow IAT Std | 0.1568 | 35 | 0.598731 | 20 | 0.56089 | 16 |
| 30 | Flow Packets/s | 0.0882 | 60 | 0.357549 | 44 | 0.15651 | 32 |
| 31 | Fwd Header Length | 0.1201 | 49 | 0.537422 | 27 | 0.00387 | 64 |
| 32 | Fwd Header Length.1 | 0.1201 | 50 | 0.537422 | 26 | 0.00387 | 65 |
| 33 | Fwd IAT Max | 0.1818 | 26 | 0.668656 | 13 | 0.60612 | 9 |
| 34 | Fwd IAT Mean | 0.1493 | 39 | 0.618481 | 18 | 0.3533 | 22 |
| 35 | Fwd IAT Min | 0.1083 | 54 | 0.186341 | 54 | 0.03962 | 50 |
| 36 | Fwd IAT Std | 0.1929 | 23 | 0.56648 | 23 | 0.61554 | 5 |
| 37 | Fwd IAT Total | 0.1602 | 34 | 0.619988 | 17 | 0.47664 | 17 |
| 38 | Fwd Packet Length Max | 0.199 | 22 | 0.693971 | 12 | 0.0032 | 68 |
| 39 | Fwd Packet Length Mean | 0.071 | 64 | 0.431573 | 36 | 0.05853 | 44 |
| 40 | Fwd Packet Length Min | 0.2485 | 11 | 0.343524 | 45 | 0.18458 | 29 |
| 41 | Fwd Packet Length Std | 0.1366 | 43 | 0.375507 | 42 | 0.04477 | 47 |
| 42 | Fwd Packets/s | 0.0802 | 61 | 0.331655 | 46 | 0.16677 | 31 |
| 43 | Fwd PSH Flags | 0.1237 | 46 | 0.031421 | 66 | 0.12662 | 34 |
| 44 | Idle Max | 0.2475 | 12 | 0.50468 | 28 | 0.60872 | 7 |
| 45 | Idle Mean | 0.2373 | 13 | 0.492584 | 30 | 0.60689 | 8 |
| 46 | Idle Min | 0.2327 | 14 | 0.494659 | 29 | 0.60152 | 11 |
| 47 | Idle Std | 0.1999 | 21 | 0.095727 | 59 | 0.07747 | 40 |
| 48 | Init_Win_bytes_backward | 0.2901 | 3 | 0.576699 | 22 | 0.12088 | 37 |
| 49 | Init_Win_bytes_forward | 0.2811 | 4 | 0.877734 | 1 | 0.13293 | 33 |
| 50 | Max Packet Length | 0.2217 | 16 | 0.031421 | 2 | 0.5969 | 12 |
| 51 | Min Packet Length | 0.2766 | 5 | 0.363449 | 43 | 0.35969 | 21 |
| 52 | min_seg_size_forward | 0.0614 | 65 | 0.071181 | 61 | 0.12601 | 36 |
| 53 | Packet Length Mean | 0.1676 | 30 | 0.764886 | 5 | 0.58052 | 13 |
| 54 | Packet Length Std | 0.2076 | 18 | 0.721411 | 8 | 0.61209 | 6 |
| 55 | Packet Length Variance | 0.2065 | 19 | 0.696287 | 10 | 0.57286 | 15 |
| 56 | Protocol | 0.2742 | 6 | 0.241189 | 52 | 0.46388 | 19 |
| 57 | PSH Flag Count | 0.0942 | 59 | 0.066583 | 62 | 0.1682 | 30 |
| 58 | RST Flag Count | 0.0492 | 67 | 0.000179 | 68 | 0.01192 | 56 |
| 59 | SubflowBwd Bytes | 0.2997 | 1 | 0.562149 | 25 | 0.00354 | 66 |
| 60 | SubflowBwd Packets | 0.1682 | 28 | 0.492486 | 31 | 0.00472 | 59 |
| 61 | SubflowFwd Bytes | 0.1603 | 32 | 0.62602 | 15 | 0.02668 | 53 |
| 62 | SubflowFwd Packets | 0.1055 | 56 | 0.327584 | 48 | 0.0044 | 62 |
| 63 | SYN Flag Count | 0.1237 | 47 | 0.031421 | 65 | 0.12662 | 35 |
| 64 | Total Backward Packets | 0.1682 | 29 | 0.492486 | 32 | 0.00472 | 60 |
| 65 | Total Fwd Packets | 0.1055 | 57 | 0.327584 | 47 | 0.0044 | 63 |
| 66 | Total Length of Bwd Packets | 0.2997 | 2 | 0.562149 | 24 | 0.00354 | 67 |
| 67 | Total Length of Fwd Packets | 0.1603 | 33 | 0.62602 | 16 | 0.02668 | 54 |
| 68 | URG Flag Count | 0.1194 | 51 | 0.031421 | 64 | 0.005205 | 57 |

Table 7 Total number of features in the high, average, and minimum closeness categories for CICIDS-2017

| Closeness to class label | Gain ratio | Information gain | Correlation coefficient |
| High | 3 | 11 | 16 |
| Average | 50 | 40 | 12 |
| Minimum | 15 | 17 | 40 |

The features Fwd IAT Total, Flow Duration, Protocol, Bwd Packet Length Min, Min Packet Length, Fwd IAT Mean, FIN Flag Count, ACK Flag Count, Flow IAT Mean, Bwd IAT Std, Bwd IAT Max, and Down/Up Ratio are averagely close, and the remaining 40 features fall under the minimum closeness category. The following observations are drawn from Table 8 as well as Fig. 5.
• The three methods behave similarly and exhibit an oscillating curve nature.
• The features 'SubflowBwd Bytes,' 'Init_Win_bytes_forward,' and 'AvgBwd Segment Size' obtain the best ranks with the highest scores for gain ratio, information gain, and correlation coefficient, respectively.
• The features 'act_data_pkt_fwd,' 'RST Flag Count,' and 'Fwd Packet Length Max' are the lowest-scoring features, ranked last by the respective methods.
• From the rank correlation values, the gain ratio and information gain combination is the most highly correlated, with a value of 0.52; the remaining rank correlations are smaller in value but positive.
• The standard deviation of the gain ratio is the minimum among the three methods, while the mean values of information gain and correlation coefficient are nearly equal, differing by only about 0.003.
After going through the above observations, it is noted that the gain ratio is the best ranking measure among the three methods. The four classifiers are evaluated, and the values of the three performance metrics are given in Table 9.


Table 8 Details of various statistical measures calculated from the feature scores of the feature ranking methods for CICIDS-2017

| Statistical measure | Gain ratio | Information gain | Correlation coefficient |
| Mean | 0.111439 | 0.241126 | 0.238045 |
| Standard deviation | 0.176818 | 0.440687 | 0.238995 |
| Closeness % | 78% | 75% | 41% |

Rank correlation:

| | Gain ratio | Information gain | Correlation coefficient |
| Gain ratio | 1 | 0.523991 | 0.343376 |
| Information gain | 0.523991 | 1 | 0.390021 |
| Correlation coefficient | 0.343376 | 0.390021 | 1 |

[Figure: 'The effect of feature ranks': feature rank (0-80) per CICIDS-2017 feature for gain ratio, information gain, and correlation coefficient.]

Fig. 5 Feature ranks for CICIDS-2017 dataset

Table 9 Effect of various classification approaches on CICIDS-2017 dataset

| Classifier | Accuracy | Precision | Recall |
| Naïve Bayesian | 78.868 | 67.548 | 64.232 |
| SVM | 93.28 | 37.255 | 29.712 |
| J48 | 98.903 | 74.894 | 37.352 |
| kNN | 98.883 | 74.780 | 37.144 |


[Figure: 'Effect of classifiers': accuracy, precision, and recall (%, 0-120) for the Naïve Bayesian, SVM, J48, and kNN classifiers.]

Fig. 6 Effect of various classifiers on CICIDS-2017 dataset

From Table 9 and Fig. 6, the following observations are drawn.
• The Naïve Bayesian classifier exhibits the least accuracy, whereas J48 and kNN give the highest accuracies with a negligible difference.
• SVM yields the least precision and recall values among the four classifiers, while the other three classifiers achieve noticeably higher precision.
• The recall of the Naïve Bayesian classifier is the highest of the four classifiers.
• The misclassification rate of attacks for the Naïve Bayesian classifier is high.
• Accuracy reaches the highest values (more than 98.8%) for both J48 and kNN, and the two classifiers behave almost equally across the three performance metrics.

7 Conclusion
In this paper, an attempt is made to describe the structure of two benchmark cloud intrusion datasets and to analyze the statistical behavior of their features. Three feature ranking methods and four traditional classification techniques are adopted for conducting experiments. Feature scores and ranks with respect to the class label are evaluated, and the effect of the different classifiers on these datasets is reported. An oscillating curve nature is observed for both datasets under all three ranking methods. From the experimental results, the correlation coefficient ranking method exhibits 50% closeness with six features in the CIDDS-001 dataset, whereas the CICIDS-2017 dataset yields 41% closeness with 28 features; i.e., the correlation coefficient ranking method extracts fewer features that are close to the class label. According to the rank correlation, the combination of gain ratio and information gain obtains a higher value than the other combinations. In view of the standard deviation, that of the


gain ratio is less than that of the information gain, which means its feature scores exhibit minimal variability. It is also observed that the kNN classifier detects attacks with the highest detection rate compared with the other classifiers. Finally, it is concluded that there is a need to conduct statistical analysis before implementing NIDS with ML algorithms. As a future study, the kNN classifier can be adopted to develop a fast intrusion detection system with low computational time without compromising accuracy, and the statistical analysis can be extended to other supervised as well as unsupervised classification/clustering models.


Single-Round Cluster-Head Selection (SRCH) Algorithm for Energy-Efficient Communication in WSN K. Rajammal and R. K. Santhia

Abstract Wireless sensor networks (WSNs) have been an emerging trend in communication technologies over the past two decades. They find indispensable application in the fields of remote monitoring and control due to their inherent capability to be deployed in locations where human intervention or presence is undesirable. A challenging research issue in recent times in the area of WSNs is the energy optimization problem in WSN nodes: nodes are provided with very limited battery power, and their frequent replacement is not possible. Hence, intelligent utilization of the available energy by nodes is the promising solution, and clustering is one possible solution to this challenging energy optimization problem. A single-round cluster-head selection algorithm (SRCH) is proposed and implemented in this research paper. An n-tuple attribute is used to select the optimal cluster head, which is able to coordinate and manage the entire communication process from source to destination. Extensive experimentation has been carried out in this research to evaluate the efficiency of the proposed approach. Comparative analysis has been done against benchmark methods such as the LEACH and C-LEACH algorithms, and the superior performance of the proposed SRCH is justified in this paper. Keywords Wireless sensor networks · Energy optimization · Clustering · Cluster-head selection · Energy consumption

1 Introduction
In recent times, rapid advancements have been made in the field of communication technology with the advent of state-of-the-art communication standards and gadgets.

K. Rajammal (B) Department of Computer Science and Engineering, Sir Isaac Newton College of Engineering and Technology, Nagapattinam, India
R. K. Santhia Department of Information Technology, Manakula Vinayagar Institute of Technology, Puducherry, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_48


Access to data at consumers' fingertips, irrespective of time and location and at high data transfer speeds, is the defining standard of communication quality in recent times. Another perspective of communication technology is remote monitoring and surveillance, whose demand has been rising in recent times. Such systems play a huge role in IoT-based networks for remote monitoring of various parameters and attributes, surveillance of hostile territories in defense sectors, remote monitoring in the healthcare industry, etc. A critical factor that makes WSNs an integral part of these applications is that the nodes, which form the backbone of any WSN, can be deployed in any terrain, especially in places where human presence is undesirable or impossible. This meritorious feature helps in deploying them for navigating and monitoring conditions or environments affected by natural calamities and disasters. Because of their deployment in uninhabitable conditions, the power allocated for their functioning, in the form of batteries, is also very limited, and frequent replacement of the batteries when they die out is impossible due to the nature of their deployment. Hence, the intelligent use of the available battery power to establish a given communication between source and destination is the problem formulation here, and it is quite a hot research topic in recent times. A typical WSN scenario is illustrated in Fig. 1. Sensors or nodes form the backbone of the network and are prominently deployed in a remote location away from the base station. The nodes near the scene of deployment are normally referred to as sensing nodes, while the others are referred to as forwarding nodes. The primary objective of WSN deployment is to collect and transmit information in the form of packets of data from source to destination, facilitated by a chain of forwarding nodes.

Fig. 1 Illustration of a typical WSN deployment


Routing plays an essential role in dictating the path that the packet of information takes from source to destination. It is a multifactor attribute dictated by different parameters such as energy consumption, hop distance, packet density, and link and node stability. In the case of control operations, the WSN is bidirectional, with control signals transmitted from the base station toward the nodes; the structure and functionality of nodes in such applications may vary depending on the type of application involved. A simple single-round cluster-head selection based on a clustering methodology is proposed and implemented in this paper. The rest of the paper is organized as follows: Sect. 2 presents a brief survey of existing methods related to energy-consumption reduction techniques in WSNs, followed by the proposed methodology in Sect. 3. Results are presented in Sect. 4, followed by the concluding remarks in Sect. 5.

2 Related Work
Ahmed et al. [1] proposed an enhanced protocol named node-ranked LEACH for improving the lifetime of the network based on a node rank algorithm. It solves the random-selection issue that arises when cluster heads fail to meet expectations in other LEACH versions, providing better performance and energy consumption compared with other LEACH protocols. Aaqil et al. [2] proposed an energy-efficient routing algorithm for extending the lifetime of sensor nodes in 3D WSNs; it builds on the chain-based routing algorithm PEGASIS and uses a genetic algorithm for constructing the chain, improving energy use and the CH selection technique for better load balancing. Aya et al. [3] use hierarchical (cluster-based) routing, arguing that the PEGASIS protocol is more efficient for minimizing energy consumption and reducing the long-links (LLs) problem, thereby maximizing network lifetime. Komkit et al. [4] proposed a new routing algorithm called the ALEACH-Plus protocol for improving the cluster-head selection process and enhancing network lifetime; it is evaluated with standard performance metrics, and the cluster-head protocols are compared with other protocols. Kusum et al. [5] proposed an energy-efficient cluster-head selection scheme for WSNs that selects a good cluster head to prolong the network lifetime; PEGASIS is used for distance and probability, and a fuzzy-based system shares the network load. Liang et al. [6] proposed a modified cluster-head selection algorithm based on LEACH to overcome excessive energy consumption and unreasonable cluster-head selection. It incorporates the Zigbee mechanism by taking both the node network address and the residual energy into account; using the LEACH-M algorithm, it balances the network energy burden and increases energy efficiency, thus extending the network lifetime while decreasing energy consumption. Huarui et al. [7] proposed an improved chain-based clustering hierarchical routing (ICCHR) algorithm based on the LEACH protocol.


The proposed algorithm deals with cluster-head selection, chain formation, and data transmission. It is useful in complex wireless networking environments, improving WSN energy efficiency and network lifetime. Mohammad et al. [8] implement the PEGASIS protocol in an environmental monitoring system to handle energy consumption issues; the implementation performs better than the LEACH protocol, with LEACH in roughly the 600 range and PEGASIS in roughly the 1000 range. Rahul et al. [9] proposed a HEED algorithm for non-uniformly distributed nodes to extend the network lifetime in an energy-efficient manner, with performance compared in terms of cluster radius and number of alive nodes. Rania et al. [10] proposed a LEACH protocol improvement that load-balances the nodes using a fixed average value to increase the network lifetime; it also reduces node loss when a cluster head dies and thereby improves throughput via load balancing. Razaque et al. [11] proposed a PEGASIS-LEACH-based technique for information transmission in wireless networks using an energy-efficient algorithm; the proposed protocol performs better in terms of dead nodes and energy consumption. Somaye et al. [12] proposed a HEED clustering algorithm for improving energy consumption and for detecting and recovering faults in cluster nodes and cluster member nodes, with particular nodes selected as backup cluster heads; energy conservation, detection accuracy, and node survival are improved. Shokat et al. [13] proposed a PEGASIS variant that uses the firefly optimization technique and AI to enhance the network lifetime; the protocol is based on sensor lifetime and energy consumption, and an ANN is used to overcome the battery discharge problem. Vinod et al. [14] proposed an energy-efficient PEGASIS routing protocol for wireless sensor networks: a chain-based routing protocol in which sensor nodes connect with nearby nodes for communication, enhancing the lifetime and energy efficiency of sensor networks. Zaib et al. [15] proposed HEED-based energy-efficient clustering protocols for WSNs, based on energy-based rotated HEED and rotated unequal HEED, demonstrating their best performance in constrained domains and increasing the network lifetime and the energy of SNs. Table 1 presents a comparative study of the related work.

3 Proposed Approach
The problem of intelligent use of the limited battery power of existing WSN nodes for a typical communication scenario forms the essence of this research work. The proposed method is ideally suited for dense networks characterized by a large number of nodes; in addition, it suits long-distance communication with a minimal number of hops required to cover the distance from source to destination. Clustering is the backbone of the proposed approach, realized through the application of


Table 1 Summary of the related works

| Ref | Authors | Year | Issues | Techniques used |
| [1] | Ahmed et al. | 2018 | Energy consumption | LEACH |
| [6] | Liang et al. | 2018 | Network lifetime | LEACH |
| [4] | Komkit et al. | 2019 | Network lifetime | LEACH |
| [15] | Zaib et al. | 2020 | Energy consumption | HEED |
| [10] | Rania et al. | 2018 | Network lifetime | LEACH |
| [12] | Somaye et al. | 2019 | Network lifetime | HEED |
| [9] | Rahul et al. | 2018 | Energy consumption | HEED |
| [2] | Aaqil et al. | 2020 | Network lifetime | PEGASIS |
| [5] | Kusum et al. | 2019 | Network lifetime | PEGASIS |
| [3] | Aya et al. | 2019 | Energy consumption | PEGASIS |
| [14] | Vinod et al. | 2018 | Energy consumption | PEGASIS |
| [13] | Shokat et al. | 2018 | Network lifetime | PEGASIS |
| [11] | Razaque et al. | 2016 | Energy consumption | LEACH & PEGASIS |
| [8] | Mohammed et al. | 2018 | Network lifetime | PEGASIS |
| [7] | Huarui et al. | 2019 | Energy consumption | LEACH |

a K-means algorithm. To start with, the available L nodes in the network are grouped into clusters based on a similarity measure, where the Euclidean distance from one node to the next adjacent node is taken as the measure of similarity. Nodes with broadly similar Euclidean distance (ED) scores are grouped until they form P clusters in the given M × N network area. The pseudocode for the K-means algorithm in the proposed scenario is presented below. The overall flow of the proposed system and the fuzzy-based confidence score measurement and ranking system of the second computation stage are depicted in Figs. 2 and 3.


[Figure: flowchart: Start, initialize network size and number of nodes, initialize cluster groups and members of the Pth cluster, Call Cluster( ), Call Fuzzy( ), assign Rank(i) to every member of the Pth cluster, CH(i) = max(Rank), repeated while i ≠ 0, End.]

Fig. 2 Flowchart of proposed work

[Figure: pipeline: Input, Fuzzification, Rule & Data Base, Inference Engine, Defuzzification, Output.]

Fig. 3 A general scheme of the fuzzification process


Algorithm 1 K-means clustering to group the M × N network
Input: L nodes in the M × N network area
Output: P cluster groups of nodes
1: Begin
2: For each node L(i) in M × N: choose an initial cluster assignment
3: While the clusters have not converged:
4:   For each node L(i):
5:     Compute ED(i) = ||L(i) − L(i+1)||
6:     If ED(i) ≤ Thresh(C(index)):
7:       Update cluster C(index) with ED(i)
8:     Else:
9:       i = i + 2
10: End

Here, M denotes the length of the network area under consideration; N denotes the breadth of the network area under investigation; L is the set of nodes present in the geographical area M × N; P is the total number of clusters generated after application of the K-means clustering process; ED is the Euclidean distance; L(i+1) is the next adjacent neighbouring node; and C(index) denotes the cluster index.
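A runnable counterpart of Algorithm 1 is sketched below in Python; the node coordinates, the choice of P = 5, and the iteration count are illustrative assumptions, since the paper does not publish its implementation:

```python
import numpy as np

# Minimal K-means sketch: L nodes scattered over an M x N area are
# grouped into P clusters using Euclidean distance as the similarity.
rng = np.random.default_rng(0)
M, N, L, P = 1000, 1000, 200, 5
nodes = rng.uniform([0, 0], [M, N], size=(L, 2))

def kmeans(points, k, iters=50):
    # Initialize centers at k randomly chosen nodes.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign every node to its nearest center (Euclidean distance).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(nodes, P)
print(np.bincount(labels, minlength=P))   # cluster sizes
```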

Following the first, clustering stage, the final phase of the proposed experimentation deals with the selection of cluster heads (CHs) based on rank. An N-tuple algorithm defining the rules for CH selection is built using the fuzzy rules 'IF' and 'AND'. Based on the rules formulated with simple fuzzy logic, a confidence score is assigned to the set of nodes in cluster group G ∈ L, and, based on the scores generated, ranks 1 to N−1 are assigned to the ith member of the cluster group G ∈ L. Figure 3 shows the general fuzzifier system used in the proposed work to formulate the confidence score from a set of rules over the 4-tuple attributes. As illustrated in Table 2, a set of 72 rules has been formulated based on three states, namely low (L), medium (M), and high (H), for the 4-tuple attributes: the residual energy of the node, the distance of the node under consideration to the base station (BS), the traffic density, and the number of hops required to complete the specified communication process.


Table 2 Fuzzy rule formulation table for proposed work

| Rule number | Residual energy | Distance to BS | Traffic density | Hop count |
| Rule 1 | L | L | L | L |
| Rule 2 | L | L | L | M |
| Rule 3 | L | L | L | H |
| Rule 4 | L | L | M | L |
| Rule 5 | L | L | H | L |
| Rule 6 | L | L | M | M |
| Rule 7 | L | L | M | H |
| Rule 8 | L | M | L | L |
| Rule 9 | L | M | M | L |
| Rule 10 | L | M | M | M |
| Rule 11 | L | M | M | H |
| Rule 12 | L | M | H | M |
| Rule 13 | L | M | H | H |
| Rule 14 | L | H | L | L |
| Rule 15 | L | H | M | L |
| Rule 16 | L | H | M | M |
| Rule 17 | L | H | M | H |
| Rule 18 | L | H | H | M |
| Rule 19 | L | H | H | H |
| Rule 20 | M | M | L | L |
| Rule 21 | M | M | M | L |
| Rule 22 | M | M | M | M |
| Rule 23 | M | M | M | H |
| Rule 24 | M | M | H | M |
| Rule 25 | M | M | H | H |
| … | … | … | … | … |
| Rule 72 | H | H | H | H |
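A crisp approximation of how such rules could turn the 4-tuple into a confidence score is sketched below; the thresholds, weights, and field names are hypothetical assumptions, since the paper does not specify its membership functions:

```python
# Crisp sketch of the fuzzy idea in Table 2: map each attribute of the
# 4-tuple to an L/M/H level and combine the levels into a confidence
# score. All thresholds and scores are illustrative only.
def level(value, low_thr, high_thr):
    if value < low_thr:
        return "L"
    return "M" if value < high_thr else "H"

def confidence(node):
    # Higher residual energy is desirable; large distance to the BS,
    # traffic density, and hop count are undesirable, so their levels
    # are scored inversely.
    good = {"L": 0.0, "M": 0.5, "H": 1.0}
    bad  = {"L": 1.0, "M": 0.5, "H": 0.0}
    e = good[level(node["energy"], 0.7, 1.4)]    # joules
    d = bad[level(node["dist_bs"], 300, 600)]    # metres
    t = bad[level(node["traffic"], 0.3, 0.7)]
    h = bad[level(node["hops"], 3, 6)]
    return (e + d + t + h) / 4.0

cluster = [
    {"id": 1, "energy": 1.8, "dist_bs": 250, "traffic": 0.2, "hops": 2},
    {"id": 2, "energy": 0.9, "dist_bs": 500, "traffic": 0.6, "hops": 5},
]
# The top-ranked (highest-confidence) member becomes the cluster head.
ch = max(cluster, key=confidence)
print("cluster head:", ch["id"])
```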


4 Performance Evaluation
The proposed work has been implemented on the NS2 simulator, which is an open-source platform. Various aspects of the experimentation conducted are discussed in this section.

A. Experimental Settings
The simulation settings used in the proposed experimentation are listed in Table 3. As shown in Table 3, an area of 1000 × 1000 m is taken for the proposed study. The number of nodes is varied from 200 to 500 to determine the impact of varying node density on energy consumption before and after the application of the proposed SR cluster-head selection algorithm.

B. Experimental Process
The proposed work is implemented through a two-stage process. In the first stage, the given network area of 1000 × 1000 m, in which the nodes are randomly distributed, is analyzed. K-means clustering (an unsupervised learning algorithm) is used to compute the cluster groups based on the Euclidean-distance similarity measure, as discussed in the proposed-approach section. The clustering scenario is shown in Fig. 4.

Figure 4 depicts the clustering process in which the K-means algorithm described above groups the nodes by the Euclidean-distance similarity measure. Alternatively, the Manhattan distance between one node and the next adjacent node can also be taken as a similarity measure; owing to the generality and common use of ED across most clustering algorithms, the former has been preferred over the latter in the proposed work. In the second phase of the proposed work, the SR cluster-head selection is invoked as depicted in the overall flow process, and the one-time selection of a cluster head for all cluster groups in the network area is computed. The inputs to the SR cluster-head selection algorithm are the 4-tuple of residual energy (E_res), distance to base station (D_bs), traffic density (ρ), and hop count (H_c). A set of 72 fuzzy rules is applied to each cluster group in turn to select the cluster head for that particular cluster group.

Table 3 Experimental settings for proposed work

| Parameter | Value |
| Network size | 1000 × 1000 m |
| Number of sensor nodes | 200–500 |
| Radio propagation range | 300 m |
| Channel capacity | 2 Mbits/s |
| Initial node energy | 2 J |
| Physical layer | IEEE 802.11b DCF |
| Data packet size | 1000 bytes |
| Simulation time | 360 s |


Fig. 4 Generation of clusters after application of K-means

This CH is taken to be the most reliable and trustworthy node to carry out the communication process from source to destination. This method is well suited to large or bulk transport of data from source to destination, especially when a large number of hop counts is involved. The cluster-head selection process is depicted in Fig. 5, which clearly projects the grouping of the available nodes in the network area into clusters based on the ED measure. Coloring has been enabled to segregate and distinguish the various clusters formed, and the red circles indicate the cluster heads selected at a given time instant using the proposed SR selection method for the given attributes of node distance, traffic density, residual energy, and hop distance.

C. Evaluation Methodology
Three essential metrics have been taken in the proposed work for evaluating the proposed methodology, namely dead time analysis, throughput, and average energy consumption.

Fig. 5 Cluster-head selection after application of SR cluster-head selection method


Table 4 Performance comparison of various methods

| Node location (x, y) coordinates | Dead time (s): LEACH | C-LEACH | Proposed SR cluster-head method |
| 0, 0 | 18 | 16 | 14 |
| 0, 20 | 22 | 18 | 16 |
| 0, 40 | 26 | 20 | 18 |
| 0, 60 | 30 | 28 | 24 |
| 20, 40 | 40 | 30 | 26 |
| 20, 60 | 50 | 35 | 31 |
| 60, 80 | 80 | 65 | 58 |
| 80, 100 | 100 | 90 | 78 |
| 100, 100 | 140 | 135 | 100 |

As mentioned in previous sections, LEACH and C-LEACH are the benchmark methods taken for comparative analysis against the proposed single-round cluster-head selection method. A 1 KB payload is taken for experimental purposes, and the energy and throughput at which it is transmitted by the proposed routing methodology, based on single-round cluster-head selection, are computed, observed, and plotted. Table 4 projects the dead time analysis of the observed experimental scenario. It is to be noted that the observations have been made for varying geographical locations of the nodes, with the distance from the base station increasing, to determine the dead time. Dead time here denotes the time elapsed before the entire set of nodes involved in the communication process dies out completely due to battery drain while effecting the communication from source to destination. Table 4 gives a clear picture of the dead time analysis for the proposed (increasing) distances of forwarding nodes from the base station. From these experimental results, the proposed SR method of cluster-head-selection-based routing provides optimal dead time performance: nodes tend to live longer under the proposed SR method than under LEACH and C-LEACH. Since LEACH and C-LEACH are themselves energy-adaptive, clustering-based protocols, the superior performance of the proposed SR routing validates the method's superiority. Improvement in dead time consequently improves the overall network lifetime, which is a major requirement in current research scenarios. Figure 6 provides the analysis of the throughput of data transmission from source to destination under the given experimental conditions; throughput is measured in kbps. As observed in Fig. 6, the proposed SR method of cluster-head selection achieves a 7% improvement in throughput on average compared with LEACH and C-LEACH. This is attributed to the proposed method utilizing highly energy-efficient nodes in the transmission process, helping to achieve significant throughput rates.


[Figure: 'Throughput Analysis': throughput (kbps, 0-200) for LEACH, C-LEACH, and the proposed SR method.]

Fig. 6 Throughput analysis—comparative observation

This is also reflected in the end-to-end delay analysis projected in Fig. 7, from which it can be observed that, for the given transmission scenario, the proposed SR algorithm exhibits the least end-to-end delay from source to destination. In the proposed work, the end-to-end delay is taken as the time from the transmission of the last stop bit at the sensing/transmission/source point to the reception of the start bit at the sink/base station/reception point. End-to-end delay analysis is an important factor, as it directly reflects any factor that deviates the transmission from its designated routing path due to link or node failure; in the proposed case, node failure due to battery energy drain is taken as the primary factor. A final analysis in the proposed work involves observing the average energy consumption of nodes in the given communication scenario. Figure 8 depicts the average energy consumption analysis compared against LEACH and C-LEACH.

[Figure: 'End-End Delay Analysis': delay (s, 0-600) versus number of nodes (100-200) for LEACH, C-LEACH, and the proposed SR method.]

Fig. 7 End-end delay analysis—comparative observation


[Figure: 'Energy Consumption': average energy (mJ, 0-600) versus number of nodes (100-200) for LEACH and the proposed SR method.]

Fig. 8 Average energy consumption—comparative observation

Figure 8 projects the energy consumption analysis. It can be observed that the proposed SR method outperforms the LEACH and C-LEACH methods with more than a 39% improvement in average energy consumption, which consequently helps improve the network lifetime.

5 Conclusions and Future Directions
Research on energy-efficient wireless sensor networks is one of the hot research topics in recent times. This is solely attributed to the fact that WSNs find widespread use in a number of critical applications, especially remote monitoring, surveillance, and control. A single-round cluster-head selection algorithm (SRCH) is proposed in this research paper; it is best suited to WSNs characterized by one-time but long-distance communication with heavy traffic inflow across the network. The proposed method prevents rapid battery drain during long, high-volume hauls of data over the wireless environment. A cluster-head selection algorithm supported by fuzzy rule-based decision making over a four-tuple attribute is proposed and implemented in this research paper. The observations have been exhaustively conducted and compared against benchmark methods such as LEACH and C-LEACH, which are also cluster-based, energy-adaptive routing methods for WSNs. For all performance metrics considered in the proposed work, namely throughput, end-to-end delay, average energy consumption, and dead time, the proposed SR algorithm exhibits superior performance over its counterparts. In future, multi-round cluster-head selection methods are to be investigated to aid continuous transmission scenarios.


References
1. Zhao L, Qu S, Yi Y (2018) A modified cluster-head selection algorithm in wireless sensor networks based on LEACH. EURASIP J Wirel Commun Netw 2018(1):1–8
2. Somauroo A, Bassoo V (2020) Energy-efficient genetic algorithm variants of PEGASIS for 3D wireless sensor networks. Appl Comput Inform
3. Hussein AA, Khalid R (2019) Improvements of PEGASIS routing protocol in WSN. Int Adv J Eng Res 2(11):1–14
4. Suwandhada K, Panyim K (2019) ALEACH-plus: an energy efficient cluster head based routing protocol for wireless sensor network. In: 2019 7th international electrical engineering congress (iEECON), IEEE, pp 1–4
5. Jain KL, Mohapatra S (2019) Energy efficient cluster head selection for wireless sensor network: a simulated comparison. In: 2019 IEEE 10th control and system graduate research colloquium (ICSGRC), IEEE, pp 162–166
6. Al-Baz A, El-Sayed A (2018) A new algorithm for cluster head selection in LEACH protocol for wireless sensor networks. Int J Commun Syst 31(1):e3407
7. Wu H, Zhu H, Zhang L, Song Y (2019) Energy efficient chain based routing protocol for orchard wireless sensor network. J Electr Eng Technol 14(5):2137–2146
8. Mufid MR, Al Rasyid MUH, Syarif I (2018) Performance evaluation of PEGASIS protocol for energy efficiency. In: 2018 international electronics symposium on engineering technology and applications (IES-ETA), IEEE, pp 241–246
9. Priyadarshi R, Singh L, Singh A (2018) A novel HEED protocol for wireless sensor networks. In: 2018 5th international conference on signal processing and integrated networks (SPIN), IEEE, pp 296–300
10. Khadim R, Maaden A, Ennaciri A, Erritali M (2018) An energy-efficient clustering algorithm for WSN based on cluster head selection optimization to prolong network lifetime. Int J Future Comput Commun 7(3)
11. Razaque A, Abdulgader M, Joshi C, Amsaad F, Chauhan M (2016) P-LEACH: energy efficient routing protocol for wireless sensor networks. In: 2016 IEEE Long Island systems, applications and technology conference (LISAT), IEEE, pp 1–5
12. Jassbi SJ, Moridi E (2019) Fault tolerance and energy efficient clustering algorithm in wireless sensor networks: FTEC. Wireless Pers Commun 107(1):373–391
13. Ali S, Kumar R (2018) Artificial intelligence based energy efficient grid PEGASIS routing protocol in WSN. In: 2018 7th international conference on reliability, infocom technologies and optimization (ICRITO), IEEE, pp 1–7
14. Kumar VK, Khunteta A (2018) Energy efficient PEGASIS routing protocol for wireless sensor networks. In: 2018 2nd international conference on micro-electronics and telecommunication engineering (ICMETE), IEEE, pp 91–95
15. Ullah Z (2020) A survey on hybrid, energy efficient and distributed (HEED) based energy efficient clustering protocols for wireless sensor networks. Wireless Pers Commun 1–29

Three-Pass (DInSAR) Ground Change Detection in Sukari Gold Mine, Eastern Desert, Egypt Sayed A. Mohamed, Ayman H. Nasr, and Hatem M. Keshk

Abstract By using Synthetic Aperture Radar (SAR) sensors, changes of the earth's surface can be tracked. SAR can revisit the same region on a frequent basis, giving information at very high spatial resolution over the detected site. Radar interferometry (InSAR) and differential interferometry (DInSAR) are unique remote sensing techniques that can be used to map topography and measure surface changes, respectively. The main objective of this paper is therefore the surveillance of man-made ground changes based on selected data acquired by the Sentinel-1 satellite. The Sukari gold mine in the eastern desert of Egypt reveals changes due to crushing and grinding of the mountain. The three-pass differential interferometry (DInSAR) technique was applied for the purpose of detecting these changes, and other data, such as optical imagery and ground truth, were used to verify the areas that show different changes. Keywords Synthetic Aperture Radar (SAR) · DInSAR · Three-pass interferometry · Change detection · Sentinel-1

1 Introduction
The benefit of a DInSAR system is that only one antenna is needed. However, because the frames are not acquired at the same time, the consistency of the product is very susceptible to the processing conditions: the image pair may be significantly influenced by weather and landscape changes occurring between the two acquisitions. Gabriel et al. introduced the DInSAR technique [1]: a double-difference interferogram was measured with two interferograms obtained from three separate SEASAT observations. The changes seen in the scene were due to swelling of water-absorbing clays, and differential interferometry was shown to be capable of tracking height variations of about 1 cm or less.

S. A. Mohamed · A. H. Nasr · H. M. Keshk (B) National Authority for Remote Sensing and Space Sciences (NARSS), Cairo, Egypt e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_49


In the field of geophysics, DInSAR has been applied successfully to volcanology [2, 3], seismology [4], subsidence research [5], and landslide monitoring [6]. The persistent scatterer interferometry (PSI) technique [7–9] is an advanced DInSAR approach that uses large collections of SAR images acquired over the same region to determine terrestrial deformation velocity and deformation time series [8–10]; for a scientific review, see [11–15]. Sentinel-1 data offer significant advantages over other sensors: large coverage, the interferometric wide swath mode with images of 250 × 180 km, frequent six-day revisits, and free availability.

2 Study Area and Data Acquisition
2.1 The Study Area
The Sukari gold mine is located approximately 15 km southwest of Marsa Alam city on the Red Sea coast, as shown in Fig. 1. The mine occurs within a Late Neoproterozoic granitoid that intruded older volcano-sedimentary successions and an ophiolitic assemblage, both known as the Wadi Ghadir mélange (Mohamed et al., 2019). The Sukari granitoid is elongated NNE and is bounded by two steep shear zones to the west and east, covering 10 km² of territory.

Fig. 1 Location map of Sukari gold mine


Table 1 Sentinel-1A images used in DInSAR

Satellite: Sentinel-1A; frequency: C-band; product type: SLC; polarization: VV + VH; direction (Asc/Dsc): ascending; frame: 76; path: 160

Scene 1: S1A_IW_SLC_1SDV_20170906T154647_20170906T154717_018257_01EB20_FDA3
Scene 2: S1A_IW_SLC_1SDV_20171012T154648_20171012T154718_018782_01FB3A_7719
Scene 3: S1A_IW_SLC_1SDV_20180901T154654_20180901T154724_023507_028F3B_54EC

| | 20170906 | 20171012 | 20180901 |
| Orbit | 18257 | 18782 | 23507 |
| Perpendicular baseline (m) | 154 | 0 | 5 |
| Temporal baseline (day) | 36 | 0 | 324 |

2.2 Data Acquisition
The Sentinel-1 Synthetic Aperture Radar (SAR) instrument provides continuity of C-band SAR data after the withdrawal of ERS-2 and the end of the Envisat mission. The satellite carries a C-SAR sensor that provides medium- and high-resolution imagery in any weather, maintaining high reliability and regional coverage as well as swift distribution of data for priority marine monitoring, land surveillance, and emergency services in support of operational applications. A series of three Sentinel-1A images with ascending orbit was used for this research, as shown in Table 1. Other data, such as optical images and ground truth, were also used to determine the exact surface changes that have occurred.

3 Methodology
In our methodology, DInSAR's first task is to eliminate topographic and atmospheric effects, and its second is to construct the change map. Three different techniques are typically used: two-pass, three-pass, and four-pass differential interferometry. In this study, we use the three-pass technique.

3.1 Three-Pass Differential Interferometry Approach
Differential interferometric processing is intended to separate the topographic and displacement contributions in an interferogram; to isolate the displacement component, the topographic contribution must be removed. Three-pass DInSAR is based on three SAR images forming two interferometric pairs, with the two slave images registered to a common master, as shown in Fig. 2.


Fig. 2 Three-pass DInSAR based on 3 SAR scenes

The topographic contribution is estimated from one of the pairs, known as the reference pair (topo-pair). The standard processing steps listed in Fig. 3 were followed using SNAP software; the selection of data is an essential and fundamental aspect of interferometry.

Fig. 3 The flowchart of the SNAP processing steps


Table 2 Data set used for generating the interferograms

| Master | Slave | B⊥ (m) | Btemp (day) | Interferogram |
| 20171012 | 20170906 | 154 | −36 | inf1 |
| 20171012 | 20180901 | 5 | 324 | inf2 |

1. The greater the perpendicular baseline, the greater the sensitivity of the interferometric phase to topography. The topo-pair should therefore be acquired with a short acquisition interval (to retain coherence and avoid terrain changes) and a large interferometric baseline (to gain elevation accuracy).
2. The other pair (defo-pair) should be acquired with a large temporal separation (containing the terrain changes) and a baseline as small as possible.

3.2 Orbit and Baseline Calculation
We generated two three-pass differential interferograms, inf1 and inf2. Table 2 lists the data set used for generating the interferograms. For both the topo- and defo-pairs, the image dated 20171012 was chosen as the master, and the two images dated 20170906 and 20180901 were chosen as slaves, respectively. Interferogram inf1, having the longest perpendicular and shortest temporal baseline, was chosen as the topographic pair and used to eliminate topographic effects; interferogram inf2 was used to calculate the changes and deformations.

3.3 Co-Registration This step was implemented based on S1 TOPS co-registration. The two slave images were co-registered and interpolated (resampled) to the master's grid with sub-pixel accuracy.

3.4 Interferograms Generation In order to locate the phase variations (interferograms) between each pixel, the master and slave images are multiplied pixel by pixel. The phase induced by the reference ellipsoid can be estimated from precise orbits or from the DEM. We used precise orbits, and the WGS84 reference ellipsoid was assumed for the flat-earth phase. Figure 4 shows the generated topo (inf1) and defo (inf2) interferograms.


Fig. 4 a topo (inf1) and b defo (inf2) interferograms

Fig. 5 De-bursting and filtering of a inf1 and b inf2 interferograms

3.5 De-Burst and Phase Filtering IW SLC scenes are acquired as a sequence of bursts, each of which is processed as a separate SLC image. In a single sub-swath image, the independently focused burst images have black-fill demarcation between them. The output of this operation is the de-burst interferograms. They were then filtered to reduce noise, which eases the phase unwrapping and enhances the appearance of the fringes. After de-bursting and filtering of inf1 and inf2 using the Goldstein approach, the fringes become noticeably sharper, as shown in Fig. 5.
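For intuition, the core of the Goldstein filtering step can be sketched in a few lines of Python/NumPy: the spectrum of a complex interferogram window is weighted by its own magnitude raised to a power alpha, which boosts the dominant fringe frequencies. This is a heavily reduced sketch under assumed parameter names, not the SNAP implementation (real filters also smooth the spectrum and blend overlapping windows).

import numpy as np

def goldstein_window(ifg_window, alpha=0.5):
    # Weight the window spectrum by its own magnitude ** alpha, which
    # boosts the dominant fringe frequencies and suppresses noise.
    spectrum = np.fft.fft2(ifg_window)
    weighted = spectrum * np.abs(spectrum) ** alpha
    return np.fft.ifft2(weighted)

# Toy usage on a noisy synthetic fringe patch
x = np.linspace(0, 4 * np.pi, 32)
fringe = np.exp(1j * (x[None, :] + x[:, None]))
noisy = fringe * np.exp(1j * 0.5 * np.random.randn(32, 32))
filtered = goldstein_window(noisy)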

3.6 Differential Interferogram Generation Because the produced interferograms differ only in baseline length and orientation, the topographic interferogram must be scaled by the ratio of the perpendicular baselines before subtraction. Unwrapping was performed using the Snaphu software, which implements statistical network-flow algorithms. The processing steps are:
1. The first interferogram (topo-pair) was unwrapped and scaled by the ratio of the two baselines in order to remove the topographic term and retain only the changes or displacement term, as depicted in Fig. 6.
2. Its phase was wrapped again and subtracted from that of the second interferogram (defo-pair).
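For illustration only, these two steps can be emulated in Python/NumPy as follows; the variable names are ours, and this is a sketch of the arithmetic rather than the Snaphu/SNAP processing chain. With the baselines of Table 2, the scaling ratio would be 5/154.

import numpy as np

def wrap(phase):
    # Wrap phase values back into the interval (-pi, pi].
    return np.angle(np.exp(1j * phase))

def three_pass_differential(topo_unwrapped, defo_wrapped, b_topo, b_defo):
    # Step 1: scale the unwrapped topo-pair phase by the perpendicular
    # baseline ratio so its topographic term matches the defo-pair.
    scaled_topo = topo_unwrapped * (b_defo / b_topo)
    # Step 2: re-wrap the scaled phase and subtract it from the defo-pair,
    # ideally leaving only the displacement term.
    return wrap(defo_wrapped - wrap(scaled_topo))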



Fig. 6 Unwrapped topo phase after scaling

3.7 Height Conversion Stage The subtracted phase was unwrapped and converted to height values in order to express the resulting changes in metric units. The absolute phase was translated into azimuth-range radar coordinates and then geocoded, so the height values are mapped to geographic latitude and longitude. This technique is useful to interpret the obtained fringe patterns and compare them to the actual topography and changes in the area of analysis. The method evaluates the reference phase at several heights and compares it to the interferogram phase, so that the heights can be estimated. Figure 7 illustrates the calculated heights in the study area.
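The phase-to-height relation itself is not spelled out in the paper; a common textbook form, given here as an assumption, converts unwrapped topographic phase to height from the wavelength, slant range, incidence angle, and perpendicular baseline:

import numpy as np

def phase_to_height(phi_unwrapped, wavelength=0.0556, slant_range=8.5e5,
                    incidence_deg=39.0, b_perp=154.0):
    # h = lambda * R * sin(theta) * phi / (4 * pi * B_perp)
    # Default values are plausible C-band/Sentinel-1 orders of magnitude,
    # not parameters taken from this study.
    theta = np.deg2rad(incidence_deg)
    return (wavelength * slant_range * np.sin(theta) * phi_unwrapped
            / (4.0 * np.pi * b_perp))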

4 Results and Discussion Optical images and ground truth data (Figs. 8 and 9) are used to compare and verify the DInSAR change outputs and to identify places where the changes in surface and volume occurred. There are changes due to crushing and grinding of the mountain; other changes concern the shape of the lake and the manufacturing area, as shown in Fig. 7. In repeat-pass differential interferometry, where the images forming an interferogram are not acquired simultaneously, the traveling path of the radar signal differs because of the atmosphere. Therefore, we applied an atmospheric correction, constrained by the available average temperature and moisture data. The final results of detecting the ground changes in the Sukari gold mine area can be seen in Fig. 10.


Fig. 7 Calculated heights in the study area

Fig. 8 Sentinel-2 optical data 10 m, acquired on: a (17/07/2017), b (10/09/2018)

Fig. 9 Ground truth data of the study area


Fig. 10 The final ground changes after atmospheric correction

Table 3 Calculated changes in Sukari area

Change type                                  Area (km²)
− Deformation changes (−30.47 to 0.901 m)    0.46149
No deformation changes                       57.366
+ Deformation changes (0.901 to 12.97 m)     2.011
Total area                                   59.839

From Table 3, one can see that the negative changes cover about 0.46 km², whereas the positive changes cover 2 km². Thus, the total area of change is 2.46 km² out of 59.84 km², equivalent to approximately 4.1%.

5 Conclusion In this paper, we have used a DInSAR method that exploits three-pass SAR images to detect the changes in the Sukari gold mine area. The implementation with Sentinel-1 SAR takes full advantage of the frequent revisit, the small baselines, and the dual polarizations. We generated two differential interferograms with images spanning two time intervals: inf1 of about one month (20170906–20171012) and inf2 of about one year (20171012–20180901). The DInSAR method depends on the length of the perpendicular baseline between master and slave. For the best perpendicular baseline and temporal difference, we used the 20171012 image as master and the other two dates as slaves. The results indicated that the negative changes cover about 0.46 km² and the positive changes 2 km² in a total area of about 59.84 km². Optical images and ground truth data are used to compare and verify the change outputs and identify places where the changes in the surface


and the volume occurred. The DInSAR method, with the right selection of images for interferometric investigation, was successful in detecting the changes in the study area. Acknowledgements The authors would like to thank the National Authority for Remote Sensing and Space Sciences (NARSS) for offering its resources and supporting this research.

References
1. Gabriel AK, Goldstein RM, Zebker HA (1989) Mapping small elevation changes over large areas: differential radar interferometry. IEEE Trans Geosci Remote Sens 32(4):855–865
2. Massonnet D, Rossi M, Carmona C, Adragna F, Peltzer G, Feigl K, Rabaute T (1993) The displacement field of the Landers earthquake mapped by radar interferometry. Nature 364:138–142
3. Massonnet D, Briole P, Arnaud A (1995) Deflation of Mount Etna monitored by spaceborne radar interferometry. Nature 375:567–570
4. Herrera G, Tomás R, López-Sánchez JM, Delgado J, Mallorquí JJ, Duque S, Mulas J (2007) Advanced DInSAR analysis on mining areas: La Union case study (Murcia, SE Spain). Eng Geol 90:148–159
5. García-Davalillo JC, Herrera G, Notti D, Strozzi T, Álvarez-Fernández I (2014) DInSAR analysis of ALOS PALSAR images for the assessment of very slow landslides: the Tena Valley case study. Landslides 11:225–246
6. Ferretti A, Prati C, Rocca F (2001) Permanent scatterers in SAR interferometry. IEEE Trans Geosci Remote Sens 39:8–20
7. Keshk HM, Yin XC (2020) Change detection in SAR images based on deep learning. Int J Aeronaut Space Sci 21:549–559. https://doi.org/10.1007/s42405-019-00222-0
8. Keshk HM, Yin XC (2021) Obtaining super-resolution satellites images based on enhancement deep convolutional neural network. Int J Aeronaut Space Sci 22:195–202. https://doi.org/10.1007/s42405-020-00297-0
9. Keshk H, Yin X-C (2020) Classification of EgyptSat-1 images using deep learning methods. Int J Sens Wireless Commun Control 10:37. https://doi.org/10.2174/2210327909666190207153858
10. Caló F, Notti D, Galve JP, Abdikan S, Görüm T, Pepe A, Balik Şanlı F (2017) DInSAR-based detection of land subsidence and correlation with groundwater depletion in Konya Plain, Turkey. Remote Sens 9:83. https://doi.org/10.3390/rs9010083
11. Bonì R, Meisina C, Cigna F, Herrera G, Notti D, Bricker S, McCormack H, Tomás R, Bejar M, Mulas J, Ezquerro P (2017) Exploitation of satellite A-DInSAR time series for detection, characterization and modelling of land subsidence. Geosciences 7. https://doi.org/10.3390/geosciences7020025
12. Crosetto M, Monserrat O, Cuevas-González M, Devanthéry N (2016) Persistent scatterer interferometry: a review. ISPRS J Photogramm Remote Sens 115:78–89
13. Mohamed SA, Nasr AH, Helmy AK (2020) Surface monitoring by coherent change detection of time series (CCDTS) using interferometric (InSAR) Sentinel-1A data. Graphics, Vision and Image Processing Journal 20(1). ICGST LLC, Delaware, USA, ISSN 1687-398X
14. Agarwal V, Kumar A, Gomes R, Marsh S (2020) Monitoring of ground movement and groundwater changes in London using InSAR and GRACE. Appl Sci 10. https://doi.org/10.3390/app10238599
15. Rezaei A, Mousavi Z (2019) Characterization of land deformation, hydraulic head, and aquifer properties of the Gorgan confined aquifer, Iran, from InSAR observations. J Hydrol 579:124196. https://doi.org/10.1016/j.jhydrol.2019.124196

Integral Images: Efficient Algorithms for Their Computation Systems of Speeded-Up Robust Features (Surf) M. Jagadeeswari, C. S. Manikandababu, and M. Aiswarya

Abstract Object detection has evolved in recent years across many applications, from video monitoring to robot tracking and autonomous vehicles. Among the objects of interest, human faces are of particular importance both in industry and in academia, and a variety of algorithms have been proposed to increase accuracy and speed under difficult conditions (i.e., low luminance, occlusions, etc.). Real-time operation, however, requires highly energy-efficient hardware architectures. Multi-scale local feature detection algorithms, such as speeded-up robust features (SURF), use the integral image for fast, rectangular box-filter computations at a constant rate regardless of the filter size. This article presents a new hardware algorithm based on the row-wise decomposition of the recursive integral-image equations, delivering four integral image values in parallel without significant hardware expansion, and it addresses the computing and storage problems connected with integral images. The analysis was conducted from a parallel-computation perspective. Two hardware algorithms based on decomposition are proposed to reduce computing energy, and an effective technique is proposed to reduce the internal memory size required by a parallel integral image computation unit. The two algorithms discussed achieve a substantial reduction (approximately 44.44%) in memory demands by solving the integral image storage problem in embedded vision systems. Finally, the suggested architectures for embedded vision systems are presented. Keywords Surveillance · Speeded-up · Detector · Internal image

M. Jagadeeswari · C. S. Manikandababu · M. Aiswarya (B) Department of Electronics and Communication Engineering, Sri Ramakrishna Engineering College, Coimbatore 641022, India M. Jagadeeswari e-mail: [email protected] C. S. Manikandababu e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_50


1 Introduction Face recognition technology has gained significant importance in recent years [1]. Although implementing a face recognition system in a controlled environment is far easier, actual deployment is quite complicated, since all conceivable variations in appearance, induced by lighting changes, facial features, variability in image quality, sensor noise, viewing distance, occlusions, and so on [2], must be handled. Over the last decade, forensic expert witnesses in Serbia have faced a growing number of requests to identify persons appearing on videos or images from crime scenes. Serbia has not developed an automatic or semi-automatic face recognition system, largely because of budget constraints; only manual interpretation by forensic experts is available as police and courts gather image-based evidence from video surveillance systems. Furthermore, the process is restricted to a few trained forensic experts and their traditional anthropological procedure, which is not based on quantitative measurement; the scheme has no standardized procedures, and optimization is essential. Multiple experiments also suggest that human vision alone (e.g., eyewitnesses) is not necessarily a trustworthy basis for identity verification, being strongly affected by differences in lighting, familiarity, pose, and viewpoint or location [3]. The use of face recognition technologies can be divided into two major parts: law enforcement applications and commercial use. Law enforcement applications typically involve searching for specific identities, while commercial applications range from static photographs on credit cards, ATM cards, passports, and driver licenses to real-time recognition from a still shot or a video sequence for access control. Each application has different processing constraints [4]. Face recognition has also been applied in public protection systems (e.g., criminal identification, driver licensing) [5]. The most practical systems in terms of compliance legislation are those of airports and transport (e.g., train stations, border crossings, and airports). Commercial use appears in identification systems such as digital banking, computer logins, and games (Fig. 1). Several types of face recognition algorithms are simple and efficient. Principal component analysis (PCA) models the linear variance of high-dimensional data: it aims to identify a set of mutually orthogonal basis functions along the directions of highest variance, for which the coefficients are pairwise decorrelated [6]. In terms of a series of basis

Fig. 1 Block diagram of image processing on an FPGA


functions, or eigenfaces, PCA has been used for the recognition of face images. At an early stage, eigenfaces were introduced as an efficient application of PCA to address the problems of face recognition and detection [7]. A central line of face recognition research follows a perception-based approach, which extracts significant knowledge from the face image as accurately as possible [8]. Most of the available algorithms are implemented in software, and the detection rate is not as high as planned [9]. On the other hand, hardware development, such as the field-programmable gate array (FPGA), offers several promises. Because of their success in recent years, there have been efforts to apply FPGAs in fields such as genetics and neuroscience. Compared with computer simulations, FPGAs offer advantages over PC solutions. First, parallel FPGA processing significantly improves application performance, efficiently addressing the time-consuming parts of a general-purpose system. Second, because of their reconfigurability [10], FPGA implementation enables the establishment of a repertory module that includes several neuronal models for various purposes.

2 Literature Review The integral image improves the execution rate of box filters on a computer. Using an integral image eliminates the expensive multiplications in box-filter calculation and reduces the work to three further operations [9]. This supports computer vision algorithms, especially multi-scale feature detection methods [3], which benefit considerably because all filters can be calculated at a constant speed irrespective of their size. Such algorithms typically require box filters of variable size to produce the many scales of an image pyramid. For instance, SURF requires 9 × 9 box filters at the smallest scale and 195 × 195 box filters at the largest scale of its image pyramid [10]; without an integral image, these larger filters would require nearly 500 times more computation than the smallest ones. While speed gain and reduced computational complexity are the main advantages of an integral image, an overhead is imposed by its computation [11]. Image processing and computer vision algorithms, including integral image computation, are almost without exception computation and data intensive [12]. The total number of operations is significant because it depends on the size of the input image. In Viola and Jones [13], recursive equations were suggested to reduce the operation count, but serial processing is required because of the data dependences. For embedded vision systems with tight time limits and limited hardware to process a single frame, probably paired with power limitations, this is not ideal [14]. The rest of the article is structured as follows: Section 2 provides an overview of integral image computation. Section 3 presents a parallel computing approach that delivers two integral image values per clock cycle. Section 4 describes another parallel approach, which delivers four integral image values per clock cycle. Section 5 conducts a comparative study of the proposed parallel approaches. Two strategies are presented in Sect. 6


to reduce the memory space needed to store integral images. In Sect. 7, the research is concluded.
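To make the constant-time box filter described above concrete, the following Python sketch (ours, not taken from the article) builds an integral image and evaluates an arbitrary box sum with three additions/subtractions:

import numpy as np

def integral_image(img):
    # ii(x, y) = sum of all pixels above and to the left of (x, y), inclusive.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    # Sum over img[top:bottom+1, left:right+1] from four corner lookups,
    # independent of the box size.
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(25, dtype=np.int64).reshape(5, 5)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:4, 1:4].sum()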

3 Existing Method The recursive approach described above has been demonstrated in parallel architectures. Figure 2 shows how the approach is applied to a 4*4 input frame: seven steps are needed to complete the integral image (IIM) calculation. Orange cells are IIM values being computed in the current cycle, and white cells are IIM values already computed. With a stable, non-zero pipeline [15], the whole computation can be greatly accelerated; a decent level of accuracy is retained, and low-cost hardware deployment of the algorithm is possible for most applications. Based on the recursive rule, the row-sum (RS) values of adjacent rows are related, so parallelism is extracted along the horizontal axis: two adjacent pixels in one row are processed, and the operations are defined so that the outputs are produced concurrently (Figs. 2 and 3). This module allows the design to be portable within the simulation hierarchy. A basic linear downscaling algorithm takes the original image and the scaling factor and returns the downscaled image; linear interpolation is applied in two loops iterating over image height and width [16]. Loop transformations are assigned to reduce the latency of the inner loop body. The integral image stage takes the downscaled image and produces a complete integral image stored in BRAMs. It consists of unrolled image loops with an outer loop running over the image height. The inner loop updates the integral pixel value by accumulating the pixels in the same row to the left of the current position in the downscaled image and adding the integral image value at the same column one row above. A further loop passes through the integral image rows and columns, shifting the origin of the sub-window by one pixel each iteration. The position of the new sub-window and the integral image are then submitted to the cascaded classifier for further processing.

Fig. 2 Calculating the IIM by the existing double-parallel computing scheme


Fig. 3 Double-parallel IIM computing scheme: a Designed circuit; b Analysis of the data dependency; c Packing of the inputs

4 Proposed Method The integral image is not new to the world of image processing: it was first proposed in the mid-1980s in computer graphics as the summed-area table for texture mapping. The idea was introduced to feature extraction as an intermediate representation by the Viola–Jones face detector [17]. Since then, it has been especially useful for fast image pyramid implementation in multi-scale algorithms such as speeded-up robust features (SURF) and approximated SIFT. The proposed method is a two-stage algorithm with a diagonal pipeline, which processes two image rows and generates two integral image values per clock cycle once the pipeline is filled; the second pixel is in fact calculated in both rows within the same period. The input image is grouped into pairs of rows and analyzed one group at a time, moving from the top of the image to the bottom.

a. Parallel computation for four and n rows: To generalize the above two-row algorithm to four rows, four integral image values per cycle would have to be computed with additional pixel additions. However, this is not an attractive solution, as more hardware is required [18]. Decomposition is therefore suggested


in this area to reuse hardware and save energy. For an input image of M × N pixels, it delivers four parallel integral image values using MN + MN/2 operations.

b. Memory-efficient parallel–diagonal architecture:

Parallel integral image computation poses various architectural challenges regarding speed, hardware resources, and energy consumption in embedded vision systems. Although recursive algorithms dramatically reduce the operation count for integral image computation, the internal memory required by the integral image processing engine grows disproportionately with image size [19]. In this part, the system design of a massively parallel embedded imaging engine is made memory-efficient to achieve high performance with low hardware resources (Fig. 4). In order to compute the next row, both serial and parallel recursion methods require internal memory to store a complete row of integral image values. The required internal memory width is log2 of the largest integral value, rounded up to the next integer, and the depth equals the number of columns in an image row (Fig. 5).
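Under the memory model just described, the row-buffer size can be estimated as below; this is our reading of the description (the worst-case integral value bounds the word width), not a figure from the paper:

import math

def line_buffer_bits(width, height, pixel_bits=8):
    # Worst-case integral value for a width x height image of
    # (2**pixel_bits - 1)-valued pixels bounds the word width; the buffer
    # depth equals the number of columns in one row.
    max_value = width * height * (2 ** pixel_bits - 1)
    word_bits = math.ceil(math.log2(max_value + 1))
    return word_bits * width

print(line_buffer_bits(1920, 1080))  # bits needed for one full-HD row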

Fig. 4 Calculation of the proposed diagonal IIM scheme

Fig. 5 Proposed architecture block diagram


In the suggested block diagram, the image pixel value and the integral image value at location (x, y) are i(x, y) and ii(x, y), and s(x, y) is the row sum at that location. The block diagram depicts an advanced integral image computation system: a pipelined architecture that calculates two integral image values in a single clock cycle. Only one complete row of integral image values needs to be held in internal memory to compute the values of the next row and the following columns. Although the internal memory model remains the same as above, the suggested architecture rounds the width up to the integer value of log2 of the largest stored value [20].
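The per-pixel recurrences implied by i(x, y), s(x, y), and ii(x, y) can be emulated in software as follows (the row/column index convention is assumed by us):

import numpy as np

def integral_recursive(img):
    # s(x, y) = s(x, y-1) + i(x, y)     running row sum
    # ii(x, y) = ii(x-1, y) + s(x, y)   accumulate over rows
    rows, cols = img.shape
    ii = np.zeros((rows, cols), dtype=np.int64)
    for x in range(rows):
        s = 0  # row sum resets at the start of each row
        for y in range(cols):
            s += int(img[x, y])
            ii[x, y] = (ii[x - 1, y] if x > 0 else 0) + s
    return ii

img = np.random.randint(0, 256, (4, 4))
assert np.array_equal(integral_recursive(img), img.cumsum(0).cumsum(1))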

5 Results and Discussion The achieved design frequencies are 146 and 71 MHz. The values in parentheses represent the share of Virtex-6 resources used (Fig. 6). Table 1 shows that, while ensuring high performance, the architecture significantly reduces memory compared with multiple-recursion approaches, except for small image sizes.

Fig. 6 Comparative analysis 1

Table 1 Memory-efficient design strategy

Image size    Slice registers   LUTs     Execution time
360*240       6307              2792     0.294
720*576       13,164            5537     1.413
820*640       14,602            6047     1.744
1280*720      24,668            9864     3.14
1920*1080     37,145            14,614   7.067
2048*1536     39,694            15,558   10.72
2048*2048     49,618            19,448   14.294

Table 2 Diagonal algorithm: relative reduction in resource consumption

Image size    Slice registers (%)   LUTs (%)
360*240       30.50                 22.57
720*576       32.54                 26.19
820*640       32.71                 26.93
1280*720      32.03                 27.19
1920*1080     33.39                 29.82
2048*1536     35.48                 32.03
2048*2048     35.49                 32.05

Fig. 7 Comparative analysis 2

Table 2 presents the relative resource reduction and the reduction of internal memory requirements of the proposed architecture on the XC6VLX240T (Virtex-6) device. The values in parentheses are the percentage of resources used (Fig. 7). The internal memory reduction results are reported for several standard 8-bit-pixel image sizes when the design is prototyped on an XC6VLX240T FPGA. The architecture is implemented in Verilog.

6 Conclusion This article investigates issues related to integral images in computing. Integral image computation was analyzed from the viewpoint of parallel processing. A hardware algorithm based on the decomposition of the Viola–Jones recursive equations is suggested to minimize computational resources: four integral image values per clock cycle can be delivered without a large increase in the number of operations. An advanced architecture is also recommended that decreases the internal memory for regular HD images (1920 × 1080) by around 25%. The article further discussed a memory-minimization method for storing an integral image. This technique ensures a memory reduction of at least 44.44%, which can exceed 50% if the maximum box-filter size is much smaller than the input image.


Finally, an analysis demonstrates the utility of the proposed architectures. The key focus of this article is on integral image computation and storage in resource-constrained embedded vision systems, such as those used in mobile robots; such devices do not necessarily use HD images today, but probably will in a few years. The concepts leading to the algorithms outlined in the text extend to such images as well and promise a future path. The proposed algorithm is also inherently parallel and thus can be used on non-FPGA platforms that provide ample parallel computing power (such as a GPU).

References
1. Murugan AS, Devi KS, Sivaranjani A, Srinivasan P (2018) A study on various methods used for video summarization and moving object detection for video surveillance applications. Multimed Tool Appl 77(18):23273–23290
2. Mitrokhin A, Fermuller C, Parameshwara C, Aloimonos Y (2018) Event-based moving object detection and tracking. In: IEEE conference on Intelligent Robots and Systems (IROS)
3. Shi W, Alawieh MB, Li X, Yu H (2017) Algorithm and hardware implementation for visual perception system in autonomous vehicle: a survey. Integrat VLSI J 59:148–156
4. Zafeiriou S, Zhang C, Zhang Z (2015) A survey on face detection in the wild: past, present and future. Comput Vis Image Understand 138:1–24
5. Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp I511–I518
6. Feng X, Jiang Y, Yang X, Du M, Li X (2019) Computer vision algorithms and hardware implementations: a survey. Integrat VLSI J 69:309–320
7. Kyrkou C, Theocharides T (2011) A flexible parallel hardware architecture for AdaBoost-based real-time object detection. IEEE Trans VLSI 19(6):1034–1047; Srivastava N, Dai S, Manohar R, Zhang Z (2017) Accelerating face detection on programmable SoC using C-based synthesis. In: 2017 international symposium on field-programmable gate arrays, ACM/SIGDA, pp 195–200
8. Irgens P, Bader C, Le T, Saxena D, Ababei C (2017) An efficient and cost effective FPGA based implementation of the Viola-Jones face detection algorithm. HardwareX 68–75
9. Watson D, Ahmadinia A (2015) Memory customisations for image processing applications targeting MPSoCs. Integrat VLSI J 51:72–80
10. Ehsan S, Clark AF, ur Rehman N, McDonald-Maier KD (2015) Integral images: efficient algorithms for their computation and storage in resource-constrained embedded vision systems. Sensors 15
11. Khorsandi MA, Karimi N (2015) Reduced complexity architecture for integral image generation. In: 9th Iranian conference on machine vision and image processing, pp 80–83
12. Ouyang P, Yin S, Zhang Y, Liu L, Wei S (2015) A fast integral image computing hardware architecture with power and area efficiency. IEEE Trans Circ Syst II: Express Briefs 62(1):75–79
13. Valenzuela-Lopez OG, Tecpanectal-Xihuitl JL, Aguilar-Ponce RM (2017) A novel low latency integral image architecture. In: 2017 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC)
14. Spagnolo F, Corsonello P, Perri S (2019) Efficient architecture for integral image computation on heterogeneous FPGAs. In: 15th conference on PhD research in microelectronics and electronics, pp 229–232
15. Saveeth R, Uma Maheswari S (2019) HCCD: haar-based cascade classifier for crack detection on a propeller blade. In: First international conference on sustainable technologies for computational intelligence, pp 420–432

672

M. Jagadeeswari et al.

16. El Kaddouhi S, Saaidi A, Abarkan M (2017) Eye detection based on the Viola-Jones method and corners points. Multimed Tool Appl 76:23077–23097
17. Kisačanin B (2008) Integral image optimizations for embedded vision applications. In: IEEE southwest symposium on image analysis and interpretation
18. Comaschi F, Stuijk S, Basten T, Corporaal H (2013) RASW: a run-time adaptive sliding window to improve Viola-Jones object detection. In: Seventh international conference on distributed smart cameras
19. Theocharides T, Vijaykrishnan N, Irwin M (2019) A parallel architecture for hardware face detection. In: Proceedings of the IEEE computer society annual symposium on emerging VLSI technologies and architectures
20. Zhang N (2020) Working towards efficient parallel computing of integral images on multi-core processors. In: Proceedings of the second international conference on computer engineering and technology, Chengdu, China, 16–18, pp 30–34

Wireless Communication Network-Based Smart Grid System K. S. Prajwal, Palanki Amitasree, Guntha Raghu Vamshi, and V. S. Kirthika Devi

Abstract The transmission and distribution systems of the power system have gone through a technological transformation, integrating communication technologies into the traditional electricity grid for better energy management through the use of smart electricity meters at the consumer end. The objective of this work is to increase the efficiency of the communication network, with focus on the network access (physical and data link) and internet layers of the standard TCP/IP network model. The proposed model describes the utilization of networks as prescribed by the IEEE 802.11 (WLAN) standard, with smart energy meters across four regions of low-voltage consumer units. The design and analysis have been carried out in Packet Tracer by building a hierarchical network of four wireless local area networks and two wide area networks in infrastructure mode, to check the capability of sending and receiving data packets from smart electricity meters to the distribution substation through a wireless network. The connectivity of the network is verified through the Internet Control Message Protocol (ICMP), and the time requirements for the transfer and reception of MMS- and GOOSE-type messages are verified according to the IEC 61850 standard for smart grids. Keywords Smart energy meter · Wlan · Data · Packet tracer

1 Introduction The aspect of communication in the electric grid is an important factor to focus on. There are many communication media that can be used for the purpose. As time passes, new communication technologies come into place which can improve the condition of the grid [1]. Smart meters are the future [2] of the electric grid; they precisely report the grid parameters. Communication from and to the smart meters

K. S. Prajwal (B) · P. Amitasree · G. R. Vamshi · V. S. Kirthika Devi Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India V. S. Kirthika Devi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_51


Fig. 1 Components of a smart grid

can benefit the generation units and also the entities that manage the grid. There is a wide variety of communication [3] technologies that can be used in smart grid communications, such as wired, optical fiber, satellite-based, power line [4], and wireless communication methods. The communication network is the neural network of the smart grid, transmitting data between the different central stations and homes. Wireless [5] sensor networks are used to analyze the probability of sending messages from intelligent electronic meters to the distribution substation [6] through a ZigBee network (Fig. 1). ZigBee is a wireless technology which is mainly used to connect low-power nodes to the internet for monitoring and control in small areas where the required bandwidth is small. The electricity meter nodes [7] can act as data traffic generators for smart metering messages. The RF system is able to handle smart metering [8] communication traffic with high reliability if the potential coverage gaps are properly filled with repeater [7] nodes. Smart meters [9] can communicate as frequently as required, which makes them the best option for real-time [10] monitoring [11], and they can be used as a gateway for demand-response-aware devices in a residential house. The use of such technologies and tools is becoming a necessity in smart [10] metering activities. The wireless communication requirements that are essential for [12] smart grid applications can be met through ZigBee. A smart meter continuously records the instantaneous [13] current of the plugged-in appliances and also calculates the power [12] consumption. ZigBee is a network [14] based on the IEEE 802.15.4 standard that uses personal area networks and peer-to-peer networks. ZigBee is [15] well suited for wireless sensors and for controlling devices. The TCP/IP protocol model for internetwork communications was created in the early 1970s and is sometimes referred to as the internet model. An RFC is


Fig. 2 Showing the different layers of the TCP/IP model

authored by networking engineers and sent to other IETF members for comments (Fig. 2). In this work, the necessary infrastructure and devices are modeled, simulated, and analyzed to test the feasibility and efficiency of sending information from smart energy meters located in residential localities to the distribution substation [6] through the transmission poles. Better data packet transfer and reception efficiency is achieved through wireless networks.

2 Wireless Networking A wireless [5] LAN (WLAN) is a type of wireless network that is commonly used in homes, offices, and campus environments. Networks must enable people to share and access materials. People connect to the internet using cellular networks, broadband, or hotspots through computers, laptops, tablets, and smartphones. There are many different network infrastructures that provide network access, such as wired LANs, service provider networks, and cell phone networks, but it is the WLAN that makes mobility possible within home and business environments. With organizations across the world adopting wireless infrastructure [8], there can be cost savings any time equipment changes. A wireless infrastructure can adapt to rapidly changing needs and technologies.

2.1 Types of Wireless Networks Wireless networks are developed and defined by Institute of Electrical and Electronics Engineers (IEEE) standards and can be classified broadly into the following four main types.


(i) Wireless Personal Area Network: Bluetooth- and ZigBee [8]-based devices are commonly used in wireless personal area networks, which are based on the low-rate, low-power IEEE 802.15 standard and use the 2.4-GHz frequency band of the radio spectrum.
(ii) Wireless Local Area Network: WLANs are defined in the IEEE 802.11 family of standards and operate in either the 2.4-GHz or the 5-GHz frequency band.
(iii) Wireless Metropolitan Area Network: WMANs are suitable for providing wireless access to a metropolitan town. WMANs use specific licensed frequencies.
(iv) Wireless Wide Area Network: WWANs use transmitters to provide coverage over an extensive geographic area and are suitable for countrywide and worldwide communication. WWANs also use specific licensed frequencies.

2.2 Channel Selection The best practice for wireless [5] local area networks that require more than one access point is to use channels that do not overlap. For example, the 802.11b/g/n standards operate in the 2.4–2.5 GHz spectrum. The 2.4 GHz band is subdivided into multiple channels, each allotted 22 MHz of bandwidth and separated from the next channel by 5 MHz; the 802.11b standard identifies 11 channels. When one signal overlaps a channel reserved for another signal, interference occurs, distorting the radio waves. For 2.4 GHz wireless [5] local area networks with multiple access points, the best practice is therefore to use non-overlapping channels: if there are three adjacent access points, use channels 1, 6, and 11; otherwise the overlapping channels will cause loss of signal. WLAN networks operate in the 2.4 GHz and 5 GHz frequency bands (Figs. 3 and 4).
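The 1/6/11 rule follows directly from the channel geometry: centre frequencies are 5 MHz apart while each channel occupies about 22 MHz, so two channels interfere unless their indices differ by at least 5. A small Python check (our illustration, not part of the Packet Tracer model):

def channels_overlap(c1, c2, width_mhz=22.0, step_mhz=5.0):
    # Channels overlap when the centre-frequency separation is smaller
    # than the channel bandwidth.
    return abs(c1 - c2) * step_mhz < width_mhz

print(channels_overlap(1, 6))   # False: safe to co-locate
print(channels_overlap(1, 4))   # True: interference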

2.3 IP Address An IPv4 address is a 32-bit hierarchical address that comprises a network portion and a host portion; it is the unique address of a device acting as a host. The subnet mask is used to identify the network and host parts of the IPv4 address (Figs. 5 and 6).
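Python's standard ipaddress module can illustrate this network/host split; the address below is taken from the WLAN 1 plan in Sect. 3.3, and the snippet is only a demonstration of the addressing scheme:

import ipaddress

iface = ipaddress.ip_interface("192.168.25.2/255.255.255.0")
print(iface.network)                # 192.168.25.0/24 (network portion)
print(iface.ip)                     # 192.168.25.2 (host address)
print(iface.network.num_addresses)  # 256 addresses in this /24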


Fig. 3 Structure of the channels in the 2.4 GHz frequency band

Fig. 4 Non-overlapping channels in the 2.4 GHz frequency band

Fig. 5 Differentiation of network and host parts of a class C IPv4 address

3 Solution Deployment 3.1 Architecture The smart energy meters at the homes in a particular region transfer the amount of power consumed, in watt-hours (Wh), through electromagnetic (wireless) waves [5] to the home gateway of that region, situated at the electric pole nearest to that


Fig. 6 Types of IPv4 addressing

particular home [9], and so on. The information received by the home gateway is passed through a standard copper cable to the switch, which acts as the data collector for all the regions of the residential areas [9]. The switch further transmits the data through copper cable to the router mounted on the transformer pole, which acts as the meter concentrator sink. It is this router that finally routes the information to the distribution substation through a serial cable, used to connect devices over long distances. From the computing facilities of the distribution substation [5], information is sent back to the router, which forwards it to the meter concentrator sink at the transformer pole, which in turn sends the data to the switch. The data from the switch is gathered by the home gateways [9] mounted on the electric poles, and from there the data flows to the intended recipient. This system uses the infrastructure mode of wireless topology (Fig. 7).

3.2 Block Diagram See Fig. 8.

3.3 Proposed Modeling (i) Wireless Local Area Networks: in four localities. The first home gateway is configured in wireless [8] mode by assigning an IP address of 192.168.25.1 with a subnet mask of 255.255.255.0. It uses channel 6 of the 2.4 GHz frequency band of the radio spectrum. The wireless local area network is secured using the Wi-Fi Protected Access II (WPA2) method, which utilizes the Advanced Encryption Standard (AES), the strongest encryption protocol. The password for this


Fig. 7 Physical resemblance of the architecture of the model

Fig. 8 Network at distribution, transmission and consumer end



network is given as 1234567890. The energy meters [9] will connect to the network only if the password is entered correctly in all the devices, along with the SSID. The IP addresses of the energy meters in WLAN 1 are as follows: (i) Home 1: 192.168.25.2, (ii) Home 2: 192.168.25.3, (iii) Home 3: 192.168.25.4, and so on till home 25. The second home gateway is configured in wireless mode by assigning an IP address of 192.168.22.1 with a subnet mask of 255.255.255.0. It uses channel 11 of the 2.4 GHz band and is secured with WPA2/AES. The password for this network is 1357997531. The IP addresses of the energy meters in WLAN 2 are as follows: (i) Home 26: 192.168.22.2, (ii) Home 27: 192.168.22.3, (iii) Home 28: 192.168.22.4, and so on till home 50 (Fig. 9). The third home gateway is configured in wireless [5] mode by assigning an IP address of 192.168.23.1 with a subnet mask of 255.255.255.0. It uses channel 1 of the 2.4 GHz band and is secured with WPA2/AES. The password for this network is ASEblr123. The IP addresses of the energy meters in WLAN 3 are as follows: (i) Home 51: 192.168.23.2

Fig. 9 Devices at wireless local area network in locality 1


(ii) Home 52: 192.168.23.3, (iii) Home 53: 192.168.23.4, and so on till home 75. The fourth home gateway is configured in wireless [5] mode by assigning an IP address of 192.168.24.1 with a subnet mask of 255.255.255.0. It uses channel 11 of the 2.4 GHz band and is secured with WPA2/AES. The password for this network is EEESG12345. The IP addresses of the energy meters in WLAN 4 are as follows: (i) Home 76: 192.168.24.2, (ii) Home 77: 192.168.24.3, (iii) Home 78: 192.168.24.4, and so on till home 100 (Fig. 10).

(ii) Internet: Wide Area Network 1. This network comprises all four home gateways, the meter data collector (switch), and the meter concentrator sink (router). In all the home gateways, the internet module is given an IP address of 192.168.1.1 with 255.255.255.0 as the subnet mask. All the home gateways are connected to the switch through the internet ports on the gateways and the fast ethernet ports on the switch, respectively. The switch is further connected to the router via fast ethernet ports. The router is assigned an address on the same network as the home gateways so that a common network is established for data communication (Fig. 11). The configuration commands are as follows.

Switch> enable

Fig. 10 Devices at wireless local area network in locality


Fig. 11 Devices at wireless local area network in locality 1

Switch# configure terminal
Switch(config)# interface fastethernet0/1, fastethernet0/2, fastethernet0/3, fastethernet0/4, fastethernet0/5
Switch(config)# interface Vlan1/Vlan2/Vlan3/Vlan4/Vlan5
Switch(config-vlan1)# name Home Gateway: Pole 1
Switch(config-vlan2)# name Home Gateway: Pole 2
Switch(config-vlan3)# name Home Gateway: Pole 3
Switch(config-vlan4)# name Home Gateway: Pole 4
Switch(config-vlan5)# name Meter Data Collector
Meter Data Collector(config-if)# ip address 192.168.1.2 255.255.255.0
Meter Data Collector(config-if)# ip default-gateway 192.168.1.1
Meter Data Collector(config-if)# no shutdown

Using the above set of commands, the switch is configured and the links over the copper cables come up (shown green). The switch plays a very important role in collecting the data from the smart energy meters situated at all four localities (Fig. 12). The following commands are entered on the router, which plays the role of the meter concentrator sink, receiving the information from the switch and sending it to the distribution substation.

Router> enable
Router# configure terminal
Router(config)# interface fastethernet0/1, Serial 0/1
Router(config)# interface Vlan1/Vlan2
Router(config-vlan1)# name Meter Data Collector
Router(config-vlan2)# name Distribution Substation


Fig. 12 Flow of data packets

Router(config-if) #ip address 192.168.2.1 255.255.255.0 Router(config-if) #ip default-gateway 192.168.1.1 Router(config-if) # no shutdown. (iii) Internet: Wide Area Network 2 The distribution substation router is connected to the meter concentrator sink through a serial cable that is mainly used for safe and reliable medium for longrange communication networks. The following commands are entered on the router situated at distribution substation which plays the role of transmitting the information received from smart energy meter to the computers present there. Router > enable. Router #configure terminal. Router(config)# interface Serial 0/1, fastethernet0/1, fastethernet0/2, fastethernet0/3, fastethernet0/4, Router(config) #interface Vlan1/Vlan2/Vlan3/Vlan4/Vlan5. Router(config-vlan1) # name Distribution Substation Router. Distribution Substation Router (config-if) # ip address 192.168.2.2/192.168.2.3/192.168.2.4/192.168.2.5 255.255.255.0 Distribution Substation Router(config-if) # no shutdown (Fig. 13).

3.4 Flowcharts The first flowchart describes how data is transmitted from the source point, the smart meter, to the destination point, the


Fig. 13 Flow of data packets

computer in the substation. The communication process is achieved by connecting the gateway, router, and data concentrator to the smart energy meter (Fig. 14). The second flowchart describes how data is transmitted from the source point, the computer in the substation, to the destination point, the smart energy meter. The communication process is achieved by connecting the gateway, router, and data concentrator to the computing facilities at the distribution substation (Fig. 15).

Fig. 14 Data flow to distribution substation from home


Fig. 15 Data flow to Home from Distribution Substation

4 Results and Analysis 4.1 Data Flow to Distribution Substation from Home (i) Smart Energy Meter to Home Gateway. The packet traverses the port in the networking device and reaches the router via radio waves. The figure below shows the attributes of source, destination, and type, followed by whether the packet has arrived or not. It is the same data packet [9] sent from the host to the meter concentrator sink, and the status becomes successful only if the packet is received exactly as it was sent, that is, with no data [8] lost in between. If a situation arises wherein the received packet is not the same as the one that was sent, the status message will be "packet failed". Based on sending and receiving from all the end nodes in all four localities, all 100 packets transferred from all the nodes reach the meter concentrator sink without any loss of data in between [8]. The frame in the data packet also holds the IP addresses of the source and destination. The packet first moves from the physical layer to the data link layer and then to the network layer of the TCP/IP model (Fig. 16). (ii) Home Gateway to Router. From all 4 home gateways, the data packet flows to the meter data collector, that is, the switch, and then to the meter concentrator sink router through the respective physical and network addresses that were assigned and configured (Fig. 17).


Fig. 16 Status of the data transferred through all the wireless local area networks

Fig. 17 Depiction of the wireless frame structure in the protocol data unit


(iii) Router to Router: Meter Concentrator Sink to Distribution Substation and Vice Versa. Router 1 (meter concentrator sink) encapsulates the Layer 3 IP packet in a new Layer 2 frame. Router 1 adds its physical (data link layer) address as the source and the Layer 2 address of router 2 (distribution substation) as the destination. The router accepting the frame de-encapsulates the received data frame. High-Level Data Link Control (HDLC) is the type of frame present in data packets communicated over the serial cable (Fig. 18). (iv) Router to Computers. The distribution substation receives the data packet from the meter concentrator sink router. The distribution substation router then forwards the data packets to the computers by creating new frames for the packets; the same applies at the meter concentrator sink and the smart energy meter. The data packet from the smart energy meter is thus transferred from the meter concentrator sink router, through the router at the distribution substation, to the computing facility at the distribution [5] substation. The tick mark appears on the packet only if it is free from errors and is received exactly as it was sent from the smart energy meter (Fig. 19).
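The hop-by-hop re-framing described in (iii) and (iv) can be modeled with a toy data structure: the Layer 3 packet survives end to end, while the Layer 2 frame is rebuilt on every link. This is a conceptual sketch, not Packet Tracer internals:

from dataclasses import dataclass

@dataclass
class Packet:      # Layer 3 unit: source/destination IP preserved end to end
    src_ip: str
    dst_ip: str
    payload: str

@dataclass
class Frame:       # Layer 2 unit: rebuilt at every hop
    src_mac: str
    dst_mac: str
    packet: Packet

def forward(frame, out_mac, next_hop_mac):
    # De-encapsulate the received frame and re-encapsulate the unchanged
    # packet in a new frame for the next link.
    return Frame(src_mac=out_mac, dst_mac=next_hop_mac, packet=frame.packet)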

Fig. 18 Showing the packet containing source and destination internet protocol address that is being sent to the meter concentrator sink


Fig. 19 Status of the data transferred through the router

4.2 Data Flow to Home from Distribution Substation The data is sent to the smart energy meter [9] in locality 1; here, 4 packets are sent by the meter concentrator sink to the smart energy meter. There is a non-uniform delay in receiving the packets because of the varying signal strength and bandwidth of the network. From Fig. 20, which plots the index of the smart energy meter on the x-axis and the time delay in milliseconds on the y-axis, it can be seen that meter 1 has the least signal strength, because of which it experiences the highest delay. This delay is inversely proportional to the signal strength of the individual smart energy meter and the available bandwidth from the respective home gateway in each region.

Fig. 20 Comparison of the time delay at locality 1


The minimum time taken for a packet to reach the smart energy meter is 6 ms, whereas the maximum is 284 ms. Figure 21 plots the index of the smart energy meter on the x-axis and the time delay in milliseconds on the y-axis. For data sent to the smart energy meters in locality 2 by the meter concentrator sink, it can be seen that meter 25 (meter 50 in the overall system) experiences the highest delay, while meter 23 (meter 48 in the overall system) experiences a slightly lower delay. The minimum time taken for a packet to reach the smart energy meter is 5 ms, whereas the maximum is 40 ms. From Fig. 22, which plots the

Fig. 21 Comparison of the time delay at locality 2

Fig. 22 Comparison of the time delay at locality 3


Fig. 23 Comparison of the time delay at locality 4

index of the smart energy meter on the x-axis and the time delay in milliseconds on the y-axis, it is seen that data sent to the smart energy meter in locality 3 by the meter concentrator sink, and it can be seen that there is highest delay in meter 13 (meter 63 in the entire system) receiving the packet. Then, it can be seen the meter 23 (meter 73 in the overall system) is experiencing a slightly lesser delay than the one getting highest delay. The minimum time taken for the packet to reach smart energy meter [9] is 17 ms, whereas the maximum time taken is 61 ms. The data is sent to the smart energy meter [9] in locality 4 has very frequent increase and decrease in the time delay of receiving the packets. It is evident from Fig. 23 that meter 14 (meter 89 overall) is getting the packet after huge delay. Then, it can be seen that smart meter 3 (meter 78 in the overall system) is receiving the packet at slightly less delay than the one experiencing highest delay. The minimum time taken for the packet to reach smart energy meter is 5 ms, whereas the maximum time taken is 26 ms. There is no loss of packets in between.

4.3 Average Time Delay and Message Types The average time delay in receiving the data packets is highest for meter 1 of wireless local area network 1 and least for meter 25 (meter 100 in the overall system) of wireless local area network 4. From Fig. 24, one can infer that the signal strength in locality 1 is the poorest compared with the remaining 3 localities, followed by locality 2, which is lower than localities 3 and 4. Locality 4 has good signal strength, because of which the delay it experiences is the lowest in the entire system.


Fig. 24 Comparison of the average time delay across the four localities

4.4 Time Limits for MMS and GOOSE As per the International Electrotechnical Commission (IEC) 61850 [6] standard prescribed for distribution substations, there are two types of messages in terms of delay: (1) up to 250 ms, MMS-type messages; and (2) up to 10 ms, GOOSE-type messages. Messages below 250 ms fall under MMS, and messages below 10 ms fall under GOOSE (Figs. 25 and 26). A scalable network expands quickly to support new users and applications without degrading the performance of services [8] accessed by existing users, and this holds for the proposed model. Class C Internet Protocol (IP) addresses have been assigned to all the end devices and intermediary devices, and there are 25 end nodes in each of the four wireless local area networks. The number of networks is 2^(a−b), where a is the number of network-id bits and b is the number of leading bits in the respective class. The number of addresses per network is 2^y, where y is

Fig. 25 Depicting the types of messages as per time delay of the IEC 61850 standard


Fig. 26 Depicting the number of messages per type according to IEC 61850 standard for smart grid

the number of host bits. Here, the number of host bits is 8, so 2^8 = 256 addresses can be given per network, and Class C can accommodate up to 2^(24−3) = 2,097,152 networks. The number of new smart energy meters that can still be added per wireless network is therefore 256 − 26 = 230. The networks have good security: non-authorized users can neither connect nor access the data sent and received. The proposed network architecture is fault tolerant because of the routing and switching mechanisms used to eradicate corrupt frames in a packet.
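The capacity arithmetic above is easy to verify in Python (a restatement of the text's numbers, nothing more):

network_id_bits, leading_bits = 24, 3   # Class C: 24 network-id bits, '110' prefix
host_bits = 8
print(2 ** (network_id_bits - leading_bits))  # 2,097,152 Class C networks
print(2 ** host_bits)                         # 256 addresses per network
print(2 ** host_bits - 26)                    # 230 meters can still be added per WLAN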

5 Conclusion The proposed communication system is better than wireless-sensor-network-based smart grid systems in several respects: it has long range; channels are available for selection; and the default gateway helps the devices communicate with the intended devices even if the connection is lost, through the network addresses configured on the individual devices, thus making sure that the data is sent to the desired device for further processing and status reporting. The data link layer, part of the network access layer of the TCP/IP model, plays a crucial role in flow control according to the MAC addresses of the source and destination, as well as in error control and error-free transmission of the data frames to the network layer. If frames are corrupted by missing bits of data, such frames are rejected, and the end device fails to receive the data packet intended for it. Thus, in this work, all the packets sent were transferred and received with no loss in between. The internet layer checks the source and destination IP addresses and also provides the Internet Control Message Protocol


(ICMP), which is widely used to test the connectivity that devices must have in order to send and receive data. Hence, the proposed model creates an impact on society by transforming the traditional power system into a decentralized power system with better connectivity and operation of electrical energy. This work is in line with the "National Smart Grid Mission", an initiative taken by the Ministry of Power, Government of India, thereby taking a step toward building a sustainable energy grid for the nation.

6 Future Work

i. To build a web application and program it according to the sockets of the transmission control protocol (TCP) or user datagram protocol (UDP) in the transport layer.
ii. To check the data received at the application layer and perform analysis.
iii. To implement different routing protocols and advanced cybersecurity features.
iv. To extend the proposed model to a large geographic area, i.e., between multiple different cities.
v. To further integrate cloud-based data storage, in line with "Internet of Things" technology.

References

1. Kaushik B, Pranav I, Reddy T, Syama S, Kirthika Devi VS (2018) Wireless power transmission incorporating solar energy as source for motoring applications. In: 2018 International Conference on Emerging Trends and Innovations in Engineering and Technological Research (ICETIETR). IEEE, pp 1–5
2. Jain A, Mishra R (2016) Changes & challenges in smart grid towards smarter grid. In: International Conference on Electrical Power and Energy Systems (ICEPES), Bhopal, India
3. George N, Nithin S, Kottayil SK (2016) Hybrid key management scheme for secure AMI communications. Proc Comput Sci 93:862–869
4. López G, El Achhab EB, Moreno JI (2014) On the impact of virtual private network technologies on the operational costs of cellular machine-to-machine communications platforms for smart grids. Netw Protocols Algorithms 6(3):35–55
5. Devidas AR, Ramesh MV (2010) Wireless smart grid design for monitoring and optimizing electric transmission in India. In: 2010 Fourth International Conference on Sensor Technologies and Applications. IEEE
6. De Souza RWR, Leonardo et al (2018) Deploying wireless sensor networks–based smart grid for smart meters monitoring and control. Int J Commun Syst 31(10)
7. Lichtensteiger B, Bjelajac B, Müller C, Wietfeld C (2010) RF mesh systems for smart metering: system architecture and performance. In: 1st IEEE International Conference on Smart Grid Communications. IEEE, pp 379–384
8. Devidas AR, Subeesh TS, Ramesh MV (2013) Design and implementation of user interactive wireless smart home energy management system. In: 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE
9. Mathew RT, Thattat S, Anirudh KV, Adithya VP, Prasad G (2018) Intelligent energy meter with home automation. In: 2018 3rd International Conference for Convergence in Technology (I2CT). IEEE, pp 1–4
10. Alahakoon D, Yu X (2015) Smart electricity meter data intelligence for future energy systems: a survey. IEEE Trans Industr Inform 12(1):425–436
11. Neeraja TP, Sivraj P, Sasi KK (2015) Wide area control systems (WACS) implementation based on sensor network concepts. Proc Technol 21:303–309
12. Burunkaya M, Pars T (2017) A smart meter design and implementation using ZigBee based wireless sensor network in smart grid. In: 4th International Conference on Electrical and Electronic Engineering (ICEEE). IEEE, pp 158–162
13. Aleena GS, Sivraj P, Sasi KK (2015) Resource management on smart micro grid by embedded networking. Proc Technol 21:468–473
14. Menon DM, Radhika N (2015) Design of a secure architecture for last mile communication in smart grid systems. Proc Technol 21:125–131
15. Chauhan RS, Sharma R, Jha MK, Desai JV (2016) Simulation-based performance analysis of ZigBee in three-dimensional smart grid environments. In: International Conference on Communication and Signal Processing (ICCSP). IEEE, pp 1546–1550
16. Reka SS, Dragicevic T (2018) Future effectual role of energy delivery: a comprehensive review of Internet of Things and smart grid. Renew Sustain Energy Rev 91:90–108

The SEPNS Model of Rumor Propagation in Social Networks Greeshma N. Gopal, G. Sreerag, and Binsu C. Kovoor

Abstract Social media is now a tool to access news and information. However, misinformation in social media is getting more attention from users and is being spread widely among them in a short period. Researchers have observed that rumors spread like epidemics in different social media applications. Hence, mathematical models that were widely used for describing the growth of epidemics were used to define rumor spreading. This work is focused on analyzing the epidemiological spread of rumors in a psychological aspect. The new model will show evidence of the sentiment that is spread along with rumors. Users share information along with their opinion on the social media platform. Understanding the psychological behavior of users helps to forecast the flow of rumors. It will also help to study the impact of such rumors in politics and the economy. Hence, a new sentiment-based rumor model has been developed to study and understand the nature of rumor spreading across the networks. Keywords Rumor spreading model · Sentiment · Epidemics · Compartmental modeling

1 Introduction

Rumors have both positive and negative impacts on social networks and even across global society, at all levels. Understanding that rumors spread like epidemics, mathematical models based on epidemiology have been developed to study and analyze rumor propagation. The compartmental modeling of information spreading started with the DK model proposed by Daley and Kendall [1]. In compartmental modeling, the whole population is divided into different compartments based


Fig. 1 SIR compartmental model

on how people react on seeing information from their neighbor nodes. These models have helped in understanding and predicting information dissemination across a network [2]. Most of these works consider three major categories of users: the Susceptible, who can see a rumor post at any time; the Infected, who spread the rumor; and the Recovered, who have stopped spreading rumors. Research on rumor spreading is mainly based on models such as SI (Susceptible-Infected), SIR (Susceptible-Infected-Recovered), and SIS (Susceptible-Infected-Susceptible) [3]. These models give a clear picture of the population density in each category of users. As shown in Fig. 1, the SIR model has three compartments of people. Compartment S is the group of social media users susceptible to receiving the message, I represents the group of users who have shared the message they received, and R represents the group of users who have stopped spreading the message. People transition from one compartment to another as shown by the arrows in the figure: susceptible users share the rumor with probability α, and users who have already shared it stop sharing with probability β. There is scope for improving this analysis by understanding the subcategories within a compartment through microscopic analysis, and understanding sentiment is one way to do so. This analysis helps in assessing the social impact of a rumor. For example, if a rumor about a candidate originates at the time of an election, this analysis helps to understand whether the public took it positively or negatively: an exponential increase in negatively spread rumors can be one reason to predict that the candidate will fail, while, if the positive response dominates, analysts can conclude that people are checking facts rather than believing the rumor. Rumor spreading is a continuous process that shows an exponential increase with time, so a mere count of positive and negative rumor spreading will not always be correct. For example, if the count is taken during the initial stage of rumor spreading, the analyst may fail to predict the impact, because during the initial stage the counts of positive and negative responses are almost the same; only later does one outnumber the other. Thus, an epidemic model is required to understand the whole course of rumor spreading. The current work is aimed at developing a compartmental model based on epidemiology for rumor propagation in social networks. Our model focuses on analyzing the epidemic spread of an opinion towards rumors.
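To make the compartment transitions of Fig. 1 concrete, the following is a minimal Python sketch of the SIR dynamics using simple Euler stepping; the rate values α, β and the initial fractions are illustrative assumptions, not values from any dataset.

```python
# A minimal sketch of the SIR dynamics of Fig. 1 via Euler integration.
# alpha (S -> I) and beta (I -> R) and the initial values are illustrative.
alpha, beta = 0.3, 0.1
S, I, R = 0.99, 0.01, 0.0
dt = 0.1
for step in range(1000):
    new_inf = alpha * S * I * dt   # susceptibles who start sharing the rumor
    new_rec = beta * I * dt        # spreaders who stop sharing
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
print(round(S, 3), round(I, 3), round(R, 3))
```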


2 Related Work

Apart from the basic models like SI, SIR, and SIS, there are several improved versions of these models: the SEIR (Susceptible-Exposed-Infected-Removed) model [4], the SCIR (Susceptible-Contacted-Infected-Removed) model [5], and the irSIR (infection recovery SIR) model [6] are a few among them. Incorporating the psychology of users in rumor spreading started with the work of Yunuyan Bao et al., who proposed the SPNR (Susceptible-Positively Infected-Negatively Infected-Recovered) model for rumor propagation [7]. Identifying how people have taken up a rumor has wide scope in business and politics. SPNR was modeled by splitting the infected set of nodes into positively and negatively infected sets, based on the fact that users spread fake news by quoting their opinion, which is either positive or negative. Later, Wang et al. developed an emotion-based spreader-ignorant-stifler (ESIS) model that considered microscopic classes based on emotion [8]. The model demonstrated the significance of categorizing spreaders by their emotion, and it was proved that communication strength varies across different emotional communities. Based on this work, Xi et al. later proposed a model for emotion contagion [9]. The STRS model proposed by Obadimu et al. was based on the hypothesis that toxicity is epidemic [10]. Here, toxic users are users who post negative comments, up to the level of cyberbullying; on seeing a toxic comment, like-minded users share it or comment with abusive words. Since toxicity such as threats and hate speech is always connected with the sentiment towards a news item, predicting the spread of sentiment in connection with a topic helps to foresee whether any entity connected with the news is about to be hit with cyberbullying in the future. Even though SPNR considers opinion along with rumor spreading, the states defined by the model are not appropriate for current social media platforms. The state "recovered (R)" is irrelevant in social media, since there is no practical definition for it: if a user is not forwarding a rumor, it does not mean that the user has recovered from that rumor. There may be several reasons, such as the user not being active at that time or the user deciding to be a stifler (a user who refrains from social media activities other than reading through posts); such a user may still post the rumor in the future. Therefore, we have not considered the "recovered" state in our model. We have developed a new model that is more suitable for the psychological analysis and prediction of rumor spreading in the real environment. The following sections describe the model in detail.

3 The SEPNS Model of Rumor Spreading

In social media, a user very often sees a post multiple times, shared by different friends. However, rumors are not visible to all users in the network: a user will see a rumor post only if a neighbor shares it. Moreover, some inactive users do not access their social media regularly. That means, even


if the neighbors share the news, there is no guarantee that the user reads it. Thus, based on the work done by Yunuyan Bao et al., we have proposed a new model, SEPNS (Susceptible-Exposed-Positively Infected-Negatively Infected), that has an exposed state along with the susceptible, positively infected, and negatively infected compartments. The "exposed" state captures the fact that only a percentage of susceptible nodes get exposed to the news, and some of these users share that news, with or without modification, at a later time. The former SPNR model does not mention the chance of a recovered node returning to the susceptible state. The new model incorporates the possibility of a user being in the susceptible state again; usually this happens when the user reads contradictory or supporting statements about the rumor. As mentioned in Section 2, we have not taken the "recovered" state into our model.

3.1 Sentiment in Rumor Spreading

Even though a rumor is unverified information, people believe it to be correct and usually spread the news by adding their sentiment to these tweets. When people believe something, they want others to follow the same belief, so they disseminate all related information they receive along with their opinion. When a user shares content without adding any sentiment to it, the user has either a positive or a neutral opinion. Under these circumstances, we use the following definitions for positive and negative infection:
• Positively infected: a user spreads the news with positive or neutral words added to it.
• Negatively infected: a user spreads the news with negative words added to it.

3.2 Rumor Propagation Model

The SEPNS rumor propagation model is defined as follows.
• Susceptible users are those who have joined the social media platform and have a chance to meet a user who shares the rumor.
• Exposed users are those who have seen the rumor shared by their friends.
• A susceptible node, on meeting an infected node, may become exposed with probability α.
• An exposed node can become either positively or negatively infected, with probabilities β1 and β2, respectively.
• A positively infected node could become negatively infected, and vice versa, but only with a probability negligibly close to zero, so this transition is not considered: there is only a negligible probability that a person who spreads a rumor about a topic positively will start spreading it negatively.


Fig. 2 The proposed SEPNS model

• There is no recovery stage; however, a node may return to the susceptible state from an infected state with probability δ1 or δ2.

Figure 2 shows the compartments in the SEPNS model. Users in social media move from one compartment to another as shown by the arrows in the figure.

4 Methodology

4.1 Discrete Compartmental Modeling

The epidemic growth of information sharing in social media is usually described with discrete compartmental models. The SEPNS model is described using the mathematical equations defined in (1)–(4). Let α be the probability that a node moves from the susceptible to the exposed state, let β1 and β2 be the probabilities that an exposed node moves to the positively and negatively infected states, respectively, and let δ1 and δ2 be the probabilities that a positively and a negatively infected node, respectively, move back to the susceptible state. Each state can be explained as follows. S(t): individuals who are not yet infected but are susceptible; E(t): exposed individuals in the latent period, who have met an infectious node but are not infected; P(t): those who have been positively infected and are capable of spreading the rumor; N(t): those who have been negatively infected and are capable of disseminating it. The SEPNS model is then mathematically defined as

$$\frac{ds}{dt} = \delta_1 p + \delta_2 n - \alpha e s \qquad (1)$$

This gives the rate of susceptibility in the model: the state S receives transitions from the positively and negatively infected nodes with probabilities δ1 and δ2, while nodes leave the susceptible state for the exposed state with probability α. Similarly, the other states are defined as

$$\frac{de}{dt} = \alpha e s - \beta_1 e - \beta_2 e \qquad (2)$$

The transition for the positively infected state can be written as

$$\frac{dp}{dt} = \beta_1 e - \delta_1 p \qquad (3)$$

while the growth of the negatively infected state follows

$$\frac{dn}{dt} = \beta_2 e - \delta_2 n \qquad (4)$$

Using the next-generation matrix method, R0 is estimated [11]. The infected states are E, P, and N. F defines the rate of new infections, and V the transfer of individuals out of one compartment and into the next:

$$F = \begin{bmatrix} \alpha E S & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad V = \begin{bmatrix} \beta_1 + \beta_2 & 0 & 0 \\ -\beta_1 & \delta_1 & 0 \\ -\beta_2 & 0 & \delta_2 \end{bmatrix}$$

R0 is obtained from the dominant eigenvalue of $F V^{-1}$ as $\alpha/(\beta_1 + \beta_2)$.
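A small numerical sketch of the system (1)–(4) can illustrate how the compartments evolve. The following Python fragment integrates the SEPNS equations with SciPy; the rate values and initial conditions are illustrative assumptions, not fitted values from the paper.

```python
# A numerical sketch of the SEPNS system (1)-(4) using SciPy.
import numpy as np
from scipy.integrate import odeint

def sepns(y, t, alpha, b1, b2, d1, d2):
    s, e, p, n = y
    ds = d1 * p + d2 * n - alpha * e * s
    de = alpha * e * s - b1 * e - b2 * e
    dp = b1 * e - d1 * p
    dn = b2 * e - d2 * n
    return [ds, de, dp, dn]

params = (0.5, 0.2, 0.1, 0.05, 0.05)   # alpha, beta1, beta2, delta1, delta2
y0 = [0.99, 0.01, 0.0, 0.0]            # initial S, E, P, N fractions
t = np.linspace(0, 100, 500)
sol = odeint(sepns, y0, t, args=params)

r0 = params[0] / (params[1] + params[2])  # R0 = alpha / (beta1 + beta2)
print("R0 =", r0, "final P =", sol[-1, 2], "final N =", sol[-1, 3])
```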

4.2 Evaluation with Twitter Set

To evaluate the mathematical model, it is important to understand how well the model can simulate real-world data. Here, tweets collected from Twitter are considered as the real data in this experiment. We identified four different rumors that had spread across Twitter, and further tweets were collected using the TwitterR API with keywords relevant to each rumor. For the analysis, the collected tweets were first pre-processed by removing all unwanted text such as URLs and special characters. After extracting tokens from each tweet, the sentiment score is evaluated against a set of positive and negative lexicons. Here, sentiment analysis is done using Bing Liu's Opinion Lexicon [12], which includes misspelled words, geographical variations in language, and social media markup. Finally, the tweets are marked with their sentiment. The statistical count of positive and negative tweets is found for each day, and this is used to investigate whether the proposed model fits the data obtained from the real world.
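As a rough illustration of this lexicon-based labelling step, the following Python sketch scores a tweet against tiny positive and negative word lists; these lists merely stand in for Bing Liu's Opinion Lexicon, and, per Sect. 3.1, a neutral score is counted as positive.

```python
# A minimal sketch of lexicon-based sentiment labelling; the tiny word
# lists below are placeholders for the full Opinion Lexicon files.
import re

POSITIVE = {"good", "great", "true", "support"}
NEGATIVE = {"bad", "fake", "hoax", "fraud"}

def label_tweet(text: str) -> str:
    tokens = re.findall(r"[a-z']+", text.lower())      # tokens after stripping URLs etc.
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score >= 0 else "negative"    # neutral counted as positive

print(label_tweet("This story is a hoax and a fraud"))  # -> negative
```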


5 Results and Discussion

Curve fitting is one of the widely used methods to understand the accuracy of predicted values. The model was implemented using Algorithm 1.

Algorithm 1 Algorithm for SEPNS Model
Require: probability α for the susceptible-to-exposed transition; probabilities β1 and β2 for the positively and negatively accepted states; probabilities δ1 and δ2 for the positive-to-susceptible and negative-to-susceptible transitions; state of nodes: state (node of susceptible state = S, node of exposed state = E, node of positively infected state = P, node of negatively infected state = N)
Ensure: state of nodes after time interval t: state
1: Generate a scale-free network N = (V, E) with adjacency matrix A
2: Initialization: original state: state = [S, E, P, N]
3: while interval ...

When $fit_i > fit_{rs1}$ and $fit_i > fit_{rs2}$, we have $q_2 < 1 < q_1$. If $q_1 \approx 0$, then the ith hen's search for food is followed by the other chickens in the group. For the largest difference in fitness values between two chickens, the smaller $q_2$ leads to the biggest gap between the two chickens' positions, so that hens cannot steal food from other chickens; this is because the fitness values of the chickens and the rooster act as competition among the other chickens of the group. The chicks move around their mother hen to forage for food, as shown in the equation below:

$$y_{i,j}^{t+1} = y_{i,j}^{t} + L \times \left( y_{m,j}^{t} - y_{i,j}^{t} \right)$$

where $y_{m,j}^{t}$ signifies the position of the ith chick's mother and L denotes the chick following its mother while foraging. Again, in this work the positions of the chickens are otherwise selected randomly, which may lead to premature convergence; to overcome this, chaotic logistic mapping is applied for a better search of the food space. Using the chaotic function, the rooster, who is nominated as leader, updates its position as shown in the formula

$$y_{i,j}^{t+1} = y_{i,j}^{t} \times \left( 1 + \theta_{i,j} \times \mathrm{Rnd}\left(0, \sigma^{2}\right) \right)$$

The positions of the elders and co-leaders are updated as follows:

$$y_{i,j}^{t+1} = y_{i,j}^{t} + q_1 \times \theta_{i,j} \times \left( y_{rs1,j}^{t} - y_{i,j}^{t} \right) + q_2 \times \theta_{i,j} \times \left( y_{rs2,j}^{t} - y_{i,j}^{t} \right)$$

The positions of the other members (chicks) in the group are updated as follows:

$$y_{i,j}^{t+1} = y_{i,j}^{t} + L \times \theta_{i,j} \times \left( y_{m,j}^{t} - y_{i,j}^{t} \right)$$

Algorithm for chaotic chicken swarm optimization to update the parameters of deep adaptive clustering:

Begin
• Set N as the population of chickens and initialize the other parameters
• Compute the fitness values of the N chickens; itr = 0


• While (itr < Max_Generation)
    If (itr % G == 0)
        Assign the weight values to the swarms
        Compare and move them to the best solution
        Sort the chickens based on the fitness values obtained and organize the hierarchical order of the chicken swarm
        Separate the swarm into different groups and discover the relationship between mother hens and chicks in a group
    End if
    // apply chaos theory for updating positions
    For k = 1 : N
        If k == leader-rooster, update its position using $y_{i,j}^{t+1} = y_{i,j}^{t} \times (1 + \theta_{i,j} \times \mathrm{Rnd}(0, \sigma^{2}))$
        If k == co-leader-hen, update its position using $y_{i,j}^{t+1} = y_{i,j}^{t} + q_1 \times \theta_{i,j} \times (y_{rs1,j}^{t} - y_{i,j}^{t}) + q_2 \times \theta_{i,j} \times (y_{rs2,j}^{t} - y_{i,j}^{t})$
        If k == chick, update its position using $y_{i,j}^{t+1} = y_{i,j}^{t} + L \times \theta_{i,j} \times (y_{m,j}^{t} - y_{i,j}^{t})$
    End for
    Compute the new solution; if it is better than the previous solution, update it
End while
End
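A compact Python sketch of one iteration of these chaotic position updates is given below. The paper does not state the exact chaotic map, so a logistic map for θ is an assumption here, and the sphere objective, group sizes, and L value are purely illustrative.

```python
# One chaotic chicken swarm optimization step (sketch): theta follows an
# assumed logistic map; the fitness function and group sizes are toy values.
import numpy as np

rng = np.random.default_rng(0)
N, D = 20, 4                      # population size, parameter dimension
pos = rng.uniform(-1, 1, (N, D))  # candidate DAC parameters (weights/biases)
theta = rng.uniform(0.1, 0.9, (N, D))
fitness = lambda x: np.sum(x ** 2, axis=-1)   # illustrative objective (minimize)

fit = fitness(pos)
order = np.argsort(fit)                       # best first
roosters, hens, chicks = order[:2], order[2:10], order[10:]

theta = 4.0 * theta * (1.0 - theta)           # chaotic logistic update of theta

for i in roosters:                            # leader update
    pos[i] *= 1.0 + theta[i] * rng.normal(0.0, 1.0, D)
for i in hens:                                # co-leader update (stale fitness, one step)
    r1, r2 = rng.choice(roosters), rng.choice(order)
    q1 = np.exp((fit[i] - fit[r1]) / (abs(fit[i]) + 1e-9))
    q2 = np.exp(fit[r2] - fit[i])
    pos[i] += q1 * theta[i] * (pos[r1] - pos[i]) + q2 * theta[i] * (pos[r2] - pos[i])
for i in chicks:                              # chick follows a mother hen
    m = rng.choice(hens)
    pos[i] += 0.5 * theta[i] * (pos[m] - pos[i])   # L = 0.5 (illustrative)
```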

4 Results and Discussions

This section discusses the performance analysis of the proposed chaotic chicken swarm optimization-based deep adaptive clustering (CCO-DAC) for Alzheimer disease detection. Python code is used for developing CCO-DAC. The dataset used for Alzheimer disease detection is collected from the ADNI clinical data [11], which consists of clinical details such as demographics, cognitive assessment, and physical assessment, with 1534 records. The performance of the proposed CCO-DAC is compared with two standard clustering models, namely fuzzy C-means (FCM) and intuitionistic fuzzy C-means (IFCM). Table 1 compares the three unsupervised learning models used to detect the severity of Alzheimer disease. The results show that CCO-DAC produces the highest correctly clustered percentage compared to the other clustering models, and it produces a low

Table 1 Performance comparison of three clustering models for Alzheimer disease detection

Model    | Correctly clustered (%) | Incorrectly clustered (%) | MSE
FCM      | 87.2                    | 12.8                      | 0.26
IFCM     | 91.3                    | 8.7                       | 0.18
CCO-DAC  | 98.7                    | 1.3                       | 0.06

error rate, as it uses deep learning-based clustering and chaotic chicken swarm optimization for parameter fine-tuning to produce accurate results. Figure 4 shows that the proposed chaotic chicken swarm optimization-based deep adaptive clustering (CCO-DAC) produces the highest correctly clustered percentage for detecting the severity of Alzheimer disease on the ADNI dataset. This is because the proposed model integrates two major methods: deep adaptive clustering and chicken swarm optimization. Deep knowledge of the ADNI dataset is obtained through the pairwise deep learning, and the parameters of the deep clustering are fine-tuned by applying the chaotic chicken swarm optimization. Thus, its correctly clustered percentage is 98.7%, while FCM and IFCM achieve 87.2% and 91.3%, respectively.

Fig. 4 Performance comparison based on correctly clustered instances for Alzheimer disease detection

Fig. 5 Performance comparison based on incorrectly clustered instances for Alzheimer disease detection


Fig. 6 Performance comparison based on error rate (MSE) for Alzheimer disease detection across the three clustering models

Figure 5 explores the incorrectly clustered instances of the three clustering models, FCM, IFCM, and the proposed CCO-DAC, in detecting the severity of Alzheimer disease. The proposed CCO-DAC has a lower incorrect clustering rate because it uses deep adaptive clustering to decide, for each pair of instances of the ADNI dataset, whether they belong to the same class based on a pairwise cosine similarity distance measure. The softmax is used to retrieve fine-grained knowledge about the input patterns. Additionally, chaotic chicken swarm optimization is applied to assign the parameter values, such as the weights on the nodes, instead of random assignment, and the chaotic theory is used to overcome premature convergence to local optima. Figure 6 shows the error rate of the three clustering models involved in Alzheimer disease detection. The mean square error of the proposed CCO-DAC is much lower than that of the other two clustering models, FCM and IFCM, because deep adaptive clustering extracts the important features of the ADNI dataset and, using dot products, discovers similar groups of instances more precisely. The chaotic theory and chicken swarm optimization assign the parameter values of the deep adaptive clustering more effectively; hence, the error rate of the proposed model in Alzheimer disease detection is very low.
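As an illustration of the pairwise similarity decision mentioned above, the following Python sketch computes cosine similarities between softmax-normalized label features and thresholds them; the feature values and the UPPER/LOWER thresholds are illustrative assumptions, not the paper's settings.

```python
# Pairwise cosine-similarity decision (sketch): two instances are treated as
# belonging to the same cluster when the similarity of their softmax label
# features exceeds an assumed upper threshold.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

features = softmax(np.random.randn(6, 3))        # toy label features
norm = features / np.linalg.norm(features, axis=1, keepdims=True)
sim = norm @ norm.T                              # pairwise cosine similarity

UPPER, LOWER = 0.95, 0.45                        # assumed decision thresholds
same_cluster = sim > UPPER                       # confidently similar pairs
diff_cluster = sim < LOWER                       # confidently dissimilar pairs
print(same_cluster.astype(int))
```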

5 Conclusion

This paper constructs a deep learning-based Alzheimer disease detection model that adopts an unsupervised approach. The dataset is clustered using deep adaptive clustering: applying a pairwise cosine similarity measure, a softmax is applied to each instance pair to determine whether the instances belong to the same cluster or to different clusters. The parameters involved in the softmax, such as weights and biases, are fine-tuned by applying the behavioral approach of chicken swarm optimization; the best fitness value is used to find the best values to assign to the parameters involved in the clustering process. The chicken swarm process is itself enhanced by adopting chaotic theory to select the best food-source search in a prominent way. The simulation


results proved that the proposed CCO-DAC achieves the best clustering accuracy compared with fuzzy C-means and intuitionistic fuzzy C-means for Alzheimer disease detection at its early stage.

References

1. Ulep MG, Saraon SK, McLea S (2018) Alzheimer disease. J Nurse Pract 14:129–135
2. Liu S, Liu S, Cai W, Pujol S, Kikinis R, Feng D (2014) Early diagnosis of Alzheimer's disease with deep learning. In: Proceedings of the IEEE International Symposium on Biomedical Imaging
3. Mateos-Pérez JM, Dadar M, Lacalle-Aurioles M, Iturria-Medina Y, Zeighami Y, Evans AC (2018) Structural neuroimaging as clinical predictor: a review of machine learning applications. NeuroImage Clin 20:506–522
4. Chaves R, Ramírez J et al (2012) Association rule-based feature selection method for Alzheimer's disease diagnosis. Expert Syst Appl 39(14):11766–11774
5. Bhagya Shree SR, Sheshadri HS, Joshi S (2014) A review on the method of diagnosing Alzheimer's disease using data mining. Int J Eng Res Technol (IJERT) 3(3)
6. Balamurugan M, Nancy A, Vijaykumar S (2017) Alzheimer's disease diagnosis by using dimensionality reduction based on KNN classifier. Biomed Pharmacol J 10(4)
7. Eman M, Seddik A, Mohamed H (2016) Automatic detection and classification of Alzheimer's disease from MRI using TANNN. Int J Comput Appl 148:30–34. https://doi.org/10.5120/ijca2016911320
8. Toshkhujaev S, Lee KH, Choi KY, Lee JJ, Kwon G-R, Gupta Y, Lama RK (2020) Classification of Alzheimer's disease and mild cognitive impairment based on cortical and subcortical features from MRI T1 brain images utilizing four different types of datasets. J Healthcare Eng 2020:3743171
9. Battineni G, Chintalapudi N, Amenta F (2020) Late-life Alzheimer's disease (AD) detection using pruned decision trees. Int J Brain Disorders Treatment 6(1)
10. Hamid MMAE, Mabrouk MS, Omar YMK (2019) Developing an early predictive system for identifying genetic biomarkers associated to Alzheimer's disease using machine learning techniques. Biomed Eng Appl Basis Commun 31(05):1950040
11. Risacher S et al (2010) Alzheimer's disease neuroimaging initiative (ADNI). Neurobiol Aging 31:1401–1418
12. Jiang Z, Zheng Y, Tan H, Tang B, Zhou H (2016) Variational deep embedding: an unsupervised and generative approach to clustering. In: International Joint Conference on Artificial Intelligence, pp 1–22
13. Meng X, Liu Y, Gao X, Zhang H (2014) A new bio-inspired algorithm: chicken swarm optimization. In: Tan Y, Shi Y, Coello CAC (eds) Advances in Swarm Intelligence. ICSI 2014. Lecture Notes in Computer Science, vol 8794. Springer
14. Dhanusha C, Senthil Kumar AV (2019) Intelligent intuitionistic fuzzy with elephant swarm behaviour based rule pruning for early detection of Alzheimer in heterogeneous multidomain datasets. Int J Recent Technol Eng (IJRTE) 8(4):9291–9298. ISSN 2277-3878
15. Dhanusha C, Senthil Kumar AV (2020) Enriched neutrosophic clustering with knowledge of chaotic crow search algorithm for Alzheimer detection in diverse multidomain environment. Int J Sci Technol Res (IJSTR) 9(4):474–481. ISSN 2277-8616
16. Dhanusha C, Senthil Kumar AV, Musirin IB (2020) Boosted model of LSTM-RNN for Alzheimer disease prediction at their early stages. Int J Adv Sci Technol 29(3):14097–14108

Standard Analysis of Document Control as Information According to ISO 27001 2013 in PT XYZ Pangondian Prederikus, Stefan Gendita Bunawan, Ford Lumban Gaol, Tokuro Matsuo, and Andi Nugroho

Abstract Documentation is part of a series of information systems. PT XYZ has a lot of information from various areas of the organization that needs to be documented in softcopy or hardcopy form. Information security in documentation requires the application of standards in document control so that the information held can be stored and maintained properly. The application of information security in documentation can comply with ISO 27001:2013 clause 7.5 regarding documented information. PT XYZ strives to apply these standards so that the documentation in information storage can be maintained properly. According to our results, information documents must conform to the information security management system requirements of confidentiality, integrity, and availability in terms of access to documentation, document owner responsibility, and document destruction.

Keywords ISO27001 · Information security management system · Document information


1 Introduction

PT XYZ is a financial technology company in Indonesia with a simple mission: it is an online marketplace that brings together people who have funding needs and those who are willing to lend their funds. PT XYZ has implemented an information system in the marketplace field that supports the activities within it, making the performance of the information system important. However, along with the development of technology come threats and risks from various media that threaten to paralyze the activities within the system. PT XYZ holds a lot of information obtained from its business activities as a P2P lending marketplace (Fig. 1). This requires documentation in order to ensure that the information it holds is properly stored in its preparation, security, and control. The role of risk management, and the provisions necessary to implement it, namely the ISO 27001 Information Security Management System requirements, is becoming increasingly important. PT XYZ applies the ISO 27001 standard for documented information to build and maintain an Information Security Management System (ISMS). An Information Security Management System comprises the organizational components used to manage and control information security risks and to protect the confidentiality, integrity, and availability of information.

Fig. 1 Organization in PT XYZ as P2P lending platform [1]


2 Literature Review

2.1 Information Security (IS)

Information security is action taken to prevent fraud (cheating) by detecting it in an information-based system (G. J. Simons). According to the Committee on National Security Systems, information security is the protection of information and of the elements within it, including the hardware. Information security is a step toward protecting against various threats so as to ensure business processes, minimize the risks that will occur, and increase opportunities for investment and business (ISO/IEC 17799:2005) [2].

2.2 Information Security Planning

Information system security is a strategy implemented to reduce the weaknesses, threats, and risks of developing information technology. Planning information system security requires a process of risk mitigation, control, and evaluation [4]. Information system security planning is important because it is the basis for developing a business. Information system security plans are needed in the form of information security policies, standards and procedures, information security controls on human resources (people), and information system security technology [3]. Information system security planning needs to collaborate with and integrate the roles of policy, technology, and people: people carry out processes that must follow the applicable policies as a guide for using technology safely (Fig. 2). Information security planning can provide the right steps to reduce risk [2], so security controls are needed in the implementation of the plan, as follows:
(a) Administrative security
(b) Logical control
(c) Intrusion detection
(d) Anti-virus
(e) Physical control

2.3 ISO 27001

ISO 27001 is a standard for the implementation of an information security management system, better known as an Information Security Management System (ISMS), that


Fig. 2 People, policy, and technology model [2]

applies internationally [3]. As a standard measure, ISO 27001 specifies the requirements that must be met regarding the establishment, implementation, maintenance, and improvement of an information security management system in an organizational context. The requirements specified in ISO 27001 for implementing an information security management system are general, so that they can be used in all organizations. As technology develops, ISO 27001 continues to be updated and improved. This can be seen in the changes between ISO 27001:2005 and ISO 27001:2013, namely in the number of controls and domains: ISO 27001:2013 has 114 controls over 14 domains, while ISO 27001:2005 has 133 controls over 11 domains. Some other changes are shown in Fig. 3. ISO 27001 has ten short clauses:
(a) Process and process approach
(b) Process approach impact
(c) The Plan-Do-Check-Act cycle
(d) Context of the organization
(e) Leadership
(f) Planning
(g) Support
(h) Operation
(i) Performance evaluation
(j) Improvement

Fig. 3 Comparison between ISO 27001: 2013 and ISO 27001: 2005

2.4 Information Security Management System (ISMS)

An Information Security Management System is a series of activities in a business risk approach consisting of planning (Plan), implementation and operation (Do), monitoring and review (Check), and maintenance and enhancement or development (Act) of the organization's information security [7]. Implementing an Information Security Management System based on the Plan-Do-Check-Act (PDCA) concept produces a model for applying the principles of the OECD Guidelines (2002), which govern risk assessment, security design and implementation, security management, and reassessment. The process of implementing a security management system is explained below [8] (Fig. 4).

Fig. 4 PDCA model in Information Security Management System


Fig. 5 ISO 27001: 2013 standard documentation, implementation, and audit requirement classified

3 Research Methodology

This research conducts an analysis to describe and define the control of the Information Security Management System (ISMS) [5, 6] within PT XYZ. The research is based on a case study of the application of document ownership standards in each organizational area according to the ISO 27001:2013 standard—clause 7.5, documented information. The researchers use a descriptive analysis method, in which the data used for the research are based on the actual state of the company and on facts [10–13].

3.1 Data Collection

The data is collected at the secondary data collection stage. Secondary data is obtained by taking data directly from the sources available at PT XYZ. The data obtained are the document control standards relating to risk management.


3.2 Data Analysis Method

Based on the data collected and the literature review regarding ISO 27001:2013, the researchers describe the application of PT XYZ's documentation standards according to ISO 27001:2013 [14–16], adjusting the application of PT XYZ's information documentation standards to reach the following conclusions:
(a) To know whether the documented information has been identified in accordance with ISO 27001:2013 clause 7.5.1—general documentation
(b) To find out whether the documented information at PT XYZ has a standard format for identifying documented information in accordance with ISO 27001:2013 clause 7.5.2—creating and updating documented information
(c) To find out whether the documented information at PT XYZ has controls in accordance with ISO 27001:2013 clause 7.5.3—control of documented information.

4 Result and Discussion

4.1 General Documentation

General provisions in the PT XYZ document format require that information relating to the document title, document date, document compiler, and document number be identified. The document categories at PT XYZ are as follows (Table 1).

Table 1 Document categories

Level | Document categories
I     | Company regulations
II    | 1. Parent document: policy; 2. Derivative documents: (a) strategy, (b) program, (c) planning, (d) guide, (e) standard
III   | 1. Parent document: procedure; 2. Derivative documents: (a) work instruction, (b) form/log
IV    | 1. Internal memo; 2. Report; 3. Other documents


Table 2 Document numbering provisions

No | Document categories | Numbering provisions
1  | Any documents       | Document initials/YY/MM/XXXX
   |                     | • YY: last two figures of the year the document was published
   |                     | • MM: month in Roman numerals
   |                     | • XXXX: document serial number, numeric, according to the requirements of the document compiler
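As an illustration only, a document number following the convention in Table 2 could be generated as below; the document initials "POL" and the serial handling are hypothetical examples, not PT XYZ's actual values.

```python
# Hypothetical illustration of the numbering convention in Table 2.
from datetime import date

ROMAN = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X", "XI", "XII"]

def document_number(initials: str, serial: int, d: date) -> str:
    yy = f"{d.year % 100:02d}"          # last two digits of the year
    mm = ROMAN[d.month - 1]             # month in Roman numerals
    return f"{initials}/{yy}/{mm}/{serial:04d}"

print(document_number("POL", 17, date(2021, 3, 9)))  # -> POL/21/III/0017
```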

4.2 Creating and Updating The document numbering provisions are as follows:

(a) Document numbering is based on the rules in the document numbering provisions table.
(b) The parent (master) document is a document that contains the main rules and is more general.
(c) Derivative documents are documents that contain more detailed explanations or items that help implement the main, more general rules.

PT XYZ's document numbering provisions are as shown in Table 2. The provisions for writing document contents are as follows:

(a) The font used for creating documents is uniform: Calibri.
(b) All foreign terms included in a document must be written in italic format.
(c) The rules for numbering before a title or sentence follow the hierarchy below:
    I.  Bab (chapter)
    A.  Subbab (sub-chapter)
    1.  Body
    (a) Body
    (1) Body
    (a) Body

The writing provisions of PT XYZ are as follows (Table 3):

4.3 Control of Documented Information

The document handling standard requires document validation by each user who plays an important role in creating the document. Document validation is verified by signature [17–19] (either a wet or an electronic signature). The document validation matrix of PT XYZ is shown in Fig. 6. Some steps to be considered in the document review are as follows:


Table 3 Terms of writing

Items          | Font size | Change case          | Format | Paragraph  | Spacing
Document cover |           |                      |        |            |
- Title        | 16        | Capital              | Bold   | Center     | Before: 0, After: 0
- Date, No     | 12        | Capitalize each word | Bold   | Center     | Before: 0, After: 0
- Name         | 14        | Capital              | Bold   | Center     | Before: 0, After: 0
Fill document  |           |                      |        |            |
- Bab          | 14        | Capital              | Bold   | Align left | Before: 6, After: 6
- Subbab       | 12        | Capitalize each word | Bold   | Align left | Before: 6, After: 6
- Body         | 11        | Sentence case (EYD)  | -      | Justify    | Before: 0, After: 0
- Table        | 10        | Sentence case (EYD)  | -      | Text: align left; Numbers: center | Before: 0, After: 0

(a)

(b)

Documents need to be reviewed at least once a year to view compliance with current conditions by taking into account the company’s internal and external conditions The review must be carried out by the owner of the document and can involve parties related to the document or process relevant to the document.

730

P. Prederikus et al.

Updating documents is a matter of document control standards, namely as follows : (a)

(b)

All document changes must be noted in the document control page section of each document authorization section. On the document control page, there is a document version table that can be filled in to explain the date the new document was passed, the version number of the new document, a description of changes made to the old document, and the name of the document compiler. The documents that have been certified must be communicated to the relevant parties through media provided by PT XYZ.

Document retention is also a document control standard based on ISO 27001: 2013 viz: (a) (b) (c)

The document owner is responsible for determining the retention period for all documents managed based on the use of the value of each document Determination of document retention period is categorized as follows: Documents relating to finance, customer transactions, procurement and using the investment budget/capital expenditure (CAPEX) [20–23], and the law have minimum retention of 1tenyears. 1. 2.

(d)

Determination of retention period for documents may change at any time based on analysis of the document owner and related parties if: 1. 2. 3.

(e) (f) (g)

(h) (i)

(j)

Documents relating to proof of ownership of company assets have an unlimited retention period. Documents that are not listed in the document retention period have retention periods according to their use as determined by the document owner or can have a retention period of at least three years.

There is still a need by the company The supporting is evidence for legal, financial, or audit needs. Any official request from the customer

Determination of the retention period for company documents is documented in the Information Register Asset The document holder is responsible for reviewing documents that have entered a retention period of at least two (2) years. All documents that have entered the retention period will be stored in a locked storage cabinet in each area organization or can use document storage services provided by these third parties. Documents in electronic form will follow the rules related to managing electronic information. Documents that have entered the retention period must be arranged by function and ordered by year, month, and alphabetically to facilitate the search for documents when needed. Documents that have expired retention can be destroyed by means: 1.

Hardcopy documents are destroyed using a paper shredder

Standard Analysis of Document Control as Information …

2.

731

Softcopy documents are permanently deleted

5 Conclusion The application of documented information standards at PT XYZ based on ISO 27001: 2013 viz: (a) (b)

(c)

General provisions documented information is identified which corresponds to clause 7.5.1—general documentation Documentation of information created and added has a standard format in the categories of documents, numbering, writing, composing documents, along with the date of documentation which is following clause 7.5.2—creating and updating documented information [24–26] Information documentation must also have controls to comply with Information Security Management System standards in confidentiality, integrity, and availability in terms of access to documentation, the responsibility of document owners, and the destruction of documents. This standard is under clause 7.5.3—control of documented information [27].

References 1. Putra AA, Nurhayati OD, Windasari IP (2016) Planning and implementing information security management systems using the ISO/IEC 2007 framework. J Technol Comput Syst 4(1):60–66 2. Information Ass Spring, Security planning and risk analysis, CS461, 2008 3. Bakri M, Irmayana N (2017) Analysis and implementation of SIMHP BPKP information security management system using ISO 27001 standards. Jurnal TEKNO KOMPAK 11(2):41– 44 4. Sodiq A (2009) Information security aspects 2009 5. Information Security Directorate Team, Guidelines for Implementing Information Security Governance for Public Service Providers. Kominfo, Jakarta, 2011 6. Tuti Hartati (2017) Academic ınformation security management system planning using ISO 27001: 2013. KOPERTIP Sci J Inform Comput Manag 1(2):63–70. ISSN 2549-9351 7. Wibowo AM (2005) ISO 27001 ınformations security management systems 8. Al-Dhahri S, Al-Sarti M, Abdaziz A (2017) Information security management system. Int J Comput Appl 158(7):29–33. https://doi.org/10.5120/ijca2017912851 9. Šalgoviˇcová J, Prajová V (2012) Information security management (ISM). December 2012 Research Papers Faculty of Materials Science and Technology Slovak University of Technology, vol 20(Special Number), pp 114–119 10. Disterer G (2013) ISO/IEC 27000, 27001 and 27002 for information security management. J Inf Secur 04(02):92–100. https://doi.org/10.4236/jis.2013.42011 11. Siregar KR (2014) Analysis and ımplementation of ınformation security through quality standards ISO 27001 for ınternet services (Case Study of IP Security Networks and Services PT.Telkom). In: International seminar and conference on learning organization, Ritz Carlton, Jakarta, Indonesia, vol 2. https://doi.org/10.13140/2.1.3629.3923

732

P. Prederikus et al.

12. Sheikhpour R (2012) A best practice approach for integration of ITIL and ISO/IEC 27001 services for information security management. Indian J Sci Technol 5(2):2170–2176. https:// doi.org/10.17485/ijst/2012/v5i3.1 13. Ozdemir Y, Basligil H, Alcan P (2014) Evaluation and comparison of cobit, itil and ISO27K1/2 standards within the framework of information security. Int J Tech Res Appl 11:22–24 14. Hoppe OA, van Niekerk J, von Solms O (2002) The effective ımplementation of ınformation security in organizations. In: Proceedings of the IFIP TC11 17th ınternational conference on ınformation security: visions and perspectives. https://doi.org/10.1007/978-0-387-35586-3_1 15. Ahmad A, Maynard SB, Park S (2014) Information security strategies: towards an organizational multi-strategy perspective. J Intell Manuf 25(2). https://doi.org/10.1007/s10845-0120683-0 16. Singh AN, Gupta MP, Ojha A (2014) Identifying factors of "organizational information security management". Eur J Mark 27(5). https://doi.org/10.1108/JEIM-07-2013-0052 17. Amarachi AA, Okolie SO, Ajaegbu C (2013) Information security management system: emerging ıssues and prospect. J Comput Eng (IOSR-JCE) 12(3):96–102. e-ISSN 2278–0661, p-ISSN 2278-8727 18. Mihai I-C (2015) Int J Inform Sec Cybercrime IV(1) 19. Laybats C, Tredinnick L (2016) Information security. Bus Inf Rev 33(2):76–80. https://doi. org/10.1177/0266382116653061 20. Siponen M, Oinas-Kukkonen H (2007) A review of ınformation security ıssues and respective research contributions. ACM SIGMIS Database 38(1):60–80. https://doi.org/10.1145/121 6218.1216224 21. Mishra S (2011) Information security effectiveness: a research framework. In: Issues in information systems. Robert Morris University, Lewis Chasalow 22. Gladden M (2017) Introduction to ınformation security in the context of advanced neuroprosthetics. In: The handbook of ınformation security for advanced neuroprosthetics, 2nd ed. Synthypnion Academic, pp 42–60 23. Seemma PS, Sundaresan N, Sowmiya MO (2018) Overview of cyber security. IJARCCE 7(11):125–128. https://doi.org/10.17148/IJARCCE.2018.71127 24. Yildirim EY (2016) The ımportance of ınformation security awareness for the success of business enterprises. Adv Human Factors Cybersec 211–222. https://doi.org/10.1007/978-3319-41932-9_17 25. Alhassan MM (2017) Information security in an organization. In: The future of big data (Using Hadoop Methods). Tamale Technical University, Alexander Adjei-Quaye 26. Keller S, Powell A (2015) Information security threats and practices in small businesses. Inf Syst Manag 22(2):7–19. https://doi.org/10.1201/1078/45099.22.2.20050301/87273.2 27. Bakkar M, Institute H, Alazab A (2019) Information security: definitions, threats and management in dubai hospitals context. In: 2019 cybersecurity and cyberforensics conference (CCC). https://doi.org/10.1109/CCC.2019.00010

Comparative Asset Pricing Models Using Different Machine Learning Tools Abhijit Dutta and Madhabendra Sinha

Abstract The study tries to estimate the returns using three known models, namely the single-index model, capital asset pricing model (CAPM), and arbitrage pricing theory (APT) model. The data for the study was taken from Nifty. The top 100 largecap stocks were taken into account. The Nifty index was taken as the proxy for the market return. All data was taken with one period lag to make it stationary. These models are subjected to three software-based machine learning, namely decision tree, neural network, and ordinary least square. The results were iterated from the three best returns, and the area of the curve was estimated. The academic point score for the area was taken into consideration. The value 0.8 was taken as the cutoff for the acceptance of the model. The result shows that CAPM outperforms all the models used under different machine learning mechanisms. In the process, it was observed that the arbitrage pricing theory model too performs well. However, while we use the ordinary least square, the single-index model has the best performance. It is kind of a foregone conclusion since the single-index model is based on the return of stock and is close to paper trading. Keywords Single-index model · Capital asset pricing model · Arbitrage pricing theory · Decision tree · Neural network · Ordinary least square

1 Introduction A comparative analysis of machine learning has been conducted in this paper on asset pricing models (APA). The idea behind this is to use an iterative process using several machine learning tools that can predict the asset pricing in an appropriate method for the investors. The investors are perennially in search of a sensitive method to understand asset pricing. Asset pricing fundamental method is to develop a way to A. Dutta Department of Commerce, Sikkim University, Gangtok, India e-mail: [email protected] M. Sinha (B) Department of Economics and Politics, Visva-Bharati University, Santiniketan, West Bengal, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_55

733

734

A. Dutta and M. Sinha

understand the behavior risk premium. However, the risk premium is very different to measure since prices are dominated by the unforecastable and obscure risk premium. Snoek et al. [1] in their study observe that any machine learning mechanism sounds like a “Black art” where enormous expertise is required to create a process tentatively to function. Most of them work through a Gaussian process through a Bayesian optimization process. Hargreaves and Reddy [2] differential processes of the method are required to find the optimal method of using machine learning in the financial market. This is a clue that has been actively used in this paper. In the context of the Indian stock market, Dutta et al. [3] used a simpler process of machine learning and observed the retail investors’ behavior during uncertainty. The factors identified by the paper were regret, panic, cognitive dissonance, herding, anchoring, and heuristics. It is also observed that panic and herding are the results of deep regret. Definitions of “machine learning” are many and are inchoate and very often specific to the need for the research. We use the term in two contexts for this paper; (a) a diverse collection of high-dimension models for statistical prediction and (b) use known techniques and software to understand the extent to which these leaders can be assimilated and iterate the best possible machine learning technique. Fernández-Delgado et al. [4] took up the issue of use of various methods of machine learning and its usability. This was supported by various other authors such as Chandrashekar and Sahin [5], Kazemi and Sullivan [6], Wo´zniak et al. [7], Gong et al. [8]. These studies tried to understand as to why it is needed to have multiple machine learning techniques and their effectively. The rest of the paper is structured as follows. The next section attempts to discuss the method of study followed by the descriptions of different machine learning tools. Data specification is mentioned before documenting the results with discussions. The final section concludes the study.

2 Method of Study We have used three models, namely capital asset pricing model (CAPM), Sharpe’s single-index model, and arbitrage pricing model (APM). The paper tries to use the structure of these three models in three different expert mechanics of machine learning methods. These models were trained for estimating the asset price return for the data specified. Three best estimations were used to iterate the return in the study.

2.1 Asset Pricing Models The specification of the asset pricing model (APM) used for the study is discussed. The models used for the study are largely equity models to match the specification of the data.

Comparative Asset Pricing Models Using Different Machine …

2.1.1

735

Capital Asset Pricing Model

The model specification is given below: E( Ri ) = R f + βi [E(Rm )]−R f

(1)

In Eq. (1), the expected income/return on capital assets is E (Ri ) and the risk-free rate of interest like interest on government bonds is denoted by Rf . For this study, the prime disposition rate in India has been used. The sensitivity of the expected plus come back to the expected excess market return is β i . or, βi =

Cov (Ri , Rm ) ← Var (Rm )

(2)

In Eq. (2), the expected return of the market is E (Rm ), Restating Eq. (1), we get Eq. (3) as follows:   E(Ri ) − R f = βi E(Rm ) − R f

(3)

E (Rm ) − Rf = market premium E (Ri ) − Rf = risk premium.

2.1.2

Single-index Model

This model was proposed by Sharpe [9] to estimate the return to a stock. The model specification is;   rit −r f = ai + βi rmt − r f + εit

(4)

  εit ∼ N 0, σ12 In Eq. (4), the return of stock i for the period t is represented by r it , the market portfolio’s risk-free return in period t is represented by r f , the stocks’ abnormal return is represented by ai, , the stock’s beta or risk in the market and its return are denoted by β i , r it – r f is the surplus return to the stock, r mt − r f is the surplus return to the market, and the residual (random) return, which are assumed independently normally distributed and is white noise and is represented σi , is denoted by 1it . In turn, the equation shows that the stock returns are effected by the beta and have a firm-specific unexpected component (residual).

736

2.1.3

A. Dutta and M. Sinha

Arbitrage Price Theory Model

The arbitrage price theory (APT) model states that asset return follows a relation between expected return and sensitive factors. ri = a j + βi1 f 1 + βi2 f 2 + . . . + βjn f n + ε j

(5)

In Eq. (5), a_j is a constant for the asset, f_n is the systematic factor, β_jn is the responsiveness of the jth asset to factor n, known as the factor loading, and ε_j is the asset's idiosyncratic random shock with mean zero; the idiosyncratic shocks in Eq. (5) are assumed to be uncorrelated. The arbitrage pricing theory holds that if asset returns follow such a factor structure, then the relation between expected returns and the factor sensitivities is as given in Eq. (6):

E(r_j) = r_f + β_j1 RP_1 + β_j2 RP_2 + … + β_jn RP_n    (6)

where RP_n is the risk premium of factor n and r_f is the risk-free rate. Hence, the expected return of an asset j is a linear function of the asset's sensitivities to the n factors. Arbitrage in this context is the exploitation of the positive expected return from overrated or underrated securities in an inefficient market, with no increase in risk and no additional investment.
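A minimal sketch of Eq. (6), with hypothetical factor loadings and risk premia:

```python
import numpy as np

risk_free = 0.0004                                # hypothetical daily risk-free rate
loadings = np.array([1.1, 0.4, -0.2])             # beta_j1 .. beta_jn (assumed)
risk_premia = np.array([0.0005, 0.0002, 0.0001])  # RP_1 .. RP_n (assumed)

# E(r_j) = r_f + sum_n beta_jn * RP_n
expected_return = risk_free + loadings @ risk_premia
print(f"E(r_j) = {expected_return:.6f}")
```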

3 Different Machine Learning Tools

Machine learning is the study of algorithms that improve automatically through experience. Backed by artificial intelligence, machine learning models are trained on sample datasets in order to make predictions. For this study, we used two popular statistical packages, EViews V8 and SPSS V21, with add-on decision tree features, to train the data on the specified models using three distinct machine learning techniques.

3.1 Decision Trees

Decision trees are algorithms from predictive modeling. Studies by Tsai and Wang [10] show that decision trees can describe cause-and-effect relations. The algorithm is recursive in nature: each group that is formed can be subdivided using the same splitting strategy. This paper focuses on binary classification. The function splits the specification models into the most homogeneous branches, i.e., groups with similar responses, as measured by a "Gini" score.

A "Gini" score gives an idea of the capability of a split to divide mixed responses and is represented by Eq. (7):

G = Σ_k p_k (1 − p_k)    (7)

where p_k is the fraction of inputs of class k in the group. If p_k is either 1 or 0, the group contains all inputs of a single class and G = 0, while a node with a 50–50 class split has the worst purity; for binary classification this means p_k = 0.5 and G = 0.5. In this study, the return calculated through the models was analyzed with the decision tree to obtain a chance occurrence. The best chance found by the technique is taken as the return substitute for the group and, hence, the closest return. The area of the distribution is observed to find the extent to which the return can be used for analysis.
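A minimal sketch of Eq. (7), assuming only the class proportions of a group are known:

```python
# G = sum_k p_k * (1 - p_k); 0 for a pure group, 0.5 for a 50-50 binary split
def gini(proportions):
    return sum(p * (1 - p) for p in proportions)

pure_group = [1.0, 0.0]    # all inputs of one class -> best purity
mixed_group = [0.5, 0.5]   # worst purity for binary classification
print(gini(pure_group))    # 0.0
print(gini(mixed_group))   # 0.5
```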

3.2 Neural Network

A neural network is a machine learning mechanism that has gained predominance in stock trading in recent times. The technique replicates the human brain and nervous system by using a backpropagation algorithm. A neural network is usually arranged in layers, and these layers are made of multiple nodes, each applying an activation function, with weighted input–output parameters associated with the connections. During the learning phase, the network learns by adjusting the weights so as to predict values correctly. In this study, we use the multilayer perceptron model, as used by Quah [11], to predict stock returns; Quah used it to show a positive relation between stock selection and prediction. We trained the three specified models using the multilayer perceptron to reach the returns of the selected stocks. Under this method, weighted inputs (in this case, market returns) are fed to a hidden layer, and the resulting weighted outputs are fed to the next hidden layer; this is repeated until the final layer gives a single value. The weights of the final layer give the forecast for a given input sample. The area under the curve is then examined to understand the predictive strength of the model.
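The paper trained its models in SPSS/EViews; the sketch below uses scikit-learn's multilayer perceptron as an illustrative stand-in, on randomly generated data, to show the layered weight-adjustment idea described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(0, 0.01, size=(500, 1))                     # market excess returns (synthetic)
y = (X[:, 0] + rng.normal(0, 0.005, 500) > 0).astype(int)  # 1 if stock excess return positive

# Two hidden layers; weights are adjusted by backpropagation during fit()
model = MLPClassifier(hidden_layer_sizes=(8, 4), max_iter=2000, random_state=0)
model.fit(X, y)
print("in-sample accuracy:", model.score(X, y))
```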

3.3 Ordinary Least Square

Ordinary least squares is a linear least squares method used to estimate the unknown parameters of a linear regression model.


4 Data Specification

The data for the study has been collected from free Web sites. Daily returns of the Nifty top 100 large-cap stocks were used to estimate stock returns, and the Nifty index for the corresponding period was used as the market return proxy, as required by the CAPM. The data was made stationary using a one-period lag. The period of study is 2014–2018 (both years inclusive). For the CAPM, we have taken the PLR of India, 9.4 per cent, as the risk-free rate of return. The study uses the area under the curve to iterate the three best returns given by each model in the named machine learning paradigm. The area under the receiver operating characteristic (ROC) curve summarizes a binary classification system's ability to discriminate, plotting the true positive rate against the false positive rate at various threshold settings. An area of 1 represents a perfect classifier, and an area of 0.5 represents one no better than chance.
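A hedged sketch of the area-under-the-curve evaluation, with hypothetical labels (whether the realized return exceeded a cutoff) and predicted scores:

```python
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                      # hypothetical binary outcomes
y_score = [0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.8, 0.5]     # hypothetical model scores

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")   # 1.0 = perfect ranking, 0.5 = no better than chance
```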

5 Results and Discussion

In this study, the two most important parameters are the return and the area under the curve. As a standard test, we took the academic point system for the area under the curve: excellent = 0.90–1, good = 0.80–0.90, fair = 0.70–0.80, poor = 0.60–0.70, fail = 0.50–0.60. We iterate the three best predictions from the three models for the return under the three machine learning methods in Table 1.

Table 1 Results for the models

Algorithm | Return by decision tree | Area under the curve | Return by neural network | Area under the curve | Return by ordinary least square | Area under the curve
Single-index model (SIM) | 0.582 | 0.728 | 0.562 | 0.862 | 0.538 | 0.969
Capital asset pricing model (CAPM) | 0.462 | 0.826 | 0.521 | 0.870 | 0.522 | 0.932
Arbitrage pricing theory (APT) | 0.523 | 0.785 | 0.432 | 0.731 | 0.368 | 0.726

Source: Computed. Return is taken as the beta specification of the model

Table 2 Rank of performance of the models under different machine learning mechanisms

Decision tree | Rank | Neural network | Rank | OLS | Rank
CAPM | 1 | CAPM | 1 | SIM | 1
APT | 2 | APT | 2 | CAPM | 2
SIM | 3 | SIM | 3 | APT | 3

Source: Compiled from calculations

We observe that, under the decision tree, the CAPM yields an area under the curve of 0.826 even though it has a lower return specification; we therefore give a go-ahead to the CAPM under the decision tree approach. With the neural network, we again give a go-ahead to the CAPM, as its area under the curve is the highest. In the case of OLS, however, the single-index model yields better results, with an area under the curve of 0.969. Table 2 ranks the performance of the various models under the corresponding machine learning mechanisms. It can be concluded that, while the estimations are largely based on regression close to the ordinary least squares technique, the CAPM yields the best result, followed by the APT. These methods can thus be used by investors for appropriate decision making.

6 Conclusion

This study estimates returns using three known models, namely the single-index model, the capital asset pricing model (CAPM), and the arbitrage pricing theory (APT) model. These models were subjected to three software-based machine learning techniques, namely decision tree, neural network, and ordinary least squares; the results were iterated from the three best returns, and the area under the curve was estimated. The academic point score for the area was taken into consideration, with 0.8 as the cutoff for acceptance of a model. The results show that the CAPM outperforms the other models under the different machine learning mechanisms.

References

1. Snoek J, Larochelle H, Adams RP (2012) Practical Bayesian optimization of machine learning algorithms. In: Advances in neural information processing systems, pp 2951–2959
2. Hargreaves C, Reddy V (2017) Machine learning application in the financial markets industry. Indian J Sci Res 17(1):253–256
3. Dutta A, Sinha M, Gahan P (2020) Perspective of the behaviour of retail investors: an analysis with Indian stock market data. In: Computational intelligence in data mining. Springer, Singapore, pp 605–616
4. Fernández-Delgado M, Cernadas E, Barro S, Amorim D (2014) Do we need hundreds of classifiers to solve real world classification problems? J Mach Learn Res 15(1):3133–3181


5. Chandrashekar G, Sahin F (2016) A survey on feature selection methods. Int J Comput Electr Eng 8:34–56
6. Kazemi V, Sullivan J (2014) One millisecond face alignment with an ensemble of regression trees. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1867–1874
7. Woźniak M, Graña M, Corchado E (2014) A survey of multiple classifier systems as hybrid systems. Inf Fusion 16:3–17
8. Gong Y, Wang L, Guo R, Lazebnik S (2014) Multi-scale orderless pooling of deep convolutional activation features. In: European conference on computer vision. Springer, Cham, pp 392–407
9. Sharpe WF (1963) A simplified model for portfolio analysis. Manage Sci 9(2):277–293
10. Tsai CF, Wang SP (2009) Stock price forecasting by hybrid machine learning techniques. In: Proceedings of the international multiconference of engineers and computer scientists, vol 1, no 755, p 60
11. Quah TS (2008) DJIA stock selection assisted by neural network. Expert Syst Appl 35(1–2):50–58

The Digital Fraud Risk Control on the Electronic-based Companies

Ford Lumban Gaol, Ananda Dessi Budiansa, Yohanes Paul Weniko, and Tokuro Matsuo

Abstract Cybercrime has become a main concern of agencies and businesses today. According to the Global Risk Management Report, 71% of organizational threats occur because of the failure of the company to monitor its software, while 74% are attributed to IT systems not being updated. This research focuses on Indonesia: according to the Patroli Siber Web site, 815 cases of disseminated offensive material have been reported, followed by web fraud with 530 cases, unauthorized access with 104 cases, data theft with 36 cases, device reduction with 17 cases, and data manipulation with 54 cases. Among platforms, WhatsApp ranked first with 1361 instances, followed by Instagram with 1288, Facebook with 596, and email with 108. This research is carried out to provide input to non-bank organizations on future preventive steps against the risk of cybercrime. The outcome is that non-bank organizations need to take certain steps for risk reduction.

Keywords Risk mitigation · Cybercrime · Cyber threats · Non-banking organization

F. L. Gaol, Computer Science Department, Bina Nusantara University, Jakarta 11480, Indonesia, e-mail: [email protected]
A. D. Budiansa and Y. P. Weniko, Information System Management Department, BINUS Graduate Program—Master of Information Systems, Bina Nusantara University, Jakarta 11480, Indonesia, e-mail: [email protected]; [email protected]
T. Matsuo, Advanced Institute of Industrial Technology, Shinagawa City, Japan; City University of Macau, Xian Xing Hai, Macau; Asia University, Taichung City, Taiwan, e-mail: [email protected]


1 Introduction

Cybercrime perplexes society today. The global risk management survey ranks cyber risk among the top ten global risks, and cyber danger can be inflicted through the smallest thing in a firm. According to the Global Risk Management Report, 71% of organizational threats occur because of the failure of the company to monitor its software, while 74% are attributed to IT systems not being updated [1]. PwC noted a substantial rise in security incidents affecting companies and individuals over the last five years. In 2016 the graph peaked at 69%, followed by a promising fall, but from 2018 to 2019 there was a 7% rise, from 44% to 51% [2] (Fig. 1). PwC also reported the perceived impact on respondents who had witnessed a cyber incident. The focus is on a few key points, such as the company's essential assets being inaccessible for an extended period of time, financial losses, loss of confidential information, and loss of personal data [2]. Kaspersky developed a portal as a special project that offers a broad variety of information tools, ranging from intelligence on emerging threats and risk reduction strategies to advisory and investigative services; Kaspersky shows a survey on the portal's dashboard [3] (Fig. 2). According to Kaspersky, Indonesia is the sixth nation by share of computer systems targeted by cybercrime, at 30.6%, compared with 16.3% of the world's computer systems compromised by cybercrime. Indonesia took the fifth position after Vietnam in January 2019, but the percentage of attacks on its systems did not decrease; this shows that Indonesia is still unable to deal with cybercrime [3] (Fig. 3). It appears from the figure that the Internet, together with hardware such as flash disks and email, is the main source of cybercrime [3] (Fig. 4).

Fig. 1 PwC cybercrime survey [2]

Fig. 2 Top ten countries of industrial computers attacked [3]

Fig. 3 Threat sources [3]


Fig. 4 Malware platform [3]

so Indonesia still ranks sixth among the top ten countries whose system is under attack [3]. The police of the Republic of Indonesia paid more attention to cybercrime while creating a portal called “Cyber Police.” There was a spread of provocative content of 815 cases in the portal reported during 2020, followed by online fraud in as many as 530 cases, unauthorized access in 104 cases, data theft in 36 cases, device reduction in 17 cases, and data manipulation in as many as 54 cases [4] (Fig. 5). The Indonesian Crime Patterns map reported in the Cyber Patrol as of February had 513 cases, decreased to 469 cases in March 2020, and decreased to 460 cases back in April 2020 [4] (Fig. 6).

Fig. 5 Types of cybercrimes reported by the public to police [4]


Fig. 6 Indonesia’s cybercrime trend of 2020 [4]

After a breakdown by channel, the WhatsApp social network ranked first with 1361 cases, followed by Instagram with 1288 cases, Facebook with 596 cases, and email with 108 cases [4] (Fig. 7). Cybercrime has become the primary focus of organizations and businesses. Every mechanism that serves both businesses and individuals carries a right to privacy, and security measures are expected to cope with cybercrime; the best move to defend against it is to take preventive steps. Since cybercrime is not new in the banking world, banks would seem to be better equipped to deal with it. For non-bank organizations, however, cybercrime is a new thing, especially for fairly

Fig. 7 Reported platform type [4]


small organizations: not all companies can cope with the cybercrime that is going to happen. This research is, therefore, focused on Indonesia and is carried out to provide input to non-bank organizations on future preventive steps against the risk of cybercrime.

2 Literature Review

2.1 Organizations of Non-banks

Non-bank organizations are business organizations that perform financial activities, directly or indirectly, by collecting community funds and distributing them back to the community for productive activities. Groups of non-bank institutions include:

1. Insurance firms, which provide services to cover risk losses, benefit losses, and legal liability.
2. TASPEN (savings and pension insurance), a legal agency administering and running a pension scheme.
3. Savings and loan cooperatives, legal bodies that raise mutual funds and lend them back to members or society.
4. Stock exchanges, places for the sale and purchase of securities in the capital market, i.e., legal bodies offering facilities for trading securities.
5. Factoring firms, business entities that carry out financing activities in the form of the purchase, transfer, and management of receivables.
6. Venture capital firms, companies that provide funding to their business partners, where the financing takes the form of capital participation.
7. Pawnshops, undertakings that lend funds against the security of movable goods from their customers.
8. Leasing companies, which provide consumers with services for buying in installments [5].

2.2 Cybercrime

"Cyber" derives from a Greek word connoting skill or art (as in "cybernetics"), and criminality here refers to crime. Cybercrime is crime committed against or through the information system of a corporation or organization: crime based on the use of cyberspace, an interconnected space in which networks are used to conduct everyday activities. The growth of cyberspace has both positive and negative results, and cybercrime is caused by the negative effect. There are several types of cyber threats, namely: (1) hardware threats, which attack hardware and are triggered by system installation activities; (2) software threats, which attack software and are triggered by the introduction of tools used to carry out information theft, destruction, and exploitation; and (3) data/information risks, arising from the distribution of data or information for particular interests [6].


According to PwC, the cyber threat sources most commonly found in a business are: (1) employees/insiders with unintended deeds, (2) criminal groups, (3) hackers, (4) the number of emerging technologies, and (5) statutory/regulatory criteria [2]. Other sources of cyber threats include international intelligence, disgruntled parties, investigative reporters, terrorist organizations, hacker operations, and organized crime gangs. Potential crimes include the failure of data information systems, attacks on military operations, and other intrusions using computer networks and the Internet [6].

2.3 Mitigation of Risk

Risk mitigation is an intervention or initial reaction in anticipation of possible future damage. Risk mitigation strategies require the implementation of mitigation plans to manage or minimize risk to an acceptable level; once a strategy is adopted, it is constantly monitored to determine its efficacy. Step by step, reducing risk involves: (1) risk detection, recognizing the risks that may occur in the operating phase, which is the first step in the risk management process; (2) impact assessment, determining the effect resulting from each identified risk; (3) risk prioritization, setting risk objectives based on the evaluation of the assessed risk effects; and (4) risk reduction, refreshing mitigation based on the identified risk objectives, which forms the basis for the allocation of significant resources. The effectiveness of the preventive measures against existing risks must be evaluated on a regular basis [7]. The risk reduction phase requires the creation of mitigation strategies aimed at managing, removing, or reducing risk to an acceptable level. When a strategy has been adopted, its effectiveness should be tracked so the course of action can be revised if necessary [8].

2.4 Abbreviations and Acronyms

See Table 1.

Table 1 Abbreviations

No. | Abbreviation | Meaning
1 | PwC | PricewaterhouseCoopers
2 | TASPEN | Tabungan dan Asuransi Pegawai Negeri (savings and insurance for civil servants)

3 Methodology and Resources

3.1 Resources and Methods

The research draws on literature obtained from many sources, such as papers, journals, the Internet, and lecture materials. Information security risk analysis has long been studied from the audit viewpoint, according to Sumner (2009) [12] and Rossi (2015) [11]. The most popular approach is to develop a checklist for verifying the security elements and to determine the results of the evaluations based on the auditor's judgment. In [10], the authors proposed a matrix-based methodology for analyzing information security risk. The matrix model enables a wide range of quantitative analyses: it correlates the organization's assets, vulnerabilities, risks, and controls and evaluates the value of various controls relative to the organization's assets. Generally, the assets of the company encompass the sensitive items that require critical security; in particular, properties like communications networks and data can be either intangible or tangible, with heightened requirements for confidentiality and credibility. Furthermore, the methodology incorporates a risk matrix, which includes the risk elements and the relationships among them, for the assessment process. The proposed research work aims to provide a systematic proposal for analyzing quantitative risks. As the observation scenario, the analysis uses an intermediate-level company that has allocated fewer than 150 staff members to software development. The main aim of this study is to develop a realistic model that small and medium enterprises can replicate.

3.2 Systems

The IT systems considered include multiple layers of software technologies:

(a) Development and operational tools.
(b) Accounting and management applications and repositories for internal operations.
(c) Application control/virtual server-based operating clients.

The server/client system environment consists of an SQL database (2012 and above) developed in Visual Studio .NET with the C# and C++ programming languages. The main platform holds the data files, executable files, and source code for development and creation, along with the completed code. Development data files, viz. compiled libraries, source code, tables, and pre-defined procedures, are stored on a Seagate NAS linked to an HP server running the Windows 2019 operating system and working with the SQL 2016 R2 database engine. The program code is shared among different HP servers running Windows 2019. Local servers were established in the main buildings in separate areas across the Indonesian islands. A group of virtual servers running Windows 2019 and deployed in remote data centers located in Singapore and Bangkok runs the executables/production systems. In the peninsula islands, users are physically placed in one spot by establishing a physical link among their personal computers with the help of a wide area network (WAN); programmers connect to the development environment using a VPN tunnel. A business unit operates in the Southeast Asia region across more than 100 sectors, ranging from manufacturing to hospitals; this unit requires access to administrative records, sales, production, financial backlogs, and email services. The business network maintains a large number of trusted domains managed by Windows Server 2019, and the base is controlled from the system dashboard.

3.3 Actual Threats Detection

Cyber risk analysis protects a company that adopts IT from a broad spectrum of threats, to assure the progression of business, mitigate harm, and optimize the return on assets as well as on opportunities. The organization's significant assets are any mechanisms that support its information systems and networks [13]. Risk identification allows management to establish controls for minimizing the risk and effect of hazards arising from cybersecurity threats. In [13], the authors specify that successful threat prediction requires:

(a) The explanation and identification of the institution's data properties.
(b) Development of a risk assessment framework to identify vulnerabilities and existing security threats and to evaluate risks on a specified scale.
(c) Proposing planning and evaluation methods that mitigate the risks and hazards described in the risk analysis report.
(d) Preparation of a recommendation report detailing the results, to promote an information security structure tailored to the organization's evidence.

As mentioned by Wang and Chao, current risk management systems use a converse-thinking strategy to develop theoretical strategies that mitigate the likelihood of security infringements in a more economical way [13]. The same authors argue that, with the help of risk assessment, defenders can define successful remedies by following three distinct tactics specific to the security strategy of the organization, as seen in Table 2.

Table 2 Defensive techniques associated with securing organizations

Countermeasure | Cost of defense
Lower the residual risk | Minimum
Defend against as many threats as possible | Maximum
Display the total number of avenues of attack | Maximum

3.4 Effective Risk Management

Effective risk management should be aware of the factors that impact the business, namely:

(a) What ought to be covered?
(b) What resources are considered critical?
(c) Whether the steps adopted to protect or avoid reduce any detrimental effect.

Threats are potential factors with a possible adverse effect on data, to the degree that vulnerabilities or shortcomings exist in the controls that protect the properties that may be affected. This latter notion is summarized in the word vulnerability, which puts the organization at risk if the threat is exploited. The risk is the product of the probability of occurrence and the impact on safety properties. Conversely, in the absence of a threat, no system is vulnerable, and no unsafe circumstance arises for an entity, subject, or system unless it is exposed to the potential action a threat poses. In other words, there is no threat or vulnerability independent of the other: although conceptually defined separately, the two are jointly conditioned, for methodological purposes and for a better awareness of the risk [13]. The notion of danger typically refers to a system's latent threat or external potential risk, statistically defined as the likelihood, within certain circumstances and for a given exposure time, of an occurrence of a certain extent. The emergence of threats and the identification of vulnerabilities demand constant attention from information security professionals, due to the inclusion of new assets, and pose a constant challenge to achieving effective information protection [14, 15].

3.5 General Computing Environment Threats

A threat is something that uses or exploits a vulnerability to compromise the protection of information or computing properties. Threats arise from the presence of bugs, irrespective of whether the system protection is actually compromised. To identify the risks to a given computer system, it is important to consider each and every network vulnerability and threat that can damage resources and cause harm or failure in various aspects of the organization [17].


As previously mentioned, there is a direct relationship between danger and vulnerability: to some extent, if one does not exist, the other does not exist either. Emerging risks are typically split according to their scope:

(a) Ecological catastrophe (physical protection).
(b) Machine risks (logical defense).
(c) Vulnerability from networks (telecommunication).
(d) Risks from individuals (insiders–outsiders).

Threats have been defined by looking at the enterprise, the parent corporation, and the market, to develop our Information Security Defense Matrix. This instrument serves as a defense-in-depth checklist at each point and starts by answering the following questions [18]:

(a) Specify the threat by asking whether an intruder might raise it.
(b) Will anyone have the opportunity to exploit a vulnerability?
(c) Is there a history of such exploitation being successful?
(d) Does anyone have a track record of targeting your industry?

3.6 Assessment of Threats

To conduct a risk assessment, the level of risk measured by the previously defined risk factors during the analysis phase must be evaluated. A practical qualitative technique was used for this purpose. The first stage of the analysis is to classify and evaluate the properties to be covered.

3.7 Description of Effect

When an asset is the target of a hazard, the effect is not uniform across all of its dimensions. In the event of an active attack, it is important to predict the possible consequences once it has been determined that a threat may affect an asset. According to Valero, the impact is described as the potential change in the outcomes of one or more goals if the risk materializes. For this work, the danger effect is calculated on a cardinal scale between 0 and 9. As suggested by Caralli (2007) [16], the following levels were used to assess the magnitude of the impact (Table 3).

Table 3 Impact scoping

Magnitude of impact | Impact definition
Strong [9] | Exploiting the vulnerability (1) can result in a highly expensive loss of tangible assets or resources; (2) can significantly breach, harm, or impede the intent, reputation, or interest of the entity
Moderate [3] | The vulnerability (1) is responsible for causing an expensive loss of tangible assets or resources; (2) is liable to infringe, harm, or obstruct the entity's intent, reputation, or interest
Weak [1] | Exercising the weakness (1) can lead to the loss of some valuable resources or assets; (2) may noticeably affect the purpose, credibility, or interest of the entity

4 Criteria for Risk Assessment

4.1 Risks Related to the System

Computer manufacturing is a highly competitive industry that constantly develops new software, algorithms, and commercial applications, so rivals continually seek to overtake each other. Information security is necessary to safeguard corporate resources and to avoid interruption of software development activities. To translate raw vulnerabilities into risks, a risk calculation matrix was created. The methodology is based on the following points:

(a) Categorizing weaknesses.
(b) Pairing risk vectors.
(c) Evaluating the likelihood of an event and its potential effects.

4.2 Impact Scale

To perform a qualitative risk analysis, different areas are identified in which potential threats produce some degree of impact on the company's operations. The analysis also assesses the impact and likelihood of each, providing a baseline for an action plan to minimize these risks as they occur (Tables 4, 5, 6, and 7).

Table 4 Description of operational effect

Impact area | Weak | Moderate | Strong
Delivering service | Small influence on a business unit's results and/or priorities | Moderate effect on the delivery of services through one or more business units due to extended loss of service | A significant compromise of the company's business interests and targets

Table 5 Reputation impact definition

Impact area | Weak | Moderate | Strong
Integrity | Reliability is slightly affected; little to no effort or cost is required to recover | Reliability is slightly affected; little to no effort or cost is required to recover | Credibility is irrevocably impaired or harmed
Customer loss | A decrease in users of less than 30% due to a decrease in interest | 30–80% customer reduction due to lack of trust | Customers decrease by more than 80% due to lack of confidence

Table 6 Financial impact definition

Impact area | Weak | Moderate | Strong
Costs of service | Increase in yearly running costs of less than 25% | Increase in annual operating costs of 25–100% | Annual running costs grow by more than 100%
Loss of sales | Loss of less than 25% of annual sales | An annual sales loss of 25–40% | Annual sales loss higher than 40%

Table 7 Legal impact definition

Impact area | Weak | Moderate | Strong
Sanctions | Fines of less than US$2,000.00 are assessed | Penalties between US$2,000.00 and US$40,000.00 are assessed | Penalties greater than US$40,000.00 are levied
Litigation | Non-frivolous lawsuits or fines of less than US$5,000.00 are brought against the corporation | Non-frivolous cases or litigation of between US$5,000.00 and US$50,000.00 are brought against the company | Non-frivolous cases or lawsuits greater than US$50,000.00 are brought against the company

4.3 Assessing Chances (Probability)

An analytical evaluation is needed for an objective assessment of the likelihood and effects of risks, and to understand their implications for project objectives. The key tools used are discussions, sensitivity analysis, probability distributions, logistic regression, and measurements. This section presents the qualitative scale used to calculate the likelihood of the risks under consideration (Table 8). Furthermore, confidential data that play a critical role in the company's everyday operations were also accounted for; the corresponding items are classified according to each information asset's use and origin (Table 9).

Table 8 Description of the likelihood limit

Rating | Description | Meaning
5 | Nearly sure | Predicted to happen in 1–4 months
4 | Entirely possible | Predicted to happen in 4–8 months
3 | Potential | Predicted to happen in 8–12 months
2 | Feasible but improbable | Predicted to happen in 1–4 years
1 | Nearly rarely | Not predicted to happen within 4 years

Table 9 Asset group definition

Asset | Location | Group | Priority
Data of workers | Unrevealed | Financial management and processes | —
Income statements | Unrevealed | Financial management and processes | —
Information about clients | Unrevealed | Databases for businesses | —
Backups | Unrevealed | Databases for businesses | —
Databases of buyers | Unrevealed | Databases for businesses | H (9)
Software design document | Unrevealed | Tools for software engineering | H (9)
Documenting architectures | Unrevealed | Tools for software engineering | H (9)
Apps and libraries compiled | Unrevealed | Tools for software engineering | M (3)
Source code | Unrevealed | Tools for software engineering | H (9)
Password for network equipment | Unrevealed | Governance of the framework | M (3)
Authentication of devices for servers (production) | Unrevealed | Governance of the framework | H (9)
Authentication of devices for servers (development) | Unrevealed | Governance of the framework | M (3)

4.4 Module of Risk

A risk matrix was developed to provide a quantitative evaluation of the risk level linked to potential risks and vulnerabilities. The company's threats and their probabilities of occurrence were then combined in an overall risk rating matrix, which reports the weight of the threats, as shown in Tables 10 and 11.


Table 10 Risk scale matrix

Impact \ Likelihood | Nearly rarely (1) | Feasible but improbable (2) | Feasible (3) | Extremely likely (4) | Nearly assured (5)
Strong | 8 | 16 | 24 | 32 | 40
Medium | 3 | 6 | 9 | 12 | 15
Weak | 1 | 2 | 3 | 4 | 5
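A minimal sketch of how Table 10's scores and Table 11's score bands combine; the impact weights and banding thresholds come from those tables, while the function names are illustrative.

```python
# Impact weights from Table 10 (strong/medium/weak rows)
IMPACT_WEIGHTS = {"strong": 8, "medium": 3, "weak": 1}

def risk_score(impact, likelihood):
    """likelihood is the 1-5 rating of Table 8; impact is strong/medium/weak."""
    return IMPACT_WEIGHTS[impact] * likelihood

def band(total_score):
    """Total-score bands used in Table 11: bottom 0-200, medium 201-400, top 401+."""
    if total_score <= 200:
        return "bottom"
    return "medium" if total_score <= 400 else "top"

print(risk_score("strong", 5))   # 40, the highest cell of Table 10
print(band(387))                 # 'medium', the total risk score reported in Table 11
```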

Table 11 Ranking of risks. Total-score bands: 0–200 bottom, 201–400 medium, 401+ top

Threat | Probability | Impact | Risk score
DOS | 4 (Definitely) | 6 (Influence) | —
Masquerading/impersonating | 2 (Probable) | 4 (Medium) | 6
Malevolent script | 2 (Probable) | 3 (Influence) | 10
Unintended failures | 3 (Extremely likely) | 8 (Medium) | 15
Insider leaks | 2 (Probable) | 6 (Influence) | 14
Intrusion | 3 (Highly likely) | 5 (Influence) | 12
Spamming | 3 (Probable) | 8 (Medium) | 12
Significant impairment to peripherals | 2 (Unlikely) | 7 (Influence) | 16
Internet apps error, such as SQL injection or cross-site scripting | 3 (Unlikely) | 6 (Influence) | 24
Internet application error | 3 (Probable) | 5 (Influence) | 24
User machine affected, e.g., virus attack | 3 (Probable) | 4 (Medium) | 9
Developer/administrator breached system, e.g., virus infection | 4 (Probable) | 7 (Influence) | 32
Test servers that are unreliable | 4 (Probable) | 6 (Medium) | 12
Passcode figuring | 3 (Maybe) | 8 (Influence) | 27
Vulnerabilities of database servers | 4 (Probable) | 8 (Influence) | 36
Total failure of system | 4 (Uncertain) | 7 (Influence) | 36
Abuse of data | 3 (Uncertain) | 7 (Influence) | 27
Environmental catastrophe | 4 (Probable) | 5 (Medium) | 12
Total risk score | | | 387


Table 12 The vulnerability matrix mapping

Implication/priority levelling: 7 = key supporter, 5 = crucial, 3 = crucial but not a key supporter, 1 = not crucial. Vulnerability matrix scores: 6 = influence, 4 = medium, 2 = thin, 0 = not related. Impact-area columns (with implication weights): infrastructure system (7), application system (6), communication (5), product reliability (4), data reliability (3), market missed/income (2), credibility/believe (1).

Susceptibility | Preference/implication | Infra. | App. | Comm. | Product | Data | Market | Cred. | Total score | Rank
Security firewalls | 7 | 6 | 4 | 6 | 6 | 4 | 6 | 6 | 266 | 1
Data transfer | 7 | 6 | 6 | 4 | 6 | 4 | 6 | 4 | 238 | 2
Databases | 7 | 6 | 4 | 6 | 6 | 4 | 2 | 4 | 224 | 3
Applications (data management and analysis, e-business) | 5 | 4 | 6 | 6 | 2 | 6 | 6 | 4 | 170 | 4
Internet service servers | 5 | 6 | 2 | 2 | 6 | 4 | 2 | 6 | 180 | 5
Strength of passwords | 5 | 6 | 2 | 4 | 2 | 4 | 4 | 2 | 72 | 6
Nodes of clients | 3 | 2 | 4 | 4 | 2 | 4 | 2 | 6 | 72 | 7
Internet-based services (DSL, VPN) | 3 | 4 | 0 | 2 | 6 | 6 | 0 | 0 | 54 | 8

5 6 7 8

4.5 Vulnerability Matrix Techniques such as checklists and advanced software that assess vulnerabilities at the operating system and network level are used to detect weaknesses in the technology infrastructure. The resulting matrix was computed according to Goel and Chen’s guidelines (2005) [17]. This approach correlates the organization’s assets, weaknesses, risks, and controls and evaluates the value of various controls relating to the organization’s assets. Assume that there are q controls that can help reduce p risks, and that control Z influences the T threat, as defined in the following formula (Table 12). Z0 =

l= p 

eol ∗ T l

l=1
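A minimal sketch of this control-evaluation formula; the effectiveness matrix e_ol and the threat weights T_l below are hypothetical, not the paper's data.

```python
import numpy as np

# e_ol: rows = q controls, columns = p threats (hypothetical effectiveness scores)
effectiveness = np.array([
    [6, 4, 2],
    [4, 6, 0],
])
threat_weights = np.array([7, 5, 3])   # T_l, e.g. priority levels as in Table 12

# Z_o = sum_l e_ol * T_l, computed for every control at once
Z = effectiveness @ threat_weights
print(Z)   # [68 58]: the first control scores higher against these threats
```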

5 Result and Discussion

The steps in the management process introduced in the face of the threat of cybercrime are: (1) detection, in which the possibility of cybercrime and its triggers are regularly detected; all identified factors must then be assessed and split into two categories, namely probability of occurrence and probability of effects; (2) evaluation, the stage of assessing the risks resulting from cybercrime that affect all aspects of life, using cybercrime-induced matrices and risk assessments; (3) response, in which the risk response and intervention are coordinated after recognition and risk assessment have been performed; at this stage, information and data theft must be controlled individually or by institutions; and (4) regulation, a continuing monitoring and improvement stage that determines the effectiveness of risk management; early alerts for security controllers to anticipate cybercrime are included in the monitoring process [6].

One approach to eradicating cybercrime is to criminalize the conduct in law so that the act falls into the category of cyberspace crime. Other measures include developing policies outside criminal law to support cybercrime prevention efforts, raising awareness of the potential of cybercrime, building cooperation with private parties, and establishing institutional networks in the national and international spheres to deter cybercrime [9]. Some businesses also have objectives in fighting cybercrime, including: (1) performing cybercrime awareness training, (2) accessing data in compliance with the authority granted, (3) introducing cyber information and security strategies, (4) restricting the network, (5) installing/creating malware detectors, and (6) replacing old systems with upgraded technology [2].

6 Conclusion

The risk mitigation measures that non-bank institutions must implement are based on the literature studied and addressed in the proposed work. First comes the detection of the hazards that exist in the organization or business, followed by the appraisal of the identified risks; a matrix is constructed based on the results of this assessment, and preventive measures are executed against the results the matrix provides. These include socialization and cybercrime training, limiting requests for data according to competence, and limiting network access. Other measures include installing/creating malware detectors, replacing older systems with newer ones, and creating databases of the risk mitigation measures taken. This also leads to a review of the risks and mitigation steps already in place, as well as monitoring the implementation of the compiled actions.

References

1. Deloitte (2011) Global risk management survey, 11th edn. Reimagining risk management to mitigate looming
2. PwC (2019) Cybercrime survey 2018. Cybercrime Surv
3. Kaspersky (2020) Kaspersky ICS CERT. Kaspersky Industrial Control Systems Cyber Emergency Response Team
4. Polri (2020) Patroli Siber


5. Abdullah T (2015) Lembaga Keuangan, pp 1–43
6. Rahmawati I (2017) Analisis manajemen risiko ancaman kejahatan siber. J Pertahanan Bela Negara 7(2):51–66
7. Katende N (2019) Implementing risk mitigation, monitoring, and management in IT, July 2017
8. Katende N, Ann K, David K (2017) Implementing risk mitigation, monitoring, and management in IT projects. Comput J, July 2017, pp 1–8
9. Bunga D (2019) Politik hukum pidana terhadap penanggulangan. J Legis Indones 16(1):1–15
10. Goel S, Chen V (2005) Information security risk analysis: a matrix-based approach. Retrieved from http://www.albany.edu/~GOEL/publications/goelchen2005.pdf
11. Rossi B (2015) Critical steps for responding to cyber attacks. Retrieved from http://www.information-age.com/technology/security/123459644/6-critical-steps-responding-cyber-attack
12. Sumner M (2009) Information security threats: a comparative analysis of impact, probability, and preparedness. Inf Syst Manag 26(1):2–12. https://doi.org/10.1080/10580530802384639
13. Marcus R, John B (2000) Access control systems and methodology. In: Information security management handbook, four volume set. Auerbach Publications
14. Demidecka K (2015) Communicating a cyber attack—a retrospective look at the TalkTalk incident. Retrieved from http://www.contextis.com/resources/blog/communicating-cyber-attack-retrospective-look-talktalk-incident/
15. Forrester Research (2015) Protect your intellectual property and customer data from theft and abuse. Retrieved from https://www.forrester.com/reports/
16. Caralli R (2007) The OCTAVE Allegro guidebook, v1.0. Carnegie Mellon University
17. Creasey J, Glover I (2013) Cyber security incident response guide. Retrieved from http://www.crest-approved.org/wp-content/uploads/CSIR-Procurement-Guide.pdf
18. NIST (2012) Computer security incident handling guide. Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf

Retraction Note to: Software Effort Estimation of Teacher Engagement Application

Sucianna Ghadati Rabiha, Harco Leslie Hendric Spits Warnars, Ford Lumban Gaol, and Benfano Soewito

Retraction Note to: Chapter "Software Effort Estimation of Teacher Engagement Application" in: G. Ranganathan et al. (eds.), Pervasive Computing and Social Networking, Lecture Notes in Networks and Systems 317, https://doi.org/10.1007/978-981-16-5640-8_39

The authors have retracted this article because they did not have ownership of the data or material in the following section 4 subsections: Figure 2 and the explanation of the use case diagram in Figure 2 (points 1–6). All authors agree to this retraction.

The retracted original version of this chapter can be found at https://doi.org/10.1007/978-981-16-5640-8_39


Author Index

A Abbas, Mohamed, 113 Abdullah, Hesham Mohammed Ali, 709 Abinaya, S., 35 Adarsh, R. N., 469 Aggarwal, Gagan, 565 Aiswarya, M., 663 Aiswarya, R., 589 Aji, S., 299 Aldenny, Muhammad, 1 Amitasree, Palanki, 673 Anand, R., 399 Anita, H. B., 183 Antu, Probal Roy, 143 Arun, A. K., 469 Arunraja, A., 275 Ashok, Santosh, 417 Attri, Devahsish, 565

Chowdhury, Tahmid Rashik, 73, 89 D Danila Shirly, A. R., 327 Daway, Hazim G., 579 Devi, S. Kiruthika, 171 Devi, Uma, 13, 23 Dhanusha, C., 709 Dhinakaran, D., 431 Dutta, Abhijit, 733 E Esakki Vigneswaran, E., 275 F Faizi, Fawaz, 469

B Baiju, M. R., 539 Balamurugan, V., 589 Banerjee, Sumanta, 315 Barmon, Saykot Kumar, 143 Basaveswara Rao, B., 619 Baskaran, Kamaladevi, 417 Bhandari, Mahesh, 217 Bhatt, Aruna, 565 Budiansa, Ananda Dessi, 741 Bunawan, Stefan Gendita, 721

G Gaol, Ford Lumban, 1, 511, 721, 741 Geluvaraj, B., 499 George, Mino, 183 Ghosh, Pronab, 73 Gopal, Greeshma N., 695 Goyal, Hardev, 565 Guganya, K. P., 287 Gunasekaran, S., 589 Gupta, Vishal, 483 Gutte, Vitthal S., 217

C Challa, Manoj, 399 Charumathi, A., 103

H Hasib, Khan Md., 127 Hiremath, Rupa, 201


I Isaac, Meera Mary, 299

J Jagadeeswari, M., 663 Janardhana, Kedri, 275 Javed Mehedi Shamrat, F. M., 73, 89 Jaya Laxmi, A., 445 Jayanthy, S., 361 Joe Prathap, P. M., 431 Jogendra Kumar, M., 385 Jothi, A. M., 103 Jouda, Jamela, 579 Juneja, Mamta, 483

K Karthikeyan, J., 243 Kavitha Devi, M. K., 35 Keshk, Hatem M., 653 Khan, Aliza Ahmed, 73 Kirthika Devi, V. S., 673 Kiruthika, V., 159 Kovilpillai, J. Judeson Antony, 361 Kovoor, Binsu C., 695 Kristian, Hans, 1 Kumar, A. V. Senthil, 709 Kumar, M. Suresh, 51 Kuriakose, Neenu, 13

L Latha, Yarasu Madhavi, 341 Lumban Gaol, Ford, 265

M Mahdi, Tahseen Falih, 579 Majumder, Anup, 143 Manikandababu, C. S., 663 Matsuo, Tokuro, 1, 721, 741 Mishra, Megha, 549 Mishra, Vishnu, 549 Mohamed, Sayed A., 653 Mukherjee, Shyamapada, 315 Mundhe, Pramod, 217 Musirin, Ismail Bin, 523, 709

N Narayanan, H. Vishnu, 469 Nasriwala, Jitendra V., 605

Nethra, R., 287 Nila, B., 287 Nisha Jenipher, V., 255 Niveda, S., 159 Nowrin, Itisha, 143 Nugroho, Andi, 1, 721

O Otayf, Nasser, 113

P Pandey, Sudhakar, 63 Parish Venkata Kumar, K., 385 Parvathi, R., 103 Patil, Pradip, 201 Phani Kumar, V. V. N. V., 385 Poornimha, J., 523 Prajwal, K. S., 673 Pratapa Raju, M., 445 Pravija, Danda, 63 Prederikus, Pangondian, 721 Princy Suganthi Bai, S., 255 Priyanka, E., 327 Puli, Srilakshmi, 351 Purva, Prishu, 373

R Rabiha, Sucianna Ghadati, 511 Raghavendra Sai, N., 385 Rajammal, K., 639 Rajasekhar, N., 51 Rajeev, Haritha, 23 Rajeshwari, J., 51 Rama Koteswara Rao, P., 619 Ranjan, Rumesh, 127, 143 Rao, B. Srinivasa, 341 Rao, Tagaram Kondalo, 275 Ravindran, K., 255 Rege, Amit, 231 Resmi, R., 539 Roshini, R., 327

S Sai Kumar, S., 385 Santhia, R. K., 639 Senthil Kumar, A. V., 523 Shamrat, F. M. Javed Mehedi, 127, 143 Shanmugapriya, R., 159 Shanmuga Sundari, P., 243 Sharma, Saurabh, 483

Sheeba, Adlin, 255 Sheetlani, Jitendra, 549 Shema, Rokeya, 89 Siddique, Abdul Hasib, 127 Sindal, Ravi, 231 Sindhuja, M., 327 Sinha, Madhabendra, 733 Siva Sakthi, A., 159 Smitha Chowdary, C. H., 351 Soewito, Benfano, 265, 511 Sonkar, Nidhi, 63 Sreerag, G., 695 Srinitha, S., 159 Steffy Jones, A., 255 Subalalitha, C. N., 171 Subbulakshmi, S., 469 Sudarsanam, P., 399 Sudimanto, 265 Suganya, R., 287 Sultana, Zakia, 89 Sundaram, Meenatchi, 499 Suresh Babu, V., 539 Swathi, K., 619 T Tandel, Purvi H., 605

Tasnim, Zarrin, 73, 89

U Uddin, Md. Shihab, 73, 89

V Vamshi, Guntha Raghu, 673 Vamsi Krishna, K., 619 Venkatesh, A., 255 Verma, Manoj, 549 Vermani, Shalini, 373

W Warnars, Harco Leslie Hendric Spits, 265, 511 Weniko, Yohanes Paul, 741 Wilscy, M., 299

Y Yadav, Amit, 127 Yuvarani, A., 103